Jean-Michel DISCHLER

Professor at the University of Strasbourg.
E-mail: dischler -at- unistra -dot- fr
ICUBE, UMR CNRS-UDS 7357 (Laboratoire des sciences de l'ingénieur, de l'informatique et de l'imagerie) 
300 bd Sébastien Brant - BP 10413 - F-67412 Illkirch Cedex, 
Tel: (+33) 03 68 85 45 59 , Fax: (+33) 03 68 85 44 55




Research group leader at the ICUBE laboratory. Holder of a chair (full professor) in Computer Science at the University of Strasbourg since September 2001.

J.-M. Dischler was deputy director of the LSIIT laboratory (from 2005 to 2008), head of the master's program Informatique de l'Image et du Calcul Intensif (IICI) from 2005 to 2009, and a member of the scientific and research council of the Université Louis Pasteur until its dissolution upon the creation of the University of Strasbourg. From September 2002 to August 2005, he was head of the Computer Science department of the Faculty of Mathematics and Computer Science.
J.-M. Dischler is deputy head of the geometry and graphics group (informatique géométrique et graphique, IGG) and leads the Appearance and Motion team, where he coordinates activities in rendering and visualization. From 2003 to 2007 he was an invited member of the INRIA Lorraine project CALVI (CALcul scientifique et VIsualisation), where he developed the topic of direct volume rendering. He left this project in 2007 to devote himself to activities in texturing and scanning.
He received a PhD in computer science from the University of Strasbourg in January 1996. In 2000 he obtained his habilitation from the University of Limoges, where he had been a lecturer for five years (1996-2001).

Publications on ICube database
Publications on DBLP database

Teaching

Data structures and algorithmics, computer graphics, visualization, object-oriented programming, history of computer science.

Research 

texture modeling, acquisition, analysis and synthesis; 3D acquisition for cultural heritage, processing of scanned data; interactive display using GPUs; volume rendering.

Selected research projects




Multi-Scale Label-Map Extraction for Texture Synthesis, Siggraph 2016
Y. D. Lockerman, B. Sauvage, R. Allègre, J. M. Dischler, J. Dorsey, H. Rushmeier

Abstract. Texture synthesis is a well-established area, with many important applications in computer graphics and vision. However, despite their success, synthesis techniques are not used widely in practice because the creation of good exemplars remains challenging and extremely tedious. In this paper, we introduce an unsupervised method for analyzing texture content across multiple scales that automatically extracts good exemplars from natural images. Unlike existing methods, which require extensive manual tuning, our method is fully automatic. This allows the user to focus on using texture palettes derived from their own images, rather than on manual interactions dictated by the needs of an underlying algorithm.
Most natural textures exhibit patterns at multiple scales that may vary according to the location (non-stationarity). To handle such textures, many synthesis algorithms rely on an analysis of the input and a guidance of the synthesis. Our new analysis is based on a labeling of texture patterns that is both (i) multi-scale and (ii) unsupervised; that is, patterns are labeled at multiple scales, and the scales and the number of labeled clusters are selected automatically.
Our method works in two stages: the first builds a hierarchical extension of superpixels; the second labels the superpixels based on a random walk in a graph of similarities between superpixels and a nonnegative matrix factorization. Our label-maps provide descriptors for pixels and regions that benefit state-of-the-art texture synthesis algorithms. We show several applications including guidance of non-stationary synthesis, content selection and texture painting. Our method is designed to treat large inputs and can scale to many megapixels. In addition to traditional exemplar inputs, our method can also handle natural images containing different textured regions.
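To make the two-stage pipeline more concrete, here is a rough, single-scale Python sketch of the second stage: superpixels are clustered through a similarity matrix and a nonnegative matrix factorization. The mean-color descriptor, the Gaussian similarity, and every parameter below are illustrative assumptions; the published method uses richer multi-scale descriptors and a random walk to diffuse similarities.

```python
# Illustrative single-scale sketch, not the published implementation.
import numpy as np
from skimage.segmentation import slic
from skimage.util import img_as_float
from sklearn.decomposition import NMF

def label_superpixels(image, n_segments=400, n_labels=8, sigma=0.1):
    """Cluster the superpixels of an RGB image into n_labels texture classes."""
    img = img_as_float(image)
    segments = slic(img, n_segments=n_segments, compactness=10)
    ids = np.unique(segments)

    # Simple per-superpixel descriptor: mean color (a placeholder for the
    # multi-scale descriptors used in the paper).
    desc = np.array([img[segments == i].mean(axis=0) for i in ids])

    # Dense similarity matrix between superpixels (Gaussian kernel).
    d2 = ((desc[:, None, :] - desc[None, :, :]) ** 2).sum(-1)
    similarity = np.exp(-d2 / (2.0 * sigma ** 2))

    # Nonnegative factorization of the similarity matrix; the dominant
    # factor per superpixel gives its label.
    w = NMF(n_components=n_labels, init="nndsvda", max_iter=500).fit_transform(similarity)
    sp_labels = w.argmax(axis=1)

    # Map superpixel labels back to a per-pixel label map.
    return sp_labels[np.searchsorted(ids, segments)]
```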
Local random-phase noise for procedural texturing, Siggraph Asia 2014
G. Gilet, B. Sauvage, K. Vanhoey, J.-M. Dischler, D. Ghazanfarpour

Abstract. Local random-phase noise is an efficient noise model for procedural texturing. It is defined on a regular spatial grid by local noises, which are sums of cosines with random phase. Our model is versatile thanks to separate samplings in the spatial and spectral domains. Therefore, it encompasses Gabor noise and noise by Fourier series. A stratified spectral sampling allows for a faithful yet compact and efficient reproduction of an arbitrary power spectrum. Noise by example is therefore obtained faster than with state-of-the-art techniques. As a second contribution we address texture by example and generate not only Gaussian patterns but also structured features present in the input. This is achieved by fixing the phase on some part of the spectrum. Generated textures are continuous and non-repetitive. Results show unprecedented frame rates and a flexible visual result: users can modify noise parameters to interactively edit visual variants.
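For readers unfamiliar with the model, the short Python sketch below evaluates its elementary building block, a sum of cosines with random phases, over an image. The frequency sampling, the amplitudes, and the local grid of blended noises used by the actual model are replaced by made-up values here.

```python
# Minimal sketch of a "sum of cosines with random phases" noise.
import numpy as np

def random_phase_noise(width, height, frequencies, amplitudes, seed=0):
    """Evaluate sum_i a_i * cos(2*pi*(f_i . x) + phi_i) on a pixel grid."""
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(frequencies))
    ys, xs = np.mgrid[0:height, 0:width]
    u, v = xs / width, ys / height              # texture coordinates in [0,1)
    noise = np.zeros((height, width))
    for (fx, fy), a, phi in zip(frequencies, amplitudes, phases):
        noise += a * np.cos(2.0 * np.pi * (fx * u + fy * v) + phi)
    return noise

# Example: a few frequencies drawn around an arbitrary target band.
rng = np.random.default_rng(1)
freqs = rng.normal(loc=16.0, scale=2.0, size=(32, 2))
amps = np.full(32, 1.0 / np.sqrt(32))
img = random_phase_noise(256, 256, freqs, amps)
```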
On-the-Fly Multi-Scale Infinite Texturing from Example, Siggraph Asia 2013
K. Vanhoey, B. Sauvage, F. Larue and J.-M. Dischler

Abstract. In computer graphics, rendering visually detailed scenes is often achieved through texturing. We propose a method for on-the-fly non-periodic infinite texturing of surfaces based on a single image. Pattern repetition is avoided by defining patches within each texture whose content can be changed at runtime. In addition, we consistently manage multiple scales by using one input image per represented scale. Undersampling artifacts are avoided by accounting for fine-scale features while colors are transferred between scales. Finally, we allow for relief-enhanced rendering and provide a tool for intuitive creation of height maps. This is done using an ad-hoc local descriptor that measures feature self-similarity in order to propagate height values provided by the user for a few selected texels only. Thanks to the patch-based system, manipulated data are compact and our texturing approach is easy to implement on GPU. The multi-scale extension is capable of rendering finely detailed textures in real-time.
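The following toy Python sketch only illustrates the underlying intuition of non-periodic texturing by patches whose content is chosen pseudo-randomly per output tile; the paper's actual patch layout, blending, GPU evaluation and multi-scale color transfer are not reproduced, and every parameter below is an arbitrary assumption.

```python
# Toy sketch: each output tile picks its content pseudo-randomly among
# candidate patches of the input exemplar, so the result never repeats
# periodically and can be evaluated tile by tile, "on the fly".
import numpy as np

def infinite_tile(exemplar, tile, out_tiles_x, out_tiles_y, seed=0):
    h, w = exemplar.shape[:2]
    rng = np.random.default_rng(seed)
    # Candidate patch origins inside the exemplar (assumes exemplar > tile).
    origins = rng.integers(0, [h - tile, w - tile], size=(64, 2))
    rows = []
    for ty in range(out_tiles_y):
        row = []
        for tx in range(out_tiles_x):
            # Deterministic pseudo-random choice per tile coordinate.
            idx = hash((tx, ty, seed)) % len(origins)
            oy, ox = origins[idx]
            row.append(exemplar[oy:oy + tile, ox:ox + tile])
        rows.append(np.concatenate(row, axis=1))
    return np.concatenate(rows, axis=0)
```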
Robust Fitting on Poorly Sampled Data for Surface Light Field Rendering and Image Relighting, CGF Vol. 32(6), 2013
K. Vanhoey, B. Sauvage, O. Génevaux, F. Larue and J.-M. Dischler

Abstract. 2D parametric color functions are widely used in Image-Based Rendering and Image Relighting. They make it possible to express the color of a point depending on a continuous directional parameter: the viewing or the incident light direction. Producing such functions from acquired data is promising but difficult. Indeed, an intensive acquisition process resulting in dense and uniform sampling is not always possible. Conversely, a simpler acquisition process results in sparse, scattered and noisy data on which parametric functions can hardly be fitted without introducing artifacts.
Within this context, we present two contributions. The first one is a robust least-squares-based method for fitting 2D parametric color functions on sparse and scattered data. Our method works for any amount and distribution of acquired data, as well as for any function expressed as a linear combination of basis functions. We tested our fitting for both image-based rendering (surface light fields) and image relighting using polynomials and spherical harmonics. The second one is a statistical analysis to measure the robustness of any fitting method. This measure assesses a trade-off between precision of the fitting and stability w.r.t. input sampling conditions. This analysis, along with visual results, confirms that our fitting method is robust and reduces reconstruction artifacts for poorly sampled data while preserving the precision for a dense and uniform sampling.
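As a minimal illustration of fitting a directional color function expressed as a linear combination of basis functions, here is a plain ridge-regularized least-squares sketch in Python; the quadratic basis, the regularization weight, and the absence of the paper's robustness mechanism are all simplifying assumptions.

```python
# Hedged sketch: ordinary regularized least squares, not the paper's robust fit.
import numpy as np

def poly_basis(dirs):
    """Quadratic bivariate basis in the direction's (x, y) components."""
    x, y = dirs[:, 0], dirs[:, 1]
    return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)

def fit_color_function(dirs, colors, ridge=1e-3):
    """dirs: (n, 3) unit viewing directions; colors: (n, 3) RGB samples."""
    B = poly_basis(dirs)                       # (n, k) design matrix
    # Tikhonov-regularized normal equations; keeps the fit stable when the
    # directional sampling is sparse or scattered.
    A = B.T @ B + ridge * np.eye(B.shape[1])
    return np.linalg.solve(A, B.T @ colors)    # (k, 3) coefficients

def eval_color(W, d):
    """Reconstruct the color for a (3,) or (m, 3) direction d."""
    return poly_basis(np.atleast_2d(d)) @ W
```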
Pre-Integrated Volume Rendering with Non-Linear Gradient Interpolation, IEEE Vis 2010
Amel Guetat, Alexandre Ancel, Stephane Marchesin, and Jean-Michel Dischler
Abstract. Shading is an important feature for the comprehension of volume datasets, but is difficult to implement accurately. Current techniques based on pre-integrated direct volume rendering approximate the volume rendering integral by ignoring non-linear gradient variations between front and back samples, which may result in accumulated shading errors when gradient variations are significant and/or when the illumination function features high frequencies. In this paper, we explore a simple approach for pre-integrated volume rendering with non-linear gradient interpolation between front and back samples. We consider that the gradient smoothly varies along a quadratic curve instead of a line segment between consecutive samples. This not only allows us to compute more accurate shaded pre-integrated look-up tables, but also allows us to more efficiently process shading-amplifying effects based on gradient filtering. An interesting property is that the pre-integration tables we use remain two-dimensional, as in usual pre-integrated classification. We conduct experiments using a full hardware approach with the Blinn-Phong illumination model as well as with a non-photorealistic illumination model.
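A small sketch of the central idea, interpolating the gradient along a quadratic curve between the front and back samples of a slab, is given below; the midpoint control gradient and the brute-force numerical averaging of a Lambertian term are assumptions made purely for illustration, whereas the paper folds this integration into two-dimensional pre-integration tables.

```python
# Illustrative sketch of quadratic gradient interpolation over one slab.
import numpy as np

def quadratic_gradient(g_front, g_mid, g_back, t):
    """Quadratic (Bezier-like) interpolation of the gradient for t in [0, 1]."""
    return ((1 - t) ** 2) * g_front + 2 * (1 - t) * t * g_mid + (t ** 2) * g_back

def shaded_slab_color(g_front, g_mid, g_back, light_dir, base_color, n_steps=16):
    """Numerically average a Lambertian shading term along the slab."""
    ts = (np.arange(n_steps) + 0.5) / n_steps
    acc = np.zeros(3)
    for t in ts:
        g = quadratic_gradient(g_front, g_mid, g_back, t)
        n = g / (np.linalg.norm(g) + 1e-8)          # normalized gradient as normal
        acc += base_color * max(np.dot(n, light_dir), 0.0)
    return acc / n_steps
```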

Last update May 2016