Jean-Michel DISCHLER

Professor at the University of Strasbourg.
E-mail: dischler -at- unistra -dot- fr
ICUBE, UMR CNRS-UDS 7357 (Laboratoire des sciences de l'ingénieur, de l'informatique et de l'imagerie) 
300 bd Sébastien Brant - BP 10413 - F-67412 Illkirch Cedex, 
Tel: (+33) 03 68 85 45 59, Fax: (+33) 03 68 85 44 55


 


Jean-Michel Dischler is a full Professor of Computer Science at the University of Strasbourg. He headed the Department of Computer Science at the Faculty of Mathematics and Computer Science, was vice-director of the LSIIT lab until 2008, and was responsible for the master's degree in Image and Computing (IICI) until 2009. He is currently co-director of the 3D Computer Graphics Group (IGG) of the ICUBE lab, where he leads the rendering and visualization research team. His main research interests include texture acquisition, synthesis and rendering, and high-performance graphics, as well as (formerly) direct volume rendering of voxel data and the synthesis of natural phenomena. He has served regularly on program committees, including Eurographics, EGSR, EG Parallel Graphics and Visualization, the EG Symposium on Natural Phenomena, and EuroVis. He has also served as an associate editor of journals including Computer Graphics Forum and The Visual Computer. He was vice-president of the French chapter of Eurographics and chaired the EG Professional Board for a decade. He is currently second vice-chair of the Eurographics Association. He organized EG’2014, held in Strasbourg, and will be organizing the co-located Symposium on Rendering and High-Performance Graphics conferences in 2019.

 

Funded research projects and developments

ASTex

a C++ library for texture generation

Selected research results

 

Bi-Layer Textures: A Model for Synthesis and Deformation of Composite Textures, CGF Vol. 36(4), 2017

Geoffrey Guingo, Basile Sauvage, Jean-Michel Dischler, Marie-Paule Cani

Abstract. We propose a bi-layer representation for textures which is suitable for on-the-fly synthesis of unbounded textures from an input exemplar. The goal is to improve the variety of outputs while preserving plausible small-scale details. The insight is that many natural textures can be decomposed into a series of fine scale Gaussian patterns which have to be faithfully reproduced, and some non-homogeneous, larger scale structure which can be deformed to add variety. Our key contribution is a novel, bi-layer representation for such textures. It includes a model for spatially-varying Gaussian noise, together with a mechanism enabling synchronization with a structure layer. We propose an automatic method to instantiate our bi-layer model from an input exemplar. At the synthesis stage, the two layers are generated independently, synchronized and added, preserving the consistency of details even when the structure layer has been deformed to increase variety. We show on a variety of complex, real textures, that our method reduces repetition artifacts while preserving a coherent appearance.
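To make the layered composition concrete, here is a minimal C++ sketch of how the two layers could be combined at synthesis time. It is only an illustration of the model described in the abstract, not the authors' implementation; the layer evaluators, the deformation and the toy grey-level values are assumptions.

// Minimal sketch of the bi-layer composition at synthesis time (not the
// authors' code). Assumes two hypothetical callbacks: one producing the
// deformed structure layer and one producing spatially-varying Gaussian
// noise evaluated at the same deformed coordinate, which is what keeps
// the two layers synchronized.
#include <functional>
#include <cstdio>

struct Vec2 { float x, y; };
using Color = float; // grey-level for simplicity

// Hypothetical layer evaluators (would come from the analysis stage).
using StructureLayer = std::function<Color(Vec2)>; // large-scale structure
using NoiseLayer     = std::function<Color(Vec2)>; // fine-scale Gaussian detail
using Deformation    = std::function<Vec2(Vec2)>;  // adds variety to the structure

// Evaluate the unbounded texture at an arbitrary point p.
Color evalBiLayer(const StructureLayer& structure,
                  const NoiseLayer& noise,
                  const Deformation& deform,
                  Vec2 p)
{
    Vec2 q = deform(p);     // deform the structure layer
    Color s = structure(q); // large-scale content
    Color n = noise(q);     // noise statistics follow the same warp
    return s + n;           // the two layers are simply added
}

int main()
{
    // Toy layers, just to make the sketch runnable.
    StructureLayer structure = [](Vec2 p) { return (int(p.x) + int(p.y)) % 2 ? 0.8f : 0.2f; };
    NoiseLayer noise = [](Vec2 p) { return 0.05f * (float(((int)(p.x * 37 + p.y * 91)) % 7) - 3.f); };
    Deformation deform = [](Vec2 p) { return Vec2{p.x + 0.3f * p.y, p.y}; };

    std::printf("%f\n", evalBiLayer(structure, noise, deform, {2.5f, 1.25f}));
    return 0;
}

The important point the sketch tries to convey is that the noise layer is evaluated at the same deformed coordinate as the structure layer, so fine-scale details remain consistent with the deformed structure.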


Multi-Scale Label-Map Extraction for Texture Synthesis, Siggraph 2016
Y. D. Lockerman, B. Sauvage, R. Allègre, J. M. Dischler, J. Dorsey, H. Rushmeier

Abstract. Texture synthesis is a well-established area, with many important applications in computer graphics and vision. However, despite their success, synthesis techniques are not used widely in practice because the creation of good exemplars remains challenging and extremely tedious. In this paper, we introduce an unsupervised method for analyzing texture content across multiple scales that automatically extracts good exemplars from natural images. Unlike existing methods, which require extensive manual tuning, our method is fully automatic. This allows the user to focus on using texture palettes derived from their own images, rather than on manual interactions dictated by the needs of an underlying algorithm.
Most natural textures exhibit patterns at multiple scales that may vary according to the location (non-stationarity). To handle such textures, many synthesis algorithms rely on an analysis of the input and on guidance of the synthesis. Our new analysis is based on a labeling of texture patterns that is both (i) multi-scale and (ii) unsupervised; that is, patterns are labeled at multiple scales, and the scales and the number of labeled clusters are selected automatically.
Our method works in two stages: the first builds a hierarchical extension of superpixels; the second labels the superpixels based on a random walk in a graph of similarities between superpixels and on a nonnegative matrix factorization. Our label-maps provide descriptors for pixels and regions that benefit state-of-the-art texture synthesis algorithms. We show several applications, including guidance of non-stationary synthesis, content selection and texture painting. Our method is designed to treat large inputs and can scale to many megapixels. In addition to traditional exemplar inputs, our method can also handle natural images containing different textured regions.
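As a rough illustration of the second stage, the self-contained C++ sketch below builds a similarity graph between superpixel descriptors, applies two random-walk (diffusion) steps, and factorizes the result with a small nonnegative matrix factorization to obtain labels. It is a toy stand-in for the approach outlined in the abstract, not the published algorithm; the descriptors, sigma, the number of clusters k and the iteration counts are all assumptions.

// Toy labeling stage: similarity graph + random walk + NMF (illustration only).
#include <vector>
#include <cmath>
#include <cstdlib>
#include <cstdio>

using Mat = std::vector<std::vector<double>>;

Mat matmul(const Mat& A, const Mat& B) {
    size_t n = A.size(), m = B[0].size(), p = B.size();
    Mat C(n, std::vector<double>(m, 0.0));
    for (size_t i = 0; i < n; ++i)
        for (size_t k = 0; k < p; ++k)
            for (size_t j = 0; j < m; ++j)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Row-stochastic transition matrix from superpixel feature distances.
Mat transitionMatrix(const Mat& features, double sigma) {
    size_t n = features.size();
    Mat P(n, std::vector<double>(n, 0.0));
    for (size_t i = 0; i < n; ++i) {
        double rowSum = 0.0;
        for (size_t j = 0; j < n; ++j) {
            double d2 = 0.0;
            for (size_t f = 0; f < features[i].size(); ++f) {
                double d = features[i][f] - features[j][f];
                d2 += d * d;
            }
            P[i][j] = std::exp(-d2 / (2.0 * sigma * sigma));
            rowSum += P[i][j];
        }
        for (size_t j = 0; j < n; ++j) P[i][j] /= rowSum; // normalize row
    }
    return P;
}

// Rank-k NMF of V (multiplicative updates): V ~ W * H, all entries >= 0.
void nmf(const Mat& V, int k, int iters, Mat& W, Mat& H) {
    size_t n = V.size(), m = V[0].size();
    W.assign(n, std::vector<double>(k));
    H.assign(k, std::vector<double>(m));
    for (auto& r : W) for (auto& v : r) v = 0.1 + 0.9 * (std::rand() / (double)RAND_MAX);
    for (auto& r : H) for (auto& v : r) v = 0.1 + 0.9 * (std::rand() / (double)RAND_MAX);
    const double eps = 1e-9;
    for (int it = 0; it < iters; ++it) {
        Mat WH = matmul(W, H);
        // H <- H .* (W^T V) ./ (W^T W H)
        for (size_t a = 0; a < (size_t)k; ++a)
            for (size_t j = 0; j < m; ++j) {
                double num = 0.0, den = 0.0;
                for (size_t i = 0; i < n; ++i) { num += W[i][a] * V[i][j]; den += W[i][a] * WH[i][j]; }
                H[a][j] *= num / (den + eps);
            }
        WH = matmul(W, H);
        // W <- W .* (V H^T) ./ (W H H^T)
        for (size_t i = 0; i < n; ++i)
            for (size_t a = 0; a < (size_t)k; ++a) {
                double num = 0.0, den = 0.0;
                for (size_t j = 0; j < m; ++j) { num += V[i][j] * H[a][j]; den += WH[i][j] * H[a][j]; }
                W[i][a] *= num / (den + eps);
            }
    }
}

int main() {
    // Toy superpixel descriptors (two obvious groups).
    Mat features = {{0.0, 0.1}, {0.1, 0.0}, {1.0, 1.1}, {1.1, 0.9}};
    Mat P = transitionMatrix(features, 0.5);
    Mat P2 = matmul(P, P);               // two random-walk steps
    Mat W, H;
    nmf(P2, /*k=*/2, /*iters=*/200, W, H);
    for (size_t i = 0; i < features.size(); ++i) {
        int label = 0;
        for (int a = 1; a < 2; ++a) if (W[i][a] > W[i][label]) label = a;
        std::printf("superpixel %zu -> label %d\n", i, label);
    }
    return 0;
}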


Local random-phase noise for procedural texturing, Siggraph Asia 2014
G. Gilet, B. Sauvage, K. Vanhoey, J.-M. Dischler, D. Ghazanfarpour

Abstract. Local random-phase noise is an efficient noise model for procedural texturing. It is defined on a regular spatial grid by local noises, which are sums of cosines with random phase. Our model is versatile thanks to separate samplings in the spatial and spectral domains. Therefore, it encompasses Gabor noise and noise by Fourier series. A stratified spectral sampling allows for a faithful yet compact and efficient reproduction of an arbitrary power spectrum. Noise by example is therefore obtained faster than state-of-the-art techniques. As a second contribution we address texture by example and generate not only Gaussian patterns but also structured features present in the input. This is achieved by fixing the phase on some part of the spectrum. Generated textures are continuous and non-repetitive. Results show unprecedented framerates and a flexible visual result: users can modify noise parameters to interactively edit visual variants.
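The noise model lends itself to a compact sketch. The C++ snippet below evaluates a local random-phase noise in the spirit of the abstract: each cell of a regular grid carries a sum of cosines with pseudo-random phases, and nearby cells are blended with a smooth spatial window. The particular frequency set, window, hash and grid size are illustrative assumptions, not the paper's parameters.

// Illustration of a local random-phase noise evaluation (not the authors' code).
#include <cmath>
#include <cstdint>
#include <cstdio>

struct Cosine { float fx, fy, amplitude; };

// Deterministic pseudo-random phase in [0, 2*pi) from a cell id and cosine index.
float randomPhase(int cx, int cy, int k) {
    uint32_t h = uint32_t(cx) * 73856093u ^ uint32_t(cy) * 19349663u ^ uint32_t(k) * 83492791u;
    h ^= h >> 13; h *= 0x85ebca6bu; h ^= h >> 16;
    return (h / 4294967296.0f) * 6.2831853f;
}

// Local noise attached to one grid cell: sum of cosines with random phases.
float localNoise(int cx, int cy, float x, float y,
                 const Cosine* spectrum, int nCosines) {
    float v = 0.0f;
    for (int k = 0; k < nCosines; ++k) {
        const Cosine& c = spectrum[k];
        v += c.amplitude * std::cos(6.2831853f * (c.fx * x + c.fy * y)
                                    + randomPhase(cx, cy, k));
    }
    return v;
}

// Evaluate the noise at (x, y): blend the local noises of the 2x2 nearest cells.
float noise(float x, float y, const Cosine* spectrum, int nCosines, float cellSize) {
    float gx = x / cellSize, gy = y / cellSize;
    int cx = (int)std::floor(gx), cy = (int)std::floor(gy);
    float fx = gx - cx, fy = gy - cy;
    float value = 0.0f;
    for (int j = 0; j <= 1; ++j)
        for (int i = 0; i <= 1; ++i) {
            // Smooth (smoothstep-like) spatial weights so cells blend seamlessly.
            float wx = (i == 0) ? 1.0f - fx : fx;
            float wy = (j == 0) ? 1.0f - fy : fy;
            float w = (wx * wx * (3 - 2 * wx)) * (wy * wy * (3 - 2 * wy));
            value += w * localNoise(cx + i, cy + j, x, y, spectrum, nCosines);
        }
    return value;
}

int main() {
    // A toy spectrum: three cosines standing in for a stratified spectral sampling.
    Cosine spectrum[] = {{4.f, 1.f, 0.5f}, {1.f, 6.f, 0.3f}, {8.f, 8.f, 0.2f}};
    std::printf("%f\n", noise(0.37f, 0.81f, spectrum, 3, 0.25f));
    return 0;
}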


On-the-Fly Multi-Scale Infinite Texturing from Example, Siggraph Asia 2013
 K. Vanhoey, B. Sauvage, F. Larue and Jean-Michel Dischler

Abstract. In computer graphics, rendering visually detailed scenes is often achieved through texturing. We propose a method for on-the-fly non-periodic infinite texturing of surfaces based on a single image. Pattern repetition is avoided by defining patches within each texture whose content can be changed at runtime. In addition, we consistently manage multi-scale using one input image per represented scale. Undersampling artifacts are avoided by accounting for fine-scale features while colors are transferred between scales. Eventually, we allow for relief-enhanced rendering and provide a tool for intuitive creation of height maps. This is done using an ad-hoc local descriptor that measures feature self-similarity in order to propagate height values provided by the user for a few selected texels only. Thanks to the patch-based system, manipulated data are compact and our texturing approach is easy to implement on GPU. The multi-scale extension is capable of rendering finely detailed textures in real-time.
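As a hedged illustration of how runtime patch swapping can break repetition, the short C++ sketch below hashes tile coordinates to pick, for each patch, one of several precomputed content candidates derived from the input image. This is only a sketch of the general idea; the hash, the patch/candidate organization and the counts are assumptions and do not reproduce the paper's data structures.

// Toy non-periodic texturing: a hash of the tile coordinates selects which
// of several interchangeable contents a patch shows in that tile.
#include <cstdint>
#include <cstdio>

// Deterministic hash so the same tile always picks the same candidate (no popping).
uint32_t hashTile(int tileX, int tileY, int patchId) {
    uint32_t h = uint32_t(tileX) * 374761393u + uint32_t(tileY) * 668265263u
               + uint32_t(patchId) * 2246822519u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return h ^ (h >> 16);
}

// For a texel inside tile (tileX, tileY) falling into patch patchId,
// return which of nCandidates precomputed contents should be used.
int selectPatchContent(int tileX, int tileY, int patchId, int nCandidates) {
    return int(hashTile(tileX, tileY, patchId) % uint32_t(nCandidates));
}

int main() {
    // The same patch gets different contents in different tiles,
    // so the tiling never repeats exactly.
    for (int ty = 0; ty < 2; ++ty)
        for (int tx = 0; tx < 3; ++tx)
            std::printf("tile (%d,%d), patch 0 -> candidate %d\n",
                        tx, ty, selectPatchContent(tx, ty, 0, 4));
    return 0;
}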


Robust Fitting on Poorly Sampled Data for Surface Light Field Rendering and Image Relighting, CGF Vol. 32(6), 2013
 K. Vanhoey, B. Sauvage, O. Génevaux, F. Larue and Jean-Michel Dischler

Abstract. 2D parametric color functions are widely used in Image-Based Rendering and Image Relighting. They make it possible to express the color of a point depending on a continuous directional parameter: the viewing or the incident light direction. Producing such functions from acquired data is promising but difficult. Indeed, an intensive acquisition process resulting in dense and uniform sampling is not always possible. Conversely, a simpler acquisition process results in sparse, scattered and noisy data on which parametric functions can hardly be fitted without introducing artifacts.
Within this context, we present two contributions. The first one is a robust least-squares based method for fitting 2D parametric color functions on sparse and scattered data. Our method works for any amount and distribution of acquired data, as well as for any function expressed as a linear combination of basis functions. We tested our fitting for both image-based rendering (surface light fields) and image relighting using polynomials and spherical harmonics. The second one is a statistical analysis to measure the robustness of any fitting method. This measure assesses a trade-off between precision of the fitting and stability w.r.t. input sampling conditions. This analysis along with visual results confirm that our fitting method is robust and reduces reconstruction artifacts for poorly sampled data while preserving the precision for a dense and uniform sampling.
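For illustration, the C++ sketch below fits a directional color function expressed as a linear combination of basis functions to a handful of scattered samples using regularized least squares (normal equations with a Tikhonov term). It is a plain stand-in, not the robust fitting of the paper; the quadratic polynomial basis, the sample values and the regularization weight lambda are assumptions.

// Regularized least-squares fit of basis-function coefficients (illustration only).
#include <vector>
#include <cmath>
#include <cstdio>

// Basis functions for a direction (u, v) in the parametric domain: 1, u, v, u^2, u*v, v^2.
std::vector<double> basis(double u, double v) {
    return {1.0, u, v, u * u, u * v, v * v};
}

// Solve (A^T A + lambda I) c = A^T y by Gauss-Jordan elimination with partial pivoting.
std::vector<double> fit(const std::vector<std::vector<double>>& A,
                        const std::vector<double>& y, double lambda) {
    size_t n = A[0].size();
    std::vector<std::vector<double>> M(n, std::vector<double>(n + 1, 0.0));
    for (size_t i = 0; i < n; ++i) {
        M[i][i] = lambda;                       // Tikhonov term stabilizes sparse data
        for (size_t j = 0; j < n; ++j)
            for (size_t s = 0; s < A.size(); ++s) M[i][j] += A[s][i] * A[s][j];
        for (size_t s = 0; s < A.size(); ++s) M[i][n] += A[s][i] * y[s];
    }
    for (size_t col = 0; col < n; ++col) {      // elimination
        size_t piv = col;
        for (size_t r = col + 1; r < n; ++r)
            if (std::fabs(M[r][col]) > std::fabs(M[piv][col])) piv = r;
        std::swap(M[col], M[piv]);
        for (size_t r = 0; r < n; ++r) {
            if (r == col) continue;
            double f = M[r][col] / M[col][col];
            for (size_t c = col; c <= n; ++c) M[r][c] -= f * M[col][c];
        }
    }
    std::vector<double> coeffs(n);
    for (size_t i = 0; i < n; ++i) coeffs[i] = M[i][n] / M[i][i];
    return coeffs;
}

int main() {
    // A few scattered directional samples (u, v, observed color channel).
    double samples[][3] = {{0.1, 0.2, 0.9}, {0.8, 0.1, 0.4}, {0.3, 0.7, 0.6},
                           {0.5, 0.5, 0.5}, {0.9, 0.9, 0.2}};
    std::vector<std::vector<double>> A;
    std::vector<double> y;
    for (auto& s : samples) { A.push_back(basis(s[0], s[1])); y.push_back(s[2]); }
    std::vector<double> c = fit(A, y, 1e-3);
    for (double ci : c) std::printf("%f ", ci);
    std::printf("\n");
    return 0;
}

Note that with five samples and six basis functions the plain normal equations would be singular; the regularization term is what keeps the toy fit well-posed, which echoes the sparse-sampling issue the paper addresses.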


Pre-Integrated Volume Rendering with Non-Linear Gradient Interpolation, IEEE Vis 2010
Amel Guetat, Alexandre Ancel, Stephane Marchesin, and Jean-Michel Dischler

Abstract. Shading is an important feature for the comprehension of volume datasets, but is difficult to implement accurately. Current techniques based on pre-integrated direct volume rendering approximate the volume rendering integral by ignoring non-linear gradient variations between front and back samples, which might result in accumulated shading errors when gradient variations are important and/or when the illumination function features high frequencies. In this paper, we explore a simple approach to pre-integrated volume rendering with non-linear gradient interpolation between front and back samples. We consider that the gradient varies smoothly along a quadratic curve, instead of a segment, in between consecutive samples. This not only allows us to compute more accurate shaded pre-integrated look-up tables, but also to process shading-amplifying effects based on gradient filtering more efficiently. An interesting property is that the pre-integration tables we use remain two-dimensional, as for usual pre-integrated classification. We conduct experiments using a full hardware approach with the Blinn-Phong illumination model as well as with a non-photorealistic illumination model.
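As a purely numerical illustration of the non-linear gradient interpolation idea (not the paper's table-based method), the C++ sketch below lets the gradient follow a quadratic (Bezier-like) curve between the front and back samples and averages Blinn-Phong shading along it. The middle control gradient, the step count and the material constants are assumptions.

// Quadratic gradient interpolation across a slab, shaded with Blinn-Phong (illustration only).
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(float s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 a) { float l = std::sqrt(dot(a, a)); return (1.0f / l) * a; }

// Quadratic gradient interpolation between front (gf) and back (gb) samples,
// bending through a control gradient gc, for t in [0,1].
Vec3 quadraticGradient(Vec3 gf, Vec3 gc, Vec3 gb, float t) {
    float a = (1 - t) * (1 - t), b = 2 * t * (1 - t), c = t * t;
    return a * gf + b * gc + c * gb;
}

// Blinn-Phong intensity for a normalized normal.
float blinnPhong(Vec3 n, Vec3 light, Vec3 view, float shininess) {
    Vec3 h = normalize(light + view);
    float diff = std::fmax(dot(n, light), 0.0f);
    float spec = std::pow(std::fmax(dot(n, h), 0.0f), shininess);
    return 0.1f + 0.7f * diff + 0.2f * spec;   // ambient + diffuse + specular
}

// Average shaded intensity over the front-to-back slab
// (numerical stand-in for one pre-integrated table entry).
float slabShading(Vec3 gf, Vec3 gc, Vec3 gb, Vec3 light, Vec3 view, int steps) {
    float sum = 0.0f;
    for (int i = 0; i < steps; ++i) {
        float t = (i + 0.5f) / steps;
        Vec3 n = normalize(quadraticGradient(gf, gc, gb, t));
        sum += blinnPhong(n, light, view, 32.0f);
    }
    return sum / steps;
}

int main() {
    Vec3 gf = {0, 0, 1}, gb = {1, 0, 0}, gc = {0.7f, 0.2f, 0.7f};  // toy gradients
    Vec3 light = normalize({1, 1, 1}), view = {0, 0, 1};
    std::printf("slab intensity: %f\n", slabShading(gf, gc, gb, light, view, 16));
    return 0;
}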

 

Last update August 2018