Jean-Michel DISCHLER

Professor at the University of Strasbourg.
E-mail: dischler -at- unistra -dot- fr
ICUBE, UMR CNRS-UDS 7357 (Laboratoire des sciences de l'ingénieur, de l'informatique et de l'imagerie)
300 bd Sébastien Brant - BP 10413 - F-67412 Illkirch Cedex
Tel: (+33) 03 68 85 45 59, Fax: (+33) 03 68 85 44 55


 


Jean-Michel Dischler is a full Professor of Computer Science at the University of Strasbourg (Unistra). In terms of local duties, he headed the Department of Computer Science at the Faculty of Mathematics and Computer Science, was vice-director of the former LSIIT lab until 2008, was responsible for the master's degree in Image and Computing (IICI) until 2009, and was joint director of the 3D Computer Graphics group (IGG) in the new ICUBE lab until 2018. He currently leads the rendering and visualization research activities. He was a member of the former INRIA project CALVI (scientific computing and visualization), which ended in 2010. His main research interests include texture synthesis and rendering, 3D acquisition, high-performance graphics, direct volume rendering of voxel data, and procedural modeling of natural phenomena. He has served regularly on program committees, including Eurographics, Pacific Graphics, EGSR, EG Parallel Graphics and Visualization, the EG Symposium on Natural Phenomena and EuroVis, and chaired the Eurographics conference steering committee. He has also served as associate editor of journals such as Computer Graphics Forum and The Visual Computer. He was co-founder and vice-president of the French chapter of Eurographics. He is a fellow of the Eurographics Association and chaired the EG professional board for a decade. In 2021, he became chairman of the Eurographics Association. He organized major international conferences hosted in Strasbourg: EG 2014, as well as the Eurographics Symposium on Rendering (EGSR) and High-Performance Graphics (HPG) conferences in 2019.

 

Funded research projects and developments

ASTex

a C++ library for texture generation

Selected research results

 

Cyclostationary Gaussian noise: theory and synthesis, CGF Vol.40(2), Eurographics 2021

Nicolas Lutz, Basile Sauvage and Jean-Michel Dischler

Abstract. Stationary Gaussian processes have been used for decades in the context of procedural noises to model and synthesize textures with no spatial organization. In this paper we investigate cyclostationary Gaussian processes, whose statistics are repeated periodically. It enables the modeling of noises having periodic spatial variations, which we call "cyclostationary Gaussian noises". We adapt to the cyclostationary context several stationary noises along with their synthesis algorithms: spot noise, Gabor noise, local random-phase noise, high-performance noise, and phasor noise. We exhibit real-time synthesis of a variety of visual patterns having periodic spatial variations.
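
As a reminder for readers unfamiliar with the term, a cyclostationary process is one whose statistics are periodic functions of position. A minimal sketch of that property (not the paper's synthesis algorithm) is a stationary random-phase noise whose variance is modulated by a periodic envelope; the period, frequency band and sample counts below are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 8.0, 4096)

# Stationary part: a sum of cosines with random phases (a Gaussian-like noise).
freqs = rng.uniform(4.0, 16.0, size=64)           # hypothetical frequency band
phases = rng.uniform(0.0, 2.0 * np.pi, size=64)
stationary = np.cos(2.0 * np.pi * np.outer(x, freqs) + phases).sum(axis=1)
stationary /= np.sqrt(len(freqs) / 2.0)           # roughly unit variance

# Cyclostationary part: the variance is modulated by a T-periodic envelope,
# so second-order statistics repeat with period T instead of being constant.
T = 1.0
envelope = 0.5 + 0.5 * np.cos(2.0 * np.pi * x / T)
noise = envelope * stationary                     # periodically varying variance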

Semi-Procedural Textures using Point Process Texture Basis Functions, CGF Vol.39(4), EGSR 2020

P. Guehl, R. Allègre, J.-M. Dischler, B. Benes, E. Galin

Abstract. We introduce a novel semi-procedural approach that avoids drawbacks of procedural textures and leverages advantages of data-driven texture synthesis. We split synthesis into two parts: 1) structure synthesis, based on a procedural parametric model, and 2) color detail synthesis, which is data-driven. The procedural model consists of a generic Point Process Texture Basis Function (PPTBF), which extends sparse convolution noises by defining rich convolution kernels. They consist of a window function multiplied with a correlated statistical mixture of Gabor functions, both designed to encapsulate a large span of common spatial stochastic structures, including cells, cracks, grains, scratches, spots, stains, and waves. Parameters can be prescribed automatically by supplying binary structure exemplars. As for noise-based Gaussian textures, the PPTBF is used as a stand-alone function, avoiding the classification tasks that occur when handling multiple procedural assets. Because the PPTBF is based on a single set of parameters, it allows for continuous transitions between different visual structures and easy control over its visual characteristics. Color is consistently synthesized from the exemplar using a multiscale parallel texture synthesis by numbers, constrained by the PPTBF. The generated textures are parametric, infinite and avoid repetition. The data-driven part is automatic and guarantees strong visual resemblance with inputs.
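
For illustration only, the general shape of a sparse convolution with a window-times-Gabor kernel can be sketched as follows. This is a schematic reading of the abstract with made-up parameters (a jittered-grid point process, a Gaussian window and a single Gabor term), not the actual PPTBF:

import numpy as np

rng = np.random.default_rng(1)
res, cells = 256, 8                       # output resolution, point-process grid size
ys, xs = np.mgrid[0:res, 0:res] / res     # pixel coordinates in [0, 1)^2

value = np.zeros((res, res))
for i in range(cells):
    for j in range(cells):
        # Jittered-grid point process (stand-in for the paper's point processes).
        px = (i + rng.uniform()) / cells
        py = (j + rng.uniform()) / cells
        dx, dy = xs - px, ys - py
        window = np.exp(-(dx * dx + dy * dy) * (2.0 * cells) ** 2)   # window function
        theta = rng.uniform(0.0, np.pi)                              # random orientation
        freq = 4.0 * cells                                           # hypothetical frequency
        gabor = np.cos(2.0 * np.pi * freq * (dx * np.cos(theta) + dy * np.sin(theta)))
        value += window * gabor            # kernel = window function x Gabor term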

Procedural Physically based BRDF for Real-Time Rendering of Glints, CGF Vol.39(7), PG 2020

Xavier Chermain, Basile Sauvage, Jean-Michel Dischler, Carsten Dachsbacher

Abstract. Physically based rendering of glittering surfaces is a challenging problem in computer graphics. Several methods have proposed off-line solutions, but none is dedicated to high-performance graphics. In this work, we propose a novel physically based BRDF for real-time rendering of glints. Our model can reproduce the appearance of sparkling materials (rocks, rough plastics, glitter fabrics, etc.). Compared to the previous real-time method [Zirr et al. 2016], which is not physically based, our BRDF uses normalized NDFs and converges to the standard microfacet BRDF [Cook and Torrance 1982] for a large number of microfacets. Our method procedurally computes NDFs with hundreds of sharp lobes. It relies on a dictionary of 1D marginal distributions: at each location two of them are randomly picked and multiplied (to obtain an NDF), rotated (to increase the variety), and scaled (to control standard deviation/roughness). The dictionary is multiscale, does not depend on roughness, and has a low memory footprint (less than 1 MiB).
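
The NDF construction can be pictured roughly as below, assuming a hypothetical dictionary of discretized 1D marginals: two marginals are picked per surface cell, multiplied into a 2D distribution, and then perturbed to add variety and control roughness. The names, sizes and rotation/scaling stand-ins are illustrative, not the paper's implementation:

import numpy as np

rng = np.random.default_rng(2)
N, K = 64, 16                                    # slope-domain resolution, dictionary size

# Hypothetical dictionary of 1D marginals (sharp positive lobes, each normalized).
dictionary = rng.random((K, N)) ** 8
dictionary /= dictionary.sum(axis=1, keepdims=True)

def cell_ndf(cell_id, roughness=0.3):
    # Procedural NDF of one surface cell: product of two random 1D marginals,
    # then perturbed; the result stays a normalized distribution over slopes.
    cell_rng = np.random.default_rng(cell_id)    # deterministic per cell, no storage
    m1 = dictionary[cell_rng.integers(K)]
    m2 = dictionary[cell_rng.integers(K)]
    ndf = np.outer(m1, m2)                       # separable product over the slope domain
    ndf = np.rot90(ndf, k=cell_rng.integers(4))  # crude stand-in for a random rotation
    ndf = (1.0 - roughness) * ndf + roughness / ndf.size   # crude roughness/scaling control
    return ndf / ndf.sum()

sharp_lobes = cell_ndf(42)                       # a 64 x 64 NDF with many sharp lobes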

Bi-Layer Textures: A Model for Synthesis and Deformation of Composite Textures, CGF Vol.36(4), EGSR 2017

Geoffrey Guingo, Basile Sauvage, Jean-Michel Dischler, Marie-Paule Cani

Abstract. We propose a bi-layer representation for textures which is suitable for on-the-fly synthesis of unbounded textures from an input exemplar. The goal is to improve the variety of outputs while preserving plausible small-scale details. The insight is that many natural textures can be decomposed into a series of fine scale Gaussian patterns which have to be faithfully reproduced, and some non-homogeneous, larger scale structure which can be deformed to add variety. Our key contribution is a novel, bi-layer representation for such textures. It includes a model for spatially-varying Gaussian noise, together with a mechanism enabling synchronization with a structure layer. We propose an automatic method to instantiate our bi-layer model from an input exemplar. At the synthesis stage, the two layers are generated independently, synchronized and added, preserving the consistency of details even when the structure layer has been deformed to increase variety. We show on a variety of complex, real textures, that our method reduces repetition artifacts while preserving a coherent appearance.
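
One possible way to picture the layer synchronization, with entirely made-up layers and a toy deformation field, is sketched below; the actual bi-layer model and its spatially-varying noise parameterization are more involved:

import numpy as np

rng = np.random.default_rng(3)
res = 256
structure = rng.random((res, res))           # stand-in for the synthesized structure layer
details = rng.normal(0.0, 0.1, (res, res))   # stand-in for the Gaussian detail layer

# A smooth deformation field used to add variety to the structure layer.
ys, xs = np.mgrid[0:res, 0:res]
wx = (xs + 8.0 * np.sin(2.0 * np.pi * ys / res)).astype(int) % res
wy = (ys + 8.0 * np.cos(2.0 * np.pi * xs / res)).astype(int) % res

# Synchronization sketch: the detail layer is looked up through the same warp as the
# structure layer before the two are added, so fine details follow the deformation.
output = structure[wy, wx] + details[wy, wx]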


Multi-Scale Label-Map Extraction for Texture Synthesis, Siggraph 2016
Y. D. Lockerman, B. Sauvage, R. Allègre, J. M. Dischler, J. Dorsey, H. Rushmeier

Abstract. Texture synthesis is a well-established area, with many important applications in computer graphics and vision. However, despite their success, synthesis techniques are not used widely in practice because the creation of good exemplars remains challenging and extremely tedious. In this paper, we introduce an unsupervised method for analyzing texture content across multiple scales that automatically extracts good exemplars from natural images. Unlike existing methods, which require extensive manual tuning, our method is fully automatic. This allows the user to focus on using texture palettes derived from their own images, rather than on manual interactions dictated by the needs of an underlying algorithm.
Most natural textures exhibit patterns at multiple scales that may vary according to location (non-stationarity). To handle such textures, many synthesis algorithms rely on an analysis of the input and a guidance of the synthesis. Our new analysis is based on a labeling of texture patterns that is both (i) multi-scale and (ii) unsupervised; that is, patterns are labeled at multiple scales, and the scales and the number of labeled clusters are selected automatically. Our method works in two stages: the first builds a hierarchical extension of superpixels; the second labels the superpixels based on a random walk in a graph of similarities between superpixels and a nonnegative matrix factorization. Our label-maps provide descriptors for pixels and regions that benefit state-of-the-art texture synthesis algorithms. We show several applications, including guidance of non-stationary synthesis, content selection and texture painting. Our method is designed to treat large inputs and can scale to many megapixels. In addition to traditional exemplar inputs, our method can also handle natural images containing different textured regions.
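
A toy version of the second stage (labeling) might look like the sketch below, assuming random stand-in superpixel descriptors, a few random-walk steps on a similarity graph, and a plain multiplicative-update NMF; it only illustrates the kind of computation involved, not the paper's actual pipeline:

import numpy as np

rng = np.random.default_rng(4)
n_sp, n_feat, n_labels = 200, 16, 5
feats = rng.random((n_sp, n_feat))                 # stand-in superpixel descriptors

# Similarity graph between superpixels, turned into a random-walk transition matrix.
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / d2.mean())
P = W / W.sum(axis=1, keepdims=True)
P = np.linalg.matrix_power(P, 4)                   # a few random-walk steps smooth similarities

# Plain multiplicative-update NMF, P ~ A @ B with nonnegative factors.
A = rng.random((n_sp, n_labels))
B = rng.random((n_labels, n_sp))
for _ in range(200):
    A *= (P @ B.T) / (A @ B @ B.T + 1e-9)
    B *= (A.T @ P) / (A.T @ A @ B + 1e-9)

labels = A.argmax(axis=1)                          # one label per superpixel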


Local random-phase noise for procedural texturing, Siggraph Asia 2014
G. Gilet, B. Sauvage, K. Vanhoey, J.-M. Dischler, D. Ghazanfarpour

Abstract. Local random-phase noise is an efficient noise model for procedural texturing. It is defined on a regular spatial grid by local noises, which are sums of cosines with random phase. Our model is versatile thanks to separate samplings in the spatial and spectral domains. Therefore, it encompasses Gabor noise and noise by Fourier series. A stratified spectral sampling allows for a faithful yet compact and efficient reproduction of an arbitrary power spectrum. Noise by example is therefore obtained faster than state-of-the-art techniques. As a second contribution we address texture by example and generate not only Gaussian patterns but also structured features present in the input. This is achieved by fixing the phase on some part of the spectrum. Generated textures are continuous and non-repetitive. Results show unprecedented framerates and a flexible visual result: users can modify noise parameters to interactively edit visual variants.
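
The basic ingredient, a sum of cosines with random phases whose frequencies are stratified over the spectrum, can be sketched as follows; the band layout and amplitudes are hypothetical, and the local grid structure of the model is omitted:

import numpy as np

rng = np.random.default_rng(5)
res, n_bands, per_band = 256, 8, 4
ys, xs = np.mgrid[0:res, 0:res] / res

noise = np.zeros((res, res))
for band in range(n_bands):
    radius = 2.0 ** (1 + 0.5 * band)               # hypothetical annulus of the spectrum
    for _ in range(per_band):                      # stratified: a few frequencies per annulus
        theta = rng.uniform(0.0, 2.0 * np.pi)
        fx, fy = radius * np.cos(theta), radius * np.sin(theta)
        amp = 1.0 / radius                         # hypothetical target power spectrum
        phase = rng.uniform(0.0, 2.0 * np.pi)      # random phase -> Gaussian-like pattern
        noise += amp * np.cos(2.0 * np.pi * (fx * xs + fy * ys) + phase)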


On-the-Fly Multi-Scale Infinite Texturing from Example, Siggraph Asia 2013
 K. Vanhoey, B. Sauvage, F. Larue and Jean-Michel Dischler

Abstract. In computer graphics, rendering visually detailed scenes is often achieved through texturing. We propose a method for on-the-fly non-periodic infinite texturing of surfaces based on a single image. Pattern repetition is avoided by defining patches within each texture whose content can be changed at runtime. In addition, we consistently manage multi-scale using one input image per represented scale. Undersampling artifacts are avoided by accounting for fine-scale features while colors are transferred between scales. Eventually, we allow for relief-enhanced rendering and provide a tool for intuitive creation of height maps. This is done using an ad-hoc local descriptor that measures feature self-similarity in order to propagate height values provided by the user for a few selected texels only. Thanks to the patch-based system, manipulated data are compact and our texturing approach is easy to implement on GPU. The multi-scale extension is capable of rendering finely detailed textures in real-time.
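
The repetition-avoidance idea (changing which part of the exemplar a given output tile samples) can be illustrated with a deliberately simplified lookup; the patch layout, hashing and sizes below are made up and ignore the multi-scale and relief components:

import numpy as np

rng = np.random.default_rng(6)
exemplar = rng.random((128, 128, 3))     # stand-in for the single input image
patch = 32                               # tile/patch size in texels
n_src = 4                                # candidate source patches per axis

def texel(u, v):
    # Infinite, non-periodic lookup: each output tile hashes to one source patch,
    # so the content sampled by a tile can change without any global period.
    tile = (u // patch, v // patch)
    h = np.random.default_rng(hash(tile) & 0xFFFFFFFF)
    su = h.integers(n_src) * patch + u % patch
    sv = h.integers(n_src) * patch + v % patch
    return exemplar[sv, su]

color = texel(1000003, -47)              # valid for any (u, v)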


Robust Fitting on Poorly Sampled Data for Surface Light Field Rendering and Image Relighting, CGF Vol. 32(6), 2013
 K. Vanhoey, B. Sauvage, O. Génevaux, F. Larue and Jean-Michel Dischler

Abstract. 2D parametric color functions are widely used in Image-Based Rendering and Image Relighting. They make it possible to express the color of a point depending on a continuous directional parameter: the viewing or the incident light direction. Producing such functions from acquired data is promising but difficult. Indeed, an intensive acquisition process resulting in dense and uniform sampling is not always possible. Conversely, a simpler acquisition process results in sparse, scattered and noisy data on which parametric functions can hardly be fitted without introducing artifacts.
Within this context, we present two contributions. The first one is a robust least-squares based method for fitting 2D parametric color functions on sparse and scattered data. Our method works for any amount and distribution of acquired data, as well as for any function expressed as a linear combination of basis functions. We tested our fitting for both image-based rendering (surface light fields) and image relighting using polynomials and spherical harmonics. The second one is a statistical analysis to measure the robustness of any fitting method. This measure assesses a trade-off between precision of the fitting and stability w.r.t. input sampling conditions. This analysis along with visual results confirm that our fitting method is robust and reduces reconstruction artifacts for poorly sampled data while preserving the precision for a dense and uniform sampling.
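
A generic, ridge-regularized least-squares fit of a directional color function in a hypothetical low-order polynomial basis is sketched below; it shows the general fitting setup only, not the paper's robustness criterion or its spherical-harmonics basis:

import numpy as np

rng = np.random.default_rng(7)

def basis(dirs):
    # Hypothetical low-order 2D polynomial basis over a direction parameterization (u, v).
    u, v = dirs[:, 0], dirs[:, 1]
    return np.stack([np.ones_like(u), u, v, u * v, u * u, v * v], axis=1)

# Sparse, scattered, noisy samples: a few directions with observed RGB colors.
dirs = rng.uniform(-1.0, 1.0, (15, 2))
colors = rng.random((15, 3))

# Ridge-regularized least squares: the small regularization term keeps the fit
# stable when sampling is poor, at the price of a slight bias toward zero.
B = basis(dirs)
lam = 1e-2
coeffs = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ colors)

fitted = basis(np.array([[0.2, -0.5]])) @ coeffs   # color predicted for a new direction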


Pre-Integrated Volume Rendering with Non-Linear Gradient Interpolation, IEEE Vis 2010
Amel Guetat, Alexandre Ancel, Stéphane Marchesin, and Jean-Michel Dischler
Abstract. Shading is an important feature for the comprehension of volume datasets, but is difficult to implement accurately. Current techniques based on pre-integrated direct volume rendering approximate the volume rendering integral by ignoring non-linear gradient variations between front and back samples, which may result in accumulated shading errors when gradient variations are important and/or when the illumination function features high frequencies. In this paper, we explore a simple approach for pre-integrated volume rendering with non-linear gradient interpolation between front and back samples. We consider that the gradient smoothly varies along a quadratic curve instead of a segment between consecutive samples. This not only allows us to compute more accurate shaded pre-integrated look-up tables, but also allows us to process shading-amplifying effects, based on gradient filtering, more efficiently. An interesting property is that the pre-integration tables we use remain two-dimensional, as for usual pre-integrated classification. We conduct experiments using a full hardware approach with the Blinn-Phong illumination model as well as with a non-photorealistic illumination model.
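
To make the idea concrete, the sketch below compares shading averaged along a linear versus a quadratic gradient curve between a front and a back sample; the mid gradient, the diffuse-only shading and the numerical averaging are placeholders for the paper's pre-integration tables:

import numpy as np

def shade(g, light=np.array([0.0, 0.0, 1.0])):
    # Placeholder diffuse term standing in for the full Blinn-Phong model.
    n = g / (np.linalg.norm(g) + 1e-9)
    return max(float(np.dot(n, light)), 0.0)

g_front = np.array([1.0, 0.0, 0.2])          # gradient at the front sample
g_back = np.array([0.0, 1.0, 0.2])           # gradient at the back sample
g_mid = np.array([0.3, 0.3, 1.0])            # hypothetical extra gradient sample

t = np.linspace(0.0, 1.0, 65)
linear = [shade((1 - s) * g_front + s * g_back) for s in t]
# Quadratic (Bezier-like) curve through g_front, g_mid, g_back instead of a segment.
quadratic = [shade((1 - s) ** 2 * g_front + 2 * s * (1 - s) * g_mid + s ** 2 * g_back)
             for s in t]

# Averaged shading over the segment, the kind of value a pre-integrated table stores.
avg_linear, avg_quadratic = np.trapz(linear, t), np.trapz(quadratic, t)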

 

Last update January 2022