Invited Speakers

© photo by Nils Eisfeld


We are very pleased to announce the following keynote speakers:

Vision »

Capture and modelling of 3D humans in 3D scenes

Siyu Tang, ETH Zürich


In recent years, many high-quality datasets of 3D indoor scenes, such as Replica and Gibson, have emerged that employ 3D scanning and reconstruction technologies to create digital 3D environments. Simulators such as Habitat further place virtual robotic agents inside these environments, enabling scene understanding methods to be developed from embodied viewpoints and providing platforms for indoor robot navigation, AR/VR, computer games and many other applications. Despite this progress, a significant limitation of these environments is that they do not contain people: there are no automated tools to generate realistic humans that interact plausibly with 3D scenes, and creating them manually requires significant artist effort. In this talk, I will showcase my group's latest work towards the capture and synthesis of realistic people interacting realistically with 3D scenes and objects.


Siyu Tang has been an assistant professor in the Department of Computer Science at ETH Zürich since January 2020. In November 2017, she received an early career research grant to start her own research group at the Max Planck Institute for Intelligent Systems, where she had previously been a postdoctoral researcher advised by Dr. Michael Black. She completed her PhD at the Max Planck Institute for Informatics and Saarland University in 2017, under the supervision of Professor Bernt Schiele. Before that, she received her Master’s degree in Media Informatics at RWTH Aachen University, advised by Prof. Bastian Leibe, and her Bachelor’s degree in Computer Science at Zhejiang University, China. She has received several awards for her research, including the Best Paper Award at BMVC 2012 and 3DV 2020, an ELLIS PhD Award and a DAGM-MVTec Dissertation Award.

Data Visualisation »

Immersive Analytics in a Connected World

Tim Dwyer, Monash University (Melbourne, Australia)


We reflect on research from the Monash University Data Visualisation and Immersive Analytics lab since 2005, together with our colleagues elsewhere, spanning network visualisation, human-in-the-loop optimisation and immersive analytics. Network visualisation has the potential to help people better understand the highly connected and complex world in which they live. As researchers, our group are both users of optimisation technologies, creating visualisations that are in some sense optimal, and contributors to those technologies, seeking to build human-centred optimisation tools for many applications. This leads into a discussion of our more recent work in Immersive Analytics, which aims to bring data out of computer centres and into the world around us.


Professor Tim Dwyer is a co-editor of “Immersive Analytics”, published by Springer in 2018 and downloaded over 16,500 times to date. He received his PhD on “Two and a Half Dimensional Visualisation of Relational Networks” from the University of Sydney in 2005. He was a post-doctoral Research Fellow at Monash University from 2005 to 2008 and then a Visiting Researcher at Microsoft Research USA until 2009. From 2009 to 2012, he was a Senior Software Development Engineer with the Visual Studio product group at Microsoft in the USA. He then returned to Monash as a Larkins Fellow, where he now directs the Data Visualisation and Immersive Analytics Lab.

Modeling »

Generative Design Conversion to B-Rep: A Three Year Retrospective

Martin Marinov, Autodesk Geometry Modeling


Generative designs are the geometry outcomes of an automated design exploration process. Designers or engineers input design goals into the generative design software, along with parameters such as spatial requirements, physical loads, materials, manufacturing methods, and cost constraints. The software explores and optimizes many possible solutions to generate design alternatives matching the user requirements.

Several years ago we released novel technology to convert such generative designs into watertight, editable B-Reps. These B-Reps are compatible with most contemporary CAD software and can be instantly integrated into complex assembly models. Hence, our fully automatic conversion enables workflows where the performance of these highly optimized designs can be easily simulated and tested, reducing the time to develop the final product. In contrast, building such CAD models from generative designs with manual tools typically takes several days, slowing product development considerably.

In this talk, we will first present the science and algorithms underpinning our conversion system; then we will leverage the data collected, through the use of the Autodesk Fusion Generative Design software, to illustrate the strengths and limitations of our approach. We will also note potential improvements to existing geometry processing techniques that could benefit generative design conversion. Finally, we will discuss “pain points” where the automatically synthesized CAD models fail to meet the expectations of a designer/engineer.


Martin Marinov is a Sr. Principal Engineer in the Autodesk Geometry Modeling organization, based in Cambridge, UK. He leads the development of the quad parameterization component ReForm, and the conversion of unstructured representations (meshes, level sets, etc.) to structured CAD models consisting of trimmed surfaces and globally smooth T-Spline/subdivision surfaces. His work underpins features in Autodesk products such as Fusion 360, Inventor, AutoCAD, 3ds Max and Maya.
Martin joined Autodesk in 2006 and has also contributed to the Autodesk Shape Manager geometry kernel and the T-Splines surfacing library. He completed his PhD in geometric modeling at RWTH Aachen.