Creating the Lantos animation
This post outlines our process of creating the animation of intra-aural imaging with a new scanner developed by Lantos Technologies. The animation included both humanoid and mechanical models and we used a range of techniques that can be applied to other projects as well. For reference, here is the final animation:
Meet with AMPS and the Lantos team:
In this first meeting we made sure everyone was on the same page as to the content of the animation, the technical aspects of the intra-aural scanning process, and the level of detail required. We even watched a demo of the device.
Create a storyboard:
Violeta created a storyboard that outlined the main story points of the animation and sent it to AMPS and the Lantos team. This catalyzed a discussion about misconceptions and rendering style. The storyboard takes the script to a visual level and serves as a guide for creating animations and camera movements later.
Build 3D models:
While Violeta puzzled out camera angles and story points, Krista created the 3D models. The Lantos team provided a SolidWorks mesh of their device, which we imported into Maya. The human head (with an inner ear canal) and the CM membrane, the device’s inflatable balloon, had to be created from scratch.
To ensure the human head was symmetrical, only one half was sculpted at a time. When the geometry on one side of the face changed, Maya updated the other side in real time. The two halves were eventually “sewn” together. The next screenshot illustrates this symmetrical modeling technique.
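The mirroring behavior can be sketched outside Maya; here is a toy example in plain Python (not Maya's API), assuming a mesh stored as a list of (x, y, z) vertices mirrored across the YZ plane:

```python
# Toy sketch (plain Python, not Maya's API) of symmetrical modeling:
# edits to one half of a mesh are mirrored across the YZ plane (x -> -x).

def mirror_half(vertices):
    """Given the sculpted half (x >= 0), return the full symmetrical vertex set."""
    # Vertices on the seam (x == 0) are shared by both halves, so skip them.
    mirrored = [(-x, y, z) for (x, y, z) in vertices if x > 0]
    return vertices + mirrored

half = [(0.0, 1.0, 0.0), (0.5, 0.8, 0.1), (1.0, 0.0, 0.2)]
full = mirror_half(half)
# The seam vertex stays shared; the other two gain mirrored twins.
```

Maya performs this continuously while sculpting, so an edit on one cheek appears instantly on the other.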
We made many attempts to create hair, eyelashes, and eyebrows, using Maya’s hair, fur, and paint effects tools, and we obtained some promising results, but we opted not to include hair in the final render. Without hair, the head looked like a plausible mannequin; with hair, the head blurred the lines between realism and abstraction. The end result of a head with hair looked like the image below, in the uncomfortable “uncanny valley” of nearly human, but not quite.
Create UV texture maps:
We texture mapped only the human head and eyes. We used procedural materials for the other models, since they did not need a specific pattern or color the way a human face does. Our head model had only a few nooks and crannies, so spherical projection mapping was ideal for creating a map without much overlap.
Here is an image of what the UV map of the head looked like without color:
Krista used a photo of her own face to simulate some of the skin color and texture, and touched up the eyes and nose with a paintbrush in Photoshop. The biggest challenge in UV mapping is avoiding visible “seams”: the color must match on either side of any separated UV edges.
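Spherical projection mapping amounts to converting each vertex position to longitude and latitude and using those angles as UV coordinates; a minimal sketch in plain Python (not Maya's projection code):

```python
import math

def spherical_uv(x, y, z):
    """Project a 3D point onto (u, v) via spherical coordinates:
    u comes from the azimuth (longitude), v from the elevation (latitude)."""
    r = math.sqrt(x * x + y * y + z * z)
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)   # longitude mapped to [0, 1]
    v = 0.5 - math.asin(y / r) / math.pi           # latitude mapped to [0, 1]
    return u, v

# A point straight ahead on the +x axis lands at the center of the map.
print(spherical_uv(1.0, 0.0, 0.0))  # (0.5, 0.5)
```

Overlap only appears where the projection folds back on itself, which is why a mostly convex head maps cleanly.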
Here is what the final texture map of the face looked like:
For the human head model, we also used a subsurface scattering material, which mimics organic materials that are translucent near their surface. When light hits the model, some passes through and is scattered beneath the surface of the material: this is what makes ears glow red when you shine a light through them.
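The depth-dependent falloff that subsurface scattering mimics can be illustrated with a simple Beer-Lambert model; the absorption coefficients below are illustrative assumptions, not measured values:

```python
import math

def transmitted_fraction(depth_mm, absorption_per_mm):
    """Beer-Lambert falloff: light entering a translucent material decays
    exponentially with depth before re-emerging (the visible glow)."""
    return math.exp(-absorption_per_mm * depth_mm)

# Illustrative (not measured) coefficients: tissue absorbs red far less
# than blue, so red dominates the light passing through a ~3 mm ear.
red  = transmitted_fraction(3.0, 0.3)
blue = transmitted_fraction(3.0, 1.2)
```

Thin regions such as the ear pass much more light than thick ones, which is exactly the cue a subsurface scattering material reproduces.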
We used a variety of deformers to create animations of the device’s CM inserting, inflating, scanning, deflating, and retracting, including:
Blend shapes
This deformer interpolates between two or more models with the same number of vertices, e.g., a sphere morphing into a cube. We used it for inflating and deflating the CM membrane, morphing the ear canal, and opening the jaw.
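In plain Python, this kind of blend-shape interpolation reduces to per-vertex linear interpolation between matching vertex lists (a toy sketch, not Maya's implementation):

```python
def blend(base, target, weight):
    """Linearly interpolate matching vertices: weight 0 gives the base
    shape, weight 1 gives the target, values between morph smoothly."""
    assert len(base) == len(target)  # blend shapes require identical topology
    return [
        tuple(b + weight * (t - b) for b, t in zip(vb, vt))
        for vb, vt in zip(base, target)
    ]

# Hypothetical two-vertex "jaw" shapes for illustration.
closed_jaw = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
open_jaw   = [(0.0, -0.4, 0.0), (1.0, -0.1, 0.0)]
halfway = blend(closed_jaw, open_jaw, 0.5)  # jaw half open
```

Animating the weight over time is what produces the inflation, deflation, and jaw-opening motion.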
Lattice deformer
This creates a “cage,” or lattice, around a model; deforming the cage deforms the model inside. We used this deformer to make the CM membrane conform to the shape of the ear canal during insertion and retraction.
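A free-form lattice deformation can be sketched as trilinear interpolation through a 2×2×2 cage of control points; moving a cage corner drags nearby geometry with it (a toy model, not Maya's lattice):

```python
def trilinear_deform(point, lattice):
    """Map a point in the unit cube through a 2x2x2 lattice of control
    points, where lattice[i][j][k] is the deformed corner at (i, j, k)."""
    x, y, z = point
    result = [0.0, 0.0, 0.0]
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                # Trilinear weight of this corner for the given point.
                w = ((x if i else 1 - x) *
                     (y if j else 1 - y) *
                     (z if k else 1 - z))
                for axis in range(3):
                    result[axis] += w * lattice[i][j][k][axis]
    return tuple(result)

# An undeformed lattice (corners at rest) leaves every point unmoved.
identity = [[[(i, j, k) for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]

# Squash the cage: pull the four top corners down to y = 0.5; points
# inside follow, the way the membrane conforms to the ear canal wall.
squashed = [[[(i, 0.5 * j, k) for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
```

Maya's lattices use more control points and smoother basis functions, but the principle is the same.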
Bend deformer
We used this nonlinear deformer to bend the deflated CM membrane.
Translations and rotations
These were used to animate the movement of the cameras and the device. To simplify the animation, we animated only the CM membrane and “parented” the rest of the device to it, so that the device followed the membrane’s insertion and retraction movements.
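Parenting boils down to composing the child's transform with the parent's; a minimal sketch in plain Python (translation only, ignoring rotation and scale):

```python
class Node:
    """Minimal transform hierarchy: a child's world position is its
    local offset added to the parent's world position."""
    def __init__(self, local, parent=None):
        self.local = list(local)
        self.parent = parent

    def world(self):
        if self.parent is None:
            return tuple(self.local)
        px, py, pz = self.parent.world()
        return (px + self.local[0], py + self.local[1], pz + self.local[2])

membrane = Node((0.0, 0.0, 0.0))
# Hypothetical offset: the device body hangs 2 units below the membrane.
device = Node((0.0, -2.0, 0.0), parent=membrane)

membrane.local[2] += 5.0  # animate only the membrane (insertion along z)
print(device.world())     # the parented device follows: (0.0, -2.0, 5.0)
```

One keyframed object thus carries the whole assembly, which is exactly why parenting simplified the insertion and retraction shots.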
Render the animation:
We rendered the animation from multiple camera angles and included several render passes per camera: a beauty (color) layer, a transparency layer, and a separate layer for the CM membrane and the probe. These layers were combined in After Effects. To save rendering time, we excluded some render features, e.g., final gather, which simulates light bouncing and reflecting off every object in a scene to give realistically lit and colored results.
Here is an image with final gather on the left and without it on the right (the left took 37 seconds to render, the right took 7 seconds):
Composite images into the final animation:
We composited the renders from Maya and added effects in After Effects. This included changing the color of the CM membrane (when it turns orange or blinks a lighter color), creating transparency effects, adding light, and stylizing some of the renders with “cartoon” lines and fills.
Here is a screen shot of one of the After Effects compositions:
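The layer stacking that compositing performs can be modeled with the Porter-Duff “over” operator on premultiplied RGBA values; a minimal sketch in plain Python (the color values are illustrative, not taken from our renders):

```python
def over(fg, bg):
    """Porter-Duff "over": composite a premultiplied RGBA foreground
    onto a premultiplied RGBA background."""
    fr, fgc, fb, fa = fg
    br, bgc, bb, ba = bg
    return (fr + br * (1 - fa),
            fgc + bgc * (1 - fa),
            fb + bb * (1 - fa),
            fa + ba * (1 - fa))

# A half-transparent orange membrane pass over an opaque beauty pass
# (illustrative premultiplied values).
membrane = (0.5, 0.25, 0.0, 0.5)
beauty   = (0.25, 0.25, 0.25, 1.0)
print(over(membrane, beauty))  # (0.625, 0.375, 0.125, 1.0)
```

Stacking separate membrane and probe passes this way is what let us recolor or blink the membrane without re-rendering the whole scene.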
“I try to apply colors like words that shape poems, like notes that shape music.” - Joan Miró