The weather in Lyon is cold this winter – see below the frozen fountain at Place des Terreaux! – but the welcome for ARTEMiS is warm!
Counterclockwise from top: Olivier Rastello, Tom Rivière, and Frédéric Urien (left to right) in ICAP’s editing suite; Frédéric works in Unity; Olivier and Tom welcome ARTEMiS; Place des Terreaux in Lyon; frozen fountain – it’s cold in Lyon!
In this first meeting we made sure everyone was on the same page as to the content of the animation, the technical aspects of the intra-aural scanning process, and the level of detail required. We even watched a demo of the device.
Violeta created a storyboard that outlined the main story points of the animation and sent it to AMPS and the Lantos team. This catalyzed a discussion about misconceptions and rendering style. The storyboard takes the script to a visual level and serves as a guide for creating animations and camera movements later.
While Violeta puzzled out camera angles and story points, Krista created 3D models. The Lantos team provided a 3D SolidWorks mesh of their device, which we imported into Maya. The human head (with an inner ear canal) and the CM membrane, the inflatable balloon of the device, had to be created from scratch.
To ensure the human head was symmetrical, only half was sculpted at a time. When the geometry on one side of the face changed, Maya would update the other side in real time. The two halves were eventually “sewn” together. This next screen shot illustrates the symmetrical modeling technique.
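The mirroring idea can be sketched in plain Python (this is an illustration of the concept, not Maya's actual symmetry feature, which mirrors interactively as you sculpt): vertices of the half-mesh are reflected across the x = 0 symmetry plane, and seam vertices lying on that plane are "sewn" by reusing them instead of duplicating them.

```python
# Illustrative sketch of symmetrical modeling: mirror half of a mesh
# across the x = 0 plane and weld the seam vertices.

def mirror_and_weld(vertices, faces, tol=1e-6):
    """vertices: list of (x, y, z) tuples; faces: lists of vertex indices."""
    mirrored_index = {}
    out_vertices = list(vertices)
    for i, (x, y, z) in enumerate(vertices):
        if abs(x) < tol:                  # seam vertex: reuse, don't duplicate
            mirrored_index[i] = i
        else:                             # interior vertex: add its reflection
            mirrored_index[i] = len(out_vertices)
            out_vertices.append((-x, y, z))
    # Mirrored faces reverse their winding order so normals still face outward.
    out_faces = list(faces) + [
        [mirrored_index[i] for i in reversed(face)] for face in faces
    ]
    return out_vertices, out_faces
```

A triangle with two vertices on the seam, for instance, gains only one new mirrored vertex, and the welded seam leaves no duplicate geometry to cause shading artifacts.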
We made many attempts to create hair, eyelashes, and eyebrows, using Maya’s hair, fur, and paint effects tools, and we obtained some promising results, but we opted not to include hair in the final render. Without hair, the head looked like a plausible mannequin; with hair, the head blurred the lines between realism and abstraction. The end result of a head with hair looked like the image below, in the uncomfortable “uncanny valley” of nearly human, but not quite.
We texture mapped only the human head and eyes. We used procedural materials for the other models, since they did not need a specific pattern or color like a human face does. Our head model had only a few nooks and crannies, so spherical projection mapping was ideal for creating a map without much overlap.
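The core of spherical projection mapping can be sketched in a few lines (Maya's projection node handles more, such as the projection's placement and rotation, but the mapping itself works like this): each vertex is projected onto a surrounding sphere, and its longitude and latitude become the (u, v) texture coordinates.

```python
import math

# Minimal sketch of spherical projection UV mapping: longitude and
# latitude of each point (relative to the projection center) become (u, v).

def spherical_uv(x, y, z):
    """Map a 3D point to (u, v) texture coordinates in [0, 1]."""
    theta = math.atan2(z, x)                 # longitude, -pi..pi
    r = math.sqrt(x * x + y * y + z * z)
    phi = math.acos(y / r)                   # angle from the +y pole, 0..pi
    u = (theta + math.pi) / (2.0 * math.pi)
    v = 1.0 - phi / math.pi
    return u, v
```

Because the head is roughly sphere-shaped, very few points share the same (u, v), which is why this projection produced a map with so little overlap.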
Here is an image of what the UV map of the head looked like without color:
Krista used a photo of her own face to simulate some of the skin color and texture, and touched up the eyes and nose with a paintbrush in Photoshop. The biggest challenge in UV mapping is avoiding visible “seams”: the color must match on either side of any separated UV edges.
Here is what the final texture map of the face looked like:
For the human head model, we also used a subsurface scattering material, which mimics organic materials that are translucent near their surface. When light hits the model, some passes through and is scattered beneath the surface of the material: this is what makes ears glow red when you shine a light through them.
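The "glowing ear" effect can be approximated with a very simple transmission model (a Beer-Lambert falloff; this is a gross simplification of what a subsurface scattering shader computes, offered only to show why thin features transmit more light):

```python
import math

# Sketch of why thin features like ears glow: under a simple Beer-Lambert
# model, light transmitted through a translucent material falls off
# exponentially with thickness, so thin regions pass far more light.

def transmitted_fraction(thickness_mm, mean_free_path_mm):
    """Fraction of light surviving a straight path through the material."""
    return math.exp(-thickness_mm / mean_free_path_mm)
```

A real subsurface scattering material also scatters light sideways beneath the surface, which is what softens the skin's shading rather than leaving it plastic-looking.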
We used a variety of deformers to create animations of the device’s CM inserting, inflating, scanning, deflating, and retracting, including:
Blend shapes
This is when Maya interpolates between two or more models with the same topology (vertex count and order); e.g., a sphere morphs into a cube. We used this deformer for inflating and deflating the CM membrane, morphing the ear canal, and opening the jaw.
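What a blend shape computes per vertex is a simple linear interpolation, which can be sketched like this (Maya supports multiple weighted targets; this shows the single-target case, with weight 0 as the deflated CM and weight 1 as the inflated one):

```python
# Sketch of a blend shape deformer: two meshes sharing vertex count and
# order, with each vertex linearly interpolated by a weight in [0, 1].

def blend_shape(base, target, weight):
    """Return the deformed vertex list for the given blend weight."""
    assert len(base) == len(target), "meshes must have the same vertex count"
    return [
        tuple(b + weight * (t - b) for b, t in zip(bv, tv))
        for bv, tv in zip(base, target)
    ]
```

Keyframing the weight over time is what produces the smooth inflation and deflation.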
Lattice deformers
This creates a “cage,” or lattice, around a model; deforming the cage deforms the model inside. We used this deformer to make the CM membrane conform to the shape of the ear canal during insertion and retraction.
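The idea behind the cage can be sketched with the simplest possible lattice, a 2x2x2 cube of control points (Maya's lattices are typically denser and use smoother basis functions, but the principle is the same): a point is expressed in the cage's local coordinates, and moving the cage's corners drags the embedded point along by trilinear interpolation.

```python
# Sketch of a lattice ("cage") deformer with a 2x2x2 control cage:
# a point in local cage coordinates follows the corners as they move.

def trilinear_deform(point, corners):
    """point: (s, t, u) in [0, 1]^3, the point's local cage coordinates.
    corners: dict mapping (i, j, k) in {0, 1}^3 to deformed corner positions."""
    s, t, u = point
    result = [0.0, 0.0, 0.0]
    for (i, j, k), corner in corners.items():
        # Weight is the product of per-axis blend factors for this corner.
        w = (s if i else 1 - s) * (t if j else 1 - t) * (u if k else 1 - u)
        for axis in range(3):
            result[axis] += w * corner[axis]
    return tuple(result)
```

With the cage undeformed, every point maps to itself; squeezing the cage's corners inward is what let the membrane conform to the ear canal.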
Bend deformers
We used this deformer to bend the deflated CM membrane.
Translations and rotations
These were used to animate the movement of cameras and the device. To simplify the animation of the device, we animated only the CM membrane and “parented” the rest of the device to this model, so that it followed the membrane’s insertion and retraction movements.
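Parenting works because a child's world position is its local offset transformed by the parent's motion, so keyframing only the parent carries the child along. A 2D sketch of that transform chain (Maya does the full 3D version with matrices):

```python
import math

# Sketch of parent/child transforms: the child's world position is its
# local offset rotated and translated by the parent (the CM membrane).

def world_position(parent_pos, parent_angle_deg, child_local):
    """Compose a 2D parent transform with a child's local offset."""
    a = math.radians(parent_angle_deg)
    lx, ly = child_local
    wx = parent_pos[0] + lx * math.cos(a) - ly * math.sin(a)
    wy = parent_pos[1] + lx * math.sin(a) + ly * math.cos(a)
    return wx, wy
```

Animating only `parent_pos` and `parent_angle_deg` moves every parented child in lockstep, which is exactly why the device body needed no keyframes of its own.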
We rendered the animation from multiple camera angles and included many render passes per camera: a beauty or color layer, a transparency layer, and a separate layer for the CM membrane and the probe. These layers were combined in After Effects. To save rendering time, we decided to exclude some render features, e.g., the final gather feature, which simulates light bouncing and reflecting off every object in a scene, giving realistically lit and colored results.
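Combining passes in a compositor comes down to the "over" operator, sketched below for a single pixel (After Effects handles whole layers, blend modes, and color management on top of this; colors here are assumed premultiplied by alpha, as in typical renders):

```python
# Sketch of the "over" compositing operator used when stacking render
# passes: a premultiplied foreground pixel is laid over the background.

def over(fg_rgb, fg_alpha, bg_rgb):
    """Composite one foreground pixel over one background pixel."""
    return tuple(f + (1.0 - fg_alpha) * b for f, b in zip(fg_rgb, bg_rgb))
```

Rendering the CM membrane and probe as separate passes meant each could be color-adjusted independently before being layered over the beauty pass this way.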
Here is an image with final gather on the left and without it on the right (the left took 37 seconds to render, the right took 7 seconds):
We composited the renders from Maya and added effects in After Effects. This included changing the color of the CM membrane (when it turns orange or blinks a lighter color), creating transparency effects, adding light, and stylizing some of the renders with “cartoon” lines and fills.
Here is a screen shot of one of the After Effects compositions:
The skier paintings were made with a digital workflow. This can save time; modern tablets simulate pen and paper drawing, and raster editors like Photoshop are more forgiving than traditional materials. This video demonstrates the painting process:
(Note that the video above is a real-time screen capture of Krista drawing and painting the skier. Feel free to fast-forward as needed.)
To ensure accurate proportions, lines were traced directly from stills from the Women Ski Jumping USA video. The most challenging part of the render became the shading: using light and shadow to convincingly depict the form of the skier.
We used the medical illustration convention of an upper-left light source. Cool shadows fall in lower-right areas, where this light would be occluded. Along the skier’s bottom-right edges are reflected highlights, which convey a sense of a shiny, reflective material. The sharper the highlight, the glossier the surface.
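The "sharper highlight means glossier surface" rule has a standard rendering counterpart, sketched here in Phong terms (a rendering model, used only to illustrate the painting convention): a specular exponent controls how quickly the highlight falls off away from the mirror direction.

```python
# Sketch of the highlight rule in Phong shading terms: a higher specular
# exponent makes the highlight fall off faster, reading as a glossier surface.

def specular_intensity(cos_angle, shininess):
    """cos_angle: cosine of the angle between the reflection and view
    directions, clamped to [0, 1]; shininess: the specular exponent."""
    return max(0.0, cos_angle) ** shininess
```

A matte surface (low exponent) still glows a little well away from the mirror direction, while a glossy one (high exponent) drops to near zero, producing the tight, sharp highlight painted on the skier's suit.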
After Effects is not only a powerful effects tool (for color-correcting film scenes, creating explosions, or masking out unwanted objects, among many other tasks), but also a powerful compositing and animation tool. For Slope, we used After Effects in all three roles. It has an intuitive, graphical user interface. The timeline is easily navigated; you can zoom in to individual frames for fine-scale changes, and zoom out to see the entire timeline:
Just like Maya and other animation software, After Effects lets you adjust the speed, acceleration, and deceleration of any object’s key frames by tweaking curves in the graph editor. To get a smooth deceleration for the skier in Slope, we used the graph editor to ease her movement to a gentle stop:
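The kind of curve the graph editor shapes can be sketched as an easing function (this is a generic quadratic ease-out, not After Effects' exact interpolation, which uses bezier influence handles): linear time is remapped so the motion starts fast and tapers off.

```python
# Sketch of an "eased stop": a quadratic ease-out remaps linear time so
# velocity tapers to zero as t approaches 1.

def ease_out(t):
    """Quadratic ease-out; t in [0, 1]."""
    return 1.0 - (1.0 - t) ** 2

def position(start, end, t):
    """Interpolate a keyframed value with the eased timing curve."""
    return start + (end - start) * ease_out(t)
```

At the halfway point in time, the eased value is already 75% of the way to its destination, which is what reads on screen as deceleration.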
In addition to tinkering with speeds (for spatial properties like position, rotation, and scale), you can also adjust the values themselves over time (the default view for properties like opacity, color corrections, and other visual effects). Here’s a graph editor view showing a change in opacity from 85% to 100%:
After Effects integrates very well with tools such as Illustrator and Photoshop. For example, importing a Photoshop image into After Effects creates a composition with nested layers corresponding to the Photoshop file’s layers. This feature can be used to animate paintings by flickering highlight layers on and off or by moving foreground objects faster than background objects (a parallax effect).
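The parallax trick reduces to one multiplication per layer, sketched here (the depth factors are illustrative values, not anything exported from Photoshop): each layer scrolls at a speed proportional to its assumed depth, so near layers sweep past faster than far ones.

```python
# Sketch of a parallax effect: each imported layer is offset by the camera
# move scaled by a per-layer depth factor.

def parallax_offsets(camera_x, depth_factors):
    """depth_factors: 1.0 = locked to the camera (nearest foreground),
    approaching 0.0 = effectively static background."""
    return [camera_x * d for d in depth_factors]
```

Driving `camera_x` with keyframes then animates all layers at once, and the differing speeds make the flat painting read as having depth.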
To create the effect of lines being drawn on graph paper, you can import pre-drawn Bezier curves into After Effects. Here’s a set of curves drawn first in Illustrator:
Using the Effects -> Generate -> Stroke feature, you can copy and paste the curves into a solid layer in After Effects and use key frames to time their drawing.
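Under the hood, the pasted Illustrator path is a sequence of cubic Bezier segments, and animating the stroke amounts to sweeping a pen position along them as a keyframed completion parameter goes from 0 to 1. A sketch of evaluating one segment (an illustration of the math, not the Stroke effect's actual implementation):

```python
# Sketch of evaluating a cubic Bezier segment, the building block of the
# pasted Illustrator paths: p0/p3 are endpoints, p1/p2 are control handles.

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate one cubic Bezier segment at parameter t in [0, 1]."""
    s = 1.0 - t
    return tuple(
        s**3 * a + 3 * s**2 * t * b + 3 * s * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

Keyframing t from 0 to 1 traces the pen tip from the segment's start to its end, which is what makes the graph-paper lines appear to draw themselves.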
Here’s a screenshot of the working After Effects file, including many layers of images, sounds, and solids (containing the drawn stroke lines) for each scene:
As the river ages, its meanders become more pronounced and loopy. The inside bends accrete successions of sandy point bar beds (shown as yellowy repeating curves), while the outside bends carve deeper and deeper into steep cliffs. The meanders in the river can become so circular that eventually the loop pinches off, leaving an oxbow lake. The newly straightened river then begins the increasing-sinuosity process all over again, and the lake often dries up or fills with silt.
After further reading on fluvial deposition and looking at many a research image, we realized that we could not have such a lovely, loopy, meandering stream as we had originally imagined, if we also wanted to have a lot of sandy, well-defined bedforms. Like all our visualizations, the 3D fluvial model has to balance artistic value with scientific accuracy.
Maya has some fun (and useful) preset render settings for quick renders. To find them, right-click on a new render layer and choose Attributes:
…and then open up one of the many presets: ambient occlusion, luminance depth, or normal map:
Maya’s normal map preset changes the color of the object, depending on the angle between each point on the surface (its normal) and the position of the camera. Different facing ratios between object surfaces and the camera are assigned different colors:
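The coloring rule itself is simple, sketched here for the common convention that packs a normal's components into RGB (this illustrates the general technique; Maya's preset can work in different coordinate spaces):

```python
# Sketch of normal-to-color mapping: each component of a unit surface
# normal, in [-1, 1], is remapped into the [0, 1] RGB range.

def normal_to_rgb(nx, ny, nz):
    """Encode a unit normal as an RGB color."""
    return ((nx + 1) / 2, (ny + 1) / 2, (nz + 1) / 2)
```

A surface facing straight at the camera (normal along +z) encodes as the familiar bluish color that dominates most normal maps.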