OVERVIEW: A+T Motion Composite Workflow

Site Admin
Posts: 68
Joined: Wed Jan 27, 2010 7:33 pm

OVERVIEW: A+T Motion Composite Workflow

#1 Post by jstenner »

For now this is a dumping ground for my notes regarding a workflow for CG work using the RED, Maya (with VRay as renderer) and Nuke. As I get time and our setup solidifies, I'll clean this up and make it more of a tutorial.

Shoot your footage using the RED

We'll shoot at 5K or 6K since our final output is 4K DCI. That way our frame ratio is consistently 1.90:1, which gives us plenty of extra resolution for reframing shots, etc. Remember, if your source footage is not the proper frame ratio, you'll need to reframe along the way to avoid black bars. NO BLACK BARS, and don't change the frame ratio anywhere along the way in the following workflow.
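The 1.90:1 figure falls straight out of the pixel dimensions. A quick sanity check (the 5K/6K dimensions below are the usual full-format RED sensor sizes — treat them as assumptions and verify against your camera's format list):

```python
# Frame-ratio sanity check: 4K DCI's full container is 4096 x 2160,
# which is the 1.90:1 (really ~1.896) ratio we want to preserve.
def ratio(w, h):
    return round(w / h, 3)

print(ratio(4096, 2160))  # 4K DCI -> 1.896
# Assumed full-format RED resolutions -- check your camera:
print(ratio(5120, 2700))  # 5K FF  -> 1.896
print(ratio(6144, 3240))  # 6K FF  -> 1.896
```

Since all three share the same ratio, a 5K or 6K source reframes cleanly into the 4K DCI container with no black bars.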

Color Correction

Open your footage in Redcine-X Pro and do an initial color correction if you want maximum control. With the advent of RED's IPP2, though, for a basic initial grade, you can go straight to FCPX (or Premiere, if you must).

Copy your shot(s) from your Red Mag and place them in your working directory on Titaniumz-share (our SAN/RAID). Using FCPX (v.10.4.5 as of Spring 2019) import your footage choosing to "Leave files in place" and work directly with the RED RAW files (uncheck "Create optimized media").
For each clip, set your Color Space to REDWideGamutRGB and Gamma Space to Log3G10. This returns the footage to its RAW, full spectrum state.
Next, we need to map our full spectrum RAW video to one of the Rec709 color spaces (unless we're working in Rec2020 - not likely in Spring 2020). These "output transforms" are provided by RED. Since they aren't installed by default in your FAC302 Mac Pro home area, you'll need to install them yourself. Here's how: navigate to the Titaniumz-share/RED/ directory and prepare to copy the entire IPP2 Cubes SDR Core V1_13 folder to the appropriate directory in your local account. You can find this directory via the Settings pane in FCPX:
This gives you access to the Camera LUT interface. Select it and choose "Reveal in Finder" to open the install directory for Camera LUTs:
Copy the IPP2 Cubes SDR Core V1_13 folder into this directory:
Now, you can choose from among the basic Rec709 output transforms to get your footage in a base mode for editing:
Before (RAW -> REDWideGamutRGB and Log3G10):
After (with NO CONTRAST Soft LUT applied):

If you need to edit a timeline, you can create a project in FCP-X and work directly with the corrected Red media and it will pick up your adjustments. Just don't try to edit at full resolution! This is optional, of course, if we're just doing a single CG/VFX scene (i.e. no editing required). If you are working on a series of shots that need to be edited, once you're done with the edit you'll want to export any clips that need CG (individually) for further work.

Motion Tracking

If your shot requires motion tracking, match moving, or stabilization, you'll need to create a lower-resolution (HD) image sequence and load it into SynthEyes for processing. (Note: for stabilization, you might want to work with the high-res footage to maintain image quality!) Check your scene scale (to match Maya, use centimeters, its default) and make sure your Y axis is UP for Maya. Export your tracked footage as a Maya ASCII file and copy the file to your Maya project's scenes directory. You'll also want to place your sequence stills in a folder in the Maya sourceimages directory and relink them (within Maya) to the SynthEyes camera image plane. Be sure to enable "image sequence" in the image plane settings in Maya, or else you'll only see a single still image plane when looking through your SynthEyes-created camera.


Open your scene in Maya and check everything out. If your camera is upside down, it's probably because you didn't set SynthEyes to "Y axis is UP" for Maya. Go back to SynthEyes, change the scene settings, and re-export the scene. You'll also want to match the scene scale (Maya's default is centimeters) if your scene is off, scale-wise.
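The upside-down-camera problem is just an up-axis mismatch: SynthEyes can export Z-up data, while Maya is Y-up. If you ever need to reason about what the exporter is doing, the conversion is a simple axis swap. A minimal sketch (the helper name is hypothetical, not part of SynthEyes or Maya):

```python
def z_up_to_y_up(p):
    """Convert a point from a Z-up coordinate system to Maya's Y-up.

    Z-up (x, y, z) maps to Y-up (x, z, -y): the old "up" axis (z)
    becomes y, and the old y axis flips sign to keep the system
    right-handed.
    """
    x, y, z = p
    return (x, z, -y)

print(z_up_to_y_up((1.0, 2.0, 3.0)))  # (1.0, 3.0, -2.0)
```

In practice you never do this by hand — just set the export axis correctly in SynthEyes — but it explains why a wrong setting reads as a flipped/rotated camera rather than random garbage.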

If your image plane is washed out, it's because an sRGB image is being displayed as linear. Select the camera with the image plane and use the Attribute Editor to find the imagePlaneShape node. Choose Attributes->VRay->Texture Input Gamma. Then, at the bottom under Extra VRay Attributes, enable input texture correction and choose Gamma 2.20.
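The washed-out look is a double-gamma problem: an sRGB-encoded pixel interpreted as linear reads much brighter than intended. The input-gamma-2.2 correction undoes the encoding. Roughly (approximating sRGB with a pure 2.2 power curve):

```python
# Approximate sRGB with a pure 2.2 power curve -- close enough to
# show the effect, though true sRGB has a small linear toe.
def decode_gamma_22(v):
    """Undo a 2.2 gamma encode, returning a linear value in [0, 1]."""
    return v ** 2.2

# A mid-grey sRGB pixel (0.5) is actually a much darker linear value.
# Displaying the encoded value as if it were linear skips this decode,
# so everything looks lifted and washed out.
encoded = 0.5
print(round(decode_gamma_22(encoded), 3))  # ~0.218
```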

Place your CG objects and check to be sure they "stick" to the ground, or wherever you're placing them. You may need to constrain them to the SynthEyes generated locators.

To create a shadow-catching ground plane, create a poly plane and name it something reasonable like GroundPlane. Using Hypershade, assign it a VRayMtlWrapper, set Matte Properties to Matte Surface, and set Alpha Contribution to -1. Enable Shadows and Affect Alpha (so shadows are included in the alpha). Receive GI allows shadows to be colored by GI, but causes the matte knockout to be less than fully transparent, so uncheck it. Then, in Base Material, attach a VRayMtl of some sort.
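The point of the matte/alpha settings is that the shadow catcher renders as (near-)black RGB with the shadow density written into the alpha, so the comp can darken the plate underneath. The per-pixel math is the standard premultiplied "over" — a hedged sketch (values are made up for illustration):

```python
def over(fg_rgb, fg_a, bg_rgb):
    """Standard premultiplied 'over': fg + bg * (1 - fg_alpha)."""
    return tuple(f + b * (1.0 - fg_a) for f, b in zip(fg_rgb, bg_rgb))

# A shadow-catcher pixel: black RGB, alpha = shadow density.
# Comping it over the plate simply scales the plate down by the
# shadow density -- i.e., it "catches" the CG shadow onto live footage.
shadow_density = 0.4
plate = (0.8, 0.7, 0.6)
print(over((0.0, 0.0, 0.0), shadow_density, plate))
```

With a 0.4-density shadow, each plate channel is multiplied by 0.6 — exactly the darkening you'd expect a real shadow to produce.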

Create a VRayLightDome and attach your spherical HDR image taken at the site.

If you render out now, you'll probably get a pretty noisy image. As a good starting point, use VRay Universal Settings:
http://docs.chaosgroup.com/display/VRAY ... y+Settings#

Here's a pretty good discussion of this wrt HDRI dome lighting and various settings:
http://www.peterguthrie.net/blog/2014/7 ... -interiors

I set “Minimum Shading Rate” under Render Settings->VRay->Image sampler to 6.
Your HDR image is probably not in the optimal color space, so select the VRayLightDomeShape1->file1 node and choose Attributes->VRay->Texture Input Gamma to apply a gamma adjustment to the HDR image. That brightens up its lighting contribution to the scene.

Tune to your heart's content. You can use the same technique with the VRayMtlWrapper and alpha matte to create blockout geometry for your scene. Use the VRay RT renderer for fine adjustments. Use the History capabilities of the VRay Frame Buffer to compare shots.

Refine your lighting by adding any fill or other lighting that might be needed to get the right look.

Work on your UV placement using UVLayout. Texture your objects using Mari. For more info, see the tutorials here:

Intro to Mari
Beginner's Guide to Mari
Mari with Maya intro
Maya to Mari Workflow
UVLayout Video tutorials
My Mari Playlist

For more, see the course wiki Resources page.


Remember, what you see in your render buffer is what gets written to file. Make sure it's right before batch rendering your scene!
Be sure to turn off your image plane before batch rendering (set it to Display->NONE).
Be sure to set your VRayLightDomeShape1->Options to “invisible” so it doesn’t render in your final image.
It’s not a bad idea to make a new Render Layer and turn these off as Overrides. That way you can quickly jump back and forth to views with and without these items visible. It's just more convenient.

In Render Settings, be sure to set your output type to “exr (multichannel)”
Click Image Format Options and look here for OpenEXR Options:
http://docs.chaosgroup.com/display/VRAY ... at+Options#
I usually set it to 32 bit (Full Float), Scanline Zip compression

Go to your Render Settings -> Render Elements tab and be sure you have whatever render passes you want included in the EXR added to the pane on the right.
You'll probably want GI, Reflection, Refraction, Shadows. Read up on these. The MaterialID and ObjectID passes are good for isolating geometry in compositing! You may want to make an ambient occlusion pass with VRay Dirt.
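The reason these elements are worth rendering is that, in the simplest case, they recombine additively back into the beauty, which means each one can be graded independently in the comp. A rough per-pixel sketch (the element set and values here are assumptions — the exact list depends on your materials):

```python
# Rough "back to beauty": sum the lighting-type render elements
# per pixel. Element names below are the common VRay ones; the exact
# set your scene needs depends on its materials.
def rebuild_beauty(elements):
    """Sum per-pixel element values (one float per element for brevity)."""
    return sum(elements.values())

pixel = {
    "lighting":   0.30,  # direct diffuse lighting
    "gi":         0.15,  # indirect (global) illumination
    "reflection": 0.10,
    "refraction": 0.00,
    "specular":   0.05,
}
print(round(rebuild_beauty(pixel), 3))  # 0.6
```

Because the recombination is a plain sum, boosting (say) just the reflection element in Nuke changes only that component of the final image.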

Render Using the Deadline Render Farm

In Maya select the Deadline shelf tab. Click on the Yellow icon to submit the job.
Make sure your scene path and output path are accessible by all render slaves/clients. That means they should probably be on the Titaniumz-share.
It's usually best to submit the scene file with the batch.
Hit Submit Job!

You can monitor the job with DeadlineMonitor7.
If you're getting errors, it's probably from a machine that doesn't have an account logged in (student or render account). If so, and it's causing your job to fail, you'll need to select specific machines as slaves in the submission dialog. Go back and try again.


Once your files are rendered it's compositing time.

I'll add more here later, but generally, the goal is to use Nuke's Shuffle nodes to extract the passes from the EXRs you rendered from Maya, and then use Merge nodes to bring them all back together into a final image. Then we'll use another Merge node to comp the CG work over the top of a high-quality version of our video. At various points along the way we can insert color-grading nodes and other modifiers to take control of our image. Here's the first of two tutorials that go over this in a basic way:

More later.


VRay Render Elements

#2 Post by jstenner »

VRay allows one to separate components of an image so they can be individually adjusted/enhanced during compositing. These are called "Render Elements" and can be assigned in the Maya Render Settings dialog.

Here's an overview of the categories of element types:
If you want to separate a general "reflection" element into its "raw" and "filter" components, you can do that, but you'll have to remember how to reassemble the pass in Nuke. Here's a cheat sheet:
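The raw/filter split is multiplicative: the two components multiply back together to give the combined element before it's plus-merged into the beauty. Per pixel (a hedged sketch using the common VRay element names; in Nuke this is a Merge set to multiply, then a plus-merge with the other passes):

```python
# Reassemble a split element: reflection = raw_reflection * filter.
# "Raw" carries the incoming reflected light; "filter" carries the
# surface's reflective color/texture. Multiplying recombines them.
raw_reflection = 0.8      # reflected light arriving at the surface
reflection_filter = 0.25  # reflective color of the surface
reflection = raw_reflection * reflection_filter
print(reflection)  # 0.2
```

Splitting the element lets you grade the lighting (raw) and the surface response (filter) independently before multiplying them back together.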
