Linear workflow and gamma, or color management, are two of the most mysterious and confusing subjects in the CG world. Nobody can claim they are easy to understand, not even professionals; in fact, everybody has their own way of implementing LWF in their pipeline.

However, most people will agree on one thing: working in a linear pipeline is the most mathematically correct workflow you can get, and it is necessary for photo-realistic rendering and, most importantly, for correct compositing and color correction.

Once you have a system that actually works, you will have full control over pretty much everything in your projects, both before and after rendering, and on top of that you will be working in a physically accurate environment, close to what happens in the real world.

If you agree with me that 1 + 1 = 2, not 3 or 15, this article will be very valuable for you and your work.

linear workflow math

There is a lot to cover on this topic, but I won't geek out too much. I just want you to understand what it's for, what value it will bring to your projects, and how to set it up properly (in this case, with 3ds Max and V-Ray).

Here. We. Go.

Photography

Digital Photography

I always go back to real-world examples to understand what's going on in 3D applications, as they operate the same way!

Think about photography for a moment. Current DSLR cameras have the ability to shoot in raw (some affordable cameras even shoot video in raw, like that little beauty, the Blackmagic Pocket). Now, why is that? Of course, so that exposure, white balance and so on can be adjusted later in post-production.

So photographers don't worry at all about losing data or getting crushed blacks in their shots, as long as their cameras are set to raw mode. Of course, there is some planning involved depending on whether they are shooting an interior or an exterior scene; sometimes, even with raw, they need to shoot a couple of bracketed images to get the necessary range to work with.

Understanding Linear Workflow

A raw file from a DSLR is typically 12-bit and can go up to 16-bit; 3D software can produce raw files at up to 32-bit, which is a dream for photographers. Maybe one day!

The great advantage of raw is that you can recover detail in blown-out areas, like this:

Result of an 8-bit image vs. a 12-bit image after one color operation

Otherwise it's impossible to recover that detail, which is actually normal: we have much more range in raw (12-bit) than in JPEG (8-bit).
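
To make that concrete, here is a tiny sketch (plain Python with NumPy, purely for illustration, not anything a camera or renderer actually runs) showing why the extra range matters: once a value has been clipped to the 8-bit ceiling, no exposure tweak can bring the detail back, while a float value above 1.0 survives the round trip just fine.

    import numpy as np

    # Two "pixels" from a bright sky: one just over white, one far over.
    scene = np.array([1.2, 3.0], dtype=np.float32)    # linear, scene-referred values

    # 8-bit path: everything above 1.0 is clipped before it is stored.
    stored_8bit = np.clip(scene, 0.0, 1.0)            # both pixels become 1.0
    print(stored_8bit * 0.5)                          # exposure pulled down -> [0.5 0.5], detail gone

    # Float ("raw") path: the over-bright values are kept as they are.
    print(scene * 0.5)                                # [0.6 1.5] -> the highlight detail survives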

The same thing happens in 3D applications: you can take your shots in 3ds Max using V-Ray, then take the resulting raw data (in this case, EXR files) into your favorite compositing host and work some magic there (post-production and color grading!).

Of course, this is all done in a correct workflow from 3ds Max and V-Ray to Nuke, which you are going to learn more about in a minute. Think of it as switching a DSLR from auto mode to manual: you get more freedom and control, and you can make the image go the way you want.

So What is this Linear Workflow all about?

Linear workflow, in simple terms, is the correct processing of light, color, and textures inside 3D and compositing applications.

Here is the process from start to finish:

Linear Space workflow map

 

Step 1. Loading and editing files.

Step 2. Processing textures inside a linear platform (rendering).

Step 3. Post-production (e.g., multi-pass compositing).
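
If you like seeing things in code, here is a toy version of those three steps (a rough Python/NumPy sketch with made-up numbers, not what any renderer literally does internally): de-gamma the 8-bit texture to linear, do the lighting math in linear space, keep the float result for post, and only encode for the monitor at the very end.

    import numpy as np

    GAMMA = 2.2

    # Step 1: load an 8-bit, gamma-encoded texture and convert it to linear ("de-gamma").
    texture_8bit = np.array([32, 128, 230], dtype=np.uint8)       # made-up pixel values
    texture_linear = (texture_8bit / 255.0) ** GAMMA

    # Step 2: do the rendering math in linear space (here just a light multiplier).
    light_intensity = 2.0
    rendered_linear = texture_linear * light_intensity            # stays floating point

    # Step 3: post-production on the float data; only encode for the monitor
    # at the very last step of the chain.
    graded = rendered_linear * 0.8                                # some comp / grading operation
    display = np.clip(graded, 0.0, 1.0) ** (1.0 / GAMMA)          # back to gamma 2.2 for viewing
    print(display)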

If this still sounds foggy, keep reading.

Why bother learning about this workflow?

Understanding Linear Workflow

First of all, I would like you to know a fact.

We, unfortunately, can't avoid this! That is why I wanted this post to be my first contribution here at CGalter: it is so important and crucial for correct 3D visualization, and we will be referring to it constantly as we go along.

This subject changes everything we used to know about 3D applications, especially if you are a 3D visualizer and you care about having neutral and realistic color values in your scenes.

Probably the main reason we should learn about this process is the color operations we apply to JPEG textures, whether inside the 3D application or later on in compositing software.

Although 3D applications, and 2D applications too (e.g., Photoshop, which you can switch to linear as well), work in a linear space (gamma = 1), the 8-bit JPEGs and video we import into those apps already have a gamma of 2.2 encoded (i.e., attached or assigned). That means the 3D app is going to use the image as-is, with that gamma of 2.2 baked in, and this is when all the problems start!

So what, exactly, are those problems?

The awesome 3D guru Zap Andersson (known as MasterZap), the guy behind LinearWorkflow.com, said the following in a thread (reply 20), which in my opinion explains it all:

If you render on a “standard” computer monitor (i.e. an sRGB monitor which has a practical gamma of 2.2 on average) with no regard to gamma anywhere in your workflow (i.e. pretending the monitor has gamma=1, like unfortunately most software defaults to), then when you think you are making something twice as bright, you are actually making it almost 5 times as bright.

—MasterZap (Zap Andersson), LinearWorkflow.com

Which means that all of our old lighting habits were totally wrong and fake; so fake that 3D software tries very hard to compensate, and it still looks terrible.

And you know what? By using those tricks to get a fake result, we can end up bumping our render times by as much as 10x, as the algorithms spend so much effort (poor algorithms!) tweaking and fixing those errors, only to look terrible again.

When we add a V-Ray light with an intensity of 10 with no regard to gamma, we are basically working with a distorted value, not a true 10. When we later decide to change it to 20, the response is inaccurate and gives us unbalanced, blown-out areas in an otherwise dark scene!
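
You can check Zap's "almost 5 times" figure yourself: if the monitor applies a gamma of roughly 2.2 and your workflow pretends it doesn't, doubling a stored pixel value multiplies the light actually coming off the screen by about 2^2.2 ≈ 4.6. A quick back-of-the-envelope sketch in Python:

    GAMMA = 2.2            # practical gamma of a typical sRGB monitor

    value_a = 0.5          # the pixel value you started with (gamma ignored)
    value_b = 1.0          # the value you doubled it to, thinking "twice as bright"

    luminance_a = value_a ** GAMMA    # what the monitor actually emits
    luminance_b = value_b ** GAMMA

    print(luminance_b / luminance_a)  # ~4.59 -> almost 5x brighter on screen, not 2x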

So always keep two things in mind when working inside 3ds Max.

1. | Color management (Gamma) 

Every time you load an image, video, or texture (PNG, JPEG…) with a gamma of 2.2 (i.e., encoded for display on a monitor), make sure to de-gamma those files and tell 3ds Max, "Hey, these files have a gamma of 2.2!" Of course, this only needs to be done once.
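
Under the hood, that "de-gamma" is nothing more than applying the inverse power once when the file is loaded. Here is a minimal sketch (plain Python/NumPy standing in for what 3ds Max does when you declare an input gamma of 2.2; the pixel value is made up):

    import numpy as np

    def degamma(encoded_8bit, gamma=2.2):
        """Convert a gamma-encoded 8-bit image to linear floating point."""
        return (encoded_8bit.astype(np.float32) / 255.0) ** gamma

    jpeg_pixel = np.array([[186]], dtype=np.uint8)   # a mid-grey as stored in a JPEG
    print(degamma(jpeg_pixel))                       # ~0.50 in linear light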

2. | Color Mapping (V-Ray)

(“Color mapping has the task of re-mapping the image values to be suitable for display purposes.” Quoted from the V-Ray help at spot3d.com.)

This is important: you can leave everything at its defaults in color mapping and you are done, since it will only affect the very bright areas, making them less bright by clamping the colors so they don't exceed over-bright values like 1 (or by rolling them off, in the case of the Exponential type, etc.). What you don't want is the tone mapping deciding which information you are going to throw away; don't worry about that for now, as it will be explained in the linear setup video guide.
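
To get a feel for what color mapping does to over-bright values, here is a rough illustration (this is not V-Ray's actual code, and its exponential formula may differ; it just shows the general shape of a hard clamp versus an exponential-style roll-off):

    import numpy as np

    values = np.array([0.2, 0.8, 1.5, 4.0], dtype=np.float32)   # linear values from the render

    # "Linear" mapping: anything above 1.0 is simply clipped for display.
    clipped = np.clip(values, 0.0, 1.0)

    # Exponential-style mapping: bright values are rolled off smoothly toward 1.0
    # instead of being cut, so over-bright areas keep some separation.
    exponential = 1.0 - np.exp(-values)

    print(clipped)       # [0.2 0.8 1.  1. ]
    print(exponential)   # [~0.18 ~0.55 ~0.78 ~0.98]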

I know you are wondering about those errors right now, so here we go:

Gamma

a. |  Blown-out highlights ( Clipping while rendering and while compositing )

b. | Dark scenes ( Very Dark )

c. | Unrealistic Motion Blur

There are many other errors, but those are the ones you will notice right away.

What is Floating Point “Linear”?

Working in floating point gives us the ability to store a wide range of values, which means we can work much more accurately when we perform color operations. This goes both for the calculations inside 3D software (3ds Max, Maya, Cinema 4D, Blender) and inside a compositing application (After Effects, Nuke, Fusion…).

gradient Non-Linear

gradient Linear

As you can see in the two examples above, the linear gradient holds much more value: the range is almost lossless, which means much more flexibility in post-processing. In the non-linear gradient, black jumps very quickly to white, which leaves us a very small range of information to work with in post. That is the difference between working in linear form and non-linear.
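
The difference is easy to measure. Here is a quick sketch (Python/NumPy, purely illustrative) counting how many distinct shades survive a harmless-looking darken-then-brighten grade in 8-bit versus in float:

    import numpy as np

    ramp = np.linspace(0.0, 1.0, 1001, dtype=np.float32)   # a smooth gradient

    # Float path: darken by 2 stops, then brighten back up, all in floating point.
    float_graded = np.clip((ramp * 0.25) * 4.0, 0.0, 1.0)

    # 8-bit path: the image is re-quantized to 256 levels after every step.
    eight_bit = np.round(ramp * 255) / 255                              # stored as 8-bit
    eight_bit = np.round((eight_bit * 0.25) * 255) / 255                # darkened, stored again
    eight_bit = np.clip(np.round((eight_bit * 4.0) * 255) / 255, 0, 1)  # brightened, stored again

    print(len(np.unique(float_graded)))   # 1001 -> every shade of the gradient survives
    print(len(np.unique(eight_bit)))      # ~65  -> most shades are gone, hello banding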

When I first saw that gradient I had no idea what it was about, and I want you to get it because it's important, so I prepared this example for you (pardon my creativity!).

example for linear float

Have a look at the following example to understand this approach further:

In Result No. 03 above, you can see that after only one operation the image is almost completely damaged and the detail has disappeared. Result No. 02, on the other hand, seems to work just fine; or, to be more accurate, we can still see those beautiful highlights on the sky and mountains (we can even go back to the same exposure we had in the original footage).

When I figured out what I had been missing, I celebrated that day. True story.

Working in Floating point, Is it Important?

For me, absolutely. But it depends on what your goal is. If you want quick results and don't care about post later on, then exporting JPEGs from your host package is just fine! You are still working in a linear manner while rendering; just beware that any post-processing has to be done while the image is in the frame buffer (full float), and only then do you export to 8-bit.

If you want full control over your render after the calculation is done, and want to preserve the processed float data from your camera (V-Ray) for later use, then my friend, export EXR files: untouched, lossless, raw.
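
If you want to see what "untouched, lossless, raw" means in practice, here is a hedged sketch of writing the float data as EXR versus baking it down to 8-bit. It assumes the imageio package with an EXR-capable backend installed (for example the FreeImage or OpenEXR plugin), and the file names are made up:

    import numpy as np
    import imageio.v3 as iio   # assumption: imageio with an EXR-capable plugin is installed

    # A fake linear render with values above 1.0 (stand-in for the V-Ray frame buffer).
    render = (np.random.rand(4, 4, 3) * 3.0).astype(np.float32)

    # Full-float EXR: over-bright values and full precision are preserved for compositing.
    iio.imwrite("beauty.exr", render)

    # 8-bit export: clipped, gamma-encoded and quantized; fine as a preview, useless for grading.
    preview = (np.clip(render, 0.0, 1.0) ** (1.0 / 2.2) * 255).astype(np.uint8)
    iio.imwrite("preview.png", preview)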

Things I don’t like About Linear Workflow?

Large files. The thing that bothers me the most about working with EXR is the file size! But hey, storage is cheap. Still, this matters: working in full float will eat a lot of your storage capacity (it gets heavy sometimes). I remember we had 60 GB of float data for just a two-minute animation on one project I worked on. You don't want to run out of space, so make sure you have enough storage.

Power. Working in floating point definitely needs a powerful computer, especially when you have sequences and multiple channels; a decent machine is highly recommended. But that doesn't mean you can't use what you already have; it just may or may not give you real-time feedback, depending on your hardware setup.

In Conclusion

To sum up, keep in mind that compositing (which is the core technique used here at CGalter) should only be done in a full-float linear space. Last but not least, as visualizers, always refer back to photography while producing 3D visualization.

Take your time to digest this process; it's OK if it takes a while.

PS: Everything here is my personal two cents on the subject. This is something that has worked for me 100%; hopefully, it will work for you as well.


 

FURTHER DISCUSSION (Videos)

1. | Understanding Linear Workflow – Walkthrough

2. | Painting in Linear “32-bit” over Non-Linear “8-bit”. (Soon to be released)

3. | Digital Negatives Similarities – Photography Compared to CG. (Soon to be released)

 


 

WHAT IS NEXT?

How do you set up a correct linear pipeline? You're going to learn how to set up 3ds Max, V-Ray, and Nuke (Gamma Setup – 3ds Max and V-Ray).

USEFUL LINKS AND RESOURCES

  • Master Zap. I learned a lot from Zap; he is my go-to when it comes to this subject! Check out his blog, he has some interesting posts. Thank you, Zap.
  • Ruffstuffcg. Linear workflow with V-Ray in 3ds Max.
  • Bit-depth. A video about codecs that also covers bit depth in a very simple way.
  • Bit-depth. Bit-depth terminology and concepts.
  • Raw files. Raw from a photography perspective.
  • Wikipedia. Gamma correction; this is for people who want to geek out!
  • Seazo. This article from the awesome Seazo explains gamma with great examples.
  • Cambridge in Colour. My reference for gamma correction when I feel lost!
  • Gamma FAQ. If you really want to go deep on gamma!