Tool Development: Resolve, Fusion, Custom Code #3: Saturation
So far in this series, we’ve built an exposure tool that treats the RGB channels equally, followed by an RGB mixer that introduced the idea of modifying colour channels independently. In this post, we’re staying in that same realm of colour manipulation—but this time, we’re going to derive a faux colour channel and manipulate that directly. This will allow us to rebuild Resolve’s saturation control from scratch and explore why this type of saturation isn’t always ideal.
What is Saturation, Anyway?
Before we dive into building our saturation slider, let’s unpack what saturation actually means.
It’s important to understand that a saturation channel doesn’t natively exist in image data—there’s no hard-coded value for “saturation” the way there is for red, green, and blue. Instead, saturation is something we define. That’s why every saturation tool behaves a little differently: it all comes down to how the developer decided to extract and manipulate that idea of saturation.
In DaVinci Resolve, the built-in saturation control uses a luminance-based method. It calculates the perceived brightness (luminance) of each pixel using a weighted sum of the RGB values—these weights are called luminance coefficients. Once that luminance is calculated, it’s subtracted from the original RGB pixel to isolate what’s left: the chroma, or colour information. This chroma can then be multiplied or scaled to increase or decrease saturation.
Here’s the formula used to derive luminance using Rec. 709 coefficients:
Luminance = Red * 0.2126 + Green * 0.7152 + Blue * 0.0722
And to extract chroma, we subtract luminance from the original RGB values:
Chroma = RGB - Luminance
Once we have chroma isolated, we can scale it to create our saturation effect.
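To make this concrete, take an arbitrary pixel and run it through both formulas:

RGB = (0.80, 0.40, 0.20)
Luminance = 0.80 * 0.2126 + 0.40 * 0.7152 + 0.20 * 0.0722 = 0.4706
Chroma = (0.80 - 0.4706, 0.40 - 0.4706, 0.20 - 0.4706) = (0.3294, -0.0706, -0.2706)

Scaling that chroma by 1.5 and adding the luminance back gives (0.9647, 0.3647, 0.0647): the same luminance as before, but a more saturated orange.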
1. The Colour Page
Let’s now implement this in the Colour Page of Resolve.
Step 1: Isolate Chroma
Start by creating a Layer Mixer node—this is essential for isolating the chroma portion of the image.
To model the formula chroma = RGB - Luminance, set the Layer Mixer’s composite mode to Subtract. Right-click the mixer, go to Composite Mode, and choose Subtract. This will subtract the output of the bottom node from the top node.
At this point, your viewer should show a black image. That’s expected: we’re subtracting the image from itself, so the result is zero. Now we need to adjust the bottom branch to become our luminance-only image.
Select the node feeding into the bottom layer of the mixer, go to the RGB Mixer, and check Monochrome. You’ll see that the sliders for Red in Red, Green in Green, and Blue in Blue are already populated with the Rec. 709 coefficients. This turns your image grayscale based on the exact luminance coefficients we need, so this is now our luminance-only node.
Now, the output of the Layer Mixer is your chroma-only image—likely very dark, since chroma alone carries relatively little information. But it’s isolated now, and ready to be adjusted.
Our isolated chroma, created via a Layer Mixer that takes our input-state image and subtracts our luminance-state image. You can see hints of the isolated chroma, such as the vibrant red patch towards screen-left.
Step 2: Recombine Luminance and Chroma
To get back to your original image, we need to add the luminance back into the chroma. Create a new serial node after your subtract Layer Mixer. From there, generate another Layer Mixer. Delete the automatically connected node (tip: disconnect it from the Layer Mixer first, or it will delete both). Then set this Layer Mixer’s mode to Add.
Pipe the luminance-only node into the lower input of the new Layer Mixer. You should now see your original image restored—because we’ve essentially just done a simple equation: RGB - Luminance + Luminance = RGB.
But here’s the key: the “RGB - luminance” part gives us our isolated chroma component—and now we can modify it.
The node between the two mixers is where we make the saturation adjustment, since it affects only our isolated chroma branch. By increasing or decreasing Gain on that node, we create a fully custom saturation control—based entirely on our derived colour “channel,” and completely independent of Resolve’s built-in saturation tools.
Our reconstructed image, now with luminance and chroma split apart to allow for independent control.
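Written as one expression, the node tree we’ve just built computes:

Output = ((RGB - Luminance) * Gain) + Luminance

This is exactly the formula we’ll implement directly in the Fusion and DCTL versions below.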
Bonus: Redefining What “Saturation” Means
Now that we’ve built a functional saturation tool, let’s push it further. Try going back to the RGB Mixer, check Preserve Luminance, and start playing with the Red in Red, Green in Green, and Blue in Blue sliders. What you’re doing here is redefining how luminance—and therefore chroma—is calculated.
As we’ve discussed, there’s no universally correct definition of saturation. The Rec. 709 coefficients are simply a standard—not a rule. So by adjusting these sliders, you’re creating entirely new definitions for what counts as colour vs luminance. Some of these may look strange, others might yield interesting, unexpected results.
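For example (an arbitrary redefinition, purely to illustrate), setting all three sliders to 0.33 would define luminance as a simple channel average:

Luminance = Red * 0.33 + Green * 0.33 + Blue * 0.33

Chroma then becomes each channel’s deviation from that average, which behaves quite differently from the Rec. 709 version.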
This exercise illustrates a key truth: every saturation slider is user-made, not native to the system.
2. The Fusion Page
As you might expect by now, the Fusion implementation is relatively straightforward—and once again, we can handle the entire process using a single Custom Tool node. We’ll also take advantage of the Inter tab within the Custom Tool to store intermediate steps, keeping our final formula clean and easy to follow.
Start by creating a Custom Tool node, then go to the Inter tab. In the first cell (labeled Intermediate 1), we’ll store our luminance calculation. You can reference this value later in your expressions using i1. (The next slots—i2, i3, etc.—are also available if we need to store more values.)
Inside Intermediate 1, input the full luminance formula using the Rec. 709 coefficients:
r1 * 0.2126 + g1 * 0.7152 + b1 * 0.0722
Just to recap—this line is doing exactly what the Monochrome checkbox does in the RGB Mixer on the Colour Page (though to slightly better decimal point precision): it turns every channel into a weighted combination of red, green, and blue. Since we’ve now stored that formula in i1, we can simply type i1 into the Red, Green, and Blue channels under the Channels tab of the Custom Tool. This will output our luminance-only, monochrome image.
Now, to turn this into a saturation control, all we need to do is plug this luminance-state image into our saturation formula, add a slider to control the strength, and we’re good to go.
In the Red, Green, and Blue channel expressions under the Channels tab, the saturation formula for each channel will look like this:
Red:   ((r1 - i1) * n1) + i1
Green: ((g1 - i1) * n1) + i1
Blue:  ((b1 - i1) * n1) + i1
To recap, here’s what’s happening in our formula:
We start by taking the original channel—like r1 for Red—and subtracting our luminance value (i1). This isolates the chroma component of that channel. We then multiply that chroma by a user-facing control, n1 (the first of the Custom Tool’s Number In sliders). This control acts like a gain slider—because under the hood, that’s all a saturation control really is: a simple multiplier on chroma.
Finally, we add i1 (our luminance-only image) back to the gained chroma. This rebuilds the full image but now with controllable colour intensity.
So when n1 is set to 1, we have a “do-nothing” state:
(chroma * 1) + luminance = original image
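And when n1 is set to 0, the chroma term vanishes entirely:

(chroma * 0) + luminance = luminance-only image

In other words, a full desaturation to black and white.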
As we increase or decrease n1, we’re effectively scaling the strength of the colour while preserving the luminance—giving us a clean, custom saturation tool.
3. Custom Code
Let’s now set up the DCTL version. We’ll start with our standard boilerplate, define our first slider, and call our saturation function to bring it all together.
// Slider arguments: variable name, label, UI type, default, min, max, step
DEFINE_UI_PARAMS(sat, Saturation, DCTLUI_SLIDER_FLOAT, 1, 0, 2, 0.001)

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    float3 out = make_float3(p_R, p_G, p_B);
    out = Saturation(out, sat);
    return out;
}
Now we just need to create a new function that includes our saturation formulas. Here’s how we can define it:
__DEVICE__ float3 Saturation(float3 in, float saturation)
{
    // Rec. 709 luminance
    float luminance = in.x * 0.2126f + in.y * 0.7152f + in.z * 0.0722f;

    // Per channel: isolate chroma, gain it, then add the luminance back
    in.x = ((in.x - luminance) * saturation) + luminance;
    in.y = ((in.y - luminance) * saturation) + luminance;
    in.z = ((in.z - luminance) * saturation) + luminance;
    return in;
}

DEFINE_UI_PARAMS(sat, Saturation, DCTLUI_SLIDER_FLOAT, 1, 0, 2, 0.001)

__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y, float p_R, float p_G, float p_B)
{
    float3 out = make_float3(p_R, p_G, p_B);
    out = Saturation(out, sat);
    return out;
}
And that’s our saturation tool—built three different ways.
At the start of this post, I mentioned that this method of saturation isn’t necessarily optimal. So let’s talk about why.
The Problem With Bright Saturation
One of the key goals of aesthetic colour grading is to model colours in a way that feels grounded in the real world. That doesn’t mean our grades can’t be bold, vibrant, or stylised—but they should generally respect certain principles of how light, material, and human perception interact.
So, when we increase saturation in an image, what are we really trying to do?
In most cases, we’re trying to make it feel as if the scene itself was more colourful at the time it was captured—as if the objects in the frame were naturally more vibrant. If that were true, those materials would be absorbing more of the ambient light. This is a crucial detail: as objects become more colourful, they appear darker—not brighter—as a direct result of their increased colourfulness.
Our eyes, trained by a lifetime of experience in the real world, instinctively understand this relationship. So when we use a method like the one we just built—which has a side effect of making colours more visibly bright as they become saturated—it creates a kind of perceptual mismatch. Even if we don’t consciously spot the issue, our brain registers that something doesn’t quite add up. It breaks the visual logic we’re hardwired to expect.
That doesn’t mean we can’t have bright, colourful images—we absolutely can. But there’s a key relationship between light and colour, and tools like this saturation model—which unfortunately lives front-and-centre on Resolve’s primary Colour Page, making it the default choice for many beginner colourists and editors—can easily distort that relationship. The goal is to make vibrant objects feel like they’re reflecting light from their environment, not emitting it themselves. When that balance tips too far, objects and surfaces start to look self-illuminated instead of responding to the ambient light and colour around them. Even at a subtle level, this triggers a kind of cognitive dissonance—something our brains quietly register as “off.” It’s often what people mean when they say an image looks video-y: it feels constructed, synthetic, and unconvincing. In short, it looks untruthful.
That’s why this style of saturation often results in images that look garish—overly bright and overly colourful. Think of how an RGB LED looks: it’s both bright and saturated because it’s a light source. But everyday objects—skin, fabric, wood, walls—should never feel as if they’re glowing from within. Yet that’s exactly how they can start to feel when this kind of saturation is applied.
Turned all the way up, our saturation tool gives colours a harsh, emissive quality—they become unpleasant to look at.
Why Gain Makes It Worse
And there’s another issue layered on top: the use of gain itself.
Gain isn’t the ideal operation for controlling saturation. It multiplies values—meaning high values get pushed far more aggressively than mid or low-range ones. So if you use this method to subtly enhance skin tones, which tend to be low-to-moderate in saturation, anything already saturated in the frame—like car lights or neon signs—can get completely blown out. The strong become too strong, and the balance of the image breaks down.
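Some made-up numbers show the imbalance. Say a skin tone has a chroma component of 0.05 and a neon sign has one of 0.50, and we apply a gain of 1.5:

0.05 * 1.5 = 0.075 (a push of +0.025)
0.50 * 1.5 = 0.750 (a push of +0.250)

The subtle lift we wanted on the skin arrives with a push ten times larger on the sign, which is exactly how already-vivid elements end up blown out.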
So while this implementation is a great learning exercise, it’s not particularly practical for shaping saturation in high-quality grading work. But by building it from the ground up, you've hopefully gained a deeper understanding of how saturation can be constructed—and just as importantly, where its weaknesses lie.
Why Desaturation Actually Works
That being said, this method is actually quite effective for desaturation, since it essentially returns the image to a luminance-only state, based on our defined luminance coefficients. When colour is stripped from the equation, our cognitive expectations shift. An image that may have looked unnatural or visually jarring when saturated can suddenly feel perfectly acceptable in black and white.
Why? Because it’s the relationship between light and colour that our visual system understands most deeply. Once colour is removed entirely, that relationship is no longer present to be broken—so our brains don’t object. In fact, black and white images represent such a clear departure from everyday visual experience that our brains immediately accept the shift. We’re no longer expecting realism; we’ve entered a stylised, abstracted visual space, and we adjust instantly.
This idea reminds me of something Walter Murch discusses in In the Blink of an Eye (1992). He uses an analogy about bees and spatial disorientation to illustrate a principle in film editing: if a beehive is moved just a few inches from its original location, the bees become confused and disoriented, unable to find their way home. But if the hive is moved to an entirely new location, the bees immediately recognise the change and reorient themselves without issue.
I think this concept also applies nicely to colour grading—and to saturation in particular. When the visual language of an image is slightly off—like overly saturated colours that break the laws of light and material—we enter an uncanny space that’s hard to settle into. But when we fully desaturate the image, that shift is so extreme and intentional that our brains accept it without question. It no longer feels wrong—it simply feels different.
Final Thoughts
Returning briefly to the maths behind our saturation tool: because it's tied directly to luminance, this method preserves tonal contrast extremely well during desaturation. So while it may have limitations when increasing saturation, it actually performs quite nicely when reducing it. That’s a useful property to keep in mind when working on black and white grades or more muted looks.
From here, I’d encourage you to explore more advanced saturation models: using better curve-based functions, or redefining saturation itself in more perceptually accurate ways. The possibilities are wide open—and now you have the tools to begin experimenting with them.
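As one starting point, here’s a minimal DCTL sketch of that idea. It keeps the same Rec. 709 luminance split we built above, but replaces the straight multiply with a rolloff curve so that already-saturated values receive less of the boost. Both the CompressChroma helper and its particular curve are illustrative choices of mine, not a standard model:

__DEVICE__ float CompressChroma(float chroma, float saturation)
{
    // Below 1.0, behave exactly like our linear tool (clean desaturation).
    if (saturation <= 1.0f) return chroma * saturation;

    // Above 1.0, roll off the gain as chroma grows: near-neutral values
    // receive close to the full boost, while strongly saturated values
    // are pushed progressively less.
    return chroma * saturation / (1.0f + _fabs(chroma) * (saturation - 1.0f));
}

__DEVICE__ float3 SoftSaturation(float3 rgb, float saturation)
{
    // Same Rec. 709 luminance split as before
    float luminance = rgb.x * 0.2126f + rgb.y * 0.7152f + rgb.z * 0.0722f;

    rgb.x = CompressChroma(rgb.x - luminance, saturation) + luminance;
    rgb.y = CompressChroma(rgb.y - luminance, saturation) + luminance;
    rgb.z = CompressChroma(rgb.z - luminance, saturation) + luminance;
    return rgb;
}

Swap SoftSaturation in for Saturation in the transform function and compare: at a gain of 2.0, a subtle chroma of 0.05 still nearly doubles (to about 0.095), while a vivid chroma of 0.50 only rises to about 0.67 instead of 1.0.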