DSLR Color space

Rookie on the color front with perhaps a rookie question - why would a DSLR grade camera ever need to work in sRGB color space? It seems the dominant theory from capture to post process is to capture in raw, then post process to the point of output and then adjust color space at the tail end of the process. If color space is adjusted at the tail end based on output, why would you ever want to capture in a more restrictive color space?

Well…perhaps not such a rookie question after all. Any takers?

RAW has no color space (it's essentially grayscale data). You can encode the image (after rendering) into essentially any color space you wish. You can do this in a RAW converter or let the camera do it, in which case you have no control over the rendering and only a few options for encoding (usually sRGB or Adobe RGB (1998)). Why encode into sRGB? Well, it's a dumbed-down space for sure, but for many images where the scene gamut is low, it can work fine. A digital camera (and a scanner, for that matter) doesn't really have a gamut limitation; what you place in front of it is key. Take the same camera system and shoot a very colorful scene (large gamut) and a gray card: same sensor, same processing, etc. The gray card would easily fit within the gamut of sRGB.

Why encode into a restrictive color space? Don't know; no real reason. I use Adobe Camera Raw and encode into a space based on the feedback from its histogram. If I see saturation clipping in, say, Adobe RGB (1998), I'll then select ProPhoto RGB. But there are scenes that easily fit even in sRGB, so I tend to use the smallest-gamut encoding space that fully encodes the scene gamut.
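
If you want to sanity-check that idea outside ACR, here is a rough sketch (assuming you already have scene colors as CIE XYZ values and numpy installed; the matrix is the standard XYZ-to-linear-sRGB one). Any channel falling outside 0–1 means that color sits outside the sRGB gamut, so you would step up to a larger encoding space:

```python
import numpy as np

# Standard (IEC 61966-2-1) XYZ (D65) -> linear sRGB matrix.
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def fits_in_srgb(xyz_pixels, tol=1e-6):
    """True if every pixel lands inside the sRGB gamut (no channel clips).

    xyz_pixels: (N, 3) array of CIE XYZ values, normalized so that
    diffuse white is roughly Y = 1.0.
    """
    rgb_linear = xyz_pixels @ XYZ_TO_SRGB.T
    return bool(np.all((rgb_linear >= -tol) & (rgb_linear <= 1.0 + tol)))

# Hypothetical usage: a near-neutral gray card fits easily,
# while a highly saturated patch would not.
gray_card = np.array([[0.17, 0.18, 0.19]])
print(fits_in_srgb(gray_card))   # True -> sRGB encoding loses nothing here
```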

Aren’t all files essentially grayscale data? The same thing could be said for an RGB TIFF file, could it not? It’s just 3 grayscale images, right?

Saying that there is no colour space seems odd to me. If you pointed the camera at a green wall and took a photo, it would be recording CCD data for the visual colour of that wall. So you could say that the colour space is the colour space of the camera itself (or CCD). If you had placed a reference chart in the frame, you could generate a profile for that particular lighting condition, and ideally that would be the best colour space to use. Then convert to a working space from there.

The biggest problem with profiling cameras (as I understand it) is that you don’t have control over the light source (unless you are in a studio). So you would need to photograph a test chart every time you took a shot, or have several profiles for various lighting conditions. Neither of which is ideal.

But what do I know… I still shoot Tri-X and print in the darkroom :wink:

What would this reference be made of, and what is the gamut of the target?

A RAW file isn’t anything we’d even begin to call an image. The RAW is a mosaic of monochromatic pixels. Demosaicing has to happen first; only then do we have color.
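
To make that concrete, here is a minimal bilinear demosaic sketch (assuming an RGGB Bayer layout and numpy/scipy available; real converters use far more sophisticated interpolation). Each photosite holds a single intensity, and the two missing channels at every pixel are interpolated from neighbors:

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic_rggb(raw):
    """Very rough bilinear demosaic of a single-channel RGGB Bayer mosaic."""
    h, w = raw.shape
    rows, cols = np.indices((h, w))

    # Masks marking which photosites carry which color filter (RGGB pattern).
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    # Classic bilinear interpolation kernels.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    def interp(mask, kernel):
        # Zero out the other channels, then fill the gaps from neighbors.
        return convolve2d(raw * mask, kernel, mode="same", boundary="symm")

    return np.dstack([interp(r_mask, k_rb),
                      interp(g_mask, k_g),
                      interp(b_mask, k_rb)])
```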

Sidestepping the philosophical problems and trying to avoid the obvious friction going on, let me clarify a few things. (I’m not much of an expert, but I know a few things. If any experts wish to chime in, please do.)

RAW definitely has a colour space. There is colour information in a RAW file or we’d never get colour in the end.

This seems plainly false.

Yes, there is a reason. If all the colours in your scene are within the sRGB gamut, then by converting into sRGB you are not restricting anything. Rather, you are capturing the colour in your scene with more precision. No matter what the size of your profile's gamut is, the file still uses the same number of code values to describe the colour (unless you go up to 16-bit, 32-bit, etc.). So the bigger your profile's gamut, the more those code values spread out; the same amount of sampling is stretched over a larger volume. A smaller-gamut profile therefore samples finer differences in colour, and a larger-gamut profile samples coarser differences.

You will capture more colour information from your scene by using a smaller profile, so long as the colour in your scene does not exceed the profile's gamut. With a larger profile, a lot of data goes to waste because it is reserved for colours beyond what is in your scene.
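
A toy illustration of that trade-off (not a colorimetric calculation, just an assumed one-dimensional example): quantize the same narrow range of scene values to 8 bits against a small encoding range and a large one, and count how many distinct code values the scene actually receives.

```python
import numpy as np

# Hypothetical narrow-gamut scene: values occupy only a small slice of the range.
scene = np.linspace(0.2, 0.4, 10_000)

def distinct_codes(values, encoding_range):
    """Quantize to 8 bits after normalizing by the encoding range."""
    codes = np.round(values / encoding_range * 255)
    return np.unique(codes).size

print("small-gamut encoding:", distinct_codes(scene, 0.5))  # more codes per scene value
print("large-gamut encoding:", distinct_codes(scene, 2.0))  # fewer codes, coarser steps
```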

Start out here, and then we’ll talk about encoding color spaces after demosaicing grayscale data INTO color:

color.org/ICC_white_paper_20 … basics.pdf

Scanners and digital cameras have a color mixing function, not a color gamut.

One of the best posts summing this up is from Richard Lyons on the ColorSync list way back in 1999 (you can search the archives for other points on the subject from Bruce Fraser and others…)

IF scene gamut can fit within sRGB, I use that. I use ACR’s clipping histogram to show me which of the four encoding color spaces best fit the scene gamut. I do this in 16-bit no matter the encoding color space.

For a good Podcast on the internal color space Lightroom uses after demosaicing (and why):

photoshopnews.com/2006/07/07/lig … -8-posted/

I don’t wish to get pedantic, so let me just say this. By colour gamut, I simply mean the range of colours that a device can capture or reproduce. That means all colour devices have colour gamuts. I think my comments are compatible with the document and quote you provided.

I’m still wrapping my head around a lot of this (working on profiles for my drum scanner right now), so let me know if you think this is off or not:

A camera, because of its flexibility with exposure, settings, filters, etc… as well as raw manipulations, is capable of capturing a very wide gamut. But the entire gamut would never be captured in a single photograph. The settings you use to take a pic of some vivid purple flowers are not going to be the same as the ones you use to take an infrared pic of some nice greenery, etc…

But the more important point someone was trying to make was that the actual gamut does not matter, because any single little change to the settings (pre or post capture) will change the end file.

So you can make many variations of the same scene, with the same camera, even before you get to photoshop! But that’s not all that changes; any little change in the scene or lighting would change the image/“profile” as well.

So all these changing variables make using a profile, the way one would for a scanner, not very practical for most situations. That said, there are a number of exceptions.

If you have control over all those variables, like in a studio with a pretty fixed lighting setup, then you should for sure go the profile route. Or if you’re using your camera like a scanner and capturing artwork and whatnot. But those are the obvious situations.

The questions I have (and I’m sure many of us do) are about how valuable profiles are in the situations that are more variable.

I know there are different ways to use profiles (different workflows), so that also makes talking about them difficult.

Important questions:
Are there ways where using them would be bad? I.e., what should be avoided?

Is there a way to use a profile where you can properly define or correct its color peculiarities, but still use it in a variety of uncontrolled situations?
I’m thinking of the way that new plugin is supposed to correct a DSLR’s color to be more like real life. The name of the plugin escapes me right now, but I’m sure most of you know what I mean.

And, if there were a way to use such a camera profile:
what is the workflow,
what are the complications involved with the various shooting situations (when could you not use it),
and how would one go about creating such a profile?

I think these questions are more useful than the theory of color gamut etc…
Anyone have any ideas?

-mikeH

Your comments and questions are good, but they demand a whole book to answer. I wonder if there are any photographers out there implementing an ICC workflow successfully.

The biggest hurdle I see is that the software for digital photography is still pretty limited (in terms of utilizing an ICC workflow). From what I can tell, some major improvements could be made to colour managing a RAW workflow. For now, you have to rely on the manufacturer’s/rendering software’s own proprietary colour management system, which will also force you into a standard working space.

The biggest issue with camera profiles is the way today’s technology treats them. They are handled like scanner profiles, which can work with a digital camera only in a very controlled situation. Then you have to take into account whatever RAW-to-rendered-RGB color processing is happening outside your control (Podcast, anyone?).

In the end, few users want scene-referred colorimetric representations of this linear data. The easiest solution for most photographers working across all shooting/lighting situations is to use something along the lines of ACR, Lightroom or Aperture, where you render the image as you wish visually; the preview and numbers you see in the application are the numbers you get once you have a rendered and encoded file. ACR does this the best so far (Aperture works in Adobe RGB (1998), Lightroom in ProPhoto RGB primaries). There are color-tweaking tools (the Calibrate tab) that work much like a user-defined profile editor. This leaves the user to produce preferred, output-referred color files on a calibrated and profiled display, and it doesn’t require any handling of “camera profiles” (quotes, since none of these products use ICC profiles).

Not at all different from how we handle scans of color negs or the good old Kodak PhotoCD YCC color conversions of the past.

GretagMacbeth’s ProfileMaker has a great little Digital Camera module that allows you to tweak a profile’s perceptual rendering intent to achieve your preferred visual styling. This is a great solution for all those who recognize that they do not want a colorimetric transformation. However, as you mention, no RAW converters support custom profiles, so this tool is ultimately useless. Furthermore, they lock you out of the rendering process.

What would be great is if RAW converters allowed the user to customize their rendering and encoding process with ICC profiles. This, of course, does not mean the profiles have to perform colorimetric conversions; rather, they would perform a perceptual conversion that has been custom designed by the photographer.
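
As a rough sketch of what is possible today, you can at least apply a custom profile with a perceptual intent to an already-rendered file using Pillow's ImageCms module (littleCMS under the hood). The profile and file names here are placeholders, and note that this happens only after the converter has already done its canned rendering, which is exactly the limitation being discussed:

```python
from PIL import Image, ImageCms

# Hypothetical: apply a photographer-built camera profile with a perceptual
# intent to an already-demosaiced TIFF, outside the RAW converter.
# "my_camera_perceptual.icc" and the file names are placeholders.
src = Image.open("rendered_from_raw.tif")

camera_profile = ImageCms.getOpenProfile("my_camera_perceptual.icc")
working_profile = ImageCms.createProfile("sRGB")  # built-in sRGB profile

converted = ImageCms.profileToProfile(
    src, camera_profile, working_profile,
    renderingIntent=ImageCms.INTENT_PERCEPTUAL,
)
converted.save("rendered_working.tif")
```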

We just aren’t there yet.

Well, I use my drum scanner for bigger work and my dSLRs for small stuff, so I haven’t bothered with a lot of raw (I hear the gasp from a lot of you)… so I wasn’t really aware of this issue. I think the idea of RAW is complicated enough for a lot of users that maybe they thought it best to hold off on the whole issue of custom color. Especially since most users probably wouldn’t get it to work right anyway (considering the amount of knowledge and tinkering it would take to set up such a workflow).

But… as far as RAW converters not having the ability to use custom profiles, this really isn’t any different from using scanner software that knows nothing of ICC, or, on the other end, making an RGB profile for an Epson printer (CMYK) that goes through Epson’s driver. In other words, just because part of the process isn’t totally open doesn’t mean you can’t use custom profiles, right?

On a related note, I’ve been working on a couple of profiles to use on scanned images that I either made before my regular profile or that are negatives, etc. Maintaining the gamut of the original (Ektachrome, etc.) was the original goal (I’m realizing that isn’t as important as a lot of people make it sound), but I’m also doing it just for aesthetics and experiments such as saturation levels.

I’ve experimented with using these profiles, which so far look very good on my scanned images, on my already-processed-and-corrected Canon dSLR images, and so far they do not work nearly as well. That’s interesting to me because, in theory, both types of images are “correct” and were corrected and pleasing on the same monitor.

My hypothesis, at this point, is that it’s a combination of the dSLR not capturing color the same way as chrome film (of course; i.e., a custom profile would help here) and the dSLR files being more saturated data-wise, due to their nature and the fact that they are usually in sRGB.

Saturated RGB numbers are something I can deal with; the question I have for you is how you think a dSLR captures color differently than film. Are there any known issues that I might be able to build into my current profiles for pleasing (not colorimetric) results? Something like “magentas are too blue” or “cyans are not dark enough”; those are just examples, I really have no clue.

-mikeH

You’re dealing with RAW; you’re just letting the camera do the conversions. There was a RAW file, it was rendered using some fixed settings, and then the RAW was discarded.

Essentially yes. You need a profile to define the RGB numbers, however. The drum scanner is still creating an RGB document, and it’s useful to know its color space in order to preview it on screen and eventually convert it to another color space. As for the Epson, again a conversion is taking place, but you may not have control over it.

As you’ve seen, profiling a scanner isn’t that difficult, since it’s scanning with a fixed illuminant and the target you use to build the profile shares the gamut of what you’re scanning (a piece of film).

DSLRs do indeed capture color in a vastly different way than film (it’s a linearly encoded capture, unlike film; the gamut boundaries are undefined; and at one point it isn’t even color, just a file that records the light intensity falling on each photosite).
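
To illustrate the "linear capture" point, here is a small sketch (assuming normalized 0–1 linear sensor values) applying the standard sRGB transfer curve that a converter uses when it encodes that linear data for display; film, by contrast, has its own characteristic curve baked into the emulsion:

```python
import numpy as np

def linear_to_srgb(linear):
    """Apply the standard sRGB transfer function to linear 0-1 values."""
    linear = np.asarray(linear, dtype=float)
    return np.where(
        linear <= 0.0031308,
        12.92 * linear,
        1.055 * np.power(linear, 1.0 / 2.4) - 0.055,
    )

# A linear sensor value of 0.18 (middle gray) lands near 0.46 after encoding,
# roughly the midpoint of the display range.
print(linear_to_srgb([0.0, 0.18, 1.0]))
```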

I was thinking about this too as I wrote my last post. Scanners have CCDs too and must capture spectral data in a similar way as digital cameras, but we only see a bunch of RGB data in the end. So, something very similar to the RAW to RGB rendering/encoding step must be occurring, yet we still profile our scanners. Perhaps this means that scanning software ought to give us more control as well. A first step may be giving us RAW data from the scanner’s CCD.

Returning to digital cameras, since the RAW workflow has opened up the conversion process, it would be nice if this process could be customizable. RAW converters could provide a canned render/encode conversion as they do now, while also giving more options to those with the knowledge and the tools.

I’m not following you here. Perhaps a more step-by-step description would help me to understand what you’ve done.

If you’re relying on your camera’s RAW to RGB conversion, every camera is going to be different in terms of how it is different from film (if you follow). Different films are different too. Different RAW converters convert RAW data differently too. So, one cannot really say how dSLRs are different from film, in terms of the appearance of your final image.

My suggestion would be to shoot RAW and learn to use Adobe Camera Raw. It’s the best way to ‘colour manage’ right now. You can adjust your calibration controls and save different settings for certain types of images and looks.

Scanners use a trilinear CCD to capture true color (three linear sensors, three filters). Digital cameras use a single sensor and produce a RAW file that has to undergo demosaicing to arrive at the color a scanner captures directly. One process interpolates the color information (a lot).

Drum scanners use PMTs, not CCDs, but that is of no import.

Just because (most) digital cameras have to “demosaic” doesn’t mean the process is that different from a trilinear CCD; each sensor element still has its own filter, right?

I believe a lot of people do get raw data from the scanner. They use that to profile with and therefore use the raw settings whenever they scan.

You can create different profiles for applying different changes to the color or tone of the images. I think that is the best analogy to the raw conversion changes you can make on the DCam side of things.

It’s easier with a scanner because more of the variables are fixed, such as the lighting.

So… using the suggested raw workflow then really has no place for generating custom profiles, shooting the targets, etc…? Just use ACR with different calibrations and settings?

Exactly!

I think custom profiles do (or should) have a place in the RAW workflow, but RAW converters do not support them properly yet (as far as I know). Therefore, you are probably better off NOT using custom profiles for now. ACR’s calibration settings are your best option. This may still involve shooting targets, like the ColorChecker DC.