Mystery Meat Images and a 3D Workflow

I have an idea for a workflow in the 3D industry using color management, but wanted to run it past a few people for feedback.

In the architectural computer graphics industry, none of the 3D applications support color management. Scenes can be rendered to emulate real-world environments, often using rendering engines that simulate real-world lighting; in some cases this lighting is even physically accurate. In a typical workflow, the colors and textures are all applied to a scene and the render is previewed within the non-color-managed 3D application. Once complete, it is saved to a standard image format (JPEG, TIFF, etc.) and opened in Photoshop for additional editing.

The problem is that when you open this image in Photoshop with color management enabled, the RGB values of course have no reference without an embedded profile.

Normally when you receive a mystery meat file, you would try assigning the sRGB or AdobeRGB profile and see which looks best. I am thinking that on a display whose gamut closely resembles sRGB, or a wide-gamut display that resembles AdobeRGB, this would be the best approximation, but can we get it closer? These are the two ideas I had:

  1. As all of the images are previewed on a (hopefully) profiled and calibrated display, could you assign the monitor profile to the image and then convert to a working space (AdobeRGB etc.)? My thinking is that the monitor profile represents exactly what the user would have seen on their display when editing the colors in the non-color-managed environment. I’m not suggesting the image ever be sent elsewhere using the monitor profile, or edited in this space, but only that this assigns some meaning to the RGB values. I know this practice is not generally a good idea, but given these particular circumstances, I wonder if it would work?

  2. As rendering in 3D is really virtual photography, in theory we should be able to use a “virtual” ColorChecker SG to create a camera profile that would be unique to the particular lighting in that scene. You would render the scene once with the color checker and once without, just as you would when photographing a scene with a camera. The hitch is that this solution is premised on two things: 1) being able to create materials within the rendering application that replicate the physical properties (reflectivity and color) of a real color checker, and 2) that the rendering engine being used can render physically accurate lighting. I’m not sure this is possible, given that the only way to assign color in a 3D app is with RGB values; there are no Lab color pickers.
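A minimal numeric sketch of idea 1, the assign-then-convert step. For illustration it assumes the monitor behaves like an sRGB display; a real workflow would use the primaries and tone curves from the measured monitor profile instead of these stand-in sRGB matrices. The matrices are the published D65 sRGB and Adobe RGB (1998) RGB-to-XYZ matrices.

```python
# Idea 1 sketch: treat the untagged render as if it were in the monitor's
# color space ("assign"), then convert to a standard working space.
# ASSUMPTION: the monitor is modeled as an idealized sRGB display; swap in
# the real monitor profile's matrix/TRC in practice.
import numpy as np

SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
ADOBE_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                         [0.2974, 0.6273, 0.0753],
                         [0.0270, 0.0707, 0.9911]])

def srgb_decode(v):
    """sRGB transfer function -> linear light."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def adobe_encode(v):
    """Linear light -> Adobe RGB (1998) encoding (gamma 563/256 ~= 2.2)."""
    return np.clip(v, 0.0, 1.0) ** (256.0 / 563.0)

def monitor_to_adobe(rgb):
    """'Assign' the monitor space (sRGB stand-in), convert to Adobe RGB."""
    linear = srgb_decode(rgb)                            # undo monitor TRC
    xyz = linear @ SRGB_TO_XYZ.T                         # monitor RGB -> XYZ
    adobe_linear = xyz @ np.linalg.inv(ADOBE_TO_XYZ).T   # XYZ -> Adobe RGB
    return adobe_encode(adobe_linear)
```

White survives the conversion unchanged (both spaces share a D65 white), while a saturated monitor primary lands at lower channel values inside the wider Adobe RGB gamut, which is exactly the "give the RGB values meaning" step the idea describes.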
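Idea 2 can also be sketched numerically. Render the scene once with a chart of patches whose reference colors are known, average the rendered patch values, then solve for a matrix that maps rendered RGB back to the reference values, exactly as camera-profiling tools fit a chart shot. The patch data below is synthetic for illustration; a real run would use the rendered ColorChecker SG patch averages and the chart's published reference values.

```python
# Idea 2 sketch: fit a per-scene "camera profile" from a virtual chart.
# ASSUMPTION: synthetic patch data and a made-up linear scene distortion
# stand in for the real rendered chart and its measured reference values.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.uniform(0.05, 0.95, size=(24, 3))  # known linear patch colors

# Pretend the renderer + scene lighting applies an unknown linear distortion.
scene_transform = np.array([[0.90, 0.10, 0.00],
                            [0.05, 0.85, 0.10],
                            [0.00, 0.05, 0.95]])
rendered = reference @ scene_transform.T

# Fit rendered -> reference by least squares over the 24 patches.
correction, *_ = np.linalg.lstsq(rendered, reference, rcond=None)

def correct(rgb):
    """Apply the fitted per-scene correction to a rendered color."""
    return np.asarray(rgb) @ correction
```

Note that this sidesteps the missing Lab picker: the chart patches are authored in RGB, and it is the fit itself that ties those RGB values back to known reference colors.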

I’m curious to know everyone’s thoughts on this.

I found some pretty interesting articles on another approach to this, but the workflow is too complicated to be used in production: … arance.pdf

A fascinating idea. I hope others chime in on this as well.

If you’re just looking to assign some values to the 3D image in Photoshop, the most reasonable choice would be sRGB. Assigning the monitor profile to the image would not get you much, since you are already viewing your image through the monitor profile (and you mentioned that you wouldn’t be sending it anywhere).

Would these apps faithfully reproduce all the reddish/yellow cast of tungsten lamps in indoor scenes? Do they show green kitchens under fluorescent lights? That kind of accuracy would be pretty cool, but rather counter to the expectation that things are going to look good.

I meant that you would not send the image out with the monitor profile assigned, but the final image certainly does get pushed out to clients and print shops for reproduction. That is why I was thinking: Untagged RGB -> Monitor Profile -> AdobeRGB -> Final Output Profile.

Yes, they can simulate exact lighting and material conditions, to the point that you can use tools to sample exact lux levels on any surface within a scene. You can even use specific IES lighting files to define the exact emission pattern of certain lighting fixtures. Materials can be defined with literally pages of variables to exactly replicate real-world conditions. … id=5659302

These are the two most used physically accurate engines. The latter link used the mental ray engine (