I have an idea for a workflow in the 3D industry using color management, but wanted to run it past a few people for feedback.
Within the architectural computer graphics industry, none of the 3D applications support color management. Within these applications, scenes can be rendered to emulate real-world scenes, often using rendering engines that simulate real-world lighting; in some cases this lighting is even physically accurate. In a typical workflow, the colors and textures are all applied to a scene and the render is previewed within the non-color-managed 3D application. Once complete, it is saved to a standard image format (JPG, TIFF, etc.) and opened in Photoshop for additional editing.
The problem is that when you open this image in Photoshop with color management enabled, the RGB values of course have no reference without a profile.
Normally when you receive a mystery-meat file, you would try assigning the sRGB or Adobe RGB profiles and see which looks best. I am thinking that on a display whose gamut closely resembles sRGB, or a wide-gamut display that resembles Adobe RGB, this would be the best approximation, but can we get it closer? These are the two ideas I had:
As all of the images are being previewed on a (hopefully) profiled and calibrated display, could you assign the monitor profile to the image and then convert to a standard working space (Adobe RGB, etc.)? My thinking is that the monitor profile represents exactly what the user would have seen on their display when editing the colors in the non-color-managed environment. I'm not suggesting the image ever be sent elsewhere tagged with the monitor profile, or edited in that space, but only that the profile be used to assign some meaning to the RGB values. I know this practice is not generally a good idea, but given these particular circumstances, I wonder if it would work?
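To make the first idea concrete, here is a rough numeric sketch of "assign the monitor profile, then convert to a working space." Everything here is an assumption for illustration: the monitor is modeled as an sRGB-primaries display with a pure 2.2 gamma, whereas in practice you would pull the actual matrix and tone curves from the monitor's measured ICC profile (e.g. via a CMS like LittleCMS) rather than hardcode them.

```python
import numpy as np

# Idea 1 as arithmetic: treat the untagged render's RGB as being in the
# monitor's own color space, then convert to a working space (Adobe RGB here).
# ASSUMPTION: the monitor is approximated as sRGB primaries + pure 2.2 gamma;
# a real workflow would use the matrix/TRC from the monitor's ICC profile.

MONITOR_TO_XYZ = np.array([  # sRGB primaries, D65 (stand-in for the real monitor)
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])
ADOBERGB_TO_XYZ = np.array([  # Adobe RGB (1998) primaries, D65
    [0.5767, 0.1856, 0.1882],
    [0.2974, 0.6273, 0.0753],
    [0.0270, 0.0707, 0.9911],
])

def monitor_rgb_to_adobergb(rgb, monitor_gamma=2.2):
    """'Assign' the monitor space to untagged RGB, then convert to Adobe RGB."""
    linear = np.power(np.clip(rgb, 0.0, 1.0), monitor_gamma)   # decode monitor TRC
    xyz = MONITOR_TO_XYZ @ linear                              # monitor RGB -> XYZ
    adobe_linear = np.linalg.inv(ADOBERGB_TO_XYZ) @ xyz        # XYZ -> linear Adobe RGB
    return np.power(np.clip(adobe_linear, 0.0, 1.0), 1 / 2.2)  # encode Adobe RGB gamma

# Since both spaces share a D65 white here, display white should stay white:
white = monitor_rgb_to_adobergb(np.array([1.0, 1.0, 1.0]))
```

In Photoshop terms this is simply Assign Profile (monitor profile) followed by Convert to Profile (working space); the sketch just shows the math behind those two menu commands.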
As rendering in 3D is really virtual photography, in theory we should be able to use a "virtual" ColorChecker SG to create a camera profile unique to the particular lighting in that scene. You would render the scene once with the color checker and once without, just as you would when photographing a scene with a camera. This solution is of course premised on two things: 1) being able to create materials within the rendering application that replicate the physical properties (reflectivity and color) of a real color checker, and 2) the rendering engine being able to render physically accurate lighting. I'm not sure this is possible, given that the only way to assign color in a 3D app is with RGB values; there are no LAB color pickers.
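The "no LAB color pickers" hitch can at least be worked around outside the 3D app: the chart vendor publishes Lab reference values for each patch, and those can be converted to linear RGB numbers to type into the material editor. Below is a minimal sketch of that conversion, assuming a D65 white point and sRGB primaries for the renderer's RGB space; a spectral renderer would want reflectance curves instead, and the patch values themselves would come from the published chart data, not from this code.

```python
import numpy as np

# Convert a ColorChecker patch's published CIE L*a*b* value to linear sRGB,
# suitable as a diffuse/albedo color in a renderer whose pickers are RGB-only.
# ASSUMPTIONS: D65 white point, sRGB primaries for the renderer's RGB space.

XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])
D65_WHITE = np.array([0.9505, 1.0000, 1.0890])

def lab_to_linear_srgb(L, a, b):
    """CIE L*a*b* (D65) -> linear sRGB albedo."""
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    def f_inv(t):  # inverse of the Lab companding function
        d = 6.0 / 29.0
        return t ** 3 if t > d else 3.0 * d * d * (t - 4.0 / 29.0)
    xyz = D65_WHITE * np.array([f_inv(fx), f_inv(fy), f_inv(fz)])
    return XYZ_TO_SRGB @ xyz  # may fall outside [0,1] for very saturated patches

# A perfect white patch (L*=100, a*=b*=0) should land near linear RGB (1, 1, 1):
white_patch = lab_to_linear_srgb(100.0, 0.0, 0.0)
```

Note the result is linear; if the 3D app's picker expects gamma-encoded sRGB values, the output would additionally need the sRGB encoding curve applied. Saturated patches that land outside [0,1] are a real limitation of this approach: an RGB-only picker simply cannot represent them exactly.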
I’m curious to know everyone’s thoughts on this.
I found some pretty interesting articles on another approach to this, but the workflow is too complicated to be used in production: