As old as filmmaking itself, the practice of compositing, or putting characters in front of a background that isn't really there, has always been a hassle. Netflix has developed a new method that uses machine learning to handle some of the labor-intensive tasks, but it requires lighting the actors in bright magenta.
Chroma keying, in which actors stand in front of a vividly colored background that can be easily isolated and replaced with anything from a weather map to a battle with Thanos, was for many years the easiest method of compositing. The foreground is said to be "matted in," and the background becomes a transparent "alpha" channel that is handled alongside the red, green, and blue channels.
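To make the idea concrete, here is a minimal, illustrative sketch of chroma keying (not Netflix's method, and the thresholds are arbitrary assumptions): pixels that are much greener than they are red or blue become transparent in the alpha channel, and the matte is used for a standard alpha-over composite.

```python
import numpy as np

def chroma_key_alpha(frame: np.ndarray) -> np.ndarray:
    """frame: float32 RGB image in [0, 1], shape (H, W, 3).
    Returns an alpha matte in [0, 1]: 0 = background, 1 = foreground."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    # How dominant green is over the other two channels.
    greenness = g - np.maximum(r, b)
    # Strongly green pixels become transparent; everything else stays opaque.
    # The factor 4.0 is an arbitrary softness assumption.
    return np.clip(1.0 - greenness * 4.0, 0.0, 1.0)

def composite(fg: np.ndarray, alpha: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """Standard alpha-over composite of the matted foreground onto a new background."""
    return fg * alpha[..., None] + bg * (1.0 - alpha[..., None])
```

Real keyers are far more sophisticated (spill suppression, edge refinement), but this is the basic shape of the operation.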
Although it's quick and inexpensive, this approach has its drawbacks: trouble with transparent objects, fine details like hair, and of course anything that is the same color as the background. Still, attempts to replace it with more expensive and complex techniques (such as light field cameras) have failed, because chroma keying typically works well enough.
Netflix researchers are taking another shot at it, though, with a blend of old and new techniques that could make for easy, flawless compositing, at the cost of a terrible on-set lighting setup.
Their "Magenta Green Screen," described in a recently published paper, achieves its striking results by effectively sandwiching the actors in illumination: bright green behind them (actively lit, not just a backdrop) and a combination of red and blue in front of them, creating starkly contrasting colors.
Actors in front of a green screen, lit in magenta.
The final on-set look might make even the most experienced post-production artist recoil. Normally, actors are lit brightly but with reasonably natural light, perhaps punched up a little here and there, so their appearance in front of the camera looks fairly realistic. Lighting them with only red and blue, however, dramatically changes their appearance, because of course normal light doesn't have a huge portion of its spectrum removed.
But that is also the method's genius: by making the foreground only red/blue and the background only green, it becomes trivial to separate the two. A conventional camera that would normally record red, green, and blue instead effectively captures red, blue, and alpha. The resulting mattes are extremely accurate, with none of the artifacts that come from trying to separate a full-spectrum foreground from a limited-spectrum key background.
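Under a simplifying assumption (that the sensor's green channel sees only the green backlight), the capture can be sketched like this; the green channel reads out directly as an inverse matte, with no keying heuristics needed:

```python
import numpy as np

def mgs_decompose(frame: np.ndarray):
    """Illustrative sketch of the Magenta Green Screen capture idea.
    frame: float32 RGB capture in [0, 1] under magenta/green lighting.
    Returns (foreground, alpha), where the foreground's green channel is
    empty and must be reconstructed later (the paper's ML step)."""
    alpha = 1.0 - frame[..., 1]   # green channel is the background key
    fg = frame.copy()
    fg[..., 1] = 0.0              # green is missing; to be filled in post
    return fg, alpha
```

In reality there is spill and sensor crosstalk between channels, which is part of why the mattes being clean at all is notable.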
Of course, it seems they have simply traded one challenge for another: compositing is now easy, but putting the green channel back into the magenta-lit footage is hard.
Since scenes and compositions vary, this has to be done methodically and adaptably; a "naive" linear approach to injecting green yields a washed-out, yellowish look. How can it be automated? AI to the rescue!
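The "naive" baseline might look something like the following (our own guess at the simplest version, with arbitrary weights): estimate the missing green as a fixed linear mix of red and blue. A single global ratio ignores scene content, which is why results come out washed out and yellowish.

```python
import numpy as np

def naive_green(fg_rb: np.ndarray, w_r: float = 0.5, w_b: float = 0.5) -> np.ndarray:
    """fg_rb: RGB image whose green channel is zero (magenta-lit footage).
    Returns the image with a linearly estimated green channel.
    The weights w_r and w_b are illustrative assumptions."""
    out = fg_rb.copy()
    out[..., 1] = np.clip(w_r * fg_rb[..., 0] + w_b * fg_rb[..., 2], 0.0, 1.0)
    return out
```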
The team trained a machine learning model on their own data: basically, "rehearsal" footage of similar scenes lit normally. Given patches of the full-spectrum image to compare with the magenta-lit ones, the convolutional neural network learns to reconstruct the missing green channel quickly and far more cleverly than a basic algorithm can.
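To illustrate the idea of fitting the reconstruction to training data, here is a toy stand-in for the paper's CNN (which we do not reproduce): a least-squares fit from (R, B) to G on normally lit "rehearsal" samples, then applied to footage with a missing green channel. The real model works on image patches with a convolutional network; this only captures the learn-from-rehearsal concept.

```python
import numpy as np

def fit_green_model(rehearsal: np.ndarray) -> np.ndarray:
    """rehearsal: (N, 3) array of full-spectrum RGB samples.
    Returns weights w such that G is approximated by [R, B, 1] @ w."""
    X = np.column_stack([rehearsal[:, 0], rehearsal[:, 2],
                         np.ones(len(rehearsal))])
    w, *_ = np.linalg.lstsq(X, rehearsal[:, 1], rcond=None)
    return w

def apply_green_model(fg_rb: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Fill in the green channel of an image whose G channel is missing,
    using the weights learned from rehearsal footage."""
    out = fg_rb.copy()
    X = np.stack([fg_rb[..., 0], fg_rb[..., 2],
                  np.ones(fg_rb.shape[:2])], axis=-1)
    out[..., 1] = np.clip(X @ w, 0.0, 1.0)
    return out
```

A per-pixel linear model like this is only a small step up from the naive approach; the CNN's advantage is that it can use spatial context from whole patches.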
A simple approach delivers subpar results (top), while a more complex ML model generates colors extremely close to the real thing.
The result is that color can be restored quite effectively in post-production (it's "virtually indistinguishable" from an in-camera ground truth), but the actors and set still have to be lit in that awful way. Imagine performing bathed in bright, inhuman light; actors already complain about how strange working in front of a green screen is.
However, the paper offers an alternative: "time-multiplexing" the illumination, which means rapidly switching the magenta and green lighting on and off in turn. Flashed at the framerate most films and TV shows are shot at, this would be irritating and potentially dangerous, but if the light is cycled more frequently, at 144 times per second, it appears "nearly constant."
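Some back-of-the-envelope arithmetic shows why this works (our numbers, apart from the 144 Hz figure the paper mentions; the 24 fps framerate and the even magenta/green split are assumptions for illustration):

```python
# If the lights alternate 144 times per second and the production shoots at
# 24 fps, each film frame spans several full light cycles, so the flicker
# averages out to the eye. The camera shutter, however, must open only
# during the brief magenta-lit slices.

LIGHT_HZ = 144   # light alternation rate mentioned in the paper
FILM_FPS = 24    # typical cinema framerate (assumption)

cycles_per_frame = LIGHT_HZ / FILM_FPS       # light cycles per film frame
magenta_window_ms = 1000 / LIGHT_HZ / 2      # magenta-only slice per cycle,
                                             # assuming an even 50/50 split

print(cycles_per_frame)    # 6.0
print(magenta_window_ms)   # ~3.47 ms
```

Those few-millisecond windows are why the synchronization described next is so demanding.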
Doing so, however, requires tricky synchronization with the camera, which must record light only during the fleeting intervals when the scene is lit magenta. They also have to account for motion across the skipped frames.
This is all still quite experimental, as you can tell. But it's an intriguing application of cutting-edge technology to a persistent problem in media production. It wouldn't have been feasible five years ago, and whether or not it ends up being used on set, it's definitely worth exploring.