Whether in burn-in benchmarks like FurMark or games like The Witcher 3, hair has long been one of the ways developers show off an application's visual fidelity. Fur, fluff, and luscious locks are extremely costly to render well, because so many individual strands must be handled with their own physics and transparencies. Hair is often prebaked, but we've seen more natural-looking hairpieces in recent years, and that trend is set to continue with the next generation of graphics cards. Nvidia has been working with some intriguing developers on hair that's generated and given real volume by AI.
This is a new development in Nvidia's "HairWorks" technology, which has continually raised the bar for hair believability in games over the past few years. It involves a neural network that researchers trained to render hair correctly, and, crucially, it does so with much less graphical power than traditional hair rendering techniques.
The neural network creates tens of thousands of 2D images of the hair and links them together into a 3D representation, all in milliseconds. It can then analyze video footage and render the hair to react accordingly. In effect, it isn't creating a true 3D head of hair, but thousands upon thousands of highly detailed sprites.
The system doesn't work perfectly, nor with every hair type, but it's an interesting progression of the technology that should lead to far greater hair fidelity in the future -- if developers support it. We also need graphics card companies to support it, but considering Nvidia is likely to include some of its dedicated AI cores in its next-gen GPUs, expect this sort of technology to become much more commonplace next generation. Thanks, PCGamesN.