In PC parlance, when we talk about graphics cards we think of what Nvidia and AMD sell: add-on boards you slot into a desktop that accelerate the graphical rendering of games. Today, Nvidia ships a lineup of chips capable of real-time ray tracing.
But in recent times, these GPU makers have also moved into what's more generally known as accelerated computing. General-purpose CPUs can do a lot of things reasonably fast, but GPUs are designed to do certain things in a massively parallel way. That's why they can render graphics quickly when CPUs can't, and it's also why machine learning training and inference run fast on GPUs and slowly on general-purpose CPUs.
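To make that concrete, here's a minimal sketch of the difference: the same big matrix multiply (the bread-and-butter operation of both graphics and ML) dispatched to the CPU and then to the GPU. This assumes PyTorch and a CUDA-capable card are available; neither is mentioned in the original, it's just one convenient way to show the gap.

```python
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# CPU path: a handful of general-purpose cores share the work.
t0 = time.time()
_ = a @ b
cpu_s = time.time() - t0

if torch.cuda.is_available():
    # GPU path: thousands of small cores chew through the tiles in parallel.
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    t0 = time.time()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()  # wait for the asynchronous kernel to finish
    gpu_s = time.time() - t0
    print(f"CPU: {cpu_s:.3f}s, GPU: {gpu_s:.3f}s")
```

On typical hardware the GPU column comes out one or two orders of magnitude faster, which is the whole pitch of accelerated computing.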
That same ray-tracing Nvidia lineup also carries dedicated "tensor cores," which accelerate machine learning workloads.
And what I'm saying is, the two are actually becoming the same thing: the workload of rendering realistic or desirable graphics quickly, and the ability to run complex machine learning models.
The most recent example is smartphone cameras, or specifically Google's HDR magic, which lets its Pixel phones take great photos with fewer lenses and smaller sensors. More recently, Google announced its low-light Night Sight system, which allows ordinary phone cameras from two years ago (currently the APK only runs on Pixel phones) to take remarkable low-light photos, rendering scenes clearly with little enough noise and smearing that everything stays visible. It's all done via machine learning prediction over a series of photographs recomposited through an HDR-like process.
In fact, HDR is already a little like machine learning, just hand-written algorithms instead of learned ones. Augment it with a neural network and voila.
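Here's a toy sketch of the burst-merge idea behind that HDR-like process: average several aligned short exposures so random sensor noise cancels out (noise drops roughly with the square root of the frame count). The real pipeline aligns, weights, and tone-maps far more carefully than this, and the learned model at the end is a stand-in (`denoise_model` is hypothetical, not anything Google has published as an API).

```python
import numpy as np

def merge_burst(frames):
    """frames: list of HxWx3 float arrays of the same scene, already aligned."""
    stack = np.stack(frames, axis=0)
    merged = stack.mean(axis=0)        # random noise averages out across the burst
    return np.clip(merged, 0.0, 1.0)

# A neural network would slot in after the merge, predicting a clean,
# well-exposed image from the still-dark composite:
# clean = denoise_model(merge_burst(frames))   # hypothetical learned model
```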
It's eye-opening, because imagine if the compute power were there to take this low-light tech into real time: video with computational night vision? Never needing a flash again? It's pretty nuts. You could even build in protection against drastic contrast/level changes. That kind of augmented vision obviously has a ton of uses in everyday life and in filming, but just as obviously military applications too.
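For what that "protection against drastic contrast/level changes" could look like in a real-time pipeline, one simple approach is to clamp how far the applied gain can move from one video frame to the next. This is purely illustrative; the function names and the 10%-per-frame limit are my own assumptions, not anyone's shipping implementation.

```python
def smooth_gain(previous_gain, target_gain, max_step=0.10):
    """Limit frame-to-frame gain swings so the image never jumps in brightness."""
    step = target_gain - previous_gain
    step = max(-max_step, min(max_step, step))
    return previous_gain + step

gain = 1.0
for target in [1.0, 4.0, 0.5, 0.6]:   # e.g. a bright light suddenly enters the frame
    gain = smooth_gain(gain, target)
    # apply `gain` to the current frame here; brightness ramps instead of flashing
```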
For now, it's software magic at work. But ultimately these breakthroughs are coupled to improvements in ML hardware as well, which means real-time versions could become feasible as Nvidia and AMD keep bringing out more powerful ML and video hardware. I wonder whether we'll still have separate components (whether as discrete chips or parts of an SoC) to handle video rendering and AI workloads, and whether things like Google's HDR voodoo will lead us down yet another path of customized compute hardware.