Machine Learning Articles of the Week: Oculus Rift Occluded Face Reconstruction, Low-Precision Deep Neural Networks, Numerically Precise Floating Point Code Synthesis, and Learned Terrain Traversal for CGI
I’ve been catching up with some of the SIGGRAPH entries this year, and there are quite a few that are simple but effective applications of machine learning to graphics problems. I suspect this is the new trend in graphics papers, but it’s refreshing to see interesting applications of machine learning that aren’t multilayer deep learning architectures with Bayesian hyperparameter optimization and custom gradient descent algorithms.
Why are Eight Bits Enough for Deep Neural Networks?
Neural networks are often implemented in double precision (or more) due to concerns about numerical stability and correctness, but that leaves performance on the table. Consider an architecture with 8-bit weights instead of 64-bit. While on a standard x86 CPU the raw arithmetic throughput may not change much, you can see big savings from reduced memory traffic and increased cache locality. Pete Warden explores this and more in the context of deep learning architectures.
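To make the idea concrete, here's a minimal sketch (my own, not from the article) of linear 8-bit quantization: map each float weight onto 256 evenly spaced levels between its minimum and maximum, storing one byte per weight instead of eight.

```python
import numpy as np

def quantize_8bit(weights):
    """Map float weights onto 256 evenly spaced levels between min and max."""
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / 255.0
    q = np.round((weights - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize_8bit(q, lo, scale):
    """Recover approximate float weights from the 8-bit codes."""
    return q.astype(np.float64) * scale + lo

w = np.random.randn(1000)           # 8000 bytes as float64
q, lo, scale = quantize_8bit(w)     # 1000 bytes as uint8
w_hat = dequantize_8bit(q, lo, scale)
# reconstruction error is bounded by half a quantization step (scale / 2)
```

The 8x memory reduction is exactly where the cache-locality and bandwidth wins come from; real deployments typically quantize per-layer or per-channel rather than over the whole weight array.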
Facial Performance Sensing
Virtual reality environments would be much more intimate if you could see other people’s facial expressions in real time. While a 3D map of the face is not hard to capture with proper rigging, wearing an Oculus Rift or other head-mounted display blocks sensors from capturing facial expressions. The authors merge strain gauge and depth data with linear regression to estimate the parts of a user’s facial map occluded by an Oculus Rift.
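The regression step can be sketched like this: learn a linear map from sensor readings to facial expression parameters via least squares. The dimensions and synthetic data below are made up for illustration; the paper's actual pipeline (sensor calibration, blendshape model, depth fusion) is considerably more involved.

```python
import numpy as np

# Hypothetical training data: per-frame strain-gauge readings -> expression weights
rng = np.random.default_rng(0)
n_frames, n_gauges, n_shapes = 500, 8, 20
X = rng.normal(size=(n_frames, n_gauges))        # sensor readings per frame
true_W = rng.normal(size=(n_gauges, n_shapes))   # unknown ground-truth mapping
Y = X @ true_W + 0.01 * rng.normal(size=(n_frames, n_shapes))  # noisy targets

# Least-squares fit: one linear model per expression coefficient
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# At runtime, a new frame of sensor readings maps directly to an expression estimate
y_hat = X[:1] @ W
```

The appeal of plain linear regression here is speed: inference is a single matrix multiply, which easily fits the real-time budget of a VR frame.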
Synthesis for Floating-Point Expressions
Compile efficient floating point code from real-numbered math written in a lisp-like language. This is a pretty exciting research direction where numerical solvers may be optimized for both performance and numerical stability. I’m hoping this turns into the “Stochastic Superoptimization” of floating point math.
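To see why synthesizing floating point code from real-numbered math matters, here's a small illustration (my own example, not from the paper): two algebraically identical expressions with very different floating-point behavior.

```python
import math

# sqrt(x + 1) - sqrt(x) and 1 / (sqrt(x + 1) + sqrt(x)) are equal over the reals,
# but the first suffers catastrophic cancellation for large x.
x = 1e17
naive = math.sqrt(x + 1) - math.sqrt(x)
# In double precision, x + 1 rounds to x here, so the subtraction is exactly 0.0.
rewritten = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))
# The rewritten form stays accurate (about 1.58e-9 for this x).
```

A synthesizer working from the real-number specification can pick the second form automatically, which is exactly the kind of transformation a human numerical analyst would otherwise have to apply by hand.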
Dynamic Terrain Traversal Skills Using Reinforcement Learning
Animating characters is hard. What if you could train a model to learn how to animate? This paper looks into this by using reinforcement learning to train both a dog and a biped to navigate terrain by alternating between running and jumping.
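As a toy stand-in for the idea (not the paper's method, which learns continuous physics-based control), here's tabular learning over a 1-D terrain where each cell rewards either "run" or "jump":

```python
import random

# Toy setup: each terrain cell is flat (run succeeds) or a gap (jump succeeds).
random.seed(0)
terrain = [random.choice(["flat", "gap"]) for _ in range(50)]
actions = ["run", "jump"]
Q = {(s, a): 0.0 for s in ["flat", "gap"] for a in actions}
alpha, epsilon = 0.1, 0.1  # learning rate and exploration rate

for episode in range(200):
    for cell in terrain:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(cell, act)])
        # reward +1 for the matching skill, -1 for a stumble
        r = 1.0 if (cell, a) in (("flat", "run"), ("gap", "jump")) else -1.0
        Q[(cell, a)] += alpha * (r - Q[(cell, a)])  # one-step bandit-style update

# The learned policy alternates skills with the terrain
policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in ["flat", "gap"]}
```

The paper's contribution is doing this with articulated characters and physics, where the "skills" are full motor controllers rather than discrete labels, but the reward-driven alternation between skills is the same idea.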