From Pixar to GovTech: The inside story of Singapore’s AI whizz

After nearly a decade at Pixar, GovTech’s Chong Jiayi is leading a team of experts to solve hard problems in robotics and artificial intelligence

Human artists may be more creative than machines, but when it comes to animating characters and objects at scale in a feature-length film, computer algorithms still rule the day.

That’s why animation studios and visual effects powerhouses such as Pixar and Industrial Light & Magic employ teams of computer scientists and physicists to create programs that do the heavy lifting, such as simulating the treads of wheels and the creases of human skin, for tens of thousands of frames.

Chong Jiayi, a distinguished engineer at Singapore’s Government Technology Agency (GovTech), spent much of his career doing just that. At Pixar, where he was technical director, Chong developed an effects simulation system that was eventually used in all the company’s productions.

Specifically, the system was deployed to animate the movement of human muscles and robots in Wall-E, among other advanced visual effects, using a numerical technique called Finite Element Method, which is also used by aerospace companies to perform stress testing of aircraft wings.
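The finite element method the article refers to can be shown in miniature. The sketch below — purely illustrative, and in no way Pixar's code — assembles the stiffness matrix for a one-dimensional elastic bar split into two elements (essentially two springs in series), fixes one end, pulls on the other, and solves for how far each node displaces. All stiffness and force values are made up for the example.

```python
def assemble_stiffness(n_elems, k):
    """Assemble the global stiffness matrix for a chain of identical
    1D elements, each with stiffness k (like springs in series)."""
    n = n_elems + 1  # number of nodes
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):
        # Each element contributes a 2x2 block [[k, -k], [-k, k]]
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    return K

def solve(K, f):
    """Gaussian elimination with partial pivoting for K u = f."""
    n = len(K)
    A = [row[:] + [f[i]] for i, row in enumerate(K)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= m * A[col][c]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        u[r] = (A[r][n] - sum(A[r][c] * u[c]
                              for c in range(r + 1, n))) / A[r][r]
    return u

# Fix node 0 (the boundary condition) by dropping its row and column,
# then apply a unit pull at the free end.
K = assemble_stiffness(2, k=100.0)
K_free = [row[1:] for row in K[1:]]
u = solve(K_free, [0.0, 1.0])
print(u)  # displacements of the two free nodes: [0.01, 0.02]
```

The same assemble-then-solve pattern scales to millions of elements in three dimensions, which is why it serves equally for aircraft wings and animated muscle.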

The animations created by Chong’s system appeared as lifelike as the work of veteran animators in a comparison test initiated by Pixar’s leadership at the time. But the apex of Chong’s work came with Brave, the 2012 animated fantasy film.

On the virtual set, he cracked the challenge of animating skin sliding, where human skin slides over the bone as it is pulled. He presented his work at Siggraph, the leading global conference on computer graphics, and was eventually awarded two patents, one of which was for simulating the flow of the river in the same movie.

“The director at the time wanted to simulate a realistic river with two characters splashing, along with their interactions with cloth,” Chong says. “It’s a difficult problem called two-way coupling where you have two different kinds of material interacting with one another. We wrote a fully distributed, parallelised computational fluid dynamics [CFD] solver to compute the physics between them efficiently.”
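The two-way coupling Chong describes can be reduced to a toy: two materials — here a fluid parcel and a cloth node, each boiled down to a single degree of freedom — exchange equal and opposite forces every time step, so each one’s motion feeds back into the other. The masses, stiffness and time step below are invented for illustration; a production solver couples millions of such degrees of freedom.

```python
dt = 0.01
x_fluid, v_fluid = 1.0, 0.0   # fluid parcel position / velocity
x_cloth, v_cloth = 0.0, 0.0   # cloth node position / velocity
k_couple = 50.0               # coupling stiffness between the two

for step in range(1000):
    # Force each material exerts on the other (equal and opposite)
    f = k_couple * (x_fluid - x_cloth)
    # Fluid is pulled back by the cloth; cloth is pushed by the fluid
    v_fluid += dt * (-f) / 1.0    # fluid parcel mass = 1.0
    v_cloth += dt * (+f) / 0.5    # cloth node mass = 0.5
    x_fluid += dt * v_fluid
    x_cloth += dt * v_cloth

# Because the coupling forces are internal, total momentum stays ~0
print(1.0 * v_fluid + 0.5 * v_cloth)
```

The hard part at film scale is not this loop but keeping the exchange stable and distributing it across a farm — which is what made Pixar’s parallelised CFD solver necessary.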

The computing demands of the CFD simulation were massive. Chong’s team needed a simulation farm running round the clock, with each frame taking as long as three hours to compute. “We were solving really hard physics problems on Google-like scale – and with a hard deadline too.”

But after Steve Jobs – Pixar’s former CEO who took a seat on Disney’s board after the company was acquired by the entertainment giant – passed away in 2011, the firm became “more corporate”, Chong says, prompting his peers, many of whom were top computer scientists, to part ways with Pixar for Silicon Valley bigwigs such as Apple, Facebook and Google.

Intrigued by deep learning

Around that time, Chong was introduced to deep learning by a friend who led the machine learning work that went into Face ID, Apple’s facial recognition system. After reviewing lecture videos by Stanford University on convolutional neural networks, Chong became intrigued.

“It’s easy for us to pick up deep learning because the foundation is the same,” Chong says. “The math is the same, so it’s really simple for us to go into things like robotics and graphics. At the time, I started a company to develop a 2D animation tool and used a lot of machine learning to automate animation.”

Then, the CEO of DeepMotion – a startup that specialises in applying deep reinforcement learning to graphics and computer animation – came knocking.

Lured by the prospect of working for a startup in the pre-seed funding stage, he joined DeepMotion as technical director while continuing to build up his own company, eventually partnering with DeepMotion to publish his 2D animation tool.

But two years into the job, he had to return to Singapore urgently for family reasons. Around that time, he met Chan Cheow Hoe, the Singapore government’s chief digital technology officer, at a GovTech conference in San Francisco, and Chan convinced him to take up his current role.

Robotic advances

Today, Chong leads a team charged with advancing the use of robots in environments they may be encountering for the first time. “The challenge is to get robots to function properly, safely, robustly and at scale in extremely unstructured environments, because that’s where they are most useful,” he says.

To tackle the challenge, Chong’s team is developing a robotic technology stack to power a robot that can climb stairs and traverse difficult terrain. The robot is also fitted with Beyond Visual Line of Sight (BVLOS) capabilities usually found in drones, enabling it to be piloted remotely. “It’s now robust enough and works over 4G and 5G networks,” he adds.

Another aspect of GovTech’s robotic technology stack is simultaneous localisation and mapping (Slam), which helps robots navigate unfamiliar areas by reconstructing the 3D environment around them.

While the problem is “mostly solved”, thanks to the work of other experts in the field, Chong’s team is working on what is known as Visual Slam, which uses commodity cameras fitted on robots, instead of the more expensive and bulky Lidar laser-sensing technology, to map the environment.
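The estimation idea behind Slam can be sketched in one dimension: a robot estimates its own position and a landmark’s position at the same time, fusing odometry with noisy observations of the landmark through a Kalman filter. Real Visual Slam works in 3D from camera features, and this toy is not GovTech’s stack — every number below is invented, and for simplicity the simulated odometry is exact even though the filter budgets for motion noise.

```python
import random
random.seed(0)

# True (hidden) state of the world
true_robot, true_landmark = 0.0, 10.0

est = [0.0, 5.0]                 # estimate: [robot, landmark]
P = [[0.01, 0.0], [0.0, 100.0]]  # covariance: landmark very uncertain
Q, R = 0.01, 0.1                 # assumed motion / sensor noise variances

for _ in range(30):
    # Motion step: the robot moves +0.5 along the line
    true_robot += 0.5
    est[0] += 0.5
    P[0][0] += Q                 # robot uncertainty grows with motion

    # Observation step: noisy signed distance to the landmark
    z = (true_landmark - true_robot) + random.gauss(0, R ** 0.5)
    y = z - (est[1] - est[0])    # innovation; model h = l - x, H = [-1, 1]
    S = P[0][0] - P[0][1] - P[1][0] + P[1][1] + R
    K = [(P[0][1] - P[0][0]) / S, (P[1][1] - P[1][0]) / S]
    est[0] += K[0] * y
    est[1] += K[1] * y
    # Covariance update: P = (I - K H) P
    IKH = [[1 + K[0], -K[0]], [K[1], 1 - K[1]]]
    P = [[sum(IKH[i][k] * P[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]

print(est[1])  # landmark estimate converges close to the true 10.0
```

Visual Slam does the same joint estimation with thousands of landmarks extracted from camera images, which is what lets commodity cameras stand in for Lidar.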

Chong says the technology will let government agencies deploy four-legged robots in areas hazardous to people. These could be structurally unsound buildings or places with toxic gases. “The key is we can now do 3D reconstructions of places that drones can’t reach – and at scale from a cost and technical perspective,” he says.
