For the past several years at the Consumer Electronics Show, six-year-old AI startup Eyeris has rented a hotel suite to explain how the car of the future will work. This year, the Palo Alto-based company parked a Tesla Model S, emblazoned with the company logo, in front of the Las Vegas Westgate to take visitors inside the future of car awareness, if you will.
“Today, it’s just following your eyes,” says Modar Alaoui, founder and CEO of the startup, explaining from the back seat how today’s “driver monitoring system,” or “DMS,” functions. Cameras in the dash are observing eye patterns to detect if you’re drowsy, so they can prompt you to pay attention. He tilts his head back in his seat to reveal the weak spot. “They may know that you’re dozing off, or they may just lose track of you, they’re not very good,” he says of DMS.
“We’re going to see the entire body posture of the driver, and track their movements.” What’s more, the future goes beyond DMS: it blankets the cabin in video tracking. “The future is In-vehicle Scene Understanding,” he says, referring to what’s known as “ISU,” an emerging field of machine learning in computer vision for cars that he hopes to profit from in a big way.
Eyeris has initial agreements with a couple of car makers, though Alaoui can’t reveal which, to use its software and chips, packaged into cameras, to gain much greater insight into what’s happening in the cabin than the driver’s eyes alone can offer.
Using a plethora of neural networks, including the popular “ResNet-50” convolutional neural network, up to six cameras in the cabin will see the entire body posture of the driver as well as the bodies of passengers, the surfaces of seats and cabin walls, even the shape of the coffee cup a back-seat passenger holds.
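As a rough illustration of the multi-camera pattern described here (this is not Eyeris's actual pipeline; the camera names and the stub classifier standing in for a real ResNet-50 are hypothetical), frames from each in-cabin feed might be collected and batched for a single inference pass:

```python
# Hypothetical sketch: grab one frame per in-cabin camera, then run a
# single batched CNN inference call over all feeds. The classifier is a
# stub standing in for a real ResNet-50; frames are faked as nested lists.

CAMERAS = ["a_pillar_left", "a_pillar_right", "rearview_mirror",
           "steering_wheel", "b_pillar_left", "b_pillar_right"]

def grab_frame(camera_name, h=4, w=4):
    """Stand-in for a camera read: returns an h x w grayscale frame."""
    return [[0.0] * w for _ in range(h)]

def stub_classifier(batch):
    """Stand-in for ResNet-50 inference: one label per frame in the batch."""
    return ["person" for _ in batch]

def infer_cabin():
    # Batching all six feeds into one call amortizes per-inference overhead.
    batch = [grab_frame(name) for name in CAMERAS]
    labels = stub_classifier(batch)
    return dict(zip(CAMERAS, labels))

results = infer_cabin()
```

The camera placement mirrors the demo car's layout: three across the front, one on the steering wheel, and two on the B-pillars.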
Three cameras are mounted along the front of the cabin, on the posts on either side, and one above the rearview mirror, along with a camera seated on the steering wheel. Two more are on the posts between front and back seats, aimed at the passengers.
The feeds, in the case of this demo car, can be seen on a tablet mounted on the dashboard to the right of the steering wheel. The driver and passengers are mapped by lines and dots, like a constellation map of the night sky.
“Automobile makers want to know everything that’s happening in the cabin, not just how the driver is doing, but if you are holding a coffee cup,” he explains. ISU, he says, informs how the car should move on the road, beyond just the state of the driver. How hard the car should brake, for example, might be affected by whether back-seat passengers are holding coffee cups, to avoid spillage.
The result is a pile of analytics. Alaoui says his convolutional neural networks and other models can attach a class label to “every pixel in the vehicle,” such as a person, “along with their corresponding regions and contours for greater interior scene understanding accuracy.” He calls the technology “Interior Image Segmentation.” The study of people, in particular, is supposed to take into account behavior and facial expressions that reveal the emotion of the driver and passengers. Every surface of the car, including the footwells, door panels, and center stack, can be observed by the cameras.
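To make "a class label for every pixel" concrete, here is a toy sketch (not Eyeris's code; the label map and class names are invented) showing the kind of post-processing a per-pixel segmentation output enables, grouping each class's pixels into a region with a bounding box:

```python
# Toy sketch: given a per-pixel class-label map, as a segmentation model
# might emit, compute each class's bounding box -- a simple stand-in for
# the "regions and contours" step described in the article.

def regions_from_labels(label_map):
    """Map each class label to its bounding box (min_row, min_col, max_row, max_col)."""
    boxes = {}
    for r, row in enumerate(label_map):
        for c, label in enumerate(row):
            if label not in boxes:
                boxes[label] = [r, c, r, c]
            else:
                b = boxes[label]
                b[0], b[1] = min(b[0], r), min(b[1], c)
                b[2], b[3] = max(b[2], r), max(b[3], c)
    return {k: tuple(v) for k, v in boxes.items()}

# A 4x5 label map: "seat" background with a "person" blob in the middle.
label_map = [
    ["seat", "seat",   "seat",   "seat",   "seat"],
    ["seat", "person", "person", "seat",   "seat"],
    ["seat", "person", "person", "person", "seat"],
    ["seat", "seat",   "person", "seat",   "seat"],
]
regions = regions_from_labels(label_map)
```

A real system would work on camera-resolution label maps with many more classes, but the principle is the same: the per-pixel labels are what make downstream region and contour reasoning possible.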
To test the system, Eyeris has had 3,186 subjects riding in test settings, gathering 10 million images from the cabin to train the neural nets. “This is the largest database of information on the patterns in the cabin interior in the world,” he states proudly. There’s no independent benchmark for that figure, but the years of development work convince Alaoui, 38, that his data set is without equal. Planned future studies will include many more subjects and will grow the corpus into the billions of images.
The approach was originally software-only, but Eyeris has now developed its own semiconductor, an application-specific integrated circuit, or ASIC, specially designed to run deep learning models. The chip is manufactured in a 10-nanometer process technology and has been qualified for the auto industry’s “AEC-Q100” standard. It can achieve 10 trillion operations per second, or 10 TOPS, within a thermal envelope of 7 watts, which is fairly competitive with other chips for machine learning. Asked about chip startups dedicated to machine learning, such as Efinix, Cornami, and Flex Logix, Alaoui rolls his eyes. “Yes, yes, I know about all of them, we’ve seen fifty of these companies; none of them are shipping today,” he says, which forced Eyeris to come up with its own circuitry. And none, so far, are certified for the automotive industry like Eyeris’s part.
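The two quoted specs imply an efficiency figure that is easy to check: 10 TOPS inside a 7-watt envelope works out to roughly 1.4 TOPS per watt.

```python
# Efficiency implied by the quoted specs: tera-operations per second per watt.
tops = 10.0   # 10 trillion operations per second, as quoted
watts = 7.0   # thermal envelope, as quoted

tops_per_watt = tops / watts
print(round(tops_per_watt, 2))  # → 1.43
```

TOPS per watt is the usual yardstick for comparing inference accelerators, since it normalizes raw throughput against the power budget a car's thermal design can actually supply.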
The camera systems, Alaoui expects, will “be accessible to even entry-level vehicles.” Eyeris plans to sell its product within the budget of current DMS systems, providing far greater functionality for the same money. Pricing, and the cost to the consumer, will rise with the number of cameras installed in the car and the complexity of the monitoring functions.
Alaoui has been funding the operation with bootstrapped cash and with income from running proof-of-concept studies for prospective clients. He expects the company may take on professional investment sometime in the near future.
Article source: https://www.zdnet.com/article/ces-2019-ai-startup-eyeris-will-know-your-car-from-the-inside-out/#ftag=RSSbaffb68