Your next car may be the most powerful computer you own
Beyond the backup camera: bird's-eye view and more
Most of us are familiar with the soon-to-be-mandatory backup cameras that provide an invaluable aid for getting in and out of parking spaces and driveways. But high-end vehicles now feature quite a few more cameras: in front to scout traffic and detect lane lines, on the sides to help avoid other vehicles, and all around to provide a very cool “bird's-eye” view of the car and its surroundings.
While the sensors in these cameras are typically fairly standard, the systems require a lot of innovation in image processing to make them effective. For example, Infiniti's simulated 360-degree bird's-eye view stitches together images from four ultra-wide-angle cameras (on the side mirrors, grille, and license-plate holder) and then corrects the substantial distortion to provide a more-or-less natural-looking view of the car from above. This makes it much easier to park accurately and to maneuver in tight spaces, as Infiniti's promotional video demonstrates.
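For the curious, here's a minimal sketch (in Python, using OpenCV) of how a surround-view pipeline of this kind works: undistort each ultra-wide-angle view, re-project it onto the ground plane, and composite the four results into one top-down image. The calibration values, point correspondences, and canvas layout below are hypothetical placeholders, not Infiniti's actual implementation.

```python
# Sketch of a bird's-eye ("surround view") pipeline for four fisheye cameras.
# K (intrinsics), D (distortion) and the ground-plane correspondences are
# placeholders -- real systems obtain them from a factory calibration step.
import cv2
import numpy as np

def undistort_fisheye(frame, K, D):
    """Remove the heavy barrel distortion of an ultra-wide-angle lens."""
    h, w = frame.shape[:2]
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)

def to_top_down(frame, src_pts, dst_pts, out_size=(400, 400)):
    """Re-project the undistorted view onto the ground plane (seen from above)."""
    H = cv2.getPerspectiveTransform(src_pts, dst_pts)  # 4 point pairs, float32
    return cv2.warpPerspective(frame, H, out_size)

def stitch_surround(views):
    """Naive composite: paste the four 400x400 top-down views onto one canvas.
    Production systems blend the overlapping seams instead of pasting."""
    canvas = np.zeros((800, 800, 3), dtype=np.uint8)
    canvas[0:400, 200:600] = views["front"]
    canvas[400:800, 200:600] = views["rear"]
    canvas[200:600, 0:400] = np.maximum(canvas[200:600, 0:400], views["left"])
    canvas[200:600, 400:800] = np.maximum(canvas[200:600, 400:800], views["right"])
    return canvas
```

The hard part in practice isn't the warping itself but the calibration and seam blending, which is where the vendors differentiate themselves.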
Even the now-commonplace backup cameras are getting upgraded, thanks to the availability of more computing power and an assist from machine-learning software. Many of the image processors coupled to these cameras are now augmented with object recognition to help prevent collisions with pedestrians, and some are integrated with rear cross-traffic sensors as well. Silicon vendors have been racing to outdo each other with auto-specific application processors and architectures, a race that is on full display at any Embedded Vision Alliance event.
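To give a flavor of what that object recognition looks like in code, here's a rough sketch using OpenCV's stock HOG pedestrian detector on a video feed. A production backup camera would use a far more capable, hardware-accelerated neural-network detector; the camera source and alert logic below are purely illustrative.

```python
# Rough sketch: run a pedestrian detector over a rear-camera video feed and
# raise a warning when anyone appears in frame. Illustrative only.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # stand-in for the rear camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect pedestrians; each box is an (x, y, w, h) rectangle.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    if len(boxes) > 0:
        print("Pedestrian in path -- warn driver")
    cv2.imshow("rear camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```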
Mobileye, the leading provider of both after-market and OEM camera-based vehicle safety systems, builds its 500-series add-on safety cameras around a custom vision-processing chip, the EyeQ2, and its accompanying SeeQ2 board. Interestingly, the system's image sensor is only VGA resolution, but it offers very high dynamic range to allow operation in tricky lighting conditions. In parallel, Intel has just snapped up vision-chip startup Movidius, with automotive expected to be a key market for its high-performance, low-power Myriad family of chips.
The after-market Mobileye systems only warn the driver, and aim to provide at least two seconds of advance notice of a potential accident. Cameras tied into automated safety systems must also accurately estimate object distances, in addition to position and motion, since they directly control braking and possibly other car functions. This is often accomplished by aligning the images from multiple cameras and using software to compute the depth of objects from the disparity between where they appear in each camera's image. However, that approach is far from foolproof, so most of the time the vision data is fused in real time with data from at least one radar or lidar to achieve better results in a wider variety of conditions.
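Here's a bare-bones sketch of that depth-from-disparity calculation using OpenCV's block-matching stereo. The focal length and camera baseline are made-up placeholders that a real system would take from calibration before fusing the result with radar or lidar returns.

```python
# Depth from a two-camera rig: how far a point shifts between the left and
# right images (its disparity) tells you how far away it is.
import cv2
import numpy as np

FOCAL_PX = 700.0    # focal length in pixels (hypothetical)
BASELINE_M = 0.30   # distance between the two cameras, in meters (hypothetical)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo; OpenCV returns disparity in fixed-point (scaled by 16).
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Classic pinhole relation: depth = focal_length * baseline / disparity.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]

print("Nearest object is roughly %.1f m away" % depth_m[valid].min())
```

The weakness is visible right in the math: small disparities (distant or low-texture objects) produce large, noisy depth estimates, which is exactly where radar and lidar help.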
Cameras for style and fun

Radar and lidar used to augment machine vision
While cameras are currently the only way to perform certain important functions like tracking lane lines, for other tasks like collision avoidance, they aren’t always the best solution. They can be fooled by some high-contrast scenes (which may be what happened in the now infamous Florida Tesla crash), can’t always estimate the distance to other vehicles or objects accurately, and don’t do well in poor weather. For that reason, almost all autonomous vehicle projects also feature one or more non-visual ways to “see” objects in the world around them — typically either radar or lidar.
Tesla has just shifted its primary sensing system from its Mobileye-designed cameras to its in-vehicle radar, after a heavy investment in advanced signal processing to help keep the radar from getting confused by metallic objects and other edge cases. Many current “self-driving” car projects, including Google’s, rely on lidar, which is harder to fool, but so far is still larger and more expensive than radar or cameras. Velodyne, the leading maker of automotive lidar, expects prices to continue to fall, though. So expect to see at least some use of radar, and eventually lidar, in nearly every new car in a few years.
An exception to the typical use of radar or lidar for autonomous vehicles is Nvidia's DAVE-2, which essentially taught itself to drive: a neural network trained in the cloud on nothing but camera footage from real cars and the accompanying time-synced steering data. While its goals, so far at least, are much more limited and research-oriented than those of the car companies, it's impressive that it can drive correctly on a variety of roads after just a few months of learning, using only vision input.
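Conceptually, the training loop looks something like the toy sketch below: a convolutional network takes a camera frame and regresses the steering angle the human driver used at that moment. The layer sizes, dataset, and hyperparameters here are illustrative only, not Nvidia's published DAVE-2 architecture.

```python
# Toy end-to-end steering model: camera frame in, steering angle out,
# trained against recorded human steering. Illustrative sizes only.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 50), nn.ReLU(), nn.Linear(50, 1)
        )

    def forward(self, frames):                    # frames: (batch, 3, H, W)
        return self.head(self.features(frames))   # predicted steering angle

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One training step on a (camera frame, recorded steering angle) batch.
frames = torch.randn(8, 3, 66, 200)   # stand-in for dash-camera images
angles = torch.randn(8, 1)            # time-synced human steering angles
loss = loss_fn(model(frames), angles)
optimizer.zero_grad()
loss.backward()
loss_value = loss.item()
optimizer.step()
```

The appeal of this approach is that nobody hand-codes rules about lane lines or obstacles; the network infers what matters from the correlation between what the camera saw and what the driver did.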
A supercomputer in your trunk

The low-end CPU that controls your engine or your entertainment system isn't up to the task of automatically navigating your car through traffic. The result has been innovation in what are essentially portable supercomputers. Nvidia's Drive PX 2 is showing up at the high end in fully autonomous test vehicles (others, like the Google cars, have several traditional computers crammed into their trunks). Nvidia has now released a compact, low-power version of Drive PX 2 for basic automated safety functions, while its larger siblings are designed for more complex autonomous applications.
When most of us think of computing in our car, we think of the infotainment system, which has itself become quite a technology hotbed. But increasingly, the real computing power in your car will be used for the AI-assisted vision and spatial-sensing systems that help you drive, or help the car drive you. Just as the once-far-fetched idea of using your car battery to power your house has become a possibility with Tesla's battery systems, at some point you may find yourself running your high-end games on your car's GPU while it sits in the garage, streaming them to your TV.
We’re doing a special Rolling Update series this week on emerging car tech; stay tuned for more in-depth coverage as the week goes on.