February 27, 2020
| By: Bonnie C. Baker
Blogger, Maxim Integrated
Some years down the road, you may summon an autonomous vehicle to pick you up and drive you to work. While you're catching up on emails or perhaps taking a nap during the ride, you trust that the car's sensors and cameras will guide it safely and accurately to your destination.
To be effective, autonomous vehicle capabilities must exceed those of human drivers. How is this possible? How can these vehicles see in the light, in the dark, in poor weather conditions, and, further pushing the envelope, in darkness and inclement weather at once? Building a reliable self-driving capability requires designing a detection system that combines a variety of sensors and computing…a system that "sees" the vehicle's environment even better than humans can.
The key to this system's design lies in diversity: different sensor types, redundancy, and overlapping coverage that together increase the car's vision accuracy. The primary autonomous vehicle sensors are camera, radar, and LiDAR (light detection and ranging) (Figure 1). Through sensor-fusion electronics, these diverse sensors work together to provide the car with a visual map of its surroundings by detecting the shape, speed, and distance of objects in view. The fused result is greater than the sum of the individual sensors.
Figure 1. Autonomous vehicles require many sets of "eyes," courtesy of cameras, radar, and LiDAR sensors.
In Figure 1, cameras (blue) give the human occupant a feeling of confidence by "seeing" it all in color. Camera data adds unsurpassed visual detail to the autonomous vehicle's map. The radar signal operates effectively in poor weather, adding a dimension that humans cannot provide. LiDAR emits light in short bursts and times the reflections. An on-board processor combines all three data streams.
Does the Camera Lie?
Cameras create a visual representation of the world. Autonomous vehicles rely on cameras in the front, rear, left, and right so that a 360-degree view is possible. Some cameras have a wide field of view (120 degrees) for shorter ranges. Others focus on a narrow view, providing long-range visuals. Some cars have fish-eye cameras, providing a panoramic view that gives a full picture of what's behind the vehicle.
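The tradeoff between the wide and narrow cameras above is simple trigonometry. As a rough sketch (Python, not from the article; the specific ranges are illustrative assumptions), the width of the scene a camera covers at a given distance follows from its field of view:

```python
import math

def coverage_width(distance_m: float, fov_deg: float) -> float:
    """Width of the scene a camera covers at a given distance:
    w = 2 * d * tan(fov / 2)."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# A 120-degree wide-angle camera covers a broad swath at short range...
wide = coverage_width(10.0, 120.0)    # ~34.6 m across at 10 m
# ...while a narrow long-range camera sees a thinner slice far away.
narrow = coverage_width(100.0, 30.0)  # ~53.6 m across at 100 m
```

This is why wide-FOV cameras are reserved for short range: at long range their per-pixel resolution on any one object drops quickly.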
Though cameras provide accurate visuals, they have their restrictions. They distinguish details of the surrounding environment; however, a single camera cannot directly measure distance; calculating an exact location requires a second viewpoint. Each car could carry two cameras focused on the object of interest, much like a human's two eyes, but that may be redundant and ineffective given the other sensors at the fusion system's disposal. An additional and critical shortcoming is that it is difficult for camera-based sensors to detect objects in low-visibility conditions, such as rain, fog, or at night.
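The two-camera idea mentioned above is classic stereo triangulation. A minimal sketch, assuming a calibrated pair (the focal length, baseline, and disparity values here are hypothetical):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from a stereo camera pair: Z = f * B / d, where
    f is the focal length in pixels, B the camera separation, and
    d the horizontal pixel shift of the same point between images."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or unmatched")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 1000 px focal length, 30 cm baseline,
# 15 px disparity -> the object is 20 m away.
z = stereo_depth(1000.0, 0.3, 15.0)
```

The weakness is visible in the formula: as disparity shrinks toward zero for distant objects, small pixel errors cause large depth errors, which is one reason fusion with radar and LiDAR is preferred.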
Radar sensors supplement cameras in low visibility. Radar has been used for decades to track aircraft, weather formations, and ships. The technology transmits radio waves in pulses. Once the waves hit an object, they bounce back to the receiver, providing the object's speed and location.
Radar sensors also surround the car, detecting objects at every angle. While this sensor technology determines speed and distance, it can't distinguish between different vehicle types. Together, the radar and camera sensors provide data that is sufficient for lower levels of autonomy, but they don't cover all situations and fall short of the human experience. That's why we need LiDAR.
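Radar's speed measurement comes from the Doppler shift of the reflected wave. A hedged sketch (the 77 GHz carrier is a common automotive radar band, but the shift value is an illustrative assumption):

```python
C = 3.0e8  # speed of light, m/s

def radar_relative_speed(doppler_hz: float, carrier_hz: float = 77e9) -> float:
    """Relative speed of a target from the Doppler shift of the
    reflected radar signal: v = f_d * c / (2 * f0)."""
    return doppler_hz * C / (2.0 * carrier_hz)

# A 10 kHz Doppler shift at 77 GHz corresponds to ~19.5 m/s (~70 km/h).
v = radar_relative_speed(10e3)
```

Note that the formula yields speed directly from frequency, which is why radar measures velocity so robustly even when the camera image is degraded.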
Focus with LiDAR Sensors
LiDAR is a critical optical technology for autonomous vehicle distance sensing. LiDAR sensors help complete the picture. LiDAR performance for automotive applications is highly dependent on the optical front end as well as on how the signal is transmitted through the signal chain and then processed.
The LiDAR sensor measures distances by pulsing lasers and timing the return signal. LiDAR provides a 3D view for self-driving cars. The returning signal gives shape and depth to surrounding cars and pedestrians. LiDAR, like radar, works in low-light conditions.
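The distance calculation itself is the time-of-flight relation. A minimal sketch in Python (the echo time used is illustrative):

```python
C = 3.0e8  # speed of light, m/s

def lidar_distance(round_trip_s: float) -> float:
    """Distance from a LiDAR pulse's round-trip time: d = c * t / 2.
    The division by 2 accounts for the out-and-back light path."""
    return C * round_trip_s / 2.0

# An echo arriving ~667 ns after the pulse left corresponds to ~100 m.
d = lidar_distance(667e-9)
```

The nanosecond scale of these times is what drives the stringent bandwidth and timing requirements on the receiver electronics discussed below.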
Why Transimpedance Amplifiers Are Essential to LiDAR
In LiDAR applications, the transimpedance amplifier (TIA), with its unique bandwidth and noise specifications, is central to the signal chain, enabling accurate detection of fixed or moving objects. An effective TIA must have wide enough bandwidth to capture all of the details in different road conditions, with noise that is as low as possible to avoid distorting the signal that's being received.
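How wide is "wide enough"? A common first-order rule of thumb (a sketch, not a figure from this article) relates an amplifier's bandwidth to the fastest pulse edge it must preserve:

```python
def required_bandwidth_hz(rise_time_s: float) -> float:
    """First-order rule of thumb for a single-pole system:
    to preserve a pulse edge, BW ≈ 0.35 / rise_time."""
    return 0.35 / rise_time_s

# A 1 ns laser-pulse edge calls for roughly 350 MHz of bandwidth.
bw = required_bandwidth_hz(1e-9)
```

By this estimate, preserving nanosecond-scale LiDAR pulse edges demands receiver bandwidths in the hundreds of megahertz, which is the regime the TIAs discussed below target.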
Maxim offers transimpedance amplifiers (TIA1) that, when partnered with an avalanche photodiode, sense the returning light. The signal is further conditioned with comparators (COMP1). As outlined in the diagram in Figure 2, which depicts a vehicle's laser/receiver system, TIA2 and COMP2 capture the initial laser transmission time. This system consists of multiple photodiodes, a laser diode, and supporting electronics to transmit and receive the light signal.
Figure 2. LiDAR system with the TIA and COMP optical receiver system.
In Figure 2, the laser driver initiates a laser light pulse transmission towards the object. The light first strikes a glass plate, which reflects a portion of the signal back to establish the initial transmit time while the rest passes through the transmitter optics (specified lenses). The emitted pulse reaches the object and reflects back to the D1/TIA1 receiver system. Once past the receiver optics and mirror, the light impinges on an InGaAs photodiode (D1), a highly sensitive semiconductor device with an optical sensitivity range of 1310nm to 1550nm.
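The two timing channels in Figure 2 reduce the range measurement to a timestamp subtraction. A sketch of that computation (the fixed electronics delay and the timestamp values are illustrative assumptions):

```python
C = 3.0e8  # speed of light, m/s

def distance_from_timestamps(t_start_s: float, t_stop_s: float,
                             electronics_delay_s: float = 0.0) -> float:
    """Range from the two timing channels: the glass-plate reference
    path gives t_start, the D1/TIA1 echo path gives t_stop. A fixed
    delay through the optics/electronics (calibrated once) is
    subtracted before converting time of flight to distance."""
    t_flight = (t_stop_s - t_start_s) - electronics_delay_s
    return C * t_flight / 2.0

# Stop pulse 340 ns after the start pulse, minus a 6.7 ns calibrated
# offset, places the object at ~50 m.
d = distance_from_timestamps(0.0, 340e-9, 6.7e-9)
```

Because both channels share the same comparator design, fixed delays largely cancel in the subtraction, which is one motivation for the matched TIA/COMP pairs.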
The light impinging on D1 may be bright or dim, depending on the distance traveled. Additionally, there may be contaminants in the atmosphere and, to confuse the system, interfering phantom lights. D1 converts the light into a current (ID1), which flows into the transimpedance amplifier (TIA1).
The TIAs (MAX40660 or MAX40661) and COMPs (MAX40025 or MAX40026) are an integral part of the LiDAR signal receiver circuitry. The MAX40660 and MAX40661 are TIAs for optical distance measurement receivers in LiDAR applications.
Figure 3. MAX40660/MAX40661 transimpedance amplifiers (TIA1 and TIA2) for automotive LiDAR.
Low noise, high gain, low group delay, and fast recovery from overload make these TIAs appropriate for distance-measurement applications. Important features include 2.38pA input-referred noise density, an internal input clamp, pin-selectable 25kΩ and 50kΩ transimpedance, and wide bandwidth: 490MHz for the MAX40660 (with 0.5pF input capacitance and 25kΩ transimpedance) and 160MHz for the MAX40661 (with 10pF input capacitance).
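The transimpedance figures above translate directly into output signal levels. A sketch of the basic conversion (the photocurrent value is a hypothetical example, not a datasheet number):

```python
def tia_output_v(photocurrent_a: float, transimpedance_ohm: float = 25e3) -> float:
    """A TIA converts photodiode current to voltage: V_out = I * R_T.
    25 kΩ and 50 kΩ are the pin-selectable gains cited for these parts."""
    return photocurrent_a * transimpedance_ohm

# A hypothetical 2 µA return-pulse current at 25 kΩ gain yields 50 mV.
v = tia_output_v(2e-6, 25e3)
```

The pin-selectable gain lets the designer trade output swing for bandwidth: the higher 50kΩ setting doubles the voltage from a dim, distant echo at the cost of a slower response.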
The MAX40025 and MAX40026 are comparators (COMP) that convert the TIA's analog output into clean digital timing edges. These devices are high-speed comparators with a typical propagation delay of 280ps.
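Why does a few hundred picoseconds of propagation delay matter? In a time-of-flight system, any uncompensated timing error maps directly to a range error. A quick sketch of that conversion:

```python
C = 3.0e8  # speed of light, m/s

def timing_error_to_distance_m(delay_s: float) -> float:
    """An uncompensated timing error in a time-of-flight system maps
    to a range error of c * t / 2 (round-trip path)."""
    return C * delay_s / 2.0

# The 280 ps typical propagation delay corresponds to ~4.2 cm of range.
err = timing_error_to_distance_m(280e-12)
```

In practice, using the same comparator on both the start (COMP2) and stop (COMP1) channels cancels most of this fixed delay; what remains is the part-to-part and temperature variation, which is why low, consistent propagation delay is the headline specification.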
Figure 4. MAX40025/MAX40026 high-speed comparators (COMP1 and COMP2) for automotive LiDAR.
The MAX40025 package is in a 1.218mm x 0.818mm, 6-bump wafer-level package (WLP), while the MAX40026 is available in a 2mm x 2mm 8-pin TDFN side-wettable package and meets AEC-Q100 automotive qualification requirements.
The technology fusion of cameras, radar, and LiDAR creates a near-human vision system for the autonomous car. Although these three sensors provide differing data and information, the car's computing power fuses the three perspectives together. This article's focus on the optical front end shows how LiDAR is a strong player in the trio. The LiDAR-ready TIAs and comparators highlighted here provide unprecedented levels of system accuracy. And with this accuracy should come greater confidence in the capabilities of the self-driving cars that may soon make your commute a more productive or relaxing one.