Contrary to what many think, writing programs for full self-driving cars isn’t all that hard. There are plenty of off-the-shelf sensors and open-source tools available. In fact, even a high school student could buy toy cars online and program them in just a few weeks. So, why don’t we have more advanced self-driving cars on the roads today?
Well, the main issue isn’t really the computer programs; it’s the reliability of the technology, which mostly comes down to the sensors. Without reliable sensors, the car can’t know for sure what’s happening on the road. That’s why we still don’t see many self-driving cars out there.
Modern cars come with sensors like radar, cameras, ultrasound, and infrared. But here’s the catch: none of them are quite good enough for advanced self-driving. LiDAR, though, could change that and take us to the next level of self-driving cars.
Now, let’s talk about radar. Radar sensors in cars rely on something called the Doppler effect: the frequency of the reflected signal shifts in proportion to how fast an object is moving toward or away from the car. That lets radar tell moving objects from stationary ones, even if they’re at the same distance. This works great for features like adaptive cruise control, which helps cars keep a safe distance from each other. But it’s not enough for full self-driving. A rock on the road and the road surface are both stationary, so radar can’t tell them apart. Radar has other limits, too: it’s not great at spotting soft, weakly reflective objects, and its long wavelength means it can’t produce detailed images, so it can’t handle high-level self-driving on its own.
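To make the Doppler idea concrete, here’s a minimal sketch of the math, assuming a monostatic automotive radar; the 77 GHz carrier is a typical value I’m plugging in, not a figure from this article.

```python
# Minimal sketch: relative speed from a radar Doppler shift.
# The 77 GHz carrier is an assumed, typical automotive radar frequency.

C = 299_792_458.0  # speed of light, m/s

def relative_speed(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Radial speed of the target in m/s from the measured Doppler shift.

    For a monostatic radar the round-trip shift is f_d = 2 * v * f0 / c,
    so v = f_d * c / (2 * f0). A shift of zero means the object isn't
    moving relative to the radar -- which is exactly why a rock on the
    road looks the same as the road itself.
    """
    return doppler_shift_hz * C / (2.0 * carrier_hz)

if __name__ == "__main__":
    # A 10 kHz shift at 77 GHz is roughly 19.5 m/s (about 70 km/h).
    print(f"{relative_speed(10_000):.1f} m/s")
```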
Cameras, on the other hand, use visible light to see. They’ve gotten way better at capturing sharp images, but they still fundamentally see the world in 2D. The big challenge is extracting 3D information from 2D images that carry no direct distance data. Binocular stereo vision is the mainstream approach for camera-based depth: two cameras a known distance apart triangulate depth from the disparity between their images. Since around 2015, deep learning has improved things a lot, boosting the accuracy of tasks like recognizing objects and estimating their positions. Even with all these improvements, though, cameras alone can’t provide the reliable 3D information we need for truly safe self-driving. That’s where other sensing techniques can step in to help. In autonomous vehicle scenarios, ToF (Time of Flight) technology calculates distance by timing laser pulses. At home, infrared structured-light technology has become affordable for gamers. Breakthroughs driven by virtual reality (VR) and augmented reality (AR) are exciting news for 3D camera technology. At the same time, we’ll need smarter deep-learning models to identify things like traffic lights and road signs more reliably.
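To see how binocular stereo recovers distance, here’s a minimal sketch of depth-from-disparity for a rectified camera pair; the focal length and baseline are illustrative values I’ve picked, not numbers from a real rig.

```python
# Minimal sketch: depth from stereo disparity (rectified pair).
# focal_px and baseline_m are assumed, illustrative values.
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_px: float = 700.0,
                         baseline_m: float = 0.12) -> np.ndarray:
    """Z = f * B / d for a rectified stereo pair.

    disparity_px : horizontal pixel shift of a point between the two images
    focal_px     : focal length in pixels
    baseline_m   : distance between the two camera centers in meters
    Small disparities (far objects) make Z blow up, which is why stereo
    depth gets unreliable at long range.
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    with np.errstate(divide="ignore"):
        return np.where(d > 0, focal_px * baseline_m / d, np.inf)

if __name__ == "__main__":
    # Disparities of 40, 8, and 1 pixels -> about 2.1 m, 10.5 m, 84 m.
    print(depth_from_disparity(np.array([40.0, 8.0, 1.0])))
```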
Ultrasound sensors are cool because they can detect soft stuff, but they’re only useful at short distances and low speeds. So, they’re great for parking but not for high-speed self-driving.
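For a sense of the numbers, here’s a minimal sketch of the echo-ranging math behind an ultrasonic parking sensor, assuming the speed of sound in air at room temperature.

```python
# Minimal sketch: ultrasonic ranging from an echo's round-trip time.

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed)

def echo_distance(round_trip_s: float) -> float:
    """Distance to the obstacle in meters; the pulse travels out and back."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

if __name__ == "__main__":
    # A 12 ms echo puts the obstacle about 2 m away -- parking territory.
    # Ultrasound fades quickly in air, so usable range stays at a few
    # meters, which is why it never graduates to highway speeds.
    print(f"{echo_distance(0.012):.2f} m")
```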
Infrared sensors also have their perks, like helping cars brake automatically when they detect pedestrians. But they’re best for low-speed situations and won’t cut it for advanced self-driving.
This is where LiDAR shines. It gives us super-precise real-time distance data and helps the car figure out exactly where it is. It’s also great at spotting things that aren’t moving, and it works in the dark. LiDAR’s output is a point cloud with distance information but no texture, so it needs a camera to capture texture and color, and the two sensor streams have to be fused together. There are still challenges, like making it work at longer distances and making it cheaper. Traditional LiDAR systems use moving parts like motors and mirrors, which are expensive and tricky to maintain. However, newer ideas like phased arrays or flash-based designs could make LiDAR much cheaper and more reliable. The hard part is modulating high-frequency laser pulses, capturing a huge amount of data at very high speed, and then fusing it with the camera feed. Leading LiDAR companies are racing in this field.
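To show what that fusion step looks like, here’s a minimal sketch that projects LiDAR points into a camera image so each point can pick up texture from the pixel it lands on; the intrinsics and extrinsics below are made-up placeholders that a real system would get from calibration.

```python
# Minimal sketch: LiDAR-camera fusion via pinhole projection.
# K, R, and t are placeholder calibration values, not from a real rig.
import numpy as np

K = np.array([[700.0,   0.0, 640.0],   # intrinsics: focal lengths, image center
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                           # LiDAR->camera rotation (placeholder)
t = np.array([0.0, -0.2, 0.0])          # LiDAR->camera translation, meters

def project_points(points_lidar: np.ndarray) -> np.ndarray:
    """Map Nx3 LiDAR points to Nx2 pixel coordinates.

    Points at or behind the camera plane (z <= 0) come back as NaN so
    the caller can drop them before sampling color from the image.
    """
    cam = points_lidar @ R.T + t        # transform into the camera frame
    z = cam[:, 2:3]
    uv = (cam @ K.T)[:, :2] / np.where(z > 0, z, np.nan)
    return uv

if __name__ == "__main__":
    pts = np.array([[2.0, 0.0, 10.0],   # 10 m ahead, 2 m to the side
                    [-1.0, 0.5, 5.0]])
    print(project_points(pts))          # pixel coordinates to sample texture at
```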
Sensors are one piece of the puzzle for high-level self-driving cars, but they’re a crucial one. With exciting advancements in LiDAR technology on the horizon, we might be closer than we think to having truly advanced self-driving cars.