When will we have self-driving cars on the road?
Updated: Mar 24, 2021
Emma Prévot explains what self-driving cars are and how they work. In particular, she examines why fully-autonomous cars are not on the road yet.
2020 was billed as the year of self-driving cars, yet they are still absent from our streets, apart from vehicles undergoing trials.
Previous predictions on the advent of driverless cars
It’s the end of 2020, a truly abnormal and unprecedented year. It was expected that we would see and experience the arrival of driverless cars: this year was supposed to be the future of self-driving cars.
In 2015, the Guardian predicted that by 2020 we would have become "permanent backseat drivers". According to The Atlantic in 2018, Google's Waymo made what was described as "The Most Important Self-Driving Car Announcement Yet": Waymo stated that autonomous vehicles would transform urban life by 2020, and that up to one million people would be riding in fully autonomous cars every day.
Even Elon Musk estimated that "we will have more than one million robotaxis on the road" and "we'll have over a million cars with full self-driving, software… everything" by the middle of 2020, at which point drivers would no longer have to pay attention to the road.
We made it to the end of 2020, but autonomous cars remain out of reach, with only a few trial programmes in place.
Let’s take a step back to explain what driverless cars involve.
What are self-driving cars and how do they work?
An autonomous car is a vehicle capable of operating without any human involvement. It senses the environment through a variety of sensors like radar, lidar, sonar, GPS and many more to send appropriate instructions to the controls of the car, like braking or accelerating.
An example and explanation of radar positioning.
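At its core, this sense-and-respond cycle can be thought of as a loop: fuse the sensor readings, decide what the situation requires, and issue a control command. The sketch below is a deliberately simplified, hypothetical illustration of that loop (the names, thresholds, and single-number "fused" reading are all assumptions for clarity; a real stack works with continuous control values and far richer state):

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """A toy stand-in for fused radar/lidar/sonar/GPS data."""
    obstacle_distance_m: float  # distance to the nearest obstacle ahead
    own_speed_mps: float        # the car's current speed

def plan_control(reading: SensorReading,
                 safe_gap_m: float = 30.0,
                 target_speed_mps: float = 13.0) -> str:
    """Map a fused sensor reading to a discrete control command.

    A real planner outputs continuous steering/throttle/brake values;
    this toy version returns one of three commands for clarity.
    """
    if reading.obstacle_distance_m < safe_gap_m:
        return "brake"          # obstacle too close: slow down
    if reading.own_speed_mps < target_speed_mps:
        return "accelerate"     # road clear and below target speed
    return "hold"               # road clear and at target speed

# One tick of the sense -> plan -> act loop:
command = plan_control(SensorReading(obstacle_distance_m=18.0, own_speed_mps=12.0))
```

In this tick the obstacle is closer than the safe gap, so the planner commands braking; a production system would run such a loop many times per second.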
The Society of Automotive Engineers (SAE) currently defines 6 levels of driving automation ranging from Level 0 (no automation at all) to Level 5 (full driving automation).
The six levels of driving automation:
Level 0: The human performs the entire "dynamic driving task", although there may be some automated features like emergency braking.
Level 1: Either steering or accelerating (cruise control) is automated, but not both.
Level 2: Also referred to as advanced driver assistance systems (ADAS). The automation system can assist with both steering and accelerating, but the human driver remains responsible for most safety-critical functions and for monitoring the environment. Tesla Autopilot qualifies as Level 2.
Level 3: The first level of true automation, where the car itself monitors the environment, although the human driver must remain ready to take over when the system requests it.
Honda claims it will be the first automaker to mass-produce vehicles with Level 3 autonomous capabilities, with a launch in Japan expected in March 2021. Audi earlier unveiled its most autonomous car, the A8, but due to safety concerns and the lack of a legal framework it will not be equipped with its Level 3 conditional automation system, which Audi has termed the Traffic Jam Pilot.
Level 4: The car can handle the entire driving task within defined conditions and intervene if something goes wrong, but a human still has the option to manually override.
This type of vehicle can operate in self-driving mode, but only within a limited area (generally urban areas with lower speed limits) due to gaps in legislation and infrastructure. For these reasons, existing vehicles are oriented towards ridesharing. For instance, French company Navya sells the Autonom Shuttle and the smaller Autonom Cab in the USA. Navya self-driving vehicles have also been used at international airports in Paris, Frankfurt and Christchurch. Recently its shuttle has been used in Jacksonville, Florida, to transport medical supplies and COVID-19 tests to a nearby Mayo Clinic. EasyMile, another French company, produces a self-driving shuttle called EZ10, which currently operates in airports, college campuses, factories, and business parks.
Level 5: These vehicles do not require human attention and won’t even contain steering wheels or acceleration/braking pedals. Prototypes of fully autonomous cars are currently undergoing tests all over the world but are not available yet.
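The six levels above can be captured as a simple lookup, handy for keeping the taxonomy straight. This is an illustrative sketch (the one-line summaries and the helper function are paraphrases, not SAE's official wording):

```python
# Condensed paraphrase of the SAE J3016 driving-automation levels.
SAE_LEVELS = {
    0: "No automation: the human performs the entire dynamic driving task.",
    1: "Driver assistance: steering OR speed is automated, but not both.",
    2: "Partial automation (ADAS): steering and speed together; human monitors.",
    3: "Conditional automation: the car monitors the environment; human on standby.",
    4: "High automation: self-driving within a limited area; human may override.",
    5: "Full automation: no human attention, no steering wheel or pedals.",
}

def requires_human_monitoring(level: int) -> bool:
    """Below Level 3, the human driver must monitor the environment."""
    return level < 3
```

The boundary encoded in `requires_human_monitoring` is the one the article draws: Level 3 is where responsibility for watching the road first shifts from human to machine.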
So, what we call a driverless car requires Level 5 technology, which is not yet available.
Why is it taking so long?
There are several reasons why it is taking longer than expected to get self-driving cars on the road.
Firstly, these vehicles require a lot of training data, since they rely on artificial intelligence technologies. The 2010s were a great decade for AI, with major advances in translation, speech generation, computer vision, game-playing and more. AI used to have a really hard time recognising a table in a photo; now that is trivial. This embrace of AI fostered the general optimism seen in the predictions cited at the beginning of the article. When it came to self-driving cars, however, those advances still proved insufficient. What is needed is a huge quantity of training data from which the computer can learn good driving behaviour. Some events, though, are too infrequent to be well represented: data on car accidents, for example, remains scarce.
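A quick back-of-the-envelope calculation shows why rare events are such a bottleneck. The numbers below are purely illustrative assumptions (not figures from the article): if a crash-like event occurs on average once every 500,000 miles, then gathering even a modest set of examples requires billions of fleet-miles.

```python
def miles_needed(examples_wanted: int, miles_per_event: float) -> float:
    """Fleet-miles needed to observe a given number of rare events,
    assuming events occur at a constant average rate."""
    return examples_wanted * miles_per_event

# Assumed, illustrative rate: one crash-like event per 500,000 miles.
# Collecting 10,000 examples would then require:
total_miles = miles_needed(10_000, 500_000.0)
print(f"{total_miles:,.0f} fleet-miles")  # prints "5,000,000,000 fleet-miles"
```

This is why companies lean so heavily on simulation and on large test fleets: real-world driving alone accumulates rare-event data very slowly.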
Moreover, regulatory standards remain weak. Current standards for existing vehicles assume the presence of a human driver who can take over in an emergency. Germany aims to become the first country in the world to have driverless vehicles operating on public streets and not just in special test areas, and is reportedly working on a broad framework to regulate Level 4 autonomous vehicles. The legislation is currently being reviewed by the transport ministry and other government departments and should be ready by summer 2021. That explains why the new Mercedes S-Class, engineered to provide Level 3 and Level 4 self-driving capability, will be available only in Germany and will not launch until the second half of 2021.
Another reason is social acceptability: there have been many accidents involving automated cars. If people are not involved in the introduction of autonomous vehicles, we may face a mass rejection of AV use. Not to mention the fact that these cars can be hacked, which can engender trust issues.
Finally, autonomous vehicles will have to share the road with humans and with existing infrastructure for many years. Drivers don't always play by the rules: they rely on instinct and on eye contact with other drivers, and they double-park or overtake where it is dangerous or forbidden.
With all the problems companies are facing with driverless cars, it may be easier for them to instead launch self-driving vehicles for goods transportation.
Self-driving vehicles are not just for transporting humans; they are already transporting goods. Autonomous delivery startup Nuro, ranked the most promising American AI company of 2019 by Forbes (it is no surprise that the second most promising was also a self-driving car company), is now allowed to launch commercial driverless services on public roads in California. Amid these advances in automated deliveries, Walmart has partnered with General Motors (GM) to test battery-powered Chevrolet Bolts in Arizona and start autonomous, driverless deliveries in the US. Both Navya and EasyMile, cited above, have already launched self-driving trucks.
In June 2020, Amazon acquired Zoox, a self-driving technology firm, and then unveiled its first self-driving robotaxi in mid-December.
When, if not 2020, will we see driverless vehicles on the roads?
In late October 2020, Tesla launched a beta test of a "full self-driving" system. This worried drivers, pedestrians and even Tesla owners, and Elon Musk has said that drivers should always remain vigilant. We might conclude that humans are not ready to let go of the steering wheel and become backseat drivers. On the other hand, the biggest tech companies are investing heavily in self-driving vehicles.
We now know for sure that 2020 wasn't the year of driverless vehicles, and it is nearly impossible, somewhat utopian even, to make concrete predictions about when we will actually see Level 5 autonomous cars on the road. There is still a lot of development needed in many areas. One of the most interesting discussions within the general AI space concerns ethical dilemmas. When we drive, we make many moral and risky decisions: we brake to save a pedestrian illegally crossing the road, or exceed the speed limit even though we know it might be dangerous. So what should an autonomous car do? Should the car save the passenger or a group of kids crossing the street? It may seem obvious that there should be a universal moral code, but that is not easy at all. The largest survey on machine ethics reveals that driving ethics vary between countries: respondents from countries with strong Confucian or Islamic traditions were far less likely to sacrifice older lives to save younger ones, while people from countries with strong government institutions often chose to hit people crossing the road illegally.
What would you do? Would you buy a car programmed to sacrifice the passengers to save a group of pedestrians? We might have to answer these ethical questions very soon…
The UCL Finance and Technology Review (UCL FTR) is the official publication of the UCL FinTech Society. We aim to publish opinions from the student body and industry experts with accuracy and journalistic integrity. While every care is taken to ensure that the information posted on this publication is correct, UCL FTR can accept no liability for any consequential loss or damage arising as a result of using the information printed. Opinions expressed in individual articles do not necessarily represent the views of the editorial team, society, Students’ Union UCL or University College London. This applies to all content posted on the UCL FTR website and related social media pages.