
From dream to reality, technologies behind autonomous cars

Published on December 09, 2021

4 minutes

In the more than 130 years that have elapsed since the pioneering efforts of Karl Benz and Henry Ford, cars have come a long way. With an estimated one billion cars on the world’s roads and roughly 70 million new ones produced every year, the car is a defining part of human society. The word “automobile” combines the Greek autos (“self”) and the Latin mobilis (“movable”), and the idea of a car that can not only power itself but also drive itself has always been a vision for the industry. Now, advances in technology are finally starting to make this vision a reality.

What are the challenges on the way to fully autonomous driving?

At present, cars are piloted by humans who can recognize, process, and respond to a variety of different external conditions. Human drivers read standardized signals such as traffic lights and road signage and – importantly – can improvise when these are non-functional or unavailable. Human drivers can also respond to and in many cases communicate with other human drivers around them, reacting appropriately in order to avoid accidents. Humans also operate the mechanics of a car using parts such as the steering wheel or the brake.

For some years now, this last element has not been a serious barrier to autonomous driving: from automatic transmission to power steering, solutions that reduce human involvement in the mechanics of driving are tried and tested. Until recently, however, replicating the full breadth of the human senses and the speed with which human drivers process information and respond to it looked insurmountable. This has changed thanks to the range of sensors now available and the extent to which they can be miniaturized, the rapid increase in computer processing power and the development of artificial intelligence, as well as the prospect of robust high-speed communication via 5G. Combined, these could enable cars to read, understand and respond to their surroundings – e.g. to identify and avoid nearby vehicles, pedestrians, or other hazards – and to do so with a higher degree of reliability and accuracy than human drivers.

Which technologies are required to make autonomous cars work?

Primarily, autonomous cars will require three kinds of powerful computer chips: sensing chips, processing chips, and control chips.

Sensing chips – The eyes and ears of the car

Already today, sensors in cars serve a wide range of purposes, from saving lives (e.g. collision warning systems) through to comfort features such as detecting rain for automatic wiper activation. Autonomous cars will need all this and much more to be able to see the road and reliably identify other cars, pedestrians, and hazards in all conditions. LiDAR sensors (Light Detection and Ranging, essentially radar based on light instead of radio waves) and cameras are the main enablers of autonomous driving functions, but many other micro-electromechanical systems (MEMS) will be required to make self-driving cars a reality, including accelerometers, proximity sensors, and gyroscopes.
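
To make this more concrete, below is a minimal, hypothetical Python sketch of one tiny piece of such a perception pipeline: filtering a LiDAR point cloud for returns that fall inside the corridor the car is about to drive through. The point format, corridor dimensions and thresholds are illustrative assumptions, not a description of any real automotive system.

```python
# Illustrative sketch only: flag LiDAR returns inside a simple "driving corridor".
# Point format, corridor size and height threshold are assumed values.

from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class LidarPoint:
    x: float  # metres ahead of the sensor (positive = in front of the car)
    y: float  # metres to the left (positive) or right (negative)
    z: float  # metres above the road surface


def points_in_corridor(points: Iterable[LidarPoint],
                       max_range_m: float = 40.0,
                       half_width_m: float = 1.5,
                       min_height_m: float = 0.3) -> List[LidarPoint]:
    """Return the points that lie inside the corridor the car will drive through."""
    return [p for p in points
            if 0.0 < p.x <= max_range_m    # in front of the car and within range
            and abs(p.y) <= half_width_m   # inside the lane-wide corridor
            and p.z >= min_height_m]       # tall enough to be an obstacle, not road clutter


# Example: a return 12 m ahead and 1.1 m tall is flagged; one far off to the side is not.
hits = points_in_corridor([LidarPoint(12.0, -0.4, 1.1), LidarPoint(25.0, 3.0, 0.1)])
print(f"{len(hits)} potential obstacle(s) in path")  # -> 1 potential obstacle(s) in path
```

In a real vehicle, this kind of geometric filtering would only be a first step before object detection, classification and tracking, typically fusing LiDAR, camera, and radar data.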

Processing chips – The brain of the car

As sensors gather huge amounts of data, a matching array of processors and memory chips will be required to handle this data and derive driving functions from it. These functions will also rely on communication chips (5G) that allow autonomous cars to exchange information with one another and with the cloud – information the processors will also have to interpret. This could range from the distances between cars on the same stretch of road, their velocities, and their rates of acceleration or deceleration through to road-surface temperature readings and weather alerts.
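
As a rough illustration of the kind of calculation the processing chips could run on such shared data, the sketch below checks, under simple constant-deceleration assumptions, whether the current following gap is large enough to stop safely if the car ahead brakes hard. All parameter values are assumptions chosen purely for the example.

```python
# Illustrative sketch only: is the following gap safe if the lead car brakes hard?
# Constant-deceleration model; all numeric parameters are assumed values.

def stopping_distance_m(speed_mps: float,
                        reaction_time_s: float,
                        decel_mps2: float) -> float:
    """Distance covered while reacting plus distance needed to brake to a standstill."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2.0 * decel_mps2)


def gap_is_safe(gap_m: float,
                own_speed_mps: float,
                lead_speed_mps: float,
                reaction_time_s: float = 0.2,    # assumed latency of the automated system
                own_decel_mps2: float = 6.0,     # assumed braking we can rely on
                lead_decel_mps2: float = 8.0) -> bool:  # assumed worst-case braking ahead
    """True if we could stop behind the lead car even if it brakes at full force."""
    own_stop = stopping_distance_m(own_speed_mps, reaction_time_s, own_decel_mps2)
    lead_stop = lead_speed_mps ** 2 / (2.0 * lead_decel_mps2)
    return gap_m + lead_stop > own_stop


# Example: a 30 m gap with both cars travelling at 25 m/s (90 km/h)
print(gap_is_safe(30.0, 25.0, 25.0))  # -> True under these assumed parameters
```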

Control chips – The heart of the car

From transmission control to engine monitoring, braking and steering assistance, electronic components have been present in our cars for at least 30 years now and no longer represent a major technological barrier. Nevertheless, fully autonomous cars without steering wheels and hand brakes will need even more actuation functions with several layers of safety and redundancy.
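
One classic redundancy pattern of the kind hinted at here is majority voting between independent control channels. The hypothetical sketch below lets three controllers each propose a steering command and rejects any single channel that disagrees; the channel count, tolerance and fallback behaviour are illustrative assumptions, not a real automotive design.

```python
# Illustrative sketch only: a simple voter across redundant steering-command channels.
# Channel count, tolerance and fallback behaviour are assumed for the example.

from statistics import median
from typing import Optional, Sequence


def vote_steering_angle(commands_deg: Sequence[float],
                        tolerance_deg: float = 1.0) -> Optional[float]:
    """Return the median command if at least two channels agree within tolerance,
    otherwise None so a higher-level safety layer can trigger a safe stop."""
    if len(commands_deg) < 3:
        return None  # not enough redundancy to vote
    mid = median(commands_deg)
    agreeing = [c for c in commands_deg if abs(c - mid) <= tolerance_deg]
    return mid if len(agreeing) >= 2 else None


print(vote_steering_angle([4.9, 5.1, 5.0]))    # healthy channels -> 5.0
print(vote_steering_angle([4.9, 5.1, 40.0]))   # one faulty channel is outvoted -> 5.1
print(vote_steering_angle([0.0, 20.0, 40.0]))  # no agreement -> None, fall back to a safe stop
```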

What is the current state of play?

There is a broadly accepted classification of automation levels as follows:

  • L0: No automation, the driver controls all aspects of driving
  • L1: Driver assistance, the driver uses systems for steering or acceleration/deceleration in some modes
  • L2: Partial automation, the driver uses systems for both steering and acceleration/deceleration in some modes
  • L3: Conditional automation, the driver uses the automated driving system for all aspects of driving in some modes but must be ready to intervene if required
  • L4: High automation, the automated driving system handles all aspects of driving in some modes and the driver is not required to intervene, as the vehicle can bring itself to a safe stop on its own
  • L5: Full automation, the driver relies wholly on the automated driving system at all times

At present, the Tesla Model 3 is broadly accepted as fulfilling the criteria for L2, while the Audi A8 is considered by some to be L3; others contest this and argue that there are currently no series-production cars at L3. Waymo’s (Google’s) prototypes offer L4 and L5 features, but are still a long way from production at scale.