As self-driving cars creep ever closer to fruition, we ask: is the world ready? From endless programming parameters to the “valley of degraded supervision”, explore the pros and pitfalls of greater automotive autonomy.
Autonomous cars are coming – whether you like it or not. But don’t just take our word for it: in November 2023 the British government took its first steps towards gearing up for machine-driven motors. Designed as a “legal framework for the safe deployment of self-driving vehicles in Great Britain”, the 2024 Automated Vehicles Act paves the way for what’s to come, though you could argue it raises more questions than it answers.
How do we definitively test and define the safety of such a complex system? How can a machine account for a myriad of undefined variables when making decisions? And, though the systems might be ready, can humans be trusted to use them responsibly?
Keep reading to find out whether you should be excited or apprehensive about the prospect of greater vehicular autonomy.
Are you better than the average driver? Decades of research says you probably think so. As early as the 1980s, studies highlighted that up to 8 in 10 drivers believed they were safer than the average motorist. How then, if we harbour this illusion of driving superiority, will we ever be happy for a machine to take the wheel?
Thorough and comprehensive testing is one way to raise consumer confidence. It’s also one of the key stipulations in the Government’s 2024 Act: “AVs will achieve a level of safety equivalent to, or higher than, that of careful and competent human drivers.” And therein lies the problem: how do you accurately demonstrate an autonomous vehicle’s safety?
Thanks to the immensely unpredictable nature of everyday traffic, weather, road conditions, other drivers, and much more, testing fully autonomous systems is far more time-consuming than testing other automotive safety aids. According to recent data, “autonomous vehicles would need to be driven for...hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries.” How long would such a regime take using current methods? Not decades, but centuries…
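To put that figure in perspective, here’s a rough back-of-envelope sketch in Python. The fleet size, annual mileage and required-miles figures below are our own illustrative assumptions, not official targets:

```python
# Back-of-envelope: how long would a test fleet need to cover the miles
# required to demonstrate AV reliability? All figures below are
# illustrative assumptions, not official targets.

REQUIRED_MILES = 500e9                 # "hundreds of billions of miles" (assumed: 500bn)
FLEET_SIZE = 1_000                     # assumed number of test vehicles on the road
MILES_PER_VEHICLE_PER_YEAR = 100_000   # assumed heavy, near round-the-clock use

fleet_miles_per_year = FLEET_SIZE * MILES_PER_VEHICLE_PER_YEAR
years_needed = REQUIRED_MILES / fleet_miles_per_year

print(f"Fleet mileage per year: {fleet_miles_per_year:,.0f}")
print(f"Years to reach {REQUIRED_MILES:,.0f} miles: {years_needed:,.0f}")
# With these assumptions: 5,000 years. Centuries, not decades.
```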
Clearly then, a drastically more flexible means of testing must be devised to prove fully autonomous tech to the public and legislators alike.
There’s no doubt that safely programming autonomous cars will be one of the great challenges of the 21st century. Beyond the near-endless environmental factors at play, tyres are an often-overlooked variable in the equation.
As your vehicle’s only contact with the road, tyres play an integral role in vehicle safety. But while human drivers are surprisingly adept at adapting to variations in a tyre’s grip, how does an autonomous car account for them?
Ultimately, an autonomous vehicle’s response is only as good as the data it receives. For instance, to calculate the appropriate speed to travel on a flooded motorway, a self-driving car needs to know the exact tread depth to avoid aquaplaning. But, what happens when that key information is missing or incorrect?
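One defensive answer is for the system to assume the worst whenever a reading is missing or implausible. Below is a minimal Python sketch of that idea; the thresholds and the simple speed model are our own illustrative assumptions, not any manufacturer’s real logic:

```python
from typing import Optional

NEW_TYRE_TREAD_MM = 8.0    # typical tread depth of a new tyre
LEGAL_MIN_TREAD_MM = 1.6   # UK legal minimum tread depth

def safe_speed_mph(water_depth_mm: float,
                   tread_depth_mm: Optional[float]) -> float:
    """Pick a target speed for standing water, degrading gracefully
    when the tread-depth reading is missing or implausible."""
    # Missing or out-of-range data? Assume the worst case: a worn tyre.
    if (tread_depth_mm is None
            or not LEGAL_MIN_TREAD_MM <= tread_depth_mm <= NEW_TYRE_TREAD_MM):
        tread_depth_mm = LEGAL_MIN_TREAD_MM

    # Crude illustrative model: deeper tread clears more water, so it
    # permits a higher speed before aquaplaning risk becomes unacceptable.
    base_speed = 70.0  # motorway limit, mph
    tread_factor = tread_depth_mm / NEW_TYRE_TREAD_MM
    water_penalty = min(water_depth_mm * 4.0, 50.0)
    return max(20.0, base_speed * tread_factor - water_penalty)

print(safe_speed_mph(water_depth_mm=5.0, tread_depth_mm=6.5))   # healthy data
print(safe_speed_mph(water_depth_mm=5.0, tread_depth_mm=None))  # missing data, so be conservative
```

The design choice here is deliberately pessimistic: a missing or implausible reading is treated like a worn tyre, trading speed for a margin of safety.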
There’s an irony in the fact that autonomous machines must prove themselves to us with such rigour when we, the humans, are often the issue.
Self-driving cars may be on the horizon, but semi-autonomous tech (like autonomous emergency braking, drones and Roombas) is already here and widespread. And in this hybrid of man and machine, we appear to be the weakest link: human overreliance on, and misuse of, semi-autonomous tech can have disastrous consequences.
The “valley of degraded supervision” refers to events where humans lose situational awareness while using automated systems, leaving them unable to respond adequately when the system fails. Tragically, this scenario played out aboard Eastern Air Lines Flight 401, whose pilots failed to notice that the autopilot had been inadvertently switched out of altitude hold, allowing the aircraft to descend rather than maintain altitude.
Similarly, overuse of autopilot systems has been shown to diminish pilots’ ability to manually execute commonly automated manoeuvres. Apply this logic to autonomous cars and it’s easy to imagine how drivers might lose the skills needed to respond to a tyre blowout or take evasive action around a road obstacle.
Despite the evident complexity of the problem, many manufacturers have already made great strides towards nailing fully autonomous cars. Most famously, Tesla’s Autopilot system has highlighted that in many cases it’s the legislation, not the software, that’s hampering a widespread rollout of self-driving cars.
According to the marque’s own safety statistics, in Q1 of 2024 Autopilot-equipped Teslas covered 7.63 million miles per accident in the US, while the National Highway Traffic Safety Administration’s (NHTSA) most recent data shows human-driven cars covered just 670,000 miles per accident, meaning the Teslas travelled more than eleven times as far between accidents.
What’s more, some semi-autonomous and driver assistance technology has received praise from prestigious motoring safety bodies such as Euro NCAP. In particular, the Renault Austral’s Active Driver Assist and the Nissan Ariya’s ProPILOT Assist were both awarded “Very Good” gradings for their ability to “enhance driver safety whilst reducing fatigue”.
It’s easy to pick flaws in any burgeoning new technology, and when safety is on the line we have every right to be critical. But the early evidence suggests that, although such systems are far from infallible, they have the potential to drastically improve road safety once their programming is perfected. Until then, humans remain an essential link in the road safety chain.
Do you think we’re ready for full-scale automotive autonomy? Let us know if you’d get behind the wheel of a self-driving car.
Hero image credit: Shutterstock