
“Solving a problem means reducing it to a simpler problem”
—Walter Warwick Sawyer
Why are there still no self-driving cars on the roads? The task seems relatively simple: control the car’s movement along the intended route, obey the rules of the road, and avoid potentially dangerous situations. Yet despite this apparent clarity, self-driving vehicles are still rare.
Why?
Maybe because, as usual, we ignore how biology has already solved this problem?
Let’s figure it out together.
There are four main challenges when building fully self-driving cars:
1- Perception
Self-driving cars must be able to accurately perceive their environment in order to make safe driving decisions. This includes detecting and recognizing objects such as pedestrians, other vehicles, traffic lights, road markings, and traffic signs.
2- Decision making
Once a self-driving car has perceived its environment, it must decide how to move safely and efficiently within it. This includes predicting the behavior of other vehicles and pedestrians, choosing the best route to the destination, and deciding when to slow down, speed up, or change lanes.
3- Technical limitations
Autonomous vehicles require significant processing power to operate effectively. Modern processors and graphics cards, as well as high-capacity storage and memory, are needed to process the huge amount of data generated by self-driving cars.
Despite advances in technology, self-driving cars still face technical limitations. For example, they may fail outright or behave incorrectly in extreme weather or on poorly marked roads.
4- Safety
Self-driving cars must be highly reliable and safe, since any mistake or malfunction can lead to a severe accident. This requires extensive testing and validation, as well as reliable backup systems that keep the vehicle safe if a sensor or software component fails.
What is the result?
If we set aside the many clarifying details, it becomes obvious that two processes lie at the heart of all four challenges: piloting (the movement itself) and safety.
At the same time, developers of self-driving systems try to solve these two problems simultaneously, within a single system. This removes any room to maneuver and creates a computational trap of the “all or nothing” kind.
Maybe that is where the mistake lies?
How nature does it
Every living organism separates the process of controlling its body from the process of keeping itself safe. We can move in darkness, in fog, and even in water, where most of our senses (primarily sight) cannot work correctly. Our brain splits the task of moving safely through space into two processes, each of which is solved separately.
In biology, movement is the work of the muscles, and safety is the work of the senses. These tasks have no rigid link between them, so they can be solved more efficiently apart.
To illustrate this, I want to tell you a funny story.
One late evening, a taxi driver was taking an airline pilot to the airport. Looking at the pilot’s handsome uniform jacket, the taxi driver said, “You pilots get beautiful uniforms and are paid more than us taxi drivers, but you and I do the same job.”
The pilot smiled and said, “Not at all. We pilots sometimes fly the plane in complete darkness, using only radar and navigational instruments.”
The taxi driver smiled back and said, “We can do that too.” With those words, he turned off the headlights, and the car continued to drive in complete darkness.
The pilot desperately grabbed his seat and said fearfully, “What are you doing? You can’t see the road at all!” To which the taxi driver calmly replied, “Don’t worry, we are also going on ‘navigational instruments’ now—on the taximeter counter—after 15 cents there will be a left turn.”
How to split one big problem into two separate equations?
If we try to implement the biological strategy, driving a car should be divided into two tasks. The first is direct piloting, implemented not by the car’s intelligence but by the driver’s individual artificial intelligence: a personal AI that reproduces its owner’s driving style using a very simple, non-detailed model of reality. Safety, meanwhile, is handled by the car’s autonomous system, whose task is to prevent collisions and to enforce restrictive limits on the actions of the piloting system.
We get a system of checks and balances
In this scheme, each of us would own a relatively simple personal autopilot that can be connected to any car equipped with a universal communicator. The car’s own safety system becomes just a collision-response complex, which each automaker can build with more or less detail and technological sophistication.
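To make this division concrete, here is a minimal sketch of how such a scheme could look in code. Everything in it is hypothetical: the Command fields, the PersonalPilotAI and CollisionGuard classes, and the numeric thresholds are illustrative assumptions rather than any real automotive interface. The point is only the shape of the interaction: the personal AI proposes a maneuver, and the car’s safety complex clamps or overrides it.

```python
from dataclasses import dataclass

@dataclass
class Command:
    steering: float  # -1.0 (full left) .. 1.0 (full right)
    throttle: float  # 0.0 .. 1.0
    brake: float     # 0.0 .. 1.0

class PersonalPilotAI:
    """Hypothetical personal pilot: in a real system it would reproduce
    its owner's driving style from a simple, coarse model of reality."""

    def propose(self, route_state) -> Command:
        # Placeholder decision; a learned model would go here.
        return Command(steering=0.1, throttle=0.4, brake=0.0)

class CollisionGuard:
    """Hypothetical car-side safety complex: it never plans the route,
    it only keeps the pilot's commands inside a safe envelope."""

    def __init__(self, max_throttle: float = 0.8):
        self.max_throttle = max_throttle

    def filter(self, cmd: Command, obstacle_distance_m: float) -> Command:
        if obstacle_distance_m < 5.0:
            # Imminent collision: override the pilot completely.
            return Command(steering=cmd.steering, throttle=0.0, brake=1.0)
        # Otherwise, clamp the command to the safety envelope.
        return Command(cmd.steering, min(cmd.throttle, self.max_throttle), cmd.brake)

# One control tick: the personal AI proposes, the car's guard disposes.
pilot, guard = PersonalPilotAI(), CollisionGuard()
safe_cmd = guard.filter(pilot.propose(route_state=None), obstacle_distance_m=30.0)
print(safe_cmd)  # Command(steering=0.1, throttle=0.4, brake=0.0)
```

Because the two components meet only at this narrow command interface, they can come from different vendors and evolve independently, which is exactly the flexibility the “all or nothing” single-system design gives up.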
The driver is responsible, and the car is certified
Legally, responsibility in this case remains with the driver, who allowed his AI to act on his behalf. Automakers, in turn, would be responsible for certifying and standardizing the anti-collision system, which must run non-stop and work both with a live driver and with an autopilot in the form of a personal AI.
With such a binary scheme, each self-driving car will reproduce the driving style of its owner, and your piloting system will look like a very powerful navigator connected by cable to the car’s diagnostic connector.