In mid-February of 2023, Tesla announced a recall of nearly 363,000 of its vehicles as a result of concerns associated with its Full Self-Driving (FSD) Beta software. The recall was announced only after the National Highway Traffic Safety Administration (NHTSA) had begun investigating Tesla models equipped with this software due to complaints that its limitations were putting motorists, their passengers, and nearby travelers at a heightened risk of being involved in a crash.

The Concerns That Prompted The Recall

Essentially, glitches in this “driverless” feature were allegedly causing vehicles to violate traffic safety laws in ways that did not leave motorists enough time to intervene. Specifically, the recall notice detailed that “The FSD Beta system may allow the vehicle to act unsafe around intersections, such as traveling straight through an intersection while in a turn-only lane, entering a stop sign-controlled intersection without coming to a complete stop, or proceeding into an intersection during a steady yellow traffic signal without due caution… and (more broadly) allows a vehicle to exceed speed limits or travel through intersections in an unlawful or unpredictable manner that increases the risk of a crash.”

An Example Of A Larger Pattern Of Concerns

These Tesla self-driving software glitches highlight two of regulators’ primary concerns with self-driving vehicles. The first is that motorists may not have enough time to correct mistakes made by self-driving software, even when they are fully engaged and alert at the wheel.

The second primary concern that regulators are starting to scrutinize with greater urgency is that self-driving technology has not yet been perfected. As a result, self-driving vehicles are generally safe, but they are, perhaps surprisingly, not yet as safe as human drivers.

The research team at Jalopnik recently conducted a study concerning the safety of self-driving vehicles. The team embraced the widely reported statistic that self-driving vehicles are 99.9% safe, which sounds about as safe as any technology can possibly be. However, human drivers, despite their many flaws and their inclination to ignore a certain degree of risk, have been estimated to operate their vehicles safely 99.999819% of the time. Practically speaking, that comparison renders the 0.1% error rate for autonomous vehicles a major cause of safety-related concern.

Mathematically speaking, for self-driving vehicles to achieve the level of safety attained by human motorists, they will need to hit at least six “nines of reliability,” or 99.9999%. This raises the question: given the current differences in error rates between human motorists and self-driving vehicles, will crash rates actually go up as autonomous driving software becomes more readily available to the public at large?
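To put those percentages in rough perspective: a 99.9% safety rate (three “nines”) works out to roughly one error in every 1,000 driving decisions, 99.999819% works out to roughly one error in every 550,000, and six nines of reliability (99.9999%) would mean roughly one error in every 1,000,000. The underlying units vary from study to study, so this comparison is illustrative rather than exact, but the gap between three nines and six nines is a factor of about a thousand.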

When Safety Concerns And Ethics Collide

An additional concern is partly ethical in nature. Automated driving systems are designed to assess risk, and although they are programmed to make these evaluations at a very high level, there are practical and ethical concerns about how those evaluations play out on the road.

Say, for example, a toddler has rushed into the road and their mother has followed to scoop them up. An automated system, sensing that it doesn’t have enough time to brake completely, is compelled to “make a decision” among hitting the mother and child, swerving right into a school bus, or swerving left into a cyclist.

A human driver is equipped to make a value judgment, perhaps choosing to risk personal harm by colliding with the larger, sturdier bus in order to spare the unprotected parties a direct collision, whereas an automated driving system isn’t.

A Matter Of Degrees

It is worth noting that vehicles on U.S. roads benefit from different levels of self-driving capability. Most “driverless” vehicles aren’t truly driverless. Instead, they act as driver support systems that generally require motorists to remain actively engaged and ready to take over in an instant. The NHTSA classifies automated driving technology into the following six levels, in ascending order of how much control the technology assumes rather than the driver:

  • Momentary driver assistance
  • Driver assistance
  • Additional assistance
  • Conditional automation
  • High automation
  • Full automation

As a result, it may be an oversimplification to discuss the safety-related merits of driverless vehicles unless the concept of “driverless” is more precisely defined. Does automated technology help motorists and passengers stay safer in many circumstances? Yes. Are human drivers still safer than vehicles operating at high and full levels of automation? Also, yes.

Contact A Trusted Arizona Personal Injury Attorney Today To Learn More

If you have recently been injured by a vehicle that may have been operating in a “driverless” capacity at the time of your crash, you may assume there is no way to prove that the other vehicle was at fault for your collision. Yet it is vitally important not to make assumptions about the strength or weakness of your case before an experienced car accident lawyer has objectively evaluated your circumstances. As the general public is learning over time, “driverless” does not mean free of technical errors and limitations.

Instead of spending sleepless nights wondering what you can do about your circumstances, take a moment to schedule a consultation with the Perez Law Group, PLLC car accident attorneys by calling (602) 730-7100 or filling out the contact form on our website. Once we’ve been alerted to your situation, we can start gathering evidence, determine whether other driverless car accidents like yours have occurred involving the same vehicle model and driverless technology that was involved in your collision, and begin building a case on your behalf. If you are owed compensation, we will do our utmost to ensure that you receive every dollar to which you are entitled.