Arizona just hit the spotlight again by becoming the first state in the Union where a self-driving car KILLED somebody.
It happened to be a self-driving Uber that did it, which is ironic, because Uber is a taxi gig built on smartphone technology and ordinary people as HUMAN drivers.
Apparently Uber was planning otherwise when it started experimenting with the self-driving car concept (discarding the human drivers and building a fleet of self-driving cars?).
As luck would have it (for the self-driving pundits) Governor Ducey of Arizona (an ex-corporate CEO) personally signed an Executive Order giving a free hand to all self-driving car experimentation in Arizona.
And it is still in the experimental stage, even though, seeing all the Google ‘Waymo’ cars driving around, one would think otherwise.
And an experiment it is.
Each car, whether Waymo or otherwise, carries a human minder in the driver’s seat who doesn’t really drive the car but keeps watch, just in case the car does something out of hand.
But this job tends to become boring, so many of these minder-drivers switch to doing what most people do when they get slightly bored...they take out their smartphones.
Which is exactly what this Uber co-driver did, while coasting down a dark section of road at 10 pm on a Sunday night.
It was at this time that a woman decided to walk her bike across this dark road...and was run over and killed by the Uber self-driving vehicle.
The minder-driver’s identity was at first not released; later the driver’s name was given as ‘Rafael’...and later as ‘Rafaela’.
Video of the driver at the wheel was later released, and the driver looks a lot like Saturday Night Live’s Horatio Sanz in drag.
So Rafael or Rafaela or whatever seems to be transgender, and not only transgender, but a felon (meaning 4 years in prison for armed robbery).
It seems Uber has a policy of hiring felons to be Uber co-drivers (no wonder these drivers are piling up records from fondling to murder).
So Rafael/Rafaela did what most felons have a tendency to do: fuck things up.
Not that this particular Uber co-driver did anything most self-driving car co-drivers don’t do anyway (text while co-driving).
To its credit, Uber has suspended all driverless car experimentation on the streets of Arizona (moving on to other states that permit it).
The funny thing is the Uber AI driving computer seems to have fucked up as well during the course of the accident.
The video seems to show the Volvo SUV actually accelerating before hitting the pedestrian - Elaine Herzberg - as she pushes her bike across the road.
The video itself looks a bit tampered with, as the pedestrian is totally invisible until the last second (despite intermittent street lighting), when she suddenly appears clearly in the SUV’s headlights, in a performance worthy of a Ninja.
In fact, Tempe drivers have taken video of this exact spot, and it appears to be much MORE illuminated than what Uber’s doctored video seems to show (see the YouTube videos of this).
The weird thing is Uber and its experts have no explanation of how it happened.
That’s right, they are baffled.
After all, it was the Uber self-driving mechanism which was driving the car.
And the Uber system has a variety of safety features, including an infrared camera, the equivalent of a radar, and a laser radar (lidar)...which should have detected the pedestrian in the middle of the road without a problem and caused the vehicle to reduce speed or stop (in spite of its distracted co-driver).
That these systems failed to detect the pedestrian (and, in fact, the car seemed to accelerate before running over the pedestrian) is indeed strange.
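The redundancy described above is the whole point of carrying three sensors: any ONE of them reporting an obstacle in the vehicle’s path is supposed to be enough to trigger braking. A purely hypothetical sketch of that expected behavior (this is not Uber’s actual code; the function, names, and numbers are assumptions for illustration):

```python
# Hypothetical sketch of a detect-and-brake layer, NOT Uber's software.
# The sensors are redundant on purpose: one positive detection suffices.

def should_brake(lidar_hit, radar_hit, camera_hit, distance_m, speed_mps):
    """Return True if the vehicle should brake for a detected obstacle."""
    detected = lidar_hit or radar_hit or camera_hit
    if not detected:
        return False
    # Stopping distance = reaction distance + braking distance.
    # Assumes ~0.5 s system latency and ~7 m/s^2 deceleration (dry asphalt).
    stopping_distance = 0.5 * speed_mps + speed_mps**2 / (2 * 7.0)
    # Brake with a 2x safety margin.
    return distance_m < 2 * stopping_distance

# At ~17 m/s (about 38 mph), an obstacle 50 m ahead should trigger braking,
# even if only the lidar sees it:
print(should_brake(True, False, False, 50.0, 17.0))  # True
```

By that logic, a pedestrian pushing a bicycle across an otherwise empty road should have produced a brake command well before impact, which is why the failure of all three systems at once is so hard to explain.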
Did the Uber AI computer start “laughing” like Alexa has been doing lately?
Did the Uber AI quiver with glee as it killed its first human?
And in fact, AI is totally involved in the driving mechanism, a fact that is mentioned in published articles on the matter.
But, as horrifying as the ‘accident’ may be, the more alarming aspects are the residual effects.
Clearly seen are the legal matters (and the specially trained lawyers) brought to the fore by the court case.
Of course it is going to the courts...and perhaps even the Supreme Court, ultimately.
The point of it all?
What to do when AI kills?
First, call in the specially prompted and trained lawyers to supply the spin.
Second, match the case to bought-and-paid-for judges who will rule the way the corporations want them to rule.
Some lawyers (the ones ignored by the media) are saying that there may be a problem with the program and are urging the cars be withdrawn from the public roadways.
Other lawyers (the select elite of the system), like James Arrowood, who runs Arrowood Attorneys and teaches at the State Bar as an ‘expert’ on driverless cars and product liability issues, back up AI.
The spin about driverless cars is they...
...are safer than human drivers....
...can put an end to traffic congestion.
...can free up our time to do paperwork in the cars while getting to our jobs.
...can make cross-country driving a pleasure instead of a chore.
Yeah, yeah, sounds fantastic!
But what do you do when the AI driving the car begins to KILL?
The answer seems to be in the process of becoming more and more obvious.
Nothing...because it is declared nobody’s fault.
The driver is blameless because control of the car has been surrendered to AI.
And AI is blameless because its actions are seen as a mechanical ‘hiccup’ or mechanical mistake involving no human error.
In case you have stopped following this FIRST HUMAN MURDER BY AI, the results seem to be a blameless nobody’s-fault resolution under the law.
That’s right - it’s nobody’s fault that a car ran over this innocent pedestrian.
(At last report, Uber has gotten over the fault bump by giving the pedestrian’s daughter a large, undisclosed sum of money.)
Meanwhile, the federal and state governments have let themselves be pre-empted by Uber’s sudden decision to halt all self-driving car experiments in Arizona and moved on to other things.
I guess they became so impressed by this sudden decision that they decided NO ONE was at fault - not even Uber, which sees fit to hire ex-convict FELONS to be co-driven around by untried AI as backup safety chauffeurs.
No...no fault on anybody.
And as you know, in most legal cases, what is decided soon becomes precedent or the basis for future legal decisions.
Well, the decision over Uber’s AI murder of a pedestrian is ‘no fault for anybody’ as far as the government and legal system are concerned.
Never mind that Uber cannot explain (or has not bothered to explain) how an AI equipped with all these safety features could not (or did not want to) see the pedestrian crossing in the middle of the road.
Again, laughing Alexa comes to mind when judging the unexpected DEMONIC actions of the Uber self-driving mechanism (it actually accelerated as it got closer to the pedestrian).
But in this age of accelerated adaptation, society seems to have already accepted the fact that when AI murders it’s nobody’s fault.
Accepted also is the fact that AI WILL misbehave...sometimes catastrophically.
(remember laughing Alexa)
And when it does, humanity has begun learning that it’s nobody’s fault.
The repeatedly demonstrated DOUBT about the safety and reliability of AI is being turned into no-fault technology, held blameless for all the death and destruction it may cause.
And there is talk of actually INCREASING our reliance on AI and computer-operated machines.
Is the amount of trust we are willing to place in these machines based in any way, shape, or form on just how reliable these independently operated systems are?
At present it does not seem to matter.
Just type in “self-driving car accidents” in Google images and see how many pictures show up.
The obvious interpretation is self-driving cars are NOT safe and are totally fallible.
The reliance and trust we are being programmed to place in self-operated machinery is excessive.
It isn’t safe at all!
The actual truth is future generations of humans will judge our stupidity based on how much control we give to AI over our society in general.