This is a quick(?) diary as a follow-up / expansion / response to yesterday’s concise, well-written, high-profile diary, “Elon may have killed another customer”.
I generally agree with the diarist’s main messages. In particular, this part of the closing sentence caught my eye:
...the Full Self Driving approach will ultimately, in my view, harm the image of EVs...
Oh… don’t get me started, I thought upon reading. I’ve had a score to settle with SDH (Self-Driving Hype) for years.
SDH will not “ultimately” harm EVs. It has been causing direct damage to the EV revolution — and therefore to Humanity — for years.
Laughs she who laughs last: EVs are now mainstream, whereas SDH has remained pretty much what it was, hype and a topic for people (mostly dudes, it seems) to argue about. But given that nothing less than our planet’s livability is at stake, any damaging delay is hard to forgive. Particularly when it goes largely unnoticed.
If you want a bit more detail why I’m making this accusation, follow below the fold.
(And also a secondary technical footnote to an assumption about programming made in yesterday’s diary.)
the cool… it kills
Mainstream US media has a middle-schooler’s obsession with “Cool”. It tries to sort things into “Cool” and “Uncool” that really shouldn’t be sorted that way at all. And the results have been no less than murderous.
A few examples?
- Al Gore? Uncool.
- George W. Bush? Cool in a goofy, lovable way.
- War on Terror, Clash of Civilizations, and Invading Iraq? Even cooler!
- Obamacare? Uncool.
- They tried their darndest to make Obama himself “Uncool”, and have largely succeeded with White people. Not so much with the rest of us. Thank goodness.
- The “Tea Party”? Cool!
- Hillary: most Uncool. Trump: bizarre but Cool. Or even, cool *because* he’s so bizarre?
Note a pattern there?
It is a really stupid and ugly way of picking winners and losers, while pretending to remain “neutral”.
5-10 years ago, the media did the same with emerging automotive trends: they played EVs and self-driving off against each other. EVs as the ultimate Uncool, and self-driving as the ultimate Cool, the true future of transportation, already knocking on our door.
And automakers happily played along. I won’t do a thesis’s worth of 2010s media research here, but see, e.g., this GM media blast from Sep. 2017:
GM and Cruise announce first mass-production self-driving car
Kyle Vogt, CEO and founder of Cruise Automation, revealed very big news for his company and its owner GM, which acquired the startup last year. The news is that they’re ready to mass produce a vehicle ready for self-driving, with everything on board they need to become fully autonomous vehicles once the software and regulatory environment is ready to make that happen.
The mid-2010s saw a relentless media assault of this type, taking nebulous press releases like the one above at face value. At the same time, similar releases about EVs were being shredded to pieces, laughed out of the room, or simply ignored.
The message was clear: only millionaires and chumps buy EVs, the auto world’s equivalent of “eat your veggies”. The cool kids? They jump on every new self-driving development.
And indeed, for at least several years Big Auto put its SDH efforts front and center, while giving EVs second-tier priority at best, particularly on the mass-marketing and mass-acceptance front. This is the key point here: like any company, automakers have a limited pot of $$$$ to put into R&D and emerging/speculative product lines. During the mid-2010s, at least, a lot of that pot went into self-driving at the expense of EVs.
It didn’t help that Tesla, the only EV brand in the US able to escape the “Uncool” trap, decided to present self-driving as inseparable from its technology, and often marketed SDH more aggressively than the basics of its groundbreaking EVs.
Now we know that Tesla totally lied about their self-driving capability. Perhaps the entire thing has been a sophisticated ruse: confuse the competition to invest billions into self-driving, in order to prevent them from noticing how much easier (relatively speaking) the transition to EVs is.
It seems no coincidence to me that those GM/Cruise vehicles in 2017 were Chevy Bolt EVs. The only way to make their (deliberately uncool?) Bolts a bit Cooler was to sprinkle some magic SDH powder on them. Or, Musk Jedi mind tricked them.
Then came March 18, 2018, when Elaine Herzberg found herself in the wrong place at the wrong time (btw, it was a Volvo gas SUV that killed her). Yes, it was not a crosswalk; holding a bicycle, she was an unusual-looking pedestrian and the road was dark; but traffic was very light and the car’s vision did notice her with plenty of lead time (6 seconds according to Wikipedia). Any human driver in their right senses would have at least slowed down or swerved into another lane.
In retrospect, Ms. Herzberg had inadvertently sacrificed herself for the benefit of humankind. Overnight, SDH was deflated like a hot-air balloon. Initial spin efforts to blame the victim only backfired further. Self-driving has contracted back closer to its true size: a speculative, expensive, risky tech development that, on the list of pressing human and economic needs that technology is called upon to solve, doesn’t even make the top 20.
Top 20? Maybe top 100.
What now?
Meanwhile — mostly with the indirect help of EU regulators and the Chinese auto industry (ok, ok, also Tesla) — EVs have become mainstream in the auto world. “Cool/Uncool” have finally become irrelevant to that basic question. EVs are just what cars are going to be from now on.
At present, self-driving seems at best no closer to maturity than it was during the mid-2010s hype. Probably further away, because now there’s actual regulatory scrutiny. At least one US Big Three automaker (which, not coincidentally, is also the current Big Three leader on EVs) has now completely shut down its self-driving project after wasting at least 5 years and nearly $3B.
Maybe one day there will also be some media scrutiny of self-driving, on a scale anywhere near what they run Democratic politicians through.
But we must make sure to decouple the two, so that self-driving doesn’t continue to drag EVs down.
footnote on self-driving and programming
Yesterday’s diarist wrote,
...the software cannot handle every possible situation that comes up.
This is true. But the diary’s paradigm was that of a traditional, deterministic computer program: When X happens, do Y.
There’s certainly lots of that in self-driving software. But that part of the algorithm is probably fairly simple (relatively speaking). Recall your driving or defensive-driving lessons; the logic is straightforward:
- Follow the road and follow traffic
- Keep proper, safe distance
- Follow the road signs and rules
- Give advance alert when making changes
- Make changes gradually unless in emergency
- In emergency, brake.
One can program this kind of stuff pretty hermetically (the SUV that killed Ms. Herzberg was apparently NOT programmed hermetically; seems like criminal negligence to me).
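For the curious, here’s a toy sketch of what that deterministic layer could look like. Everything in it (the situation fields, the thresholds, the action names) is made up for illustration; a real controller is vastly more elaborate, but the “when X happens, do Y” flavor is the same.

```python
# Toy sketch of the deterministic "when X happens, do Y" layer.
# All field names, thresholds, and actions here are illustrative, not from
# any real system; the hard part (assessing the situation) happens upstream.

def decide_action(situation: dict) -> str:
    """Map an already-assessed situation to a driving action."""
    if situation["obstacle_ahead"] and situation["seconds_to_collision"] < 2.0:
        return "emergency_brake"               # in emergency, brake
    if situation["seconds_behind_lead_car"] < 2.0:
        return "ease_off"                      # keep a proper, safe distance
    if situation["own_speed_kph"] > situation["speed_limit_kph"]:
        return "slow_to_limit"                 # follow the road signs and rules
    if situation["lane_change_planned"]:
        return "signal_then_change_gradually"  # advance alert, gradual change
    return "follow_road_and_traffic"

# Example: tailgating the vehicle ahead, everything else nominal.
print(decide_action({
    "obstacle_ahead": False, "seconds_to_collision": 99.0,
    "seconds_behind_lead_car": 1.2, "own_speed_kph": 95.0,
    "speed_limit_kph": 100.0, "lane_change_planned": False,
}))  # -> "ease_off"
```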
Once programmed hermetically enough, any robot will do a faaaaar better job than any human driver. Because the robot hasn’t just had a fight with its partner, isn’t too young or too old to respond properly to X, hopefully doesn’t have Y Chromosome Syndrome (sort of the opposite of the Y that’s the recommended action, heh), isn’t under the influence, etc.
The complicated part is figuring out what X is, i.e., assessing the immediate situation. No programmer can hermetically code every single scenario for that part. Instead, that part of the software is a super-complicated statistical model, a.k.a. machine learning, a.k.a. “Artificial Intelligence” (AI). Mostly it’s machine vision, if you will.
In preparation (“training”) for the actual task, the AI is fed millions upon millions upon millions of images (and possibly also videos) and is told, for each and every one, what X is. The model then builds a “response surface”, so to speak: a surface that, given never-before-seen input images, decides what X is.
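As a purely illustrative sketch of that workflow (and nothing like the industrial deep-learning stacks actually used in self-driving), here’s the same shape on made-up synthetic data: labeled examples go in, a fitted model comes out, and it then makes calls on inputs it has never seen.

```python
# Toy "training" sketch on synthetic data. Real perception systems use deep
# neural networks trained on millions of labeled road images; the workflow
# shape is the same: labeled examples in, a fitted "response surface" out.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each "image" is a 64-pixel grayscale patch, labeled 1 = pedestrian,
# 0 = no pedestrian. The labeling rule below is a fake pattern for the model to learn.
X_train = rng.normal(size=(2000, 64))
y_train = (X_train[:, :8].mean(axis=1) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# At drive time: a never-before-seen input arrives, and the model decides what X is.
new_frame = rng.normal(size=(1, 64))
print("estimated pedestrian probability:", model.predict_proba(new_frame)[0, 1])
```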
On that front, it is really, really hard to match human performance. There have been published articles showing human-level or even better-than-human performance in, e.g., recognizing traffic signs. But these have been on curated image sets, essentially “laboratory conditions”. Just like with new drugs and the FDA, it’s a looooong way from success in the lab to proven, safe success in real life with human lives at stake. And now that there’s finally proper scrutiny of self-driving, yes, it will take a while.
And even after algorithms start exceeding human performance on that front in real-life situations, the types of errors they do make will be very different, and will often look jarring or insultingly stupid.
It’ll then be the classic moral dilemma from the tiring genre of “whom do you sacrifice — the 5-year-old kid or the brain surgeon?”
So yeah. Bottom line, I agree. It will probably take a while. And that’s good, because what we really need to do now is electrify all of our cars, trucks, buses, trains, etc., while continuing to accelerate the cleaning of our electric grid and expanding (clean!) electricity service to underserved populations around the world.
Self-driving be damned. Really.
Disclaimer: I’m a research statistician and have been teaching machine learning at the beginner level for years. I’ve dabbled with machine learning in various contexts, both for teaching and in my own work, but have never worked on an industrial-scale AI algorithm of the type deployed for self-driving nowadays. These algorithms are known as “Deep Learning”, a relatively recent expansion of neural networks.