#Portland 😞 #Vision60or70something "third [sic] consecutive year of failure for #VisionZero, an ambitious and expensive 2016 policy that included a goal of eliminating traffic deaths and serious injuries by 2025... Officials had hoped that an increased police presence would be the missing ingredient to reduce fatal crashes." wtf who thought cops were going to help our #infrastructure design problems that we never adopted REAL policy requiring #trafficEngineers to fix?
I was supposed to be on vacation, and while I didn't do any blogging for a month, that didn't mean I stopped looking at my distraction rectangle and making a list of things I wanted to write about. Consequently, the link backlog is massive, so it's time to declare bankruptcy with another #linkdump:
Let's kick things off with a little graphic whimsy. You've doubtless seen the endless #TrolleyProblem memes, all working from the same crude line drawings. Well, philosopher John Holbo got tired of that artwork, so he whomped up a fantastic alternative, which you can get as a poster, duvet, sticker, tee, etc:
The #trolleyproblem 'thought experiment' is a perfect demonstration of why philosophy should never be used to solve a real-world problem. It has nothing to do with reality.
If you want to know why people don't trust #OpenAI or Microsoft or Google to fix a broken faux-#AGI #chatbot #LLM, consider that using suicidal teens for A/B testing was regarded as perfectly fine by a Silicon Valley "health" startup developing "#AI"-based suicide prevention tools.
(Aside: This is also where we get when techbros start doing faux-utilitarian moral calculus instead of just not doing obviously unethical shit.)
Even more than not wanting cars to do it, I don't want an #LLM to solve the #TrolleyProblem.
There's reason to suppose a sample recruited from #MechanicalTurk users isn't so great, but even if the results DON'T bear out, this is terrifying, because these researchers apparently did all this work without it once occurring to them what a horrible idea this would be.
#TrolleyProblem is predicated on the assumption that it's possible to know all possible outcomes in a scenario, just like every other thought experiment. "Broad Logical Possibility," as an old prof used to put it when students would get hung up on the fact that #THOUGHTEXPERIMENTS ALMOST INVARIABLY PRESUME PRIOR CONDITIONS THAT ARE NOT IN ANY CONCEIVABLE WAY POSSIBLE.
In a real self-driving car scenario, the car will almost always be able to do something else besides killing someone.
Put another way: #TrolleyProblem variations always presume binary options. There's no third (or fourth, or other) way - as there almost always is in real life.