The EU now wants a label on AI-generated content as it fights misinformation.
After all, no one has forgotten the incident in which an AI-generated image of an explosion at the Pentagon, posted by a blue-checkmark account, briefly wiped some 500 billion dollars off the stock market.
An INCREDIBLY busy Saturday for me, but at least while I was walking the dogs and shuttling kids to activities I was able to listen to some great talks for my #AcademicRunPlaylist! (1/6)
Next was a powerful talk by Joan Donovan on #ethics and #misinformation at the Markkula Center for Applied Ethics. Donovan takes us on a grand tour of disinformation, the dark corners of the internet, and the implications for policy and practitioners https://www.youtube.com/watch?v=RdSMzVEKIpk (4/6)
The decision by #YouTube to allow 2020 election denial disinformation videos, a decision already being celebrated among supporters of the Jan 6 insurrection, is in my opinion the single worst and likely most negatively consequential decision in the history of #Google.
@lauren Google's gonna have a really rough awakening when the EU's Digital Services Act comes into force, the effects of which will also be felt in Californian offices.
RIP, YouTube.
🤡 YouTube will stop removing false claims of 2020 US election fraud
➥ @lemonde
"The ability to openly debate political ideas, even those that are controversial or based on disproven assumptions, is core to a functioning democratic society – especially in the midst of election season," YouTube said in a blog post.
#AI #misinformation comes in many forms. One source is malicious actors deliberately using AI to generate text, images, and other media to manipulate others; another is AI hallucinating nonsense that is then accepted as truth. But a third category comes from AI itself being a poorly understood technology, allowing implausible stories about it to go viral before they can be fact-checked.
A good example of this final category was the recent story about a US Air Force drone "killing" its operator in a simulated test on the grounds that the operator (who had final authority over when to fire) was hindering its primary mission of killing as many targets as possible. As it turns out, this was a hypothetical scenario presented by an Air Force colonel at a conference hosted by the Royal Aeronautical Society to illustrate the AI alignment problem, not an actual simulation; nevertheless, the story rapidly went viral, with some versions going so far as to say (or at least suggest) that a drone operator had actually been killed in real life.
In hindsight, this particular scenario was quite implausible: it required the AI driving the drone to have a far greater degree of autonomy and theory of mind (and far greater processing demands) than the task at hand required, and it required many obvious guardrails and safety features that one would naturally place on such a military weapon to be either easily circumvented or completely absent. But the story's resonance certainly illustrates the public's unease with, and unfamiliarity with, this technology's actual capabilities.
It's about ChatGPT, lawyers, and a very angry judge. The details are juicy, and the implications are huge. Let's dig in. 🍿
⚖️ 3 characters are important: Roberto Mata (plaintiff), Steven Schwartz (attorney), Peter LoDuca (attorney on record).
Mata sues Avianca Airlines through Schwartz in state court. Avianca transfers the case to Manhattan federal court. This is when things get interesting. 🕵️♂️
"I don't think #AI will try to destroy humanity, but it might put us under strict controls”
"There's a small likelihood of it annihilating humanity. Close to zero but not impossible”
"We also need the people who are close to these systems to have a kind of certification... we need ethical training here. Computer scientists don't usually get that, by the way”
Dr Sasha Luccioni, research scientist at the #AI firm #Huggingface, said society should focus on issues like #AI bias, predictive policing, and the spread of #misinformation by #chatbots which she said were "very concrete harms".
"We should focus on that rather than the hypothetical risk that #AI will destroy humanity," she added.
More than one in three Slovaks believe in EU insect mandate
More than one in three Slovaks believe Brussels is endangering public health by ordering insect protein be added to food without consumers knowing, despite the European Commission’s attempts to debunk the hoax, a new study has found.
The Ipsos study, conducted for the Central European Digital Media Observatory, also looked into other conspiracy theories, finding that 37% of Slovaks believe their president consulted the US embassy on the members of the new technocratic government, and 53% believe election fraud is "highly possible".
Regarding the debunked EU insect mandate, 36% believe in Brussels’ plans to force companies to put insects in the food they produce, with far-right and nationalist voters among those who believed the hoax most.
“Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us.” - Daniel C. Dennett
Weird. It's almost like this was the point of ending free API access at Twitter. Maybe journalists should switch to a platform that is purposely open and encourage their readers to do the same… if what journalists value is the accurate and trustworthy dissemination of information and the ability to verify it as such. #Twitter #API #Misinformation #Bot #fediverse #activityPub
The best hope for stopping this is regulatory action, particularly under the DSA.
Groups like the Coalition for Independent Technology Research are helping lobby on behalf of researchers. Learn more and get involved: https://independenttechresearch.org/
This is a great idea -- a book on how to do fact-checking. It used to be that every large newspaper and other media outlet had fact-checkers on staff. That is no longer the case -- it's left up to editors, whose skill levels at this vary.
Anyway, could be useful to others too. From the University of #Chicago. Also, a good way to deal with #misinformation.
On my main account, I posted some news about #AI being used to #misinform others. I would like to remind the Sakurajima community and neighboring communities that #misinformation will not be tolerated and is against our rules. @chikorita157, @NaraMoore, @cymaiden, @kbnet, and I will moderate misinformation and take the appropriate #moderation actions whenever we find out about a case of misinformation in our part of the #fediverse; we also will not take reports of misinformation lightly and will respond ASAP.