Whoever would have predicted that the AI tech to predict/detect problems would itself become a problem?
Some #AI image detecting tools are labeling real #photographs from the #Israel-#Hamas war as fake, creating what an expert calls a "second level of disinformation"
I played around with #dalle3 over the last few days and it's quite amazing. Its most striking feature is the ability to generate effective prompts on its own, combined with text generation. All the text in the pics came from the machine, none of it was my input. It still produces a lot of typos and sometimes just nonsense text, but it's miles above any other generator I've seen. "Make a poster about a war between soups" is all I entered. And it gave me this:
Well then. Others are losing similar amounts of money. This promises to entrench the incumbents with lots of money while leaving other #AI services behind.
Microsoft's #GitHub #Copilot, the $10/month service used by 1.5M+ people, loses an average of $20+ per month per user and as much as $80 for some users.
Meta's #Llama 2 license has an unusual clause whereby they withdraw your right to use the model if you allege that #Meta has infringed your IP rights by training their models on your intellectual property. #copyright #genai #Llama2
Search engines are designed for a time when the web was mostly written by humans. Now GenAI is making search - here Bing - harder to trust. #fakenews #genai #generativeAI #search
@nick_tune almost certainly, yes. But the shit cannon aspect is going to need some countermeasures because otherwise entire orgs are going to hallucinate their mission, domain knowledge, and even incident reports.
Updates to my article on easy ways to run LLMs locally:
Ollama is now available for Linux as well as Mac, with Windows still “coming soon”
Among the models you can run in Ollama: medllama2, which has been fine-tuned to answer medical questions (obviously, treat any responses with both skepticism and caution!)
@simon's LLM project now has tools for generating text embeddings
In this 2nd global #JournalismAI survey, more than 120 editors, journalists, technology experts, and media makers from 105 small and large newsrooms in 46 countries share their insights on the use of artificial intelligence and #genAI. The report addresses the quality and sustainability of journalism
I had an insight into the #security of #genai. Teams don't understand #dataclassification with respect to #ML models. ML models are a bit like "data soup". If I told you that carrots are sensitive data and celery and potatoes are not, and you then try to make soup from carrots, celery, and potatoes, how do you classify the soup?
When someone runs inference against a model, it is super hard to know, much less control, which data might be used to formulate the answer. It's hard to prevent the release of training data in an output. It's like making soup with carrots, then trying to ladle out bowls of soup that have no carrots in them.
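The soup analogy maps onto a standard rule in data classification: a derived artifact inherits the highest sensitivity of anything that went into it. A minimal sketch in Python (the ingredient labels and sensitivity tiers are hypothetical, just to make the rule concrete):

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    SENSITIVE = 2

# Hypothetical labels: only carrots are sensitive data.
labels = {
    "carrots": Sensitivity.SENSITIVE,
    "celery": Sensitivity.PUBLIC,
    "potatoes": Sensitivity.PUBLIC,
}

def classify_soup(ingredients):
    """A derived artifact (the soup, or a trained model) takes the
    highest classification of its ingredients: the high-water mark."""
    return max(labels[i] for i in ingredients)

print(classify_soup(["carrots", "celery", "potatoes"]).name)
```

By this rule the whole model is "sensitive" the moment any sensitive data goes into training, which is exactly why per-ingredient classification stops being useful once everything is blended.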
Most #AI models have no authorisation model inside the data. In a #relational #database, the server understands identity. It knows who is issuing the query, which lets it make decisions about what kind of data can appear in the answer. We can impose controls at table level, column level, row level, even individual-cell level.
Most #LLM models have no such concept. I don't know of any that do, but I'm not deeply experienced. They don't know the identity of the thing issuing a query, and they can't impose limitations on the answer beyond "this entity is allowed to issue a query and that entity is not."
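To make the contrast concrete, here is a toy sketch (the roles, wards, and rows are entirely made up) of the row-level authorisation a relational server can enforce because it knows who is asking. An LLM serving answers out of one opaque blob of weights has no equivalent hook inside the data:

```python
# Toy "table" of rows, each tagged with the ward it belongs to.
rows = [
    {"patient": "A", "diagnosis": "flu", "ward": "general"},
    {"patient": "B", "diagnosis": "hiv", "ward": "restricted"},
]

# Row-level policy: which wards each role may see (hypothetical roles).
policy = {
    "nurse_general": {"general"},
    "attending": {"general", "restricted"},
}

def query(role, predicate=lambda r: True):
    # The server applies the identity-based filter BEFORE the
    # caller's own predicate ever touches the data, so restricted
    # rows can never leak into the answer.
    visible = [r for r in rows if r["ward"] in policy[role]]
    return [r for r in visible if predicate(r)]

print(len(query("nurse_general")), len(query("attending")))
```

Real databases do this with mechanisms like row-level security policies; the point of the sketch is only that the filter is keyed on identity and applied inside the data layer, which is precisely what an LLM's "data soup" lacks.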
***** Generative AI's fundamental problem -- the executives pushing it *****
I want to be very clear about this.
The fundamental problem is not the Generative AI systems themselves and their often utterly wrong (or, even worse, partly wrong, but oh so nicely written and so convincing!) "answers".
The problem is that executives at Google and other Generative AI Tech firms are putting these tools out there in ways that encourage their use by the nontechnical public as "answer machines", when we (and presumably the execs) know that those answers can be dangerously wrong. Disclaimers saying "This is an experiment, there may be wrong answers, be sure to check for yourself blah blah blah" are utterly worthless except perhaps to satisfy their lawyers.
This situation is deeply unethical, even if we put aside their stealing text for answers word for word from sites -- usually with no credit given or links back.
It's pretty much the most alarming thing I've seen on the Internet so far in my entire career, in terms of the potential damage that could be done to websites and the public at large over time. -L
@lauren just left a tech consulting firm that fired thousands while simultaneously pumping billions into #genai . You’re right … it’s extremely bizarre and I don’t get it
"This article will talk about the relationship between requirements and software, as well as what an AI needs to produce good results." -- #JaredToporek
@mamund I think the hardest part of software is even beyond "requirements", tbh. The idea that there are "requirements" that can be specified before delivery starts is so 1990s waterfall.
The hard part is actually embedding awareness and agency inside the organization that is evolving the software so it can make continuous adjustments.
Better "AI requirements" is not going to provide that. Ever.
As #genai takes over, tools like ChatGPT will generate more and more code without any quality assurance in place. I firmly believe tools like https://codewarden.ai are an important addition to your tool chain. Basically, it's AI peer-reviewing itself; you need to take a look at this thing.
We ran a post-survey after teaching students to use #ChatGPT to write #rstats code in our incoming #Sociology grad student bootcamp.
The results were 7 positive, 3 unsure about #genAI.
On the + side:
"I haven't seen another academic environment using the tool in such an open manner...so it was nice"
"I loved it! I definitely will continue using ChatGPT for coding. It really challenged me to put into words what I wanted to do... as I was explaining to chatgpt...I better understood it."
Do you use #ChatGPT? We need your thoughts about one of your past interactions … please take our academic research survey and opt in for a US$100 Amazon gift card. https://bit.ly/chatGPTix #genAI #hmc @commodon
Fan fiction has been around for a long time, but it has mostly taken the form of books, since producing a short film or series requires a whole production. What Corridor Crew has been doing to combine VFX and generative AI shows there really could be fan-made short films that are really good. I'd love to see bars from all over the world transformed into Star Wars bars with ships going through the sky along with blasters and lightsabers. Bring together some cosplayers with some amateur filmmakers and you could really create something fun. #FanFiction #GenAI #CorridorCrew
Using #GenAI and LLMs for unfiltered and unsupervised one-to-many #marketing and #CustomerExperience tactics for content feels downright dangerous. That may change over time, but for now, pay attention to brand missteps with AI when it's given free rein to produce content for customers. Explore AI, but protect your #CustomerExperience by using AI cautiously for now. Three examples 🧵: