For those of you wanting to play with the iOS #ChatGPT app: #OpenAI's ChatGPT app for iOS is now available in 11 new markets.
Namely: Albania, Croatia, France, Germany, Ireland, Jamaica, Korea, New Zealand, Nicaragua, Nigeria, and the UK, with OpenAI saying more are to come "soon."
$20/month isn't a ton of money, but if you use a free UI with the #OpenAI API, you can get away with paying less than $5/month for #ChatGPT even with relatively heavy usage, and it's a lot more stable than the free version. #AI #LLM #LLMs
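A rough back-of-the-envelope supports that claim. This is just a sketch, assuming gpt-3.5-turbo's pay-per-use rate of roughly $0.002 per 1K tokens at the time of writing; actual prices vary by model and change over time, so check OpenAI's pricing page before relying on the numbers.

```python
# Rough monthly cost estimate for API-based ChatGPT usage.
# Assumed rate: ~$0.002 per 1K tokens (gpt-3.5-turbo era pricing);
# this is an assumption, not a guaranteed current price.
PRICE_PER_1K_TOKENS = 0.002

def monthly_cost(messages_per_day: int, tokens_per_message: int, days: int = 30) -> float:
    """Estimate a monthly API bill in USD from average daily usage."""
    total_tokens = messages_per_day * tokens_per_message * days
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

# "Relatively heavy" usage: 100 exchanges a day at ~700 tokens each
# (prompt plus reply combined).
print(f"~${monthly_cost(100, 700):.2f}/month")
```

Even at a hundred fairly long exchanges a day, the bill comes out around $4/month, well under the $20 subscription.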
I hope the #EU adopts the legislation that will force #BigTech and their #AI projects to be fully transparent.
#OpenAI's argument that it's impossible to comply with this regulation is pure bulls*it.
They're refusing because of the copyright restrictions they're violating with their training data, and that isn't happening by accident but on purpose!
Yesterday, I received a sales call on my personal phone for tickets for Goodwood. I asked how they got my number and, of course, it was through a data broker.
I immediately went onto the data broker's website and asked for my details to be removed, in compliance with the GDPR.
It made me think, though, that LLMs are trained on a corpus of data that may include details such as this. What happens then? How do we get our data removed?
(I'm sure @neil and other large brains have noodled on this)
The EU is drafting a new law governing the use of artificial intelligence. OpenAI responded to early drafts by announcing it would withdraw from Europe unless the law is watered down.
All I want is a CDN that's also private, in the sense that I don't need to include any US entities on my list of sub-processors. I guess bunny.net is not on the list 🔓
Thought-provoking article. It explains #SiliconValley's fixation on #longtermism — prioritizing future lives over those living today — and why the moral math is flawed.
OpenAI's Sam Altman often appears to contradict himself when discussing AI: on one hand, he offers an optimistic outlook where machines catapult us to heights we never imagined; on the other, he gravely admits AI could play a key role in humanity's downfall. So why does he keep working on perfecting something that could kill us all?
This piece around "superintelligence governance" has sparked quite the uproar.
But is it all just hype?
Let's take a closer look at why some believe it's much ado about nothing.
First and foremost, there seems to be a fog of confusion surrounding the very definition of "superintelligence." Even if we consider Nick Bostrom's interpretation (or lack thereof), the way OpenAI employs the term leaves us scratching our heads.
So the problem I see with the "FDA for AI" model of regulation is that it posits that AI needs to be regulated separately from other things.
I fully agree that so-called "AI" systems shouldn't be deployed without some kind of certification process first. But that process should depend on what the system is for.
A final kind of risk that might not be adequately handled by existing frameworks is the risk that widely available media-synthesis machines pose to information ecosystems.
Here, I keep hoping for some way to set up accountability: what if #OpenAI were actually accountable for everything #ChatGPT outputs? (And #Google for #Bard and #Microsoft for #BingGPT?)
Maybe we already have what we need; maybe there's something to add.