Now, do I believe the guy with a rational view who helps build and maintain social knowledge, or the guy who wants to flog me ever more expensive graphics cards? 🤔
#AI #OpenAI #AGI #EffectiveAltruism #EffectiveAccelerationism: "This "AI debate" is pretty stupid, proceeding as it does from the foregone conclusion that adding compute power and data to the next-word-predictor program will eventually create a conscious being, which will then inevitably become a superbeing. This is a proposition akin to the idea that if we keep breeding faster and faster horses, we'll get a locomotive:"
Do more, "think" in more complex ways: Artificial General Intelligence is supposed to be the next step in the AI game. But there is also criticism, and still many open questions.
OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say
Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.
The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said.
Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.
The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences.
Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.
After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said.
An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company.
Though the model performs math only at the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.
Reuters could not independently verify the capabilities of Q* claimed by the researchers.
'VEIL OF IGNORANCE'
Researchers consider math to be a frontier of generative AI development.
Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely.
But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence.
This could be applied to novel scientific research, for instance, AI researchers believe.
Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.
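The "statistically predicting the next word" mechanism the article describes can be sketched with a toy bigram model. This is purely illustrative: real systems use large neural networks over long contexts, and the corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then predict the most frequent successor. The objective
# (predict the next token) is the same idea LLMs train on, at vastly
# larger scale and with learned representations instead of raw counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often (2 of 4 times)
```

Note that the prediction is a statistical mode, not a deduction; this is why such models can fluently produce text while having no notion of a single "right answer" the way arithmetic does.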
In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter.
There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.
Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed.
The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.
Altman led efforts to make ChatGPT one of the fastest-growing software applications in history and drew from Microsoft the investment and computing resources needed to get closer to AGI.
In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.
"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.
#AI #OpenAI #AGI #ChatGPT #Microsoft: ""Feel the AGI! Feel the AGI!" employees reportedly chanted, per The Atlantic, a refrain that was led by Sutskever himself.
The chief scientist even commissioned a wooden effigy to represent an "unaligned" AI that works against the interest of humanity, only to set it on fire.
In short, instead of focusing on meaningfully advancing AI tech in a scientifically sound way, some board members sound like they're engaging in weird spiritual claims.
Sutskever's strange behavior may also help explain at least some of this weekend's chaos."
“The AI faith has many popes—an almost exclusively white male cohort of thirtysomething executives and programmers who genuinely believe they are working on the most important thing in the world.”
The investors, #Kleinweich first and foremost, want ROI.
Well, and people who grew up in Silicon Valley (e.g. studied a tech subject at Stanford University) are mostly infected with the dream of being a millionaire before 30.
At #OpenAI it seems to be almost the entire workforce...
...Many people, including in the #KI industry, think that #AGI is still many years away, or will not be achievable at all in the foreseeable future. There are some signs that they are wrong. (E.g. the rumors of a breakthrough at #OpenAI this week; #embodiement, see my separate thread, etc.)
Let's assume there were an #AGI that was self-aware and therefore also had a self-preservation drive. What would be...
... its greatest threat? Humans! With their reckless global warming and the resulting increase in extreme weather events, especially hurricanes.
Humanity is also its biggest competitor when it comes to "food": energy and water.
So what would be more natural for this #AGI than to act much as #PhilipKDick's #SecondVariety did?
It is perfectly logical.
Have a look: on the front page of our dear colleagues at @heiseonline, we've set up a special focus section on the topic of artificial general intelligence.
For everyone who wants to pick up a bit of background knowledge. 🤓
A Gay Nerd develops or maybe has developed a Certain Something which is, of course, called 》Q*《 and that is More Intelligent than humans, and I am neither surprised nor Do I Not Think it's beautiful!
Everybody in the AI space is now talking about Q* and Q-learning. It seems to be what spooked the OpenAI board.
What is it? Q-learning tries to find an optimal policy that defines the best action to take in each state to maximize the cumulative reward over time.
In other words, it is a model that is able to run autonomous agents that build strategies for long term success, incentivized by rewards.
Right now, researchers have been able to make this work in smaller experiments, but if we scale this all the way with multimodality, it can lead to AGI.
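The Q-learning idea described above can be sketched on a toy problem. This is a generic textbook illustration, not anything to do with OpenAI's unconfirmed Q*; the corridor environment, function names, and hyperparameters here are all invented for the example.

```python
import random

# Toy Q-learning: a 5-state corridor. The agent starts at state 0 and
# earns a reward of 1 only by reaching state 4 (the goal, terminal).
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # move left or right

def corridor_step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=300, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action_index]
    for _ in range(episodes):
        s = 0
        for _ in range(100):  # cap episode length
            if rng.random() < epsilon:
                a = rng.randrange(2)  # explore occasionally
            else:  # exploit the current estimate (random tie-break)
                a = max(range(2), key=lambda i: (Q[s][i], rng.random()))
            s2, r, done = corridor_step(s, ACTIONS[a])
            # Core Q-learning update: nudge Q[s][a] toward the observed
            # reward plus the discounted best value of the next state.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if done:
                break
    return Q

Q = train()
# With enough episodes, the greedy policy moves right (action index 1)
# in every non-terminal state.
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(GOAL)]
print(policy)
```

The "strategies for long-term success, incentivized by rewards" framing above corresponds to the discount factor `gamma`: the update propagates the goal reward backward, so even the starting state learns which action eventually pays off.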
Elon Musk has predicted superintelligence to arrive in 5-6 years, maybe he is right? What do you think?
'“Insofar as he is polarizing, it’s because he is young, successful and ambitious, and people are envious,” he added.'
That's what they said about Gates until Microsoft's anticompetitive behaviour was headlining for months in 1993. Then they said 'we would never allow that kind of thing in our sector.'
Altman’s polarizing past hints at OpenAI board’s reason for firing him https://wapo.st/47lUdeA
@lexi I just can’t believe this episode has had zero impact on their stock. The absolute ludicrousness of people both believing in #agi and believing that individual humans are the only indicator of value. #OpenAI
The best way to get away from the hype is to use GPT-4 (the paid version) yourself for a while; it made me less impressed with the "intelligence" part of AI. At the same time, it only makes me more convinced that generative AI has a lot of potential as a tool. It can and will be disruptive, but please let's stop all the blabla about AGI (my opinion). There are enough risks and problems to think about and act on with these tools without AGI discussions. #AI #AGI #GPT4 #GenerativeAI
So let me see if I got this straight. #Altman was booted from #OpenAI due to safety/security concerns over the development of #AI being too fast. OpenAI's governance structure was a defining factor in that. But the board noticed the fudge-up and tried to negotiate his return. It fell through, and Altman et al. are now at #Microsoft, where they can develop #AGI without the limits supposedly imposed by OpenAI's governance structure.
Sounds to me like a pretty bad backfire for AI “safety”.
Here's a thought that just came up while talking with friend Steve:
It could well be that billions of dollars of investment in the world have been redirected simply because someone decided that LLMs released to the public should speak in the first person.
I don't recall that anyone seriously suggested that AI Dungeon or NovelAI were going to be AGI. But talking to something in the first person has a big psychological impact, perhaps enough to overcome ordinary rationality (see for instance ELIZA).
We all need to get straight that identity and sentience are two VERY different things.
#Identity is a distinctly mammalian social function. It is embedded in our corporeal every-day instinct for interdependence. Computers can't and won't ever have a sense of self.
#Sentience is impossible to define. It is the soul. It is the experience of being. You can't build it. You can't define it. You can't model it with math. Sentience is the loophole science can't close. Don't worry about it.
For those who missed it: Sam #Altman is back as #OpenAI #CEO. Here's a summary of the events, allegations, and alleged (but also denied) letters hinting at #AGI.