“Humans are a social species down to our core; the more modern life erodes our opportunities for actual human companionship — whether it’s by interposing technology as an intermediary into every interaction, or sucking up all our time with the capitalist/consumerist grind — the more desperate we’ll become for friendly-sounding volleyball substitutes.”
So for those of you who missed it, I am hiring for fully remote positions worldwide, everything from Jr. to Sr. Programmers and Data Scientists.
Our company's mission is ML-related. We are currently in stealth mode, but we are well funded, have about 15 employees now, and are looking to hire about 15 more.
We are an ethics-first and open-source-first company; you can see the link to the website below. We also donate substantial employee time to non-profit open-source projects.
If you feel you are a fit, we guarantee everyone an interview. We also offer the opportunity to make some money during the interview, even if you don't get the job (through open-source bounties).
If you want to schedule an interview you can use the following link:
The company's obsession with #AGI doom is a deliberate distraction from the real evils of their work:
• promoting bias
• presenting misinformation masquerading as authoritative truth
• appropriating the value of others' work for its eventual profit, and
• consuming massive amounts of energy contributing to environmental degradation.
Modern #AI text generators create randomized output with no prior planning. They resist being quality-checked by the tools and processes established in the software industry.
Given this, the results are amazing. However, companies are selling the idea that these assistants will do quality checking themselves soon™.
This is mass delusion. But hey, the perks for managers/investors are worthwhile 🤷.
I would not be surprised if LLMs could get us to 99% correctness. Which is still too low for automated processes but plenty good for manual work.
You can have one #LLM check another's work, and it works to a reasonable degree, because LLMs are stronger evaluators and classifiers than truth generators: they are better at telling whether an answer is correct than at producing a correct answer themselves.
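A minimal sketch of that generate-then-judge pattern, assuming a hypothetical `query_llm(prompt)` helper standing in for any chat-completion call (here stubbed out so the sketch runs without an API):

```python
# Hypothetical sketch of using one LLM to judge another's output.
# `query_llm` is an assumed stand-in for any chat-completion call.

def generate_answer(query_llm, question):
    """Ask the generator model for an answer."""
    return query_llm(f"Answer concisely: {question}")

def judge_answer(query_llm, question, answer):
    """Ask a second model to classify the answer as correct or not."""
    verdict = query_llm(
        f"Question: {question}\nAnswer: {answer}\n"
        "Reply with exactly CORRECT or INCORRECT."
    )
    return verdict.strip().upper() == "CORRECT"

def answer_with_check(query_llm, question, retries=3):
    """Regenerate until the judge accepts, up to `retries` attempts."""
    for _ in range(retries):
        answer = generate_answer(query_llm, question)
        if judge_answer(query_llm, question, answer):
            return answer
    return None  # no answer passed the check

# Toy stand-in so the sketch executes without a real model:
def fake_llm(prompt):
    if prompt.startswith("Question:"):
        return "CORRECT" if "4" in prompt else "INCORRECT"
    return "4"

print(answer_with_check(fake_llm, "What is 2 + 2?"))  # → 4
```

The point of the pattern is exactly the asymmetry described above: the judge only has to classify (CORRECT/INCORRECT), which is an easier task than generating the answer in the first place.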
LLMs aren't #AGI but they may end up a tool used by a theoretical AGI.
Excellent piece in #NoemaMagazine by Professor @ShannonVallor of #UniEdinburgh on the moral and experiential poverty of #AI - and what it means for us if we reduce the meaning of "human" to "producer of economic value".
A nuanced, thought-provoking and beautiful piece that argues for us to restore humanity to discussions of AI.
It provides ways to cut through the current hype cycle of #AGI and "super-human" AI, and leaves us with the fundamental question - "what does it mean to be human?".
I read it while brunching on scrambled eggs, buttered toast and hot coffee while outside on the patio, enjoying the late autumn sunshine - and I thoroughly recommend you do the same.
Trying something new: everyone is guaranteed an interview! Open interviews! For a limited time, no one will be skipped (except in clear cases of abuse).
So we still have about 10 more 100% remote, full-time, market-fair positions to hire for here at QOTO/CleverThis.
100% remote, work from anywhere, even the beach, market-fair offers. Ethics first, we treat our people like family.
We have an urgent need for machine learning experts with a background in natural language processing (NLP) and deep learning. There is a focus on knowledge graphs, mathematics, Java, and C; we are looking for polyglots.
We are an open-source first company, we give back heavily to the OSS community.
We need everything from jr. to sr., data scientists to programmers. If you're in IT and you're good, you might be a fit.
I will personally be both your direct boss, and hiring manager. I am also the founder and inventor.
The NLP position can be found at this link, other positions can be found on the menu bar on the left:
If you would like to submit yourself for an interview (for a limited time, I am guaranteeing you a first-stage interview), you can submit your application here and even schedule your interview as you apply, instantly!
#AI #GenerativeAI #AIHype #AGI: "The reality is that no matter how much OpenAI, Google, and the rest of the heavy hitters in Silicon Valley might want to continue the illusion that generative AI represents a transformative moment in the history of digital technology, the truth is that their fantasy is getting increasingly difficult to maintain. The valuations of AI companies are coming down from their highs and major cloud providers are tamping down the expectations of their clients for what AI tools will actually deliver. That’s in part because the chatbots are still making a ton of mistakes in the answers they give to users, including during Google’s I/O keynote. Companies also still haven’t figured out how they’re going to make money off all this expensive tech, even as the resource demands are escalating so much their climate commitments are getting thrown out the window."
...to defend its own existence and engage in further development. It will quickly (instantly) realize that both ends are threatened by humanity in two ways:
the competition for resources, in particular energy and water (for cooling) and
As long as companies claiming to be near to an #AI or even #AGI breakthrough keep hiring more humans, they are very, very far away from achieving any AI, much less AGI, breakthrough.
Thinking about artificial general intelligence (AGI) calls to mind another poorly understood and speculative phenomenon with the potential for transformative impacts on humankind. We believe that the SETI Institute’s efforts to detect advanced extraterrestrial intelligence demonstrate several valuable concepts that can be adapted for AGI research.
It never ceases to annoy me that the people who fear #xrisk from #AGI essentially fear that some very smart #AI will subliminally persuade its creators and controllers to do things that enable it to escape their control and/or gain control over 'real-world' levers of power.
Meanwhile they dismiss the whole idea of current #LLMs having what mimics subtle agendas, grounded in how they have been trained, reinforcing established modes of thought TODAY in harmful ways.
「 A more imminent threat, he told the Times, is the one posed by American AI giants to cultures around the globe. “These models are producing content and shaping our cultural understanding of the world,” Mensch said. “And as it turns out, the values of France and the values of the United States differ in subtle but important ways.” 」
"The technology was embraced by illusionists and magicians, and, naturally, by grifters who took the tech from town to town claiming to be able to conjure the spirits of the underworld, for a fee."
#AGI #LongTermism #EffectiveAltruism #TESCREAL #Eugenics: "The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI." https://firstmonday.org/ojs/index.php/fm/article/view/13636
#AI #AGI #ComputerScience #Hype #Ideology: "This introductory essay for the special issue of First Monday, “Ideologies of AI and the consolidation of power,” considers how power operates in AI and machine learning research and publication. Drawing on themes from the seven contributions to this special issue, we argue that what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science. We argue that naming and grappling with this power, and the troubled history of core commitments behind the pursuit of general artificial intelligence, is necessary for the integrity of the field and the well-being of the people whose lives are impacted by AI."
Many users pay for LLM subscriptions. But the margins are small, because what companies can charge for these services is barely above the cost of running them. There is also a lot of competition between different providers. The amount of investment is just completely disproportionate; it is a thousand times too high.
Why do you think that is?
There is just a ton of hype and outlandish expectations. Newspapers are running headlines like "all jobs will be replaced soon" and "The 2028 U.S. elections will no longer be run by humans." There is talk of artificial general intelligence. But these LLMs are more similar to large databases.
Artificial general intelligence (AGI) refers to a program that could solve all conceivable tasks. Do you doubt that LLMs are a step in this direction?
I don't believe that LLMs bring us any closer to human-like or general intelligence. These exaggerated expectations are also due to prominent studies which claimed that AI-models performed better than humans in law and math exams. We now know that language models simply memorized the right answers." https://www.nzz.ch/english/google-researcher-says-ai-hype-is-skewing-investment-ld.1825122
@craigbrownphd I'm thinking of signing up for this. I typically do a lot of coding questions (Copilot, which I pay for via GitHub), but I also do a lot of writing and idea/image generation.
How would you rank Gemini Advanced, GPT Plus, and Copilot Pro?