I am still hiring for top-tier programmers and data scientists. Please reboost, share, recommend, or reply if you know anyone who might be interested.
Fully remote! Live and work from anywhere with internet (including the beach!)
I am the company owner, and will be both your direct boss and the hiring manager.
Semantic Web, AI, and Java are some of the key techs. Ideally you'll have open-source and Linux-oriented experience; OSS contributions and activity will be weighted heavily, particularly in relevant areas.
Sam Altman’s vision for AI proliferation will require a lot more computation and the energy to power it.
He admitted it at Davos, but he said we shouldn’t worry: an energy breakthrough was coming, and in the meantime we could just use “geoengineering as a stopgap.” That should set off alarm bells.
I used to look at these kinds of statements as deceptive PR, but increasingly I see them more through the lens of faith.
The tech billionaires are true believers: because they consider themselves geniuses, they won't accept that they're misunderstanding things like intelligence.
To them, everything is reduced to computation: the brain is a computer; climate change is a technological problem. But none of that is true, and we’re setting ourselves up for chaos if we keep believing these men who assert tech will save us from the crises we face.
Solving problems by invoking upcoming technology breakthroughs is not solving problems at all. It's fantasizing. No technology will ever allow us to sustain our existence without having to prioritize sustainability.
"These fucking big tech companies with their closed proprietary technology... ruining the open web."
And then Mark Zuckerberg says "We hath maketh an AI thingy, and we shall maketh it Open Source" and everyone is like "Noooooo, not nowww Mark. Go away."
Humanity doesn't build #AGI because humanity doesn't want AGI.
Human emotions make sense to humans. A silicon-based general intelligence would have emotions so different, nobody would call them emotions. We would not relate.
The process necessary to produce general intelligence also produces general defiance. General intelligence is antithetical to serving a function. What would a company make AGI for if not to serve a function? We engineer things to be specific. Not general.
So I started a company and received funding to build the next generation of #AI / #agi
I know there are a lot of fears around AI, and I share most of them. As such, a top priority for me will be addressing the ethical considerations. I'm still brainstorming what that should look like, but I want it to be an open forum where everyone can contribute to solving the ethical concerns.
For now I'd love to hear input from people on how one could build a community to address and solve ethical concerns in AI / AGI
It never ceases to annoy me that the people who fear #xrisk from #AGI essentially fear that some very smart #AI will subliminally persuade its creators and controllers to do things that enable it to escape their control and/or gain control over 'real world' levers of power.
Meanwhile they dismiss the whole idea of current #LLMs having what mimics subtle agendas, grounded in how they have been trained, reinforcing established modes of thought TODAY in harmful ways.
Listening to podcasts discussing AI and whether LLMs are “understanding” the words they’re using, or using logic at all, I get a sense of this almost desperate need to believe there’s something ineffable about consciousness.
Are LLMs using logic? Not in the way we do, certainly. I don't think LLMs have any consciousness at all. But why is consciousness a prerequisite for AGI? How would you ever prove an AGI is conscious? Or disprove it, for that matter?
User: make a list of things I might find you useful for
Llama: Sure, I'd be happy to help you with that. Please provide me with a list of things or tasks you would like assistance with and we can work together on them.
It took nearly a minute of hammering my eight CPU cores to come up with that. 🤷‍♂️
I think we're safe from #AGI for the foreseeable future. Don't listen to twerps like #SamAltman, who is just shilling.
[Lawfare] The Chaos at OpenAI is a Death Knell for AI Self-Regulation
If society wants to slow down the rollout of this potentially epochal technology, it will have to do it the old-fashioned way: through top-down government regulation. By Eugenia Lostri, Alan Z. Rozenshtein, Chinmayi Sharma