#SiliconValley #BigTech #VCs #SocialMedia #Web #AI #Capitalism: "I believe we're at the end of the Rot-Com boom — the tech industry's hyper-growth cycle where there were so many lands to conquer, so many new ways to pile money into so many new, innovative ideas that it felt like every tech company could experience perpetual growth simply by throwing money at the problem.
It explains why so many tech products — YouTube, Google Search, Facebook, and so on — feel like they’ve got tangibly worse. There’s no incentive to improve the things you’ve already built when you’re perpetually working on the next big thing.
This belief — that exponential growth is not just a reasonable expectation, but a requirement — is central to the core rot in the tech industry, and as these rapacious demands run into reality, the Rot-Com bubble has begun to deflate. As we speak, the tech industry is grappling with a mid-life crisis where it desperately searches for the next hyper-growth market, eagerly pushing customers and businesses to adopt technology that nobody asked for in the hopes that they can keep the Rot Economy alive."
There’s a difference between being useful and being a usefool.
A usefool is someone who launders their legitimacy to benefit corporate interests for short-term gain. They do so even if, ultimately, it’s against their own interests and those of others in the longer term. (1/3)
I love how folks still think people farmers like Google have a trust problem.
In 2024!
They don’t have a trust problem.
They are untrustworthy.
And they’ve always been untrustworthy. It’s just that your friends happen to work there, and your agency gets scraps of work from them sometimes, and maybe they sponsor events you speak at, so you’re even still questioning this. And, by doing so – still, even today – you’re helping legitimise them.
#AI #GenerativeAI #OpenAI #BigTech #SiliconValley: "Company documents obtained by Vox with signatures from Altman and Kwon complicate their claim that the clawback provisions were something they hadn’t known about. A separation letter on the termination documents, which you can read embedded below, says in plain language, “If you have any vested Units ... you are required to sign a release of claims agreement within 60 days in order to retain such Units.” It is signed by Kwon, along with OpenAI VP of people Diane Yoon (who departed OpenAI recently). The secret ultra-restrictive NDA, signed for only the “consideration” of already vested equity, is signed by COO Brad Lightcap.
Meanwhile, according to documents provided to Vox by ex-employees, the incorporation documents for the holding company that handles equity in OpenAI contain multiple passages with language that gives the company near-arbitrary authority to claw back equity from former employees or — just as importantly — block them from selling it.
Those incorporation documents were signed on April 10, 2023, by Sam Altman in his capacity as CEO of OpenAI."
Has anyone written about how textual generative AI feels strangely close to toxic masculinity in some respects? The absolute confidence in everything stated, the lack of understanding of the consequences of getting that confidence wrong for important questions, the semi-gaslighty feeling when it “corrects” itself when you call it out on something. It so often feels like talking to someone one would despise and avoid in “real life.” I’m curious if anyone did some writing on this.
A female computational neuroscience and machine learning expert took to X at the weekend to describe a “dark side” of the startup culture in Silicon Valley.
Sonia Joseph alleged that a culture of sexual coercion has taken hold of San Francisco’s community housing tech scene, with “heavy LSD use” and “sex parties held by mainly male tech and entrepreneurial elites that involve mock-violent role playing with female participants.”
In particular, “early OpenAI employees” were referenced by Joseph, as well as their friends and “adjacent entrepreneurs.” Salon has more.
I'm truly, deeply alarmed at how the tech industry is trying to insert itself into every human interaction, getting between humans in every possible relationship, thinking that's "better" while absolutely destroying everything that makes society work.
The answer is MORE human-to-human interaction not LESS. FFS.
(screenshot from a substack that landed in my inbox, but you can see this same ethos everywhere, including strained attempts to portray chatbots with "theories of the mind")