What hype exactly? The "destroy all humans" hype or something else I’m unfamiliar with?


Will companies ever stop selling snake oil?


DarkGamer

Although I'm very impressed by the results that LLMs have been producing, as long as it is probabilistic there are a lot of tasks it will be unsuitable for. For anything critical, it needs to be right all the time, and hallucinations and model collapse are not acceptable. However, I can totally see it being the natural language interface that then gets handed off to non-probabilistic endpoints.
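As an illustrative sketch of that last idea (all names here are hypothetical, and a real system would call an actual LLM API for the classification step): the probabilistic model only maps free text to a known intent, and a deterministic handler does the actual work, so a hallucination can at worst produce a safe fallback rather than a wrong answer.

```python
# Hypothetical sketch: LLM classifies intent, deterministic code does the work.

def llm_classify_intent(text: str) -> str:
    """Stand-in for an LLM call that maps free text to a known intent name."""
    if "balance" in text.lower():
        return "get_balance"
    if "transfer" in text.lower():
        return "transfer_funds"
    return "unknown"

def get_balance(account: dict) -> str:
    # Deterministic endpoint: always correct for the given data.
    return f"Your balance is ${account['balance']:.2f}"

HANDLERS = {"get_balance": get_balance}

def handle(text: str, account: dict) -> str:
    intent = llm_classify_intent(text)
    handler = HANDLERS.get(intent)
    if handler is None:
        # Unknown intent: refuse safely instead of letting the model guess.
        return "Sorry, I can't help with that."
    return handler(account)

print(handle("What's my balance?", {"balance": 42.5}))  # → Your balance is $42.50
```

The key property is that the numbers in the reply come from the database lookup, never from the model's token predictions.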

Rottcodd

Axiomatically, no, since it isn't even AI in any meaningful sense of the term, so it fails to live up to its hype right out the gate.


you're confusing AI with AGI/GI when they're separate things

we've had AI for a very, very long time

SharkAttak

Problem is, it was marketed as HAL9000 when it is T9/2024.

Ghostalmedia

I’m just here for the claymation hotdog man.

tal

I am confident that AI will play a major role in the future.

I am not confident that any given company or project will, and some definitely try to oversell what they have done.


It is already showing really great potential.

Then the news drops that all of the progress we made on global warming has been undone by the energy usage caused by AI.

So sure, AI will live up to the hype, and we will still die a slow and agonizing death of heat and suffocation.

But at least we will have AI friend chat bots to comfort us through the end.


tbh if ai ever reaches its full potential, it will probably be responsible for massively reversing global warming, because when the entire population is unemployed, they won't be able to afford to travel, heat their homes, buy products, or eat, all four of which are key contributors to emissions


I'd say the masses dying of starvation and exposure would reduce global warming, but the millions of cars will be replaced by hundreds of private jets that net greater emissions.

The poor factory workers will be more destitute, and the rich will always find a way to fill that void. Fuck, they will force their pilot to fly the empty jet around the world just to make sure that conditions on earth don't improve and they eventually get to utilize their vault 3 miles underground before they die.


That doesn’t even make sense. I have the mild suspicion that the fossil fuel industry sponsors nonsense like that, as a distraction from sane measures.

What we need to do to stop global warming is very simple: Stop using fossil fuels. We must not add CO2 to the atmosphere.

AI has nothing to do with that. It’s just one more use for electricity. If we wanted to stop global warming, we would get the electricity by saving elsewhere, or generating more carbon-neutral electricity, with solar, wind or what not. We simply chose not to do that.


CO2 doesn't only come from fossil fuels. It comes from combustion in general.

We can go nuclear, but look at how Russia has ruined its entire country trying to do that (if you are not aware of how severe the radiation problem in all of Russia, not just Chernobyl, has become, there are tons of youtubers that do documentary style content, Plainly Difficult is one of my favorites).

Solar, wind, hydro can do it, but the amount of CO2 produced by manufacturing the generators is still massive. It's just producing the CO2 upstream in the process instead of during the actual power generation. It would take so many solar panels and windmills to replace burning coal that producing them would still release an amount of greenhouse gasses that rivals just burning the coal.

I don't disagree that we can try to make moves to mitigate the damage, but giant red flags went up about crypto mining. The power draw from AI is far surpassing that, and AI has hardly even started to spin up yet.

I hope for the day we figure out how to produce unlimited energy without destroying the atmosphere in the process, but it's Newton's Third Law. Every action has an equal and opposite reaction. Each "solution" comes with its drawbacks, but our thirst for electricity only ever grows.

The answer would be to make AI draw less power, not to create more power in different ways.

There is no way we are going to get CEOs to scale back their AI power draw when it gives them the ability to scan everyone's face and spy on them in a comprehensive, existential way. They are already using it on Anti-Zionist protesters. They are never going to give that kind of power up, but that kind of power requires an insane amount of... power.

abhibeckert (edited)

Solar, wind, hydro can do it, but the amount of CO2 produced by manufacturing the generators is still massive

That’s FUD.

Sure - the concrete in a large hydro dam requires a staggering amount of energy to produce (because the chemical reaction to produce cement needs insane amounts of heat), but there’s no reason any CO2 needs to be emitted. You can absolutely use zero-emission power to reach the high temperatures needed to produce cement.

And not all hydro needs a massive concrete wall. There’s a hydro station near my city that doesn’t have a dam at all - it’s just a series of pipes that run from the top of a mountain to the bottom. There’s a permanent medium-sized river that never stops flowing that comes down off the mountain - with an elevation change of several hundred metres. It provides more power than the entire city’s consumption and does so while only diverting a tiny percentage of the river’s water. As the city grows, the power plant can easily be upgraded to divert more of the water through pipes instead of flowing uselessly down towards the sea.

Covid and Russia’s war created massive fluctuations recently but if you look through that noise global CO2 emissions are pretty much flat and have been for a few years now. They are almost certainly going to trend downwards going forward (a lot of countries already are seeing downward movement).

The simple reality is fossil fuels are now too expensive to be competitive. Why would anyone power an AI (or mine crypto) with coal power that costs $4,074/kW when you could use Solar at $1,300/kW (during the day. At night it’s more like $1,700 to $2,000 with the best storage options, such as batteries or pumped storage). Or wind at around $1,700.

Nuclear is $8,000/kW unless you live in Russia, where safety is largely ignored.

Hydro can be cheap if you happen to be near an ideal river - but for most locations it’s not competitive with Solar/Wind. So hydro is safe as a long term power generation method into the future, but it’s never going to be the dominant form of power unless (like my city) you happen to have ideal geology.


Where do you think rebar comes from?


No. But not because AI isn’t gonna get better, but because hype is an ever moving goal post. Nobody gets excited about what’s already possible. Hype lives on vague promises of some amazing future that is right around the corner we promise. Then by the time it becomes apparent that a lot of the claims were nonsense and the actual developments were steadier and less dramatic, they’ve already moved onto new wild claims.


So true, but especially true of ai. Previous rounds of hype for ai tended to turn into boring things that just worked, and the hype moved on. Even automated driving, where ai really hasn’t delivered yet, has turned into boring everyday ho hum features common to cars, and the hype moved on to generative ai


because hype is an ever moving goal post.

That’s it exactly.

Nothing ever lives up to its hype because the hype is setting unachievable expectations.

Pxtl


Bing Chat Assistant is better than Google, Bing search, or DDG today. If I search for “how do I do X in software Y” on a normal search, I get zillions of dead-link-filled MS pages, some interesting tangentially-related stackoverflow posts, and a bunch of old blogspam.

If I ask the robot, I often get “no, there’s no supported way to do that officially” which is the clear clean answer I can’t find elsewhere. Or sometimes it misunderstands the question and gives me a tangentially-related result, which is bad but is the same thing I get from Google via StackOverflow, except Bing is much more responsive to me saying “no, I didn’t mean that way, I meant this” in which case I often get either the right answer or the “no” answer, which is still good and accurate! The problem is as you iterate, the conversation accumulates cruft and becomes more erratic and hallucinatory.

But right now, with the level of SEO that has ruined all major search engines (ironically partially caused by AI), Bing Chat is the best search on the market now imho. <homer>The cause of and solution to all of life’s problems </homer>

So yeah, in terms of “things where AI has lived up to its potential”? It is winning the search war today. Everything else is something on the horizon in various distances (art, music, text generation, true general AI) but better search for information is here right now.


Why would I ask an LLM a question that I can't verify the accuracy of instead of just doing a traditional search of trusted resources? They give you the answer they think you want. Search engines don’t want to crack down on SEO techniques because it will ultimately harm their business. But they can get around that by lighting a few acres of rainforest on fire, making up some random crap that sounds believable, and boosting their stock price.

I’m sure that there are some niche use cases currently that can benefit from these programs, but most are just the next project for crypto grifters, and any legitimate uses are only really gonna be useful at commercial/industrial scale and won’t be actually useful for the general public.

Pxtl

Bing Chat provides its sources.


Bing Chat Assistant is better than Google, Bing search, or DDG today.

Because those search engines turned into crap. That’s not a victory for AI.

Google should’ve had “this thing, not that thing” clarification a god-damn decade ago.

snooggums

Bing Chat Assistant is better than Google, Bing search, or DDG today. If I search for “how do I do X in software Y” on a normal search, I get zillions of dead-link-filled MS pages, some interesting tangentially-related stackoverflow posts, and a bunch of old blogspam.

Oh, so you are saying that AI works around SEO and filters out the crap that Google and other web searches used to filter out. Basically the sales pitch for AI search is that it is almost as good as web search used to be.

Awesome, it is a mediocre and energy-wasting approach to getting back to about 15 years ago, which will be undone as soon as the websites abusing SEO also leverage AI to counteract the AI search, and all that crap will be right back in the results again within a few years.

Pxtl

I mean yeah. I’m not disagreeing with any of that (except the fact that AI caused it - search engines got destroyed by SEO before AI textgen started crapflooding).

But it is what it is. The SEO spammers won. They defeated Google and Microsoft and DDG’s respective search algorithms. Traditional search got killed. The internet got worse instead of better.

In light of this miserable new reality, AI-based content synthesizers (particularly ones that can coherently point to the references for their synthesis) are the current solution to SEO spam. Maybe this is another temporary plateau that the SEO spammers will murder. And yes, it’s tragic that this energy-pig of AI is the best solution to something that used to be doable with a simple trie.

But still: there is a real problem today for which an AI-based tech provides the current best solution. In this one specific case, the AI lives up to the hype. It swallows the hellscape of noise of the internet and gives you the signal.
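For reference, the "simple trie" mentioned above is just a prefix tree - the kind of deterministic index structure early keyword search was built on. A minimal, illustrative sketch (not any real search engine's implementation):

```python
# Minimal trie (prefix tree): deterministic, cheap to query, no model inference.

class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word: str) -> None:
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def words_with_prefix(self, prefix: str) -> list:
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        # Depth-first walk collecting every completed word under the prefix.
        results, stack = [], [(node, prefix)]
        while stack:
            cur, path = stack.pop()
            if cur.is_word:
                results.append(path)
            for ch, child in cur.children.items():
                stack.append((child, path + ch))
        return sorted(results)

index = Trie()
for term in ["search", "seo", "server", "spam"]:
    index.insert(term)
print(index.words_with_prefix("se"))  # → ['search', 'seo', 'server']
```

Every query is an exact prefix walk, which is why this approach costs almost nothing - and also why it has no defense against pages that simply stuff the right keywords.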


Warning, here’s the cynic in me coming out.

The NY Times has a vested interest in discrediting AI, specifically LLMs (what they seem to be referring to), since journalism is a huge target here: it's pretty easy to get LLMs to generate believable articles. So here's how I break down this article:

  1. Lean on Betteridge’s law of headlines to cast doubt about the long term prospects of LLMs
  2. Further the doubt by pointing out people don’t trust them
  3. Present them as a credible threat later in the article
  4. Juxtapose LLMs and cryptocurrencies while technically dismissing such a link (then why bring it up?)
  5. Leave the conclusion up to the reader

I learned nothing new about current or long term LLM viability other than a vague “they took our jerbs!” emotional jab.

AI is here to stay, and it’ll continue getting better. We’ll adapt to how it changes things, hopefully as fast or faster than it eliminates jobs.

Or maybe my tinfoil hat is on too tight.


The NY Times has a vested interest in discrediting AI, specifically LLMs (what they seem to be referring to), since journalism is a huge target here: it's pretty easy to get LLMs to generate believable articles.

The writers and editors may be against AI, but I’m betting the owners of the NYT would LOVE to have an AI that would simply re-phrase “news” (ahem) “borrowed” from other sources. The second upper management thinks this is possible, the humans will be out on their collective ears.


I’m betting the owners of the NYT would LOVE to have an AI that would simply re-phrase “news” (ahem) “borrowed” from other sources

No way. NYT depends on their ability to produce high quality exclusive content that you can’t access anywhere else.

In your hypothetical future, NYT’s content would be mediocre and no better than a million other news services. There’s no profit in that future.

QuadratureSurfer

This would actually explain a lot of the negative AI sentiment I’ve seen that’s suddenly going around.

Some YouTubers have hopped on the bandwagon as well. There was a video posted the other day where a guy attempted to discredit AI companies overall by saying their technology is faked. A lot of users were agreeing with him.

He then proceeded to point out stories about how Copilot/ChatGPT output information that was very similar to a particular travel website. He also pointed out how Amazon Fresh stores required a large number of outsourced workers to verify shopping cart totals (implying that there was no AI model at all and not understanding that you need workers like this to actually retrain/fine-tune a model).


I would say that 90% of AI companies are fake. They are just running API calls to ChatGPT, and calling themselves “AI” to get investors. Amazon even has an entire business to help companies pretend their AI works by crowdsourcing cheap labor to review data.

QuadratureSurfer

I don’t think that “fake” is the correct term here. I agree a very large portion of companies are just running API calls to ChatGPT and then patting themselves on the back for being “powered by AI” or some other nonsense.

Amazon even has an entire business to help companies pretend their AI works by crowdsourcing cheap labor to review data.

This is exactly the point I was referring to before. Just because Amazon is crowdsourcing cheap labor to back up their AI doesn’t mean that the AI is “fake”. Getting an AI model to work well takes a lot of man hours to continually train and improve it as well as make sure that it is performing well.

Amazon was doing something new (with their shopping cart AI) that no model had been trained on before. Training off of demo/test data doesn’t get you the kind of data that you get when you actually put it into a real world environment.

In the end it looks like there are additional advancements needed before a model like this can be reliable, but even then someone should be asking if AI is really necessary for something like this when there are more reliable methods available.


I honestly don’t understand why they didn’t just use RFID for the grocery stores. Or maybe they are, idk, but it’s cheap and doesn’t require much training to apply. That way you can verify the AI without needing much labor at all.

Then again, I suppose that point wasn’t to make a grocery service, but an optical AI service to sell to others.

That said, a lot of people don’t seem to understand how AI works, and the natural response to not understanding something is FUD.

abhibeckert (edited)

Unless you pay for expensive tags (like $20 per tag) or use really short range scanners (e.g. a hotel key), RFID tags don’t work reliably enough.

Antitheft RFID tags for example won’t catch every single thief who walks out the door with a product. But if a thief comes back again and again stealing something… eventually one of them will work.

But even unreliable tags are a bit expensive, which is why they are only used on high margin and frequently stolen products (like clothing).

All the self serve stores in my country just use barcodes. They are dirt cheap and work reliably at longer range than a cheap RFID tag. Those stores use AI to flag potential thieves but never for purchases (for example recently I wasn’t allowed to pay for my groceries until a staff member checked my backpack, which the AI had flagged as suspicious).


The purpose of the RFID wouldn’t be to catch thieves, but to train the AI. As the AI gets better at detecting things, you reduce how many of the products are tagged. I’m seeing something like $0.30/ea on Amazon, ~$0.10/ea on Ali Express. I’m guessing an org like Amazon could get them even cheaper. I don’t know how well those work on cans, so maybe it’s a no-go, IDK.

Barcodes could probably work fine too, provided they’re big enough to be visible clearly to cameras.

Regardless, it seems like there are options aside from hiring a bunch of people to watch cameras. I’m interested to hear from someone more knowledgeable about why I’m wrong or whether they’re actually already doing something like this. I don’t live near any of the stores, so I can’t just go and see for myself (and are they still a thing?).


Mechanical Turk is a service that Amazon sells to other companies that are trying to pretend to be AI companies. The whole market is full of people making wild claims about their product that aren’t true, then desperately searching for the cheapest labor to actually do it.

I’m not actually a nuclear fission company if I take millions of R&D investment, pay me and my buddy half of it, and then pay a bunch of crackheads to pour diesel into an electric generator.

QuadratureSurfer

After reading through that wiki, that doesn’t sound like the sort of thing that would work well for what AI is actually able to do in real-time today.

Contrary to your statement, Amazon isn’t selling this as a means to “pretend” to do AI work, and there’s no evidence of this on the page you linked.

That’s not to say that this couldn’t be used to fake an AI, it’s just not sold this way, and in many applications it wouldn’t be able to compete with the already existing ML models.

Can you link to any examples of companies making wild claims about their product where it’s suspected that they are using this service? (I couldn’t find any after a quick Google search… but I didn’t spend too much time on it).

I’m wondering if the misunderstanding here is based on the sections here related to AI work? The kind of AI work that you would do with Turkers is the kind of work that’s necessary to prepare the data for it to be used on training a machine learning model. Things like labelling images, transcribing words from images, or (to put it in a way that most of us have already experienced) solving captchas asking you to find the traffic lights (so that you can help train their self-driving car AI model).
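As an illustrative sketch (filenames and labels hypothetical, no real AWS API involved): what Turk-style labeling produces is just rows of (input, label) annotations, with each item shown to several workers and disagreements resolved by majority vote before the result is fed to a training pipeline.

```python
# Hypothetical sketch: crowd-worker annotations -> labeled training data.
import csv
import io
from collections import Counter, defaultdict

# Raw annotations as they might come back from crowd workers:
# each image was shown to three workers to catch individual mistakes.
raw_annotations = """image,worker,label
img_001.jpg,w1,traffic_light
img_001.jpg,w2,traffic_light
img_001.jpg,w3,crosswalk
img_002.jpg,w1,crosswalk
img_002.jpg,w2,crosswalk
img_002.jpg,w3,crosswalk
"""

def majority_label(rows):
    """Resolve disagreements by a simple majority vote per image."""
    votes = defaultdict(Counter)
    for row in rows:
        votes[row["image"]][row["label"]] += 1
    return {img: counts.most_common(1)[0][0] for img, counts in votes.items()}

rows = list(csv.DictReader(io.StringIO(raw_annotations)))
dataset = majority_label(rows)
print(dataset)  # → {'img_001.jpg': 'traffic_light', 'img_002.jpg': 'crosswalk'}
```

The model never sees the individual workers - only the cleaned (image, label) pairs - which is why human labeling behind the scenes is data preparation, not a fake AI.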


It might not be fake but companies built on top of the OpenAI API don’t bring significant value and won’t last.

If you already have a solid product and want to add some AI capabilities, then the OpenAI API is great. If it’s your only value proposition, not so much.

veeesix

The benefits to learning math and science look pretty promising.


Not for junior programmers around me. They use ChatGPT and then cannot tell me what “they” wrote or why it’s wrong. They will learn nothing, and I suspect it’s the same for everything that requires some thinking and fixing your own mistakes.

As a senior I don’t care, but I pity them.

veeesix

Right, that’s just plagiarism.

I’m talking about the recent demos using AI to teach you subject matter via conversation. Seems like info retention could be higher.

originalucifer

ever? that's a long time. remotely-efficient LLMs have only been around a few years.

i would say 'yes, inevitably'

db0

Inevitably can be 100 years from now, or 1000 years from now when we set up a Dyson sphere. Inevitably is too vague.


Then so is the question. The answer is yes. The specifics and timeline are what people disagree on.

db0

You’re just being anal about phrasing. “in a reasonable amount of time” or “before this bubble bursts” are clearly implied

originalucifer

"in a reasonable amount of time" or "before this bubble bursts" are clearly implied

hahahaha. ok

admin

I’m with the other person on this one. The question is stupidly vague. Whereas “ever” isn’t very productive, neither is “live up to its hype” - that could mean anything, depending on whose hype you follow.

All in all, this feels like a clickbait circlejerk article.


Well it’s an opinion column, not an article, so I wouldn’t go into it expecting quality journalism in the first place.


an opinion piece is a form of article 🤓
