It should not be used to replace programmers. But it can be very useful when used by programmers who know what they’re doing. (“do you see any flaws in this code?” / “what could be useful approaches to tackle X, given constraints A, B and C?”). At worst, it serves as a rubber duck that sometimes gives useful advice - handy when no coworker is available.
Yeah, I saw. But when I’m stuck on a programming issue, I have a couple of options:
- ask an LLM that I can explain the issue to, correct my prompt a couple of times when it gets things wrong, and then press retry until I get something useful.
- ask online and wait, hoping that some day somebody with the knowledge and the time to answer will come along.
Sure, LLMs may not be perfect, but not having them as an option is worse, and way slower.
In my experience, even when the code it generates is wrong, it will still point you in the right direction on the approach. And if it keeps spewing out nonsense, that’s usually an indication that what you want is not possible.
That’s what I meant by saying you shouldn’t use it to replace programmers, but to complement them. You should still have code reviews, but if it can pick up issues before it gets to that stage, it will save time for all involved.
I agree it’s being overused, just for the sake of it. On the other hand, I think right now we’re in the discovery phase - we’ll find out pretty soon what it’s good at and what it isn’t, and correct for that. The things that it IS good at will all benefit from it.
Articles like these - cherry-picked examples where it gives terribly wrong answers - are great for entertainment, and as a reminder that generated content should not be relied on without critical thinking. But they’re not the whole picture, and should not be used to write off the technology itself.
(as a side note, I do have issues with how training data is gathered without consent of its creators, but that’s a separate concern from its application)
I think they provide a very reasonable reality check / a bit of reflection. And it sounds like you could use one, if you’re surprised that Facebook still exists.
There were a series of accusations about our company last August from a former employee. Immediately following these accusations, LMG hired Roper Greyell - a large Vancouver-based law firm specializing in labor and employment law, to conduct a third-party investigation. Their website describes them as “one of the largest...
Same. While Linus is part of the problem for using practices he claims to disagree with, I’d rather be part of the solution by not rewarding it with attention.
Even when there is an “only required” button right next to it, it’s scary how many people automatically click “accept all”. Even among tech-savvy people.
I checked the report, but at no point does it clarify what they consider “bot traffic”. Is it measured in API calls, page views, or bytes? Generally the term traffic means raw data transported, but in that context those numbers make no sense.
For example, one of the biggest traffic consumers on the Internet is video streaming. There’s no way in hell that half, or even a tenth, of that data is fake - it would simply cost too much to waste it on bots, both for the bot owners and for the streaming providers.
This level of vagueness and lack of transparency (what do the numbers mean, and where do they come from?) does not fill me with confidence in this report.
Craig Doty II, a Tesla owner, narrowly avoided a collision after his vehicle, in Full Self-Driving (FSD) mode, allegedly steered towards an oncoming train....
Counterpoint: we don’t get many articles about human drivers crashing, because we’re so used to it. That doesn’t make article count a good metric for judging their safety.
Edit: Having said that, this wasn’t even an article. Just an unsourced headline with a photo. One should strongly consider the possibility of a selection bias at work here.
17 cringe-worthy Google AI answers demonstrate the problem with training on the entire web (www.tomshardware.com)
These are 17 of the worst, most cringeworthy Google AI overview answers:...
ChatGPT Answers Programming Questions Incorrectly 52% of the Time: Study (gizmodo.com)
The research from Purdue University, first spotted by news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii and looked at 517 programming questions on Stack Overflow that were then fed to ChatGPT....
Massive explosion rocks SpaceX Texas facility, Starship engine in flames (interestingengineering.com)
Google Search’s “udm=14” trick lets you kill AI search for good | Ars Technica (arstechnica.com)
Tack “&udm=14” on to the end of a normal search, and you’ll be booted into the clean 10 blue links interface. While Google might not let you set this as a default, if you have a way to automatically edit the Google search URL, you can create your own defaults.
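The “create your own defaults” part boils down to rewriting the search URL before it is sent. A minimal sketch of what that looks like - the udm=14 parameter comes from the article; the helper function and its name are my own illustration:

```python
from urllib.parse import urlencode

def google_search_url(query: str) -> str:
    """Build a Google search URL with udm=14 appended,
    which requests the plain "Web" results view (no AI overview)."""
    params = {"q": query, "udm": "14"}
    return "https://www.google.com/search?" + urlencode(params)

print(google_search_url("lemmy federation"))
# https://www.google.com/search?q=lemmy+federation&udm=14
```

In practice you would plug the same URL template (with `%s` in place of the query) into your browser’s custom search engine settings, so every address-bar search gets the parameter automatically.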
Google is losing it (lemmy.world)
Arizona lawmaker uses ChatGPT to help craft legislation to combat deepfakes (www.nbcnews.com)
archive.is...
Linus Tech Tips (LTT) release investigation results on former accusations (x.com)
There were a series of accusations about our company last August from a former employee. Immediately following these accusations, LMG hired Roper Greyell - a large Vancouver-based law firm specializing in labor and employment law, to conduct a third-party investigation. Their website describes them as “one of the largest...
‘Shocked, Angered and in Disbelief’: Scarlett Johansson Slams ChatGPT Over ‘Eerily Similar’ Voice (www.hngn.com)
OpenAI is halting its use of its ‘Sky’ voice in its ChatGPT chatbot after actress Scarlett Johansson said it was “eerily similar” to hers.
‘Let yourself be monitored’: EU governments to agree on Chat Control with user “consent” [updated] (www.patrick-breyer.de)
deleted_by_moderator
Are you chatting with a pro-Israeli AI-powered superbot? (www.aljazeera.com)
Amazon plans to give Alexa an AI overhaul — and a monthly subscription price (www.cnbc.com)
Self-Driving Tesla Nearly Hits Oncoming Train, Raises New Concern On Car's Safety (lemmy.zip)
Craig Doty II, a Tesla owner, narrowly avoided a collision after his vehicle, in Full Self-Driving (FSD) mode, allegedly steered towards an oncoming train....
Should I start worrying about my job? (www.theverge.com)
NVIDIA 555 Beta Linux Graphics Driver Released with Explicit Sync Support (9to5linux.com)
Scarlett Johansson denied OpenAI the right to use her voice. They used it anyway. (boingboing.net)
OpenAI says Sky voice in ChatGPT will be paused after concerns it sounds too much like Scarlett Johansson (www.tomsguide.com)