@bornach@masto.ai avatar

bornach

@bornach@masto.ai

I'm an ex-postdoc researcher who was bullied out of academia over a decade ago

I now pursue my interests in
#science #technology #education #art #mathematics

via online content creation that explores ideas in #computer #programming, science #communication, #visualization, #electronics #circuit design, #cardboard #crafts, kinetic sculpture, #synthesizer music, and machine learning


bornach, to ai

Enrico Tartarotti on the current "frenzy" of putting Large Language Model chatbots into everything and marketing everything as having AI
https://youtu.be/CY_b8w8u9NY

yatil, (edited) to random
@yatil@yatil.social avatar

Will we honestly talk about the trickery in the “Be My Eyes Accessibility with GPT-4o” video? Like the taxi that puts its signal on well before the passenger signals, and has basically passed the signalling passenger before coming to a stop? Or that we don’t see any real processing time? Or that the voice is clear despite standing in London with speakers on? (1/3)

bornach,

@yatil
It's impressive, given that a decade ago neural networks could barely distinguish a dog from a cat. However, all those canned responses to recognizing London landmarks are not evidence that this is anywhere close to the sentient AI depicted in the movie "Her". You are basically conversing with a mutated remix of the cached annotations of thousands of gig workers in India and other countries across Africa and the Far East.
https://thedailyguardian.com/indias-data-annotation-revolution-gig-workers-fueling-ai-training-boom/

bornach,

@yatil @Lottie @pixelate
Just check the video yourself. At 0:46 the taxi has already put its left turn signal on, indicating an intention to pull over. A full 3 seconds later, at 0:49, the blind man extends his arm to signal the driver. He cannot see that the taxi was already going to pull over, and GPT-4o's voice-over doesn't tell him that the taxi was already going to stop next to him, so he didn't need to do anything. He is left fully convinced that no one else flagged down the taxi.

renwillis, to generationx
@renwillis@mstdn.social avatar

I have some #GenZ coworkers and was telling them about how we used to take our car stereo face plates with us when we left the car back in the days when stereos weren't integrated into the dash and you wanted a nice one.

They were very amazed. So I made this meme.

#meme #genx #funny #grandma #aging

bornach,

@renwillis
What do you mean "we used to..."?

The factory stereo in my 19-year-old car was draining the battery flat in less than a day when parked, so I replaced it with this one. Halfords was selling off their removable-faceplate stereos cheap, and I got it at half price.

bornach,

@hub @renwillis
That one eventually developed a fault where it would eat the tape.

Anyone remember 8-track car stereos?
https://youtu.be/MqewX8Ix7-4

bornach, (edited) to random

The AstraZeneca COVID-19 vaccine was withdrawn because it is no longer needed, not because of side effects. [Back to the Science] explains:
https://youtu.be/49DjUSD8aWQ

What I'd like to know is why Sky News (Australia) chose to go with the headline: "AstraZeneca withdrawn worldwide over side effects"

bornach, to OpenAI

Best part of this [AI Explained] video is at 5:45:
"Even though it failed all my maths prompts it is still a big improvement..."
https://youtu.be/ZJbu3NEPJN0?t=5m45s
That, in a nutshell, sums up the state of AI news coverage on social media

steely_glint, to random
@steely_glint@chaos.social avatar

Thanks to @saghul for the perfect illustration of the problems with chatGPT:

bornach,

@the_moep @steely_glint
It understands kilograms just as well as it understands pounds
https://sharegpt.com/c/vijL1Me
That is, it doesn't understand measurements at all

bornach,

@steely_glint @solarisfire @saghul
Bing Chat/Copilot reportedly uses GPT-4 Turbo and can search the Internet, yet it doesn't understand that you cannot pour 1 liter into an already-full jug
https://masto.ai/@bornach/112201221575789055

bornach,

@steely_glint @scribe @saghul
Related to the failures that Yejin Choi found
https://youtu.be/SvBR0OGT5VI?t=4m1s

Tried the jugs example on Copilot the other day. No improvement.
https://masto.ai/@bornach/112201311315573304

bornach,

@raganwald @solarisfire
Likely the OpenAI engineers went through the failures that users uploaded to ShareGPT
https://sharegpt.com/c/vijL1Me

And on Reddit
https://www.reddit.com/r/ChatGPT/comments/11rr668/still_doesnt_pass_the_featherlead_test/

Then turned them into microtasks for an annotation company in Nigeria or India to source a better answer from a gig worker
https://m.economictimes.com/tech/technology/indian-gig-workers-toil-at-frontlines-of-ai-revolution/articleshow/109864213.cms

The training data created by the annotation gig industry (AGI) was then incorporated into GPT-4 via RLHF

atomicpoet, to random

The Thirteenth Floor is set in 2024.

So while I’m watching this VHS tape to get some nostalgia for 1999, this film is speculating about the year I’m living in now.

bornach,

@jantzen @atomicpoet
IMHO it is the weakest of the four sci-fi films that came out around that time speculating on simulated worlds:

The Thirteenth Floor
eXistenZ
Dark City
The Matrix

bornach,

@atomicpoet @jantzen
Wonder why they cut the ending. Perhaps the twist reveal came too early in the plot, so this extended ending just seemed to drag things out.
https://youtu.be/lB17_peD96w

Compare this with how eXistenZ paced its plot-twist reveal

18+ urusan, to random
@urusan@fosstodon.org avatar

This is an interesting video:
https://youtu.be/dDUC-LqVrPU

TL;DW: We're starting to see early evidence of diminishing returns with our current AI architectures. If this is true, then improvements eventually become logarithmic, making superintelligence (at least using our current architectures) impossible to achieve from a practical standpoint. The issue is that we need too much data on each specific task for a model to perform well on them all.
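The diminishing-returns picture can be sketched with a toy power-law scaling curve (an assumption for illustration only; the constants below are made up, not fitted to any real model): loss falls as a power of dataset size, so each 10x increase in data buys a smaller absolute improvement.

```python
import numpy as np

# Hypothetical power-law scaling: loss(N) = a * N**(-alpha)
# (illustrative constants, not from any real system)
a, alpha = 10.0, 0.1
N = np.logspace(6, 12, 7)   # dataset sizes from 1e6 to 1e12 tokens
loss = a * N ** (-alpha)    # loss keeps falling, but ever more slowly
gains = -np.diff(loss)      # absolute improvement per 10x more data
print(np.round(loss, 3))
print(np.round(gains, 3))   # each step buys less than the last
```

Under this assumption the curve never flattens completely, but the cost of each further improvement grows exponentially, which is the practical sense in which returns diminish.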

bornach,

@urusan
See also
https://youtu.be/nkdZRBFtqSs
on the implications for post-AI-bubble applications of all the LLM systems that are predicted to fall far short of the hype.

bornach,

@dcz @urusan
Probably retracted because peer reviewers later found that Google had given their AI an unfair advantage through the use of EDA tools (the Synopsys suite)
https://www.theregister.com/2023/03/27/google_ai_chip_paper_nature/

Dhmspector, to random
@Dhmspector@mastodon.social avatar

And thus the futile and ultimately fool's errand that is “AGI” is exposed…

https://apple.news/Ak8hDr7jRQkCJIAmaUjtH0w

bornach,

@Dhmspector
Just predicting the behavior of a single dendrite already requires a 5-to-8-layer-deep artificial neural network
https://youtu.be/hmtQPrH-gC4

bornach,

@kiki_mwai_mwai @Dhmspector
Apparently the prevailing belief among leading AI companies is that there is no need to understand neuroscience: just throw enough training data at a sufficiently deep transformer (or a related attention mechanism) and AGI will emerge.

There might be a few problems with this approach, as highlighted in this [Internet of Bugs] video
https://youtu.be/nkdZRBFtqSs

KathyReid, to stackoverflow
@KathyReid@aus.social avatar

Like many other technologists, I gave my time and expertise for free to #StackOverflow because the content was licensed CC-BY-SA - meaning that it was a public good. It brought me joy to help people figure out why their #ASR code wasn't working, or assist with a #CUDA bug.

Now that a deal has been struck with #OpenAI to scrape all the questions and answers on Stack Overflow to train #GenerativeAI models like #LLMs, without attribution to authors (which the CC-BY-SA license on Stack Overflow content requires), and to sell them back to us (even though the SA clause requires derivative works to be shared under the same license), I have issued a Data Deletion request to Stack Overflow to disassociate my contributions from my Stack Overflow username, and am closing my account, just like I did with Reddit, Inc.

https://policies.stackoverflow.co/data-request/

The data I helped create is going to be bundled in an #LLM and sold back to me.

In a single move, Stack Overflow has alienated its community (which is also its main source of competitive advantage) in exchange for token lucre.

Stack Exchange, Stack Overflow's former instantiation, used to fulfill a psychological contract: help others out when you can, in the expectation that others may in turn assist you in the future. Now it's not an exchange, it's #enshittification.

Programmers now join artists and copywriters, whose works have been snaffled up to create #GenAI solutions.

The silver lining I see: once OpenAI creates LLMs that generate code, like Microsoft has done with Copilot on GitHub, where will they go to get help with the bugs that the generative AI models introduce, particularly given the recent GitClear report on the "downward pressure on code quality" caused by these tools?

While this is just one more example of #enshittification, it's also a salient lesson for #DevRel folks - if your community is your source of advantage, don't upset them.

bornach,

@mapto @j3j5 @blogdiva @KathyReid

This assumption made by Wolfson:
"they do not reproduce images in their data sets"
is on very shaky ground, especially when it comes to Large Language Models.

Patronus AI found several examples of LLMs generating passages of copyrighted books
https://www.patronus.ai/blog/introducing-copyright-catcher
One might be able to chain together a sequence of text completion prompts to regenerate entire chapters.
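What such chaining might look like can be sketched in a few lines (purely illustrative: `complete` below is a stub standing in for a real completion API, "memorizing" a public-domain Dickens line; no real model is called):

```python
# A stub "model" that has memorized one passage and, given a prompt whose
# tail appears in the passage, returns the next few characters.
PASSAGE = ("It was the best of times, it was the worst of times, "
           "it was the age of wisdom, it was the age of foolishness.")

def complete(prompt, n_chars=20):
    tail = prompt[-40:]                # match on the prompt's tail
    idx = PASSAGE.find(tail)
    if idx == -1:
        return ""
    return PASSAGE[idx + len(tail):idx + len(tail) + n_chars]

def extract(seed, max_rounds=50):
    # Repeatedly feed the growing text back in as the next prompt
    text = seed
    for _ in range(max_rounds):
        nxt = complete(text)
        if not nxt:
            break
        text += nxt
    return text

print(extract("It was the best of times"))  # regenerates the whole passage
```

The point of the sketch: nothing stops the same loop from being pointed at a real completion endpoint, which is why memorized copyrighted text can in principle be pulled out chapter by chapter.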

bornach,

@mapto @j3j5 @blogdiva @KathyReid
Not sure what the relevance of corrupt-and-train is to the legal argument being made here. Wolfson claims "they do not piece together new images from bits of images from their training data", but one could argue that transcoding a Disney movie into a lossy MPEG format doesn't do that either: each frame is regenerated from discrete cosine transforms and motion vectors, and error correction happens during storage. Does that make it fair use?
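The "regenerated, not copied" point can be illustrated with a toy block transform (a sketch of the DCT step only; real MPEG additionally quantizes, entropy-codes, and uses motion compensation):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] *= 1 / np.sqrt(n)
    m[1:] *= np.sqrt(2 / n)
    return m

def compress_block(block, keep=4):
    # Forward 2D DCT, then discard all but the lowest keep x keep frequencies
    d = dct_matrix(block.shape[0])
    coeffs = d @ block @ d.T
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1
    return coeffs * mask

def decode_block(coeffs):
    # Inverse 2D DCT: the pixels are regenerated from coefficients,
    # not read back from stored copies
    d = dct_matrix(coeffs.shape[0])
    return d.T @ coeffs @ d

rng = np.random.default_rng(0)
block = rng.random((8, 8))
recon = decode_block(compress_block(block, keep=4))
print(np.max(np.abs(block - recon)))  # nonzero: lossy reconstruction
```

No pixel of the original survives in the encoded form, yet a close likeness of the frame comes back out, which is exactly the analogy being drawn to model weights.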

bornach,

@krans @wraptile @KathyReid
"Raise everyone up with the tide" would be releasing their training weights and biases as open source as required by CC-BY-SA but OpenAI has just stated they have no intention of doing this
https://youtu.be/lQNEnVVv4OE

Their lawyers will claim fair use and that their Terms and Conditions mean the user has taken on all risk of any copyright infringement
https://youtu.be/fOTuIhOWFXU

bornach,

@highvizghilliesuit @KathyReid
Just do an internet search on Transformers, "Attention is all you need", GPT, BERT, etc. There are many great tutorials covering different levels of detail. This video is more of an overview:
https://youtu.be/Rx-5AGHNu7M

They do in fact encode copyrighted works into their neural network weights and biases, and can be prompted to regenerate entire passages of text.
https://www.patronus.ai/blog/introducing-copyright-catcher

But it is all linear algebra under the hood
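A minimal single-head attention step really is just a handful of matrix products plus a softmax (a toy numpy sketch with made-up sizes, not any production model's code):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    # Single attention head: three projections, a softmax, one more product
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return scores @ V

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 16))   # 5 tokens, 16-dim embeddings (toy sizes)
Wq, Wk, Wv = (rng.standard_normal((16, 8)) for _ in range(3))
out = attention(X, Wq, Wk, Wv)
print(out.shape)
```

Everything a transformer layer does is built from operations like these, which is why "it memorized the text" cashes out as "the text is recoverable from the weight matrices".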

bpaassen, to random
@bpaassen@bildung.social avatar

The last days, I could participate in a Dagstuhl Seminar on Generalization in Humans and Machines. I learned a lot of things, especially one: How weird it is that we expect large language models to generalize to all kinds of tasks. Let me explain. (1/10)

https://www.dagstuhl.de/seminars/seminar-calendar/seminar-details/24192

bornach,

@bpaassen
We laymen think it should work because of sci-fi movies where Johnny 5 reads all the encyclopedias and becomes sentient
https://youtu.be/WnTKllDbu5o

bornach,

@bpaassen
I was skeptical when the AI companies refused to reveal what was in the training data and seemed uninterested in determining whether their LLM was figuring things out for itself or simply regurgitating an answer that had been scraped into the dataset.

So taking a lead from Yejin Choi
https://www.ted.com/talks/yejin_choi_why_ai_is_incredibly_smart_and_shockingly_stupid?language=en

I tried prompting with well-known FAQ puzzles, but with slight changes that invalidated the stock answer. It didn't take long to confuse the LLM
https://masto.ai/@bornach/112207324622232774
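The perturbation-testing idea can be sketched like this (the "model" here is a stub that always returns a cached stock answer, standing in for a real LLM that has memorized the famous riddle):

```python
# Perturbation test: take a famous puzzle, change a detail that invalidates
# the stock answer, and see whether the cached answer comes back anyway.
def stub_llm(question):
    # Stand-in for a chat model that pattern-matches the classic riddle
    if "feather" in question and "lead" in question:
        return "They weigh the same."  # cached answer to the stock phrasing
    return "I don't know."

stock = "Which weighs more: a kilogram of feathers or a kilogram of lead?"
perturbed = "Which weighs more: a kilogram of feathers or two kilograms of lead?"

print(stub_llm(stock))      # right for the stock phrasing
print(stub_llm(perturbed))  # same cached answer, now wrong: the lead is heavier
```

A model that answered from reasoning rather than recall would change its answer when the question changes; one that regurgitates will not, which is what the perturbed prompts are designed to expose.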

bornach, to OpenAI

All your GPUs are belong to #OpenAI

https://youtu.be/lQNEnVVv4OE
Matthew Berman describes how #SamAltman is gunning for #RegulatoryCapture of the #AI market

#GPU #monopoly #ArtificialIntelligence #generativeAI #ClosedSource

lowqualityfacts, to random
@lowqualityfacts@mstdn.social avatar

Even more impressive, he did both the English and Japanese versions.
https://patreon.com/lowqualityfacts

bornach,

@nikatjef @lowqualityfacts
https://youtu.be/Ch5MEJk5ZCQ
"Penguins didn't exist until he made that movie. He's that good"
