As educators and scientists, we can and should communicate clearly that generative AI tools are not sentient, have no capacity for truth, and are merely complex statistical algorithms dressed up in a plain language outfit.
Oh, joy, and now something inside Meta is crawling me. This makes me think that the #GenAI overlords are watching YCombinator and crawling everything that gets on the front page. Meta User-Agent: "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)"
I see that openai.com/gptbot is crawling my blog, top to bottom, side to side. I’m sure OpenAI has consulted the “Rights” link clearly displayed on every page, invoking a Creative Commons license that freely grants rights to reuse and remix but not for commercial purposes.
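For anyone who'd rather opt out: OpenAI documents the `GPTBot` user-agent token and says the crawler respects robots.txt, so a minimal robots.txt sketch (assuming you control the site root) looks like this:

```
# Opt out of OpenAI's GPTBot crawler (token documented at openai.com/gptbot)
User-agent: GPTBot
Disallow: /
```

Whether every crawler actually honors this is, of course, a matter of trust rather than enforcement.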
@ben Wow, when I posted that original FEP 18 months ago - now moved to https://codeberg.org/fediverse/fep/src/branch/main/fep/c118/fep-c118.md - the thing people were most worried about was search. Now that we have reasonable opt-in Fediverse search, I felt less urgency. But now that you point it out, it's obvious that the problem of #GenAI crawlers is the same problem, and the proposal is probably interesting again. After all the discussion, I was beginning to think that ODRL was an attractive alternative.
“I speak to a lot of businesses around #AI, and particularly #GenAI, and I’m sensing a #hype fatigue. Part of this is due to the challenge of bridging the gap from PoC to production.”
I can't imagine working without GenAI any more. I often write quick bash scripts to automate things, but for some reason, the syntax always falls out of my head and I'm constantly looking things up.
Now I just hit ChatGPT and ask it to write the script for me. With the latest version, it usually works perfectly the first time, as long as I craft a good prompt. This is a huge productivity boost.
"In [the counterfactual task] paradigm, models are evaluated on pairs of tasks that require the same types of abstraction and reasoning, but for each pair, the content of the first task is likely to be similar to training data, whereas the content of the second task (a “counterfactual task”) is designed to be unlikely to be similar to training data." -- #MelanieMitchell
Question of the day. Is the whole beautiful mass of free and open Internet knowledge now to be considered as the satanic mills of AI Gen Big Tech? At their mercy, to do with as they please.
@arstechnica It doesn’t need work; it needs a fundamental rethink of whether the technology makes sense outside of specific research or narrow use cases.
It should never have made it out of research labs or opt-in curiosities for technologists.
None of these details are interesting, and they’re barely even worth reporting on.
This is a stupid, stupid bubble, and saying they need to work on parts of it is like saying we’re close and just need refinement, which is concretely untrue.
Me at work: "Mmmh, the frequent use of the word 'delve' in this article and the chromatic irregularity in this image suggest they might be AI generated"
Me at home: "Now for this salad recipe, how small is a 'small rock' and how many should I add?"
He puts into clear terms what had previously been an unarticulated, creeping suspicion I had about #GenAI. Clearly there are many angles from which to come at what's going on with #AI #hype, but I appreciate this one quite a bit.
FreeCodeCamp today released a new course on fine-tuning LLMs. The course, by Krish Naik, covers tuning methods such as QLoRA, LoRA, and quantization, using models such as Llama 2, Gradient, and Google's Gemma.
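For anyone new to the terms: the core idea of LoRA is to freeze the pretrained weight matrix W and train only a low-rank update B·A (rank r much smaller than W's dimensions), which is what QLoRA then combines with quantized base weights. A minimal NumPy sketch of just the low-rank-update idea, not taken from the course's code:

```python
import numpy as np

# LoRA sketch: the pretrained weight W (d_out x d_in) stays frozen;
# only the low-rank factors A (r x d_in) and B (d_out x r) would be trained.
d_out, d_in, r = 64, 64, 4
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init
alpha = 8                                   # scaling hyperparameter

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A; W itself never changes.
    return (W + (alpha / r) * B @ A) @ x

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted model starts out
# exactly equal to the pretrained model.
assert np.allclose(forward(x), W @ x)
```

The zero-initialized B is the standard trick that makes the adapted model identical to the base model at the start of training; the trainable parameter count drops from d_out·d_in to r·(d_out + d_in).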