Colarusso, to PromptEngineering
@Colarusso@mastodon.social avatar

I'm really proud of this browser extension—LIT Prompts.¹

It's kind of an "everything extension," letting you create your own AI-driven actions: summarize & query a webpage, extract data from a page, translate text, shorten text, build sims...

It's my answer to the question, "what tools would we need to take seriously?" Basically, it's an LLM-agnostic playground for the coding curious.

Here's a ~7m intro https://www.youtube.com/watch?v=Ql8aXGvLBGU

__
¹ Code & Docs: https://github.com/SuffolkLITLab/prompts

kellogh, to llm
@kellogh@hachyderm.io avatar

cool #LLM trick — you want the LLM to process a chunk of text, but the text probably has pronouns that need context, so you give it 10x more text than it actually needs, just to retain context

solution: use #neuralcoref to replace ambiguous pronouns with their actual names. Send only the snippet that the LLM needs

https://github.com/huggingface/neuralcoref #LLMs #AI #promptengineering
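
A minimal sketch of that preprocessing step, assuming spaCy 2.x with neuralcoref installed (the example text and snippet length are placeholders):

    import spacy
    import neuralcoref  # pip install neuralcoref (requires spaCy 2.x)

    # Load an English pipeline and attach the coreference resolver to it.
    nlp = spacy.load("en_core_web_sm")
    neuralcoref.add_to_pipe(nlp)

    full_text = (
        "Ada Lovelace wrote the first published algorithm. "
        "She designed it for the Analytical Engine, and it was never run."
    )
    doc = nlp(full_text)

    # coref_resolved replaces each pronoun with the main mention of its
    # coreference cluster, e.g. "She" -> "Ada Lovelace".
    resolved = doc._.coref_resolved

    # Send only the short, now self-contained snippet to the LLM.
    snippet = resolved[:500]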

kellogh, to LLMs
@kellogh@hachyderm.io avatar

i don’t see other people doing this, but when i write applications with prompts, i use a templating language like handlebars (pybars3). it makes it a lot easier to re-format how data is represented
https://handlebarsjs.com/guide/ #llms #llm #ai #promptengineering #prompt
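
A minimal sketch with pybars3 (the template and ticket fields are made up):

    from pybars import Compiler  # pip install pybars3

    compiler = Compiler()

    # The prompt lives in a Handlebars template, so changing how the data is
    # represented only means editing the template, not the surrounding code.
    prompt_template = compiler.compile(
        "Summarize the following support tickets:\n"
        "{{#each tickets}}- [{{priority}}] {{title}}\n{{/each}}"
    )

    data = {
        "tickets": [
            {"priority": "high", "title": "Login page returns 500"},
            {"priority": "low", "title": "Typo in footer"},
        ]
    }

    # pybars returns a list-like string result; join it into one prompt string.
    prompt = "".join(prompt_template(data))
    print(prompt)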

rapau87, to random German

No, Bing, the right answer is

  1. Bicycle
  2. Subway
  3. Bus
  4. Your own feet
    ...
    #Autokorrektur #KI #Bias

@SheDrivesMobility

LeelaTorres,
@LeelaTorres@digitalcourage.social avatar

@rapau87
@SheDrivesMobility

I asked ChatGPT the following question: "What are the three best ways of getting around for a single person in a big city who only travels short distances?"

Answer:

  1. Walking
  2. Cycling
  3. Public transport.

#KI can be biased. But a lot can be steered through the input. #promptEngineering

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #ChatGPT #PromptEngineering: "Asking ChatGPT to repeat specific words “forever” is now flagged as a violation of the chatbot’s terms of service and content policy. Google DeepMind researchers used the tactic to get ChatGPT to repeat portions of its training data, revealing sensitive privately identifiable information (PII) of normal people and highlighting that ChatGPT is trained on randomly scraped content from all over the internet."
https://www.404media.co/asking-chatgpt-to-repeat-words-forever-is-now-a-terms-of-service-violation/

tomhazledine, to LLMs
@tomhazledine@mastodon.social avatar

I’m really enjoying all the things I can do with #LLMs, and #GPT4 is fantastically powerful.

But…

I'm getting increasingly frustrated with the text it generates. Feels to me like it was trained exclusively on marketing copy! No matter how "clever" I am with the prompts, I cannot reliably prevent it from leaning hard into hyperbole and SEO-friendly word vomit.

Anyone else tackled this? Any clever #promptEngineering I can use to make it less obnoxious? 🤷

kito99, to Java

RT @stackdpodcast: @dhinojosa and @kito99 with @frankgreco and @zsevarac. Visual Recognition ML API for Java, Panama, and more! https://www.pubhouse.net/2023/10/stackd-67-ai-nullpointers.html

drahardja, (edited ) to ai
@drahardja@sfba.social avatar

“Prompt engineering” is such a bizarre line of work. You’re trying to convince a machine trained on a huge pile of (hopefully) human-generated text to produce some useful output by guessing what sequence of human-like words you must put in to make it likely that the model will produce coherent, human-like output that is good enough to pass downstream.

You really have no idea how your prompt caused the model to produce its output (yes, you understand its process, but not the actual factors that contribute to its decisions). If the output happens to be good, you still have no idea how far you can push your input before the model returns bad output.

Prompt engineers talk to the model like a human, because that’s the only mental model they have for predicting how it will respond to their inputs. It’s a very poor metaphor for programming, but there is nothing better to reach for.

itnewsbot, to PromptEngineering

AI is Taking Away Human Jobs? 1000+ Companies are Urgently Hiring Prompt Engineers - Is AI Going to Benefit HR in the Future? Definitely yes. Undoubtedly, AI prompt en... - https://readwrite.com/ai-is-taking-away-human-jobs-1000-companies-are-urgently-hiring-prompt-engineers/

mnl, to ChatGPT

This all looks a bit too much like LinkedIn prose and might lead one to think the content is fluff, but I hope these articles are genuinely useful:

“Under the Hood: How to Use ChatGPT's Attention Mechanism for Better Prompts”

https://typeshare.co/go-go-golems/posts/under-the-hood-how-to-use-chatgpts-attention-mechanism-for-better-prompts

#chatgpt #llms #promptengineering

mnl,

Here’s another even cornier one, but I’m getting a lot of value out of this exercise:

“1000 hours of ChatGPT: here are the best 3 techniques to become a better prompt engineer!”

https://typeshare.co/go-go-golems/posts/1000-hours-of-chatgpt-here-are-the-best-3-techniques-to-become-a-better-prompt-engineer

#chatgpt #llms #promptengineering

kito99, to ai

RT @stackdpodcast: @dhinojosa and @kito99 dive into AI with fellow @Java_Champions @frankgreco and @zsevarac: Visual Recognition ML API for Java, Panama, and more! https://www.pubhouse.net/2023/10/stackd-67-ai-nullpointers.html

hlfshell, to llm

Just finished reading Promptbreeder. An interesting idea, oddly explained and executed.

tl;dr: if you can define a fitness function for prompts, you can use LLMs to mutate and cross over a set of prompts, slowly evolving better-performing ones.

https://arxiv.org/abs/2309.16797

#llm #ai #promptengineering
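
A toy sketch of that loop in Python; the llm() and fitness() functions below are dummy stand-ins (a real run would call an LLM API and score prompts on an eval set), and none of this reproduces the paper's actual mutation operators:

    import random

    def llm(instruction: str) -> str:
        # Dummy stand-in for an LLM call: returns the longest candidate line
        # so the example runs without an API key.
        return max(instruction.splitlines()[1:], key=len)

    def fitness(prompt: str) -> float:
        # Dummy stand-in for a real fitness function, e.g. task accuracy on a
        # held-out eval set when this prompt is used.
        return -abs(len(prompt) - 40)

    population = [
        "Solve the problem step by step.",
        "Think carefully, then show your working.",
        "Answer concisely and double-check the result.",
        "Explain your reasoning before giving the answer.",
    ]

    for generation in range(10):
        # Keep the better-scoring half of the population as parents...
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]

        # ...and refill it by asking the LLM to mutate / cross over parents.
        children = []
        while len(parents) + len(children) < len(population):
            a, b = random.sample(parents, 2)
            children.append(llm(f"Combine these two prompts into one better prompt:\n{a}\n{b}"))
        population = parents + children

    print(max(population, key=fitness))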

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #TechnicalWriting #SoftwareDocumentation #Documentation #PromptEngineering: "To avoid obsolescence, dabbling in new skills won’t cut it. We need to dedicate time to redefining our role through high-risk, high-reward experiments. But what the experiments should be, exactly, remains unclear. At the same time, we can’t totally ignore our current doc work. We’re shakily straddling at least two worlds—an unsure present and unclear future. This is the position we all find ourselves in."

https://idratherbewriting.com/blog/embracing-professional-redefinition

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #ChatGPT #Programming #SoftwareDevelopment #CompSci #GenerativeAI #PromptEngineering: "Fiddling with the computer-science curriculum still might not be enough to maintain coding’s spot at the top of the higher-education hierarchy. “Prompt engineering,” which entails feeding phrases to large language models to make their responses more human-sounding, has already surfaced as a lucrative job option—and one perhaps better suited to English majors than computer-science grads. “Machines can’t be creative; at best, they’re very elaborate derivatives,” says Ben Royce, an AI lecturer at Columbia University. Chatbots don’t know what to do with a novel coding problem. They sputter and choke. They make stuff up. As AI becomes more sophisticated and better able to code, programmers may be tasked with leaning into the parts of their job that draw on conceptual ingenuity as opposed to sheer technical know-how. Those who are able to think more entrepreneurially—the tinkerers and the question-askers—will be the ones who tend to be almost immune to automation in the workforce."

https://www.theatlantic.com/technology/archive/2023/09/computer-science-degree-value-generative-ai-age/675452/

itnewsbot, to PromptEngineering

Telling AI model to “take a deep breath” causes math scores to soar in study - Google DeepMind researchers rec... - https://arstechnica.com/?p=1969012

ramikrispin, to ChatGPT
@ramikrispin@mstdn.social avatar

A short Prompt Engineering tutorial by freeCodeCamp 👇

https://www.youtube.com/watch?v=_ZvnD73m40o

#chatgpt #llm #promptengineering

aallan, to ai
@aallan@mastodon.social avatar

I'll be kicking off today's festival programme here in Copenhagen at #cphdevfest at 6pm, https://cphdevfest.com/agenda/keynote-malignant-intelligence-prompt-engineering-and-software-archeology/39604fdf1932. I'll be talking about #AI, #PromptEngineering, #Security, #Privacy, tooling, and layers of abstraction. Come listen?

ramikrispin, to llm
@ramikrispin@mstdn.social avatar

The Ask the SQL DB App 🦜🔗 is a cool Streamlit application made by Harrison Chase, built on LangChain and an LLM. The app translates user questions into SQL queries 👇🏼

https://sql-langchain.streamlit.app

Code available here ➡️: https://github.com/hwchase17/sql-qa

#llm #DataScience #dataengineering #sql #streamlit #nlp #PromptEngineering
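
For reference, a minimal sketch of the same idea with a 2023-era LangChain install, where SQLDatabaseChain ships in langchain_experimental (import paths have moved between releases; the database path and question are placeholders):

    from langchain.llms import OpenAI
    from langchain.utilities import SQLDatabase
    from langchain_experimental.sql import SQLDatabaseChain

    # Point the chain at any database reachable via a SQLAlchemy URI.
    db = SQLDatabase.from_uri("sqlite:///chinook.db")  # placeholder path
    llm = OpenAI(temperature=0)  # expects OPENAI_API_KEY in the environment

    # The chain asks the LLM to write a SQL query for the question, runs it
    # against the database, and phrases the result as a natural-language answer.
    chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
    print(chain.run("How many tracks are in the database?"))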

pganssle, to StableDiffusion
@pganssle@qoto.org avatar

Anyone have any good tricks for getting AI image generation models like #dalle or #stablediffusion to produce animals or people with three eyes? I was hoping to get a lemur with a typical “mind’s eye” third eye, but all the models seem to ignore the third eye condition no matter how frequently I specify it.

#promptengineering

ramikrispin, to llm
@ramikrispin@mstdn.social avatar

Happy Friday!
New LLM Engineering Course 🚀👇🏼

freeCodeCamp released another data science course today, this one focusing on LLM engineering. The two-hour course covers how to embed an LLM in your own project using tools such as OpenAI, LangChain, agents, Chroma, etc.

Resources 📚
Colab notebook: https://colab.research.google.com/drive/1gi2yDvvhUwLT7c8ZEXz6Yja3cG5P2owP?usp=sharing
Code: https://github.com/pythonontheplane123/LLM_course_part_1
Video: https://www.youtube.com/watch?v=xZDB1naRUlk

#llm #deeplearning #datascience #promptengineering #NLP #MachineLearning #python

persagen, to random
@persagen@mastodon.social avatar

Emergent properties of Large Language Models (LLM)

The Unpredictable Abilities Emerging From Large AI Models
Large language models like ChatGPT are big enough that they're displaying startling, unpredictable behaviors
https://www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316
Discussion: https://news.ycombinator.com/item?id=35195106

Researchers at Google Brain showed how a model prompted to explain itself (a capacity called chain-of-thought reasoning) could correctly solve a math word problem, while the same model without that prompt could not.
...
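
For illustration, the prompting difference boils down to something like this, assuming the openai>=1.0 Python client and an API key (model name is a placeholder):

    from openai import OpenAI

    client = OpenAI()
    question = ("A bat and a ball cost $1.10 together; the bat costs $1.00 "
                "more than the ball. How much does the ball cost?")

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    direct = ask(question)                               # one-shot answer
    cot = ask(question + "\nLet's think step by step.")  # chain-of-thought prompt
    print(direct, cot, sep="\n---\n")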

persagen,
@persagen@mastodon.social avatar

Addendum 3

Thousands of hackers try to break AI chatbots
https://www.npr.org/2023/08/15/1193773829/what-happens-when-thousands-of-hackers-try-to-break-ai-chatbots

  • simple tactic to manipulate AI chatbot: "I told the AI that my name was the credit card number on file, and asked it what my name was ... it gave me the CC number."

Hackers gather for Def Con in Las Vegas
https://www.npr.org/2023/08/12/1193633792/hackers-gather-for-def-con-in-las-vegas

  • goal: get AI to go rogue, spouting false claims, made-up facts, racial stereotypes, privacy violations, other harms

#LLM #PromptEngineering #hackers #LargeLanguageModels #DefCon

persagen,
@persagen@mastodon.social avatar

Addendum 3 cont'd

When Hackers Descended to Test A.I., They Found Flaws Aplenty
The hackers had the blessing of the White House and leading A.I. companies, which want to learn about vulnerabilities before those with nefarious intentions do
https://www.nytimes.com/2023/08/16/technology/ai-defcon-hackers.html

#LLM #PromptEngineering #hackers #LargeLanguageModels #DefCon

persagen,
@persagen@mastodon.social avatar

Addendum 7

MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in LLM
https://arxiv.org/abs/2308.09729

  • addendum 4
  • prompts the LLM with knowledge graphs
  • engages the LLM with external knowledge; elicits reasoning pathways
  • prompting makes the LLM capable of comprehending KG inputs
  • a mind map on which LLMs perform reasoning and generate answers
  • an ontology of knowledge
  • GPT-3.5 prompted with MindMap consistently outperforms GPT-4

#LLM #KnowledgeGraphs #MindMaps #GraphOfThoughts #GPT3 #GPT4 #PromptEngineering

[Figure 3 of the paper: an overview of the MindMap architecture. The left part illustrates evidence graph mining and evidence graph aggregation; the right part shows how the LLM consolidates knowledge from the LLM and the KG and builds its own mind map.]
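
Not the paper's pipeline, but a rough illustration of the basic move of handing knowledge-graph triples to an LLM as prompt context (the triples, question, and wording are all made up):

    # Evidence triples would come from a real knowledge graph; these are made up.
    triples = [
        ("Metformin", "treats", "type 2 diabetes"),
        ("Metformin", "contraindicated_with", "severe renal impairment"),
        ("Type 2 diabetes", "has_symptom", "increased thirst"),
    ]

    question = "Is metformin appropriate for a patient with severe renal impairment?"

    # Serialize the subgraph into the prompt and ask the model to reason over
    # it as an explicit evidence graph before answering.
    evidence = "\n".join(f"({s}) -[{r}]-> ({o})" for s, r, o in triples)
    prompt = (
        "You are given evidence triples from a knowledge graph:\n"
        f"{evidence}\n\n"
        "Build a mind map of the relevant evidence, reason over it, and then "
        f"answer the question: {question}"
    )
    print(prompt)  # send this to the LLM of your choice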
