The crybabies who freak out about The Communist Manifesto appearing on university curricula clearly never read it - chapter one is basically a long hymn to capitalism's flexibility and inventiveness, its ability to change form, adapt to everything the world throws at it, and come out on top:
Many of the biggest "open AI" companies are totally opaque when it comes to training data. Google and OpenAI won't even say how many pieces of data went into their models' training - let alone which data they used.
Other "open AI" companies use publicly available datasets like #ThePile and #CommonCrawl. But you can't replicate their models by shoveling these datasets into an algorithm. Each one has to be groomed - labeled, sorted, de-duplicated, and otherwise filtered.
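To make that grooming step concrete, here's a minimal sketch (my own illustration, not any company's actual pipeline) of two of the passes mentioned above - exact de-duplication via content hashing, plus a crude length filter - over plain-text records:

```python
import hashlib

def groom(records, min_length=20):
    """One grooming pass over text records: drop fragments shorter than
    min_length, then remove exact duplicates by content hash."""
    seen = set()
    kept = []
    for text in records:
        text = text.strip()
        if len(text) < min_length:  # filter: too short to be useful training data
            continue
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:  # de-duplicate: exact match already kept
            continue
        seen.add(digest)
        kept.append(text)
    return kept
```

Real pipelines also do fuzzy de-duplication (e.g. MinHash over shingles) and quality classification, which is exactly the undisclosed secret sauce that makes these models hard to replicate from the raw public datasets.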
I've gotten a bunch of #infosec followers over the last coupla days.
For those interested in #fileforensics and especially PDFs, please take a look at our recently released 8-million-document (8 TB) PDF corpus, derived from #CommonCrawl and then augmented by our team at #nasajpl
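For anyone curious how PDFs get fished out of a crawl dump in the first place, here's a toy sketch (an assumption on my part, not the corpus team's actual method) of sniffing PDF payloads by their magic bytes - the PDF spec puts a `%PDF-` header at the start of the file, and lenient readers accept it anywhere in the first 1024 bytes:

```python
from typing import Optional

def is_pdf(payload: bytes) -> bool:
    """Cheap PDF sniff: look for the '%PDF-' header in the first 1 KiB,
    since some real-world files pad it with a little leading junk."""
    return b"%PDF-" in payload[:1024]

def pdf_version(payload: bytes) -> Optional[str]:
    """Pull the header version string (e.g. '1.7') if the file looks like a PDF."""
    idx = payload[:1024].find(b"%PDF-")
    if idx == -1:
        return None
    return payload[idx + 5 : idx + 8].decode("ascii", errors="replace")
```

Content-type headers in crawl data are unreliable, so sniffing the bytes themselves is the more robust filter.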
This visual deep dive into one of the largest AI language datasets is nonstop fascinating, jaw-dropping, and troubling, and anyone who is remotely interested in how LLMs really work, their biases, or intellectual property should read it. https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/