The protest encampment at UC Berkeley is currently about 150 tents strong, but so far the administration has refused to call police or disperse the group. Classes continue, and commencement is on track to take place as scheduled.
It's hard to say from this report in what sense Lavender is "AI," as opposed to just a database with a custom algorithm, but a theme that recurs throughout is the dehumanization of targets based on presumed rank. The justification for relying on Lavender, rather than more rigorous human verification, is that "you don't want to invest manpower and time" on a low-ranking target. Even confirming that they're an enemy combatant was considered a waste of time. https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes #IsraelHamasWar
I've been consistently critical of the current wave of "AI," but in this particular context, it feels more like a buzzwordy distraction than the core of the story, which is that Israel appears to have first decided on a number of casualties it wanted to inflict, then tweaked the parameters of its intel apparatus to generate a list of targets large enough to hit that quota. Maybe they did, in fact, use ML to do so, but I don't see how not having ML would have changed that outcome. #IsraelHamasWar
Inasmuch as there is a tech angle to this, it's less about handing life-and-death decisions to "artificial intelligence" than it is about using computer tooling to diffuse accountability. Having effectively pulled the names out of a hat, Israeli officials can argue that no person was directly responsible for the decision to target any particular individual. That's why the predetermined quotas matter: they show that the determining factor was not military value, but damage done to the population.
Researchers asked an AI chatbot to act as a research assistant, then instructed it to develop prompts that could 'jailbreak' other chatbots so that they would produce instructions for making meth, laundering money, and building bombs. The approach had a 42.5% success rate against GPT-4. https://www.scientificamerican.com/article/jailbroken-ai-chatbots-can-jailbreak-other-chatbots/