@lvxferre@mander.xyz avatar



The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.


A PR disaster: Microsoft has lost trust with its users, and Windows Recall is the straw that broke the camel's back (www.windowscentral.com)

It’s a nightmare scenario for Microsoft. The headlining feature of its new Copilot+ PC initiative, which is supposed to drive millions of PC sales over the next couple of years, is under significant fire for being what many say is a major breach of privacy and security on Windows. That feature in question is Windows Recall, a...


I know that I shouldn’t, but here’s what I think about this whole deal, illustrated with a single image macro:


Get wrecked, Microsoft.

I think that the article does a good job highlighting how much of a trainwreck this is, because Microsoft is not to be trusted. The Windows users hysterically complaining about this are not expecting Microsoft to behave in some outrageous way; they’re expecting Microsoft to behave as usual.

lvxferre, (edited )

This is going to be interesting. I’m already thinking about how it would impact my gameplay.

The main concern for me is sci packs spoiling. Ideally they should be consumed in situ, so I’d consider moving the research to Gleba and ship other sci packs to it. This way, if something does spoil at least the spoilage is near where I can use it. Probably easier said than done - odds are that other planets have “perks” that would make centralising science there more convenient.

You’ll also probably want to speed up production as much as possible, since the products inherit spoilage from the ingredients. Direct insertion, speed modules, and upgrading machines ASAP will be essential there - you want to minimise the time between harvesting the fruit and outputting something that doesn’t spoil (like plastic or science).
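The inheritance mechanic can be sketched with a toy model - this is only an illustration of the idea described above (product freshness as the amount-weighted average of ingredient freshness), not the game’s actual formula:

```python
# Toy model of spoilage inheritance: the product's freshness is taken here as
# the amount-weighted average of its ingredients' freshness.
# Illustrative only - not Factorio's actual formula.
def product_freshness(ingredients):
    """ingredients: list of (amount, freshness) pairs, freshness in [0, 1]."""
    total = sum(amount for amount, _ in ingredients)
    return sum(amount * freshness for amount, freshness in ingredients) / total

# A product made from one fully fresh ingredient and one half-spoiled one
# comes out already partially spoiled - hence the rush to process fruit fast:
print(product_freshness([(1, 1.0), (1, 0.5)]))  # 0.75
```

The point of the model: any delay that lets one ingredient spoil drags the whole product down with it.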

Fruits outputting pulp and seeds also hint at an oil-like problem, as you need to get rid of byproducts that you might not be using. Use only the seeds and you’re left with the pulp; use only the pulp and you’re left with the seeds. The FFF hints that you can burn stuff, but that feels wasteful.

lvxferre, (edited )

When it comes to English the problem can be split into two: the origin of the word, and its usage to refer to the planet.

The origin of the word is actually well known - English “earth” comes from Proto-Germanic *erþō “ground, soil”, which in turn comes from Proto-Indo-European *h₁ér-teh₂. That *h₁ér- root pops up in plenty of words referring to soil and land in IE languages, while that *-teh₂ derives nouns for states of being; so odds are that the word ultimately meant “the bare soil” or similar.

Now, the usage of the word for the planet gets trickier, since this metaphor - the whole (planet) by the part (soil) - pops up all the time, even in non-Indo-European languages:

  • Basque - “Lurra” (Earth) is simply “lur” (soil) with a determiner
  • Tatar - “Zemin” (Earth, the planet) vs. “zemin” (earth, soil)
  • Greenlandic - “nuna” for both

The furthest from that I’ve seen was Nahuatl calling the planet “tlālticpactli”, “over the land” - but even then, that “tlāl[li]” at the start is land, soil.

The metaphor is so popular, but so popular, that it becomes hard to track where it originated - because it likely originated multiple times. I wouldn’t be surprised for example if English simply inherited it “as is”, as German “Erde” behaves the same. The same applies to the Romance languages with Latin “Terra”, they simply inherited the word with the double meaning and called it a day.

And as to why Earth has become the accepted term rather than ‘terra’, ‘orbis’ or some variant on ‘mundus’, well, that’s a tougher question to answer.

In English it’s simply because “Earth” is its native word. Other languages typically don’t use this word.


I’ve seen worse.

Like. There’s a Spanish city called Cartagena. And a neighbourhood in that city called Nueva Cartagena.

What’s Spanish “Nueva”? New.

What’s “Cartagena”? It was inherited from Latin “Carthago Nova”, then univerbated. That Latin “nova” is the same as Spanish “nueva”, new.

Where did “Carthago” come from? Ultimately from Phoenician, 𐤒𐤓𐤕-𐤇𐤃𐤔𐤕/qrt-ḥdšt. That 𐤒𐤓𐤕/qrt means city, and the 𐤇𐤃𐤔𐤕/ḥdšt means new.

The neighbourhood name is literally “new new new city”.


That ⟨地球⟩ is perhaps the only exception, where we’re damn sure of how Earth got its name. The guy who coined the expression was a priest of the Papal States called Matteo Ricci, living in Ming China around 1600. He made a living translating works back and forth between Chinese and Latin, and calqued that expression from Latin orbis terrarum - roughly “the globe of soils”, or “the ball of earths”.


You’re welcome!


It’s the opposite - the name of the primal goddess is just the word for ground. The same thing happens with other gods like Hestia (hearth): they were named after the things that they personified.


Thanks for the further info! That 地 alone does follow the pattern of the other languages.

Your explanation makes a lot more sense of Ricci’s odd calque - he was using the old term, but highlighting that it’s a ball, not an infinite plane. As in, he was trying to be accurate to the sources, and he could only do it through that calque.

lvxferre, (edited )

Fair point - notlahtlacōl. “World” does seem more accurate.

I wouldn’t be surprised if modern Nahuatl varieties used tlālticpactli to refer to the planet itself. (Still, my example is from Classical Nahuatl, so your correction is spot on.)

lvxferre, (edited )

I know that this is a gaming comm so I apologise for the intrusive politics. It’s just that both China and USA are far enough from me that I can watch their trade war from afar, without too much emotional involvement.

All those tech-related measures that USA is currently taking against China - banning the export of chips, banning TikTok from its own territory, heavily tariffing imported Chinese cars etc. - smell like desperation. And they’ll likely backfire.

The presence of common commercial partners makes those measures at most a deterrent when it comes to innovation sharing; USA could push other governments into a “you eithurr chrare [trade] with Chir̃a or with us” ultimatum, to cut out those common commercial partners, but the answer wouldn’t be pretty:

China is nowadays a more valuable partner than USA. Curiously even in some countries where the population mostly speaks English, like Australia.

Another possible outcome is USA economically isolating itself. That’s like migrating a few fish from a big global pond to a smaller local pond. I’ve seen Brazil doing this through heavy tariffs, and the result is not pretty either - the larger fish starve, and all that you’re left with is a bunch of medium fish that, if placed again in the large pond, are quickly eaten by the others.

Even if I take Gelsinger’s position as Realpolitik from USA’s PoV, that “magic line” he talks about is just a hallucination. Lessening the restriction on exports does lessen the pressure for the local development of technology, but it doesn’t eradicate it. That pressure will exist even at no restriction, for ideological reasons (governments often babble about sovereignty, and China is no different.)

EDIT: I’ve got to admit that I’m rather amused at the downvotes.

If they were being issued due to it being intrusive politics, they would “leak” into TheOneCurly’s child comment*; if it was due to some reasoning flaw, or some incorrect statement, people would be quick to point that flaw out; and, if it was due to my usage of eye dialect, it wouldn’t “leak” into the grandchild comment.

So, here’s a question. How many of you are “shooting the messenger”, since what I said would have obvious implications towards your living standards? “If you don’t talk about it, it’ll magically stop happening” style?

*nor do I think that it should be. I disagree with what the other user said, but they’re contributing to the discussion; there’s no reason to downvote their comment.


If the destruction of an ally would happen regardless of another government’s actions (because, as you said, China will get weapons from elsewhere), then concerns like “we shouldn’t profit off its destruction” are solely moral and/or ideological in nature. Thus being irrelevant for the sake of Realpolitik:

  • sell to China - you got some profit, but lost the ally
  • don’t sell to China - you got no profit but you still lost the ally

And it’s clear that USA follows Realpolitik when it comes to its foreign policy.

I also don’t think that the PRC even needs to weaponise itself further to annex Taiwan. What’s keeping the PRC at bay seems to be international repercussions, which are better addressed through soft power, not hard power.

Because of both things, I don’t think that Taiwan plays a role in explaining those policies. I think that USA is trying to protect its internal industry against competition.


People in the future will be like:

“Pokémon? There’s radio silence after the 3DS games. I think that Nintendo closed down by then.”

“Ah, Ultrakill? Here. [points to some file in the repo] Still playable. Small dev from a brilliant indie scene.”

I’m being kind of cheeky; it’s reasonably possible that people in the future will know that Nintendo games actually existed past the 3DS - they simply weren’t preserved because their corporation got too greedy. In the meantime, game devs like Hakita are keeping their legacy alive.


…frankly, most stuff past gen V.


Gen7 for me was… meh. I remember being extremely annoyed at the RotomDex telling me what to do, as it didn’t let me explore properly. Perhaps that’s because my nostalgia is geared towards the older games (I still play Emerald, to give you an idea.)


In your case I wouldn’t recommend Emerald then, with its trumpets and water - it’s exploration-heavy, there are huge routes, and often what you want is in one specific place. You’ll probably have a great time with Gen 4 instead, especially Platinum.


They’re even more exploration-heavy than Emerald. Roughly, the earlier the game, the bigger the focus on exploration, as hardware limitations didn’t allow much storytelling.

Also, I recommend playing their remakes instead of the original games; the originals are extremely buggy and have huge balance issues. (For example, there’s a shore in Red/Blue that you can use to catch Safari Zone mons. And Psychic mons are crazy overpowered - the only Ghosts in the region are partially Poison, there’s a lot of other Poison types, and since Gen1 was before the special split they got huge offensive and defensive capabilities.)

lvxferre, (edited )

May I be blunt? I estimate that 70% of everything coming from OpenAI, and 70% of all those “insiders”, is full of crap.

What people are calling nowadays “AI” is not a magic solution for everything. It is not an existential threat either. The main risks that I see associated with it are:

  1. Assumptive people taking LLM output at face value, to disastrous outcomes. Think of “yes, you can safely mix bleach and ammonia” tier (note: made-up example).
  2. Supply and demand. Generative models have awful output, but sometimes “awful” = “good enough”.
  3. Heavy increase in energy and resources consumption.

None of those issues was created by machine “learning”; it’s just that it synergises with them.

lvxferre, (edited )

Yup, it is a real risk. But on a lighter side, it’s a risk that we [humanity] have been fighting against since forever - the possibility of some of us causing harm to the others not due to malice, but out of assumptiveness and similar character flaws. (In this case: “I assume that the AI is reliable enough for this task.”)


I’m reading your comment as “[AI is] Not yet [an existential threat], anyway”. If that’s inaccurate, please clarify, OK?

With that reading in mind: I don’t think that the current developments in machine “learning” lead towards some hypothetical system that would be an existential threat. The closest to that would be the subset of generative models, which looks like a tech dead end - sure, it might see some applications, but I don’t think that it’ll progress much past the current state.

In other words I believe that the AI that would be an existential threat would be nothing like what’s being created and overhyped now.


I don’t think that a different training scheme or integrating it with already existing algos would be enough. You’d need a structural change.

I’ll use a silly illustration for that; it’s somewhat long so I’ll put it inside spoilers. (Feel free to ignore it though - it’s just an illustration, the main claim is outside the spoilers tag.)

The Mad Librarian and the Good Boi

Let’s say that you’re a librarian, and you have lots of books to sort out. So you want to teach a dog to sort books for you, starting with sci-fi and geography books. You set up the training environment: a table with a sci-fi book and a geography book. And you give your dog a treat every time that he puts the ball over the sci-fi book. At the start the dog doesn’t do it, but as you train him he becomes able to do it perfectly. Great! Does the dog now recognise sci-fi and geography books?

You test this out by switching the placement of the books and asking the dog to perform the same task; now he’s putting the ball over the geography book. Nope - he doesn’t know how to tell sci-fi and geography books apart; you were “leaking” the answer through the placement of the books.

Now you repeat the training with random positions for the books. Eventually, after a lot of training, the dog is able to put the ball over the sci-fi book regardless of position. Now the dog recognises sci-fi books, right? Nope - he’s identifying books by the smell.

To fix that you try again, with new copies of the books. Now he’s identifying the colour; the geography book has the same grey/purple hue as grass (from a dog’s PoV), while the sci-fi book is black like the neighbour’s cat. The dog would happily put the ball over the neighbour’s cat and ask “where’s my treat, human???” if the cat allowed it.

Needs more books. You assemble a plethora of geography and sci-fi books. Since the sci-fi covers typically tend to be dark, and the geography books tend to have nature on their covers, the dog is able to place the ball over the sci-fi books 70% of the time. Eventually you give up and say that the 30% error is the dog “hallucinating”.

We might argue that, by now, the dog should be “just a step away” from recognising books by topic. But we’re just fooling ourselves; the dog is finding a bunch of orthogonal (like the smell) and diagonal (like the colour) patterns.

What the dog is doing is still somewhat useful, but it won’t go much past that. And even if you and the dog lived forever (denying St. Peter the chance to tell him “you weren’t a good boy. You were the best boy.”), and spent most of your time on that training routine, his little brain won’t be able to create the associations necessary to actually identify a book by its topic, i.e. by its content. I think that what happens with LLMs is a lot like that. With a key difference - dogs are considerably smarter than even state-of-the-art LLMs, even if they’re unable to speak.
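The dog’s failure mode has a name in machine learning: shortcut learning, where a model latches onto a spurious feature that merely correlates with the label in the training set. A deliberately silly sketch of the same thing (all features and numbers made up):

```python
# Toy "shortcut learning": the classifier only ever looks at cover darkness,
# a spurious feature that happens to correlate with genre in this made-up
# training set - it learns nothing about the books' topics.
train = [  # (cover_darkness in 0..1, genre)
    (0.9, "sci-fi"), (0.8, "sci-fi"), (0.3, "geography"), (0.2, "geography"),
]

# "Training": place the decision threshold at the mean darkness (≈ 0.55).
threshold = sum(darkness for darkness, _ in train) / len(train)

def classify(cover_darkness):
    return "sci-fi" if cover_darkness > threshold else "geography"

print(classify(0.85))  # agrees with the training set: "sci-fi"
print(classify(0.95))  # a dark-covered atlas would also come out "sci-fi"
```

Like the dog, the classifier works well enough on books that resemble its training set, and fails exactly when the spurious feature and the topic come apart.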

At the end of the day LLMs are complex algorithms associating pieces of words, based on statistical inference. This is useful, and you might even see some emergent behaviour - but they don’t “know” stuff, and this is trivial to show, as they fail to perform simple logic even with pieces of info that they’re able to reliably output. Different training and/or algo might change the info that it’s outputting, but they won’t “magically” go past that.
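That “associating pieces of words, based on statistical inference” can be caricatured in a few lines - a bigram counter that predicts the next token purely from co-occurrence counts, with no notion of meaning anywhere. (Real LLMs are vastly more sophisticated; this only illustrates the statistical-association core, and the corpus is made up.)

```python
from collections import Counter, defaultdict

# A bigram "language model": predict the next token purely from how often
# word pairs appeared next to each other. No meaning anywhere - just counts.
corpus = "the dog sorts books the dog fetches balls the cat sleeps".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "dog" - it followed "the" twice, "cat" only once
```

The model reliably outputs plausible-looking continuations, yet it “knows” nothing about dogs, cats, or books - which is the claim above in miniature.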


Chinese room, called it. Just with a dog instead.

The Chinese room experiment is about the internal process - whether it thinks or not, whether it simulates or knows - in a machine that passes the Turing test. My example clearly does not bother with any of that; what matters here is the ability to perform the goal task.

As such, no, my example is not the Chinese room. I’m highlighting something else - that the dog will keep making spurious associations, which will affect the outcome. Is this clear now?

Why this matters: in the topic of existential threat, it’s pretty much irrelevant if the AI in question “thinks” or not. What matters is its usage in situations where it would “decide” something.

I have this debate so often, I’m going to try something a bit different. Why don’t we start by laying down how LLMs do work. If you had to explain as full as you could the algorithm we’re talking about, how would you do it?

Why don’t we do the following instead: I’ll play along with your inversion of the burden of proof once you show how it would be relevant to your implicit claim that AI [will|might] become an existential threat (from “[AI is] Not yet [an existential threat], anyway”)?

Also worth noting that you outright ignored the main claim outside the spoilers tag.


I also apologise for the tone. That was a knee-jerk reaction on my part; my bad.

(In my own defence, I’ve been discussing this topic with tech bros, and they rather consistently invert the burden of proof - often forcing me to evoke Brandolini’s Law. You probably know which “types” I’m talking about.)

On-topic. Given that “smart” is still an internal attribute of the blackbox, perhaps we could gauge better if those models are likely to become an existential threat by 1) what they output now, 2) what they might output in the future, and 3) what we [people] might do with it.

It’s also easier to work with your example productively this way. Here’s a counterpoint:


The prompt asks for eight legs, and only one pic was able to output it correctly; two ignored it, and one of the pics shows ten legs. That’s 25% accuracy.

I believe that the key difference between “your” unicorn and “my” eight-legged dragon is in the training data. Unicorns are fictitious but common in popular culture, so there are lots of unicorn pictures to feed the model with; while eight-legged dragons are something that I made up, so there’s no direct reference, even if you could logically combine other references (as a spider + a dragon).

So their output is strongly limited by the training data, and it doesn’t seem to follow some strong logic. What they might output in the future depends on what we feed them; their potential for decision-taking is rather weak, as they wouldn’t be able to deal with unpredictable situations - and so is their ability to go rogue.

[Note: I repeated the test with a horse instead of a dragon, within the same chat. The output was slightly less bad, confirming my hypothesis - because pics of eight-legged horses exist, thanks to Sleipnir.]

Neural nets

Neural networks are a different can of worms for me, as I think that they’ll outlive LLMs by a huge margin, even if the current LLMs use them. However, how they’ll be used is likely considerably different.

For example, current state-of-the-art LLMs are coded with some “semantic” supplementation near the embedding, added almost as an afterthought. However, semantics should play a central role in the design of the transformer - because what matters is not the word itself, but what it conveys.

That would be considerably closer to a general intelligence than to modern LLMs - because you’re effectively demoting language processing to input/output, that might as well be subbed with something else, like pictures. In this situation I believe that the output would be far more accurate, and it could theoretically handle novel situations better. Then we could have some concerns about AI being an existential threat - because people would use this AI for decision taking, and it might output decisions that go terribly right, as in that “paperclip factory” thought experiment.
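A toy caricature of “demoting language to input/output”: map words to hand-made meaning vectors first, and let everything downstream operate on those vectors rather than on the strings. The words, features, and values below are entirely invented for illustration - nothing here reflects how any real model builds its embeddings:

```python
import math

# Hand-made "meaning" vectors (features invented for this example):
# [is_animal, is_vehicle, can_fly]
meaning = {
    "dog":   [1.0, 0.0, 0.0],
    "cat":   [1.0, 0.0, 0.1],
    "truck": [0.0, 1.0, 0.0],
    "plane": [0.0, 1.0, 1.0],
}

def cosine(a, b):
    """Cosine similarity: how aligned two meaning vectors are."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Downstream processing sees only the vectors, so "dog" lands close to "cat"
# and far from "truck" - even though, as strings, all three are unrelated.
print(cosine(meaning["dog"], meaning["cat"]) > cosine(meaning["dog"], meaning["truck"]))  # True
```

Once the words are reduced to what they convey, the strings themselves really are just an input/output format - pictures or anything else could be mapped into the same vector space.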

The fact that we don’t see developments in this direction yet shows, for me, that it’s easier said than done, and we’re really far from that.


The thing is that they’re complying with the court case by letter, but not by spirit. Sure, there is a system to report and remove copyright infringement; but the system is 100% automated, full of failures that would require manual review, and Google can’t be arsed to spend the money necessary to fix it.


The species is actually A. argentata acc. to another poster, but still likely a female, so I’ll definitely post pics if she lays eggs. (She’s now Kumoko.) They aren’t dangerous - acc. to some websearching they only bite if hurt, there are plenty of pics of people with them on their hands, and even if they do bite the venom is comparable to a bee sting*.

and jealous that you have a kumquat tree

It’s a dwarf nagami variety, so a bit more like a bush:

*I wish that I could say the same of these:
Gaucho spiders. Native and fairly common in my homeland. Strong venom and a tendency to hide inside wardrobes; their only saving grace is that they’re more scared of humans than we are of them.
