@juliank@mastodon.social
@juliank@mastodon.social avatar

juliank

@juliank@mastodon.social

Debian Developer, Ubuntu Core Developer, Software Engineer II at Canonical. Your friendly neighborhood APT maintainer. Vegan. He/him.

Love cooking, cycling, walking, music, and netflix.


juliank, to random

200 lines of comments, 964 lines of code, and a complexity of 405, whatever that is.

OK
$ scc apt-pkg/solver3.*
[...]
Estimated Cost to Develop (organic) $25,994
Estimated Schedule Effort (organic) 3.44 months
Estimated People Required (organic) 0.67

I spent like 2 weeks on this, not 3.44 months, and I am 1 person not 0.67.
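
For context, those figures come from the basic COCOMO model in "organic" mode (hence the "(organic)" in scc's output): effort in person-months is 2.4·KLOC^1.05, and schedule in months is 2.5·effort^0.38. A quick sketch, assuming these standard organic-mode coefficients, reproduces the numbers above:

```python
# Basic COCOMO, organic mode -- the model behind estimates like scc's.
def cocomo_organic(sloc):
    kloc = sloc / 1000
    effort = 2.4 * kloc ** 1.05      # person-months
    schedule = 2.5 * effort ** 0.38  # calendar months
    people = effort / schedule
    return effort, schedule, people

effort, schedule, people = cocomo_organic(964)
print(f"{schedule:.2f} months, {people:.2f} people")  # 3.44 months, 0.67 people
```

The model only sees line counts, which is why two weeks of one person come out as 3.44 months of 0.67 people.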

juliank, to random

Yeesh, Nvidia getting back into the ARM SoC for laptop business doesn't seem very enticing to me.

juliank,

Also AMD resumes development of ARM CPUs huh

juliank, to random

Comparative analysis of the stress tracking yesterday and today. I had a terrible night, but the night-time recovery two-week average is of course still about the same.

Daytime recovery increased and daytime stress load decreased.

This caused the score to go down. 1/2

juliank,

To understand why, let's look at the map. It clearly shows that the lower the stress, the more recovery you need. Basically, I think the presentation isn't useful so far.

You see, every half hour is categorised as stressed/engaged/relaxed/recovered. Daytime stress going down obviously leaves more time you should aim to recover in, which means you need a higher recovery score.

Like it seems they measure the recovery score as time recovered / (time in day - time stressed) rather than absolute numbers?
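
If that guess is right, the arithmetic explains the drop. A hypothetical sketch (the formula is my guess at what the tracker does, not documented behaviour):

```python
# Hypothetical recovery score: recovered time as a fraction of the
# non-stressed part of the day, rather than of the whole day.
def recovery_score(recovered_h, stressed_h, day_h=24):
    return recovered_h / (day_h - stressed_h)

# Same 6 hours of absolute recovery; only daytime stress changes.
print(round(recovery_score(6, 10), 2))  # 0.43  (high stress day)
print(round(recovery_score(6, 4), 2))   # 0.3   (low stress day)
```

With less stress the denominator grows, so the same absolute recovery yields a lower score.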

juliank, to random

Kids in Hessen can now choose Ukrainian as their second foreign language in school, no longer just French or Latin (possibly Spanish, idk).

juliank,

Also, tbh, Hessen is the High German name; it's not clear to me why the English version, Hesse, is basically the Hessian dialect version of it (folks drop consonants at the end; OK, Hessen is also the plural of Hesse, as in "people from Hesse(n)").

But yeah, whereas a normal person would say

Ein Hesse
Zwei Hessen

the Hessian will say "Zwei Hesse"

juliank, to random

The AGPL is stupid.

Stalwart mail server is Affero GPL licensed. Can you legally use it? Remember that you need to point each client at the specific source code the binary was built from.

This provision isn't restricted to protocols intended for humans. You need to advertise the source code location to everyone connecting over JMAP, SMTP, IMAP, POP3 - whatever protocols you use.

If the protocol has no means to advertise the source code, you are fucked. You have no way to be compliant.

juliank,

Depending on your interpretation of what interact means it may be impossible to use an Affero licensed HTTP server.

Now you think: Hang on, I can send the source code location in each HTTP response in a header.

But what if the client only connect()ed and never sends a request? AGPL violation.

Or did it just interact with the kernel in a legal sense? Only courts can tell!

juliank,

So basically you always have to run a non-AGPL proxy server in front of AGPL servers for protocols without server greetings, like HTTP, so that user connections only reach the AGPL server once they send a request, at which point you can include the corresponding source code offer in the response.

juliank, to random

New feature: If you have a package installed with

Multi-Arch: allowed
Multi-Arch: foreign

We will not allow the solver to switch its architecture to satisfy dependencies.

You want that because otherwise the solver will try to replace utilities with foreign-architecture ones if those are the only ones available in the new version that an Architecture: all package depends on...

Seriously that's not what you want and it easily enters backtracking hell.
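
In sketch form, the rule is just a filter on candidate architectures (the function and data model here are illustrative, not APT's actual internals):

```python
# Illustrative sketch, not APT's real code: restrict candidate versions
# to the installed architecture for Multi-Arch: allowed/foreign packages,
# so the solver cannot switch architecture to satisfy a dependency.
def allowed_candidates(installed_arch, multi_arch, candidates):
    """candidates: list of (version, arch) tuples."""
    if multi_arch in ("allowed", "foreign"):
        return [c for c in candidates if c[1] == installed_arch]
    return candidates

cands = [("2.0", "amd64"), ("2.0", "i386")]
print(allowed_candidates("amd64", "foreign", cands))  # [('2.0', 'amd64')]
```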

juliank, to random

Working on an interesting problem like this solver is very taxing. I couldn't fall asleep until like 2am and woke up at like 6:30 realising how to fix the problem I had remaining. 😫

juliank,

What happened? You see, in the upgrade test cases we had a lot of libfoo1 to libfoo1t64 upgrades, but there were also plenty of NBS ("not built from source") libbar1, i.e. binaries from older source versions no longer built in the new version.

The solver tried to upgrade libbar1 and that held back installing libfoo1.

The answer of course is: upgrade by source package. This also significantly reduces backtracking. Mark libfoo1? Then mark all other binaries of the same source version the same way, at the same decision level.
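
The grouping can be sketched like this (hypothetical data model, not APT's actual structures):

```python
# Sketch: group binaries by (source, source_version), so that marking one
# binary marks all its siblings at the same decision level.
from collections import defaultdict

binaries = {
    "libfoo1t64": ("foo", "1.2-3"),
    "foo-utils":  ("foo", "1.2-3"),
    "libbar1":    ("bar", "0.9-1"),
}

by_source = defaultdict(list)
for name, src in binaries.items():
    by_source[src].append(name)

def mark_for_install(name, decision_level, marked):
    # Marking one binary marks every binary of the same source version.
    for sibling in by_source[binaries[name]]:
        marked[sibling] = decision_level

marked = {}
mark_for_install("libfoo1t64", 1, marked)
print(marked)  # {'libfoo1t64': 1, 'foo-utils': 1}
```

libbar1 stays unmarked, so an unrelated NBS binary no longer holds the libfoo1 upgrade hostage.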

juliank, to random

solver3 found the problem so complex it didn't find the full solution; running it again made it find more upgrades, huh.

juliank,

But I mean at least it didn't remove packages it didn't have to

juliank,

Oops it needs a third run!

juliank, to random

PubGrub is an interesting dependency solver, but it's worth pointing out that the field of language package managers it was designed for is quite different from APT's use case.

PubGrub primarily focuses on version selection, because you'll have hundreds of versions per package and you need to navigate all the package's requirements to find the right set of versions.

APT is different. Each package normally has one, at most two versions. Instead you have lots of package choices to make.

juliank,

Anyway, why do we talk about this? One thing that is reasonable is deferring version selection, and instead adding constraints to our state.

I quickly added that to solver3: now when it sees a dependency A (>= 2), for example, it will first reject all A << 2 and then mark A for install.

This means it can decide between e.g. 2 and 3 later on.

I do believe that's helpful but I need to do more analysis on the PoC with complex time_t upgrade dumps.
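
In sketch form (illustrative, not solver3's actual code), a dependency A (>= 2) becomes "reject everything below 2, keep the rest open":

```python
# Sketch of deferred version selection: constrain first, choose later.
def apply_dependency(versions, rejected, minimum):
    """versions: available versions of A. Reject those below `minimum`
    instead of committing to one concrete version now."""
    for v in versions:
        if v < minimum:
            rejected.add(v)
    # The remaining versions stay open; the solver picks among them later.
    return [v for v in versions if v not in rejected]

rejected = set()
print(apply_dependency([1, 2, 3], rejected, minimum=2))  # [2, 3]
```

The point is that the solver keeps both 2 and 3 as possibilities instead of committing early and backtracking.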

juliank,

The implementation of course is remarkably stupid: since we already lowered the dependency to a choice of versions, it will try to do the same thing for each version that can be chosen for the dependency, and only record the one version we looked at as rejected.

But meh, we're not trying to be perfect here. I suppose we should add real VersionRange objects or something.

juliank,

Arguably a version range is stupid though, because you may well have just, e.g., 1 and 3 allowed while versions 1, 2, 3 are available.

juliank, to random

Started the week with 86 broken test cases for the new solver, now down to 32.

I'm cheating a bit though. An important feature hasn't been implemented yet: Listing packages that would be autoremoved if you run autoremove.

I just hide the lists to get test coverage for the rest. It's not clear how many tests this enabled, but it seems to be 7.

Anyway we have a reasonable picture now of what is missing, and I hope the fixes in the other branch don't mess up the working tests :D

juliank, to random

New blog post: Observations in Debian dependency solving

https://blog.jak-linux.org/2024/05/24/observations-in-debsat/

juliank, to random

In an alternative solving strategy for package dependencies, you could, instead of selecting the smallest dependency to solve next, select the package that would satisfy the most outstanding dependencies.

The problem is that this is not generally compliant with the Debian maintainers' expectation that A|B installs A unless B is installed.

There is an alternative way to improve that: solve dependencies closer to the roots before going deeper, and only allow higher install requests to override the order.

juliank,

So

A->B->C

X->D|C

will install

A, X, B, D, C

i.e. we don't consider D|C to be satisfied by C because it was decided lower in the graph.

But (OK, I said higher, but equal depth works too, I suppose)

A->B->C

X->Y->D|C

would install A, X, B, Y, C.
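
That ordering rule can be sketched as follows (hypothetical representation, not solver3's code): record the depth at which each package was decided, and let an or-group reuse an existing decision only if it was made at the same depth or closer to the root.

```python
# Sketch: satisfy an or-group with an already-decided alternative only if
# that alternative was decided at the same depth or closer to the root.
def choose(alternatives, depth, decided):
    for alt in alternatives:
        if alt in decided and decided[alt] <= depth:
            return alt          # reuse a higher/equal-level decision
    return alternatives[0]      # otherwise pick the preferred alternative

# A->B->C and X->D|C: C is decided at depth 2, but D|C sits at depth 1.
decided = {"A": 0, "X": 0, "B": 1, "C": 2}
print(choose(["D", "C"], 1, decided))  # D -- C was decided deeper, ignore it

# A->B->C and X->Y->D|C: now D|C sits at depth 2, same as C.
print(choose(["D", "C"], 2, decided))  # C -- equal depth counts
```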

juliank, to random

Germany is the "stocks are evil, because we all bought Telekom stock in the 90s and it crashed and we're not going to get into that gamble again" country.
