This is a great post on the Well-Typed blog about specialization in Haskell! It's a good reminder that I (or someone) should really get around to getting rid of -fexpose-all-unfoldings and -fspecialize-aggressively in the Agda codebase.
The #Quiche #compiler is now alive! (At least Conway's variant of alive.) The initial version was slow: about four seconds per generation. It was multiplying coordinates for each cell read and write.
The second variant uses offsets into each linear buffer, and only redraws changed cells. It's now running at three to four generations per second.
This week I added the Peek() and Poke() intrinsics to the #Quiche #compiler. That means I can now write my first non-trivial program.
I spent this morning fixing a few bugs in the parser and code generator, and it's now successfully generating the assembler file.
The assembler is choking on a couple of identifier issues, and the output code has a couple of bugs to do with parameter passing and result processing.
The compiler tutorials I've read don't talk about how to deal with classes and inheritance. I assume that a metaclass has to be built for each class. But should I then store those metaclasses for later use, or do I regenerate them when needed? I assume the former.
Also, my parser doesn't currently check for duplicate classes or methods (inside classes). Should it be in the parser, or should it be part of the thing that builds the output?
Progress on the #compiler. The Z8 now passes the test suite and the build coverage test. The test suite is pretty basic so there are probably plenty of bugs left. Code density is not great on the Z8 though. Also added register keywords for arguments to the compiler and split I/D to the linker.
The bytecode output for the #1802 also works with a bytecode engine in C, but the 1802 part is a long way off. Might have to stop putting off debugging the 65C816 now and carry on with #6502
In their blog post "Speeding up Rust edit-build-run cycle" David Lattimore shows how you can speed up #RustLang compile times by 16x by just changing some default compiler config:
All of the operators are now moved over to the new data tables and primitive search. I'm moving on to intrinsics. These are small routines with function-like syntax that often generate inline code, such as peek, inp and sizeof.
Many of these have quirks, such as accepting multiple types or a typedef. Abs's quirk is unsigned values: it doesn't affect them. I could raise an error, but it's nicer to fake the result and generate no code at all.
I'm starting to wonder if there's any point in having the lexer and parser as two separate classes.
Other than testing, the lexer is only ever going to be called by the parser, and only once during the process.
It might be better to just have a lexer-parser class that grabs a file, tokenises it, then (if it's happy with the file it's tokenised) immediately turns it into a tree.
Is there a really good reason why they should be separate classes?
More #PlayStationPortable #problems, but this time not #Sony's fault: I had noticed (with rage and despair) that the Nestopia-UE and #QuickNES #cores for #RetroArch (and at this point who knows how many others!) were hanging the #console for a few seconds, after which it shut off with a pop. So I decided to check whether the same thing happens on #PPSSPP too… and yes, it does, so it's not my hardware's fault. 🤯️
The gracious #emulator does, however, tell me precisely the reason for the #crash… a jump to NULL, which is not exactly a nice thing (top of the photo), and it tells me very little. Unfortunately I need one of those cores on the PSP, because I want to keep as much of my #emulation as possible centralized in RetroArch, and apparently FCEUmm (the only other core available for #emulating the #NES) has some problems: it worked at first (bottom left in the photo), but then it started breaking the video in a #cursed way (bottom right). (No, I haven't tried resetting the whole configuration, because even if that fixed the #problem now, I couldn't do it every time it comes back.) 💀️
https://octospacc.altervista.org/wp-content/uploads/2024/02/image-4-960x524.png
Unfortunately, as happens to me more and more often, I can't find any information relevant to the problem by searching the web. So, my only option: I resign myself and try all the RetroArch #builds for the platform, going backwards, until I find the #crisis point where those 2 cores broke: apparently, between January 20, 2022 and March 5, 2022; #release 1.10.0 is fine (according to PPSSPP — I haven't yet tried on real metal), while 1.10.1 already shows the #trouble. And I notice one single, peculiar #difference: the #GCC #compiler #version changed from 9.3.0 to 11.2.0. 🧐️
https://octospacc.altervista.org/wp-content/uploads/2024/02/image-5-960x275.png
…Who do I blame now? Was it the GNU folks who #broke something upstream? Or rather the people behind the #PSP SDK? Because I've skimmed the commits and releases of the #Libretro stuff, but I can't pinpoint the problem there. In any case, why did some cores stop working so brutally while others didn't? These are the reasons I hate #software. Now I don't even know who I should open the #issue with. 🗡️
For now, my only #hope is to use this older version of the #multiemulator, hoping there are no savestate incompatibilities between different versions, because I rightly want to keep the up-to-date versions on the devices where they work. Later, with more time, I could try compiling a recent version of the package with the old #compiler… but I probably won't manage it. 😩️
The author is longing for an intermediate representation for general-purpose computational machines. A few years later it happened, with #WebAssembly. #computerScience #compiler
Before Christmas I decided the #Quiche #compiler needed two big refactorings. The first is nearly done: the data tables for operands and primitives.
The OG version had grown confusing due to some poor initial decisions. It also put too much intelligence into the parser regarding the available types for each operator.
The new version allows the parser to scan the table to confirm whether an operator can handle the operand types. It can also 'expand' types to find a match...
I'm trying to work out where the line is between a lexer (tokeniser) and a parser.
How far should a lexer go before it's doing work that belongs to the parser? Should the lexer have some intelligence about what it expects to see next, or what needs to be ignored (e.g. comments)? Or should the lexer just make tokens and leave the rest to the parser?
I'm not building a #compiler as such, but the principles are basically the same for a preprocessor.
TIL that Go doesn't have bytes.Equal([]byte, string) or strings.Equal(string, []byte) because, as of 2019, the compiler is smart enough to turn string([]byte) into a cast rather than a copy (possibly with an allocation) when it's used in these kinds of comparisons.
I wish there was some central documentation of non-obvious "magical" optimizations like this.
bytes, internal/bytealg: simplify Equal

The compiler has advanced enough that it is cheaper to convert to strings than to go through the assembly trampolines to call runtime.memequal. Simplify Equal accordingly, and cull dead code from bytealg. While we're here, simplify Equal's documentation.

Fixes #31587