The latest version of the #HTML standard includes a warning that advises against using the #XML syntax (formerly known as #XHTML), stating that it's "essentially unmaintained"🧐 :
I am co-founding a new startup! #InputLab creates test data for thousands of formats, from electronic invoices to retail orders, covering all input features – and we just got 800k€ in initial funding to start as a #CISPA spin-off in September.
I figure doing XML in Rust is rather obscure. I queue for lunch, mention it to someone, someone else just ahead of me in the queue says "oh I am working on that too!"
I also chatted with two different speakers at the conference who had worked on a different XSLT engine in the past (way before Rust).
GIVEAWAY: for collectors, since I imagine they're of no use anymore, but it breaks my heart to throw them away.
Books on #Amazon and #XML.
In exchange for a shipping label with your details, or pickup in #Paris19.
Hey, #XML crowd! I've got a mark-up puzzle to solve that my trainer can't answer.
How do I tag Germany or Italy in a historical context that predates their existence as a codified state? So, for Germany, any time before 1871. Do I ignore historical reality and just go ahead and code it with [gw]?
I post a lot of sample code on this blog. My CodePen is full of little snippets of this and that. Quite often, these snippets need data to do something useful. A good example is my Lit example from this past week. Coming up with that data can be complicated, though, which is why I created a site for assorted test data. If you want to have a little rummage through it, I've also made the site's git repository public. While I was at it, I put it behind a Cloudflare proxy to speed it up a little.
Have any questions, comments, etc? Please feel free to drop a comment below.
Okay, finally published my #Rust powered parser and XML-specific parser packages to crates.io.
sipp: "Simple parser package"
spex: "Simple(ish) parser and extractor of XML"
I've not had time to push them to my Codeberg account yet, and there are problems with my initial attempt at README pages. But hopefully there's enough there to give an idea of what the packages offer.
Feedback welcome, especially about the "shape" of the public interface and how it "feels" in actual use.
Fun fact: had #ActivityPub object representation been #XML/#RDF instead of #JSON, little more than a thin wrapper with #XSLT and #noJavaScript would have been sufficient to serve them on the web, statically.
I was planning to have my parser and XML parser packages published to crates.io by now, but while writing example code I keep finding new features begging to be added, such as the ability to specify a default namespace before reaching for a chain of child elements.
But I'm pretty sure they say that scope creep always leads to the best outcome, right?
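For illustration only, here is a minimal sketch of what a "default namespace" helper for chained child lookups could look like. All names here (`Element`, `Cursor`, `with_default_ns`, `child`) are hypothetical and not taken from the actual sipp/spex crates:

```rust
// Hypothetical sketch: a cursor that remembers a default namespace so
// chained child lookups don't have to repeat it at every step.
struct Element {
    name: String,
    ns: Option<String>,
    children: Vec<Element>,
}

struct Cursor<'a> {
    el: &'a Element,
    default_ns: Option<&'a str>,
}

impl<'a> Cursor<'a> {
    // Set the namespace assumed by all subsequent child() calls.
    fn with_default_ns(mut self, ns: &'a str) -> Self {
        self.default_ns = Some(ns);
        self
    }

    // Find a child whose name matches and whose namespace equals the default.
    fn child(&self, name: &str) -> Option<Cursor<'a>> {
        self.el
            .children
            .iter()
            .find(|c| c.name == name && c.ns.as_deref() == self.default_ns)
            .map(|el| Cursor { el, default_ns: self.default_ns })
    }
}

fn main() {
    let doc = Element {
        name: "invoice".into(),
        ns: Some("urn:example".into()),
        children: vec![Element {
            name: "total".into(),
            ns: Some("urn:example".into()),
            children: vec![],
        }],
    };
    // Declare the namespace once, then chain lookups without repeating it.
    let root = Cursor { el: &doc, default_ns: None }.with_default_ns("urn:example");
    assert!(root.child("total").is_some());
    assert!(root.child("missing").is_none());
}
```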
Today's CMS exam naturally also touched on #XML basics and, beyond that, on concrete formats, standards, and workflows. It included a reference to the series #ForAllMankind with five errors and two supposed trigger points.
CRAN is looking for someone to maintain #XML package: #rStats
"So we are looking for a person volunteering to take over 'XML'.
Please let us know if you are interested."
The task is not easy: many thousands of packages depend on it. Whoever takes it over will be doing a great service to the R community.
I have a post about the situation they are in, but it's still missing the plots and some content. I'll update this toot with a link once I fix it.
I also wrote about my one disagreement with Russ, who advocates writing drafts in XML; I have become a strong advocate for using Markdown in most cases.
Is there some markup language you really like?
Do you have a vision of what a perfect markup language should look like?
Do you write your UIs without a markup language, just with code?
I have, belatedly, realised that my #Rust #XML parser needs to use dynamic dispatch, because the character encoding can only be determined at runtime. Which means all of my rigidly static generic structs need to have dynamic equivalents. But I want to keep the static generic versions too, so that (for example) a JSON parser can be built from them (JSON is always UTF-8, so no need for runtime determination).
The dynamic/static files are almost identical. Any way to avoid duplication?
Dear lazyverse: is there an XML validation tool using RelaxNG compact schemas that I can install on Fedora and that doesn't depend on Java? Assume I know about jing and rnv
Yes, of course #GNOME #Boxes, when I click "Edit Configuration" on my VM, dealing with raw #XML directly in a text editor is exactly what I want to do, why would anyone be surprised by that?
Phew, had me worried for a minute. I'm writing a simple XML 1.0 parser in #Rust just for practice, and on feeding it a 4.4MB XML file it took 56.5s to read it. I've done nothing to optimise it yet, but even so that sounded dire.
Then I remembered to use "release" mode, and the time dropped to 3.9s. Whatever the compiler is doing behind the scenes, I'll take that 14x speed boost, thank you.
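The gap comes from optimization settings: Cargo's dev profile defaults to `opt-level = 0`, while release uses `3`. If debug-build run times matter, the dev profile can be tuned in Cargo.toml, for example:

```toml
# Give dev builds some optimization, and fully optimize dependencies,
# while keeping fast incremental rebuilds of your own crate.
[profile.dev]
opt-level = 1

[profile.dev.package."*"]
opt-level = 3
```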