heise+ | Security: How certificates work on the web – and how they don't
The EU wants to force browsers to accept certain certificate authorities. In doing so, it is intervening in a sensitive system of technical requirements.
heise+ | Security: All the places root certificates hide
The contents of "trust stores" determine which certificates a piece of software trusts. We show by example which factors you can influence yourself.
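As a quick illustration of that dependency, here is a minimal Python sketch (using the standard ssl module) that lists the CA certificates the default SSL context trusts on one machine; which store actually gets loaded varies by OS and build, which is exactly the point:

    import ssl

    # A default context pulls in the platform's trust store (OS store,
    # bundled CA file, etc. -- this varies by system and Python build).
    ctx = ssl.create_default_context()
    # May print nothing on systems that load certificates lazily from a
    # directory rather than into memory up front.
    for cert in ctx.get_ca_certs():          # one metadata dict per CA cert
        subject = dict(rdn[0] for rdn in cert["subject"])
        print(subject.get("commonName", "<no common name>"))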
Lots of talk on here about #WebComponents. So I went looking at other sites to see if the excitement is shared.
HN: not many upvotes for anything related to the topic, and the comments are not enthusiastic about the tech.
X: some enthusiasm there, but in most cases I recognize the posters as active on #Mastodon 🤔
So why here, and now? Is it because a decentralized network attracts a certain kind of #webdev and they favor #standards and purity? Or is it something else?
I watch dev teams every week wrestling with major downstream consequences of not taking enough care over their work, and then I hear managers warning them "Beware of taking too much care!", and I wonder if they and I are perhaps living in different universes.
I sometimes wonder if I should switch to OpenBSD (they don't like letting absolute crap overwhelm the acceptable), but alas #Yunohost (#Debian) is currently proving almost beyond me (can't get my UPS to work yet!), so taking the step up to #openBSD might be a bit much!
Hello. Can anyone tell me how to go about promoting interoperability between health, e-health, and online appointment-booking solutions? Is there an existing interoperability protocol for other types of line-of-business apps? In law? In construction? Any standards? #interhop #santé #standards #esanté #securitesociale #solutionsfermeesalacon
Ready for AI: German companies not left behind yet
German companies have not yet fallen behind on AI. Timely investment will decide where Germany stands in the future.
Aside: I notice we use both #CaPoli and #CanPoli here in Canada. I will never forgive the internet standards people for going with the 2-letter ISO codes instead of the 3-letter ones for country TLDs and language codes.
3-letter codes would prevent the "is CA Canada or California?" confusion.
The biggest countries use 2-letter state/prov codes, so 3-letter country codes would remove the ambiguity.
My site should be Pxtl.can, written in en-can, not Pxtl.ca written in en-ca.
The cross-vendor smart home standard Matter is materializing
Matter is moving into smart homes. Still, quite a bit has to happen before the new standard delivers on its promise to unite the fragmented world of home automation.
If you are interested in running for Board or Council, please add a wiki page about your candidacy to one or both of the following sections by November 5th, 2023, 00:00 UTC. (The Board application does not require XSF membership.)
Sufficient air filtration for airborne viruses should be a building standard. And a mandatory upgrade for all corporate and rental properties.
It's like asbestos-removal or mould-remediation requirements, only here it means adding filtration to deal with airborne viruses, which should be much easier and cheaper to fix. Just because the toxin is invisible doesn't mean it shouldn't be dealt with effectively, with standards raised to account for it. #standards #building #covid #healthandsafety
I suggested expansion of the existing Robots Exclusion Protocol (e.g. "robots.txt") as a path toward helping provide websites and creators control over how their contents are used by #AI systems.
Shortly thereafter, #Google publicly announced their own support for the robots.txt methodology as a useful mechanism in these contexts.
While it's true that adherence to robots.txt (or the related webpage meta tags -- also part of the Robots Exclusion Protocol) is voluntary, my view is that most large firms do honor its directives, and if a move toward a regulatory approach were ultimately deemed genuinely necessary, a more formal mechanism would be a possible option.
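For concreteness, a minimal Python sketch of what that voluntary adherence looks like from a crawler's side, using the standard library's robotparser (the site URL and the "ExampleAIBot" user-agent token are placeholders invented for illustration):

    from urllib import robotparser

    # A well-behaved crawler checks the site's robots.txt before fetching.
    # "ExampleAIBot" and example.com are hypothetical placeholders.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()                                  # fetch and parse robots.txt

    if rp.can_fetch("ExampleAIBot", "https://example.com/articles/"):
        print("directives permit this agent to crawl this path")
    else:
        print("directives ask this agent not to crawl this path")

Nothing technically stops a crawler from ignoring the answer; the check is entirely on the honor system, which is the point above.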
This morning Google ran a livestream discussing their progress in this entire area, emphasizing that we're only at the beginning of a long road, and asking for a wide range of stakeholder inputs.
I believe of particular importance is Google's desire for these content control systems to be as technologically straightforward as possible (so building on the existing Robots Exclusion Protocol is clearly preferable to creating something entirely new), and for the effort to be industry-wide, not restricted to or controlled by only a few firms.
Also of note is Google's endorsement of the excellent "AI taxonomy" concept for consideration here. Essentially, the idea is that AI web-crawling exclusions could be specified by the type of use involved, rather than by which entity is doing the crawling. So a set of directives could be defined that applies to all AI-related crawlers, irrespective of who operates them, permitting (for example) crawlers gathering content for public-interest AI research to proceed while directing that the content not be taken or used for commercial generative AI chatbot systems.
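To make that concrete, here is a purely hypothetical robots.txt-style sketch of what use-class directives might look like; no such tokens exist in the published Robots Exclusion Protocol, and the names below are invented for illustration:

    # Hypothetical syntax: directives keyed to an AI use class,
    # not to any particular company's crawler.
    User-Agent: ai-research-crawling     # invented use-class token
    Allow: /

    User-Agent: generative-ai-training   # invented use-class token
    Disallow: /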
Again, these are of course only the first few steps toward scalable solutions in this area, but this is all incredibly important, and I definitely support Google's continuing progress in these regards.
Another organization has proposed a separate "ai.txt" initiative. They claimed some AI crawlers were already respecting it, but I started watching the counts: each day there are a few hits to robots.txt on my server, but not a single one for ai.txt yet.
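A rough Python version of that tally, assuming a combined-format web server access log (the log path is a placeholder; adjust for your own server):

    import collections

    counts = collections.Counter()
    with open("/var/log/nginx/access.log") as log:   # placeholder path
        for line in log:
            if "GET /robots.txt" in line:
                counts["robots.txt"] += 1
            elif "GET /ai.txt" in line:
                counts["ai.txt"] += 1

    print(counts)   # e.g. Counter({'robots.txt': 7})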
Why don’t EVs have standard diagnostic ports—and when will that change? (arstechnica.com)
OBD-II was implemented to monitor emissions, but EVs don't have tailpipes.