“Telegram has launched a pretty intense campaign to malign Signal as insecure, with assistance from Elon Musk” | @matthew_d_green
> Pavel Durov, the CEO of Telegram, has recently been making a big conspiracy push to promote Telegram as more secure than Signal. This is like promoting ketchup as better for your car than synthetic motor oil. Telegram isn’t a secure messenger, full stop. That’s a choice Durov made.
TheyWorkForYou is 20 years old and is starting a new project!
TWFY is a stalwart of British online democracy, a tool for tracking MPs and their voting interests and enabling their constituents to contact them directly.
Their new project, WhoFundsThem, is self-explanatory in its importance.
Zuckerman vs. Zuckerberg: why and how this is a battle over the public understanding of APIs, and why Zuckerman needs to lose and Meta needs to win
Imagine that you’re a cool, high-school, technocultural teenager; you’ve been raised reading Cory Doctorow’s “Little Brother” series, you have a 3D printer, a soldering iron, you hack on Arduino control systems for fun, and you really, really want a big strobe light in your bedroom to go with the music that you blast-out when your parents are away.
So you build a stepper-motor with a wheel and a couple of little arms, link it to a microphone circuit which does an FFT of ambient sound, and hot-glue the whole thing to your bedroom lightswitch so that the wheel’s arms can flick the lightswitch on-and-off in time to the beat.
If you’re lucky the whole thing will work for a minute or two and then the switch will break, because it wasn’t designed to be flicked on-and-off ten times per second; or maybe you’ll blow the lightbulb. If you’re very unlucky the entire switch and wiring will get really hot, arc, and set fire to the building. And if you share, distribute, and encourage your friends to do the same then you’re likely to be held liable in one of several ways if any of them suffer cost or harm.
Who am I?
My name’s Alec. I am a long-term blogger and an information, network and cyber security expert. From 1992-2009 I worked for Sun Microsystems, from 2013-16 I worked for Facebook, and today I am a full-time stay-at-home dad and part-time consultant. For more information please see my “about” page.
So what is an API? My personal definition is broad but I would describe an API as any mechanism that offers a public or private contract to observe (query, read) or manipulate (set, create, update, delete) the state of a resource (device, file, or data).
In other words: a light switch. You can use it to turn the light on if it’s off, or off if it’s on, and maybe there’s a “dimmer” to set the brightness if the bulb is compatible; but light switches have their physical limitations and expected modes of use, and they need to be chosen or designed to fit the desired usage model and purpose.
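To make the definition concrete, here is a toy sketch (mine, purely illustrative, not any real platform’s code) in which a light switch is exactly such a contract: operations to observe and manipulate the state of one resource, with hard limits on valid inputs:

```python
# Toy model of "an API is a contract to observe or manipulate state".
# All names here are illustrative only.

class LightSwitch:
    """One resource (a light), observed (read) or manipulated (written)."""

    def __init__(self):
        self._on = False
        self._brightness = 0  # percent; only meaningful when on

    # observe: query the state of the resource
    def is_on(self) -> bool:
        return self._on

    # manipulate: change the state, within the contract's limits
    def set_on(self, on: bool) -> None:
        self._on = on
        self._brightness = 100 if on else 0

    def dim(self, percent: int) -> None:
        # Part of the contract: inputs outside the design envelope are
        # refused, rather than (say) setting the wiring on fire.
        if not 0 <= percent <= 100:
            raise ValueError("brightness must be 0-100")
        self._on = percent > 0
        self._brightness = percent


switch = LightSwitch()
switch.set_on(True)
assert switch.is_on()
switch.dim(30)        # fine: within the contract
```

The point of the sketch is the `ValueError`: a designed interface states, and enforces, its expected modes of use.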
The modern equivalent for web-browsers is called Selenium WebDriver and is widely used by both automated software testers and criminal bot-farms, to name but two purposes.
So yes: the tech industry — or perhaps: the tech hacker/user community — has a long history of wiring programmable motors to light switches and hoping that their house does not catch on fire… but we should really aspire to do better than that… and that’s where we come to the history of eBay and Twitter.
History of Public APIs
In the early 2000s there was a proliferation of platforms that offered various services — “I can buy books over the internet? That’s amazing!” — and this was all before the concept of a “Public API” was invented.
People wanted to “add-value” or “auto-submit” or “retrieve data” from those platforms, or even to build “alternative clients”; so they examined the HTML, reverse-engineered the functions of Internal or Private APIs which made the platform work, wrote and shared ad-hoc tools that posted and scraped data, and published their work as hackerly acts of radical empowerment “on behalf of the users” … except for those tools which stole or misused your data.
The eBay API was originally rolled out to only a select number of licensed eBay partners and developers. […] The eBay API was a response to the growing number of applications that were already relying on its site either legitimately or illegitimately. The API aimed to standardize how applications integrated with eBay, and make it easier for partners and developers to build a business around the eBay ecosystem.
On September 20, 2006 Twitter introduced the Twitter API to the world. Much like the release of the eBay API, Twitter’s API release was in response to the growing usage of Twitter by those scraping the site or creating rogue APIs.
an ecosystem of ad-hoc tools that attempt to blindly and retrospectively track eBay’s own platform development would not offer standardisation across the tools that use those APIs, and so would actually limit the potential for third-party client development; each tool would be working with different assumed “contracts” of behaviour that were never meant to be fixed or exposed to the public, and would also replicate work
the proliferation of man-in-the-middle “services” that would act “on your behalf” — and with your credentials — on the Twitter and eBay platforms presented both a massive trust and security risk to the user (fraudulent purchases? fake tweets? stolen credentials?) and a consequent reputational risk to the platform
But at the most fundamental level: Public APIs exist in order to formalise adequate contracts by which third parties can observe or manipulate “state” (e.g. user data, postings, friendships, …) on the platform.
By offering a Public API the platform also frees itself to develop and use Private APIs which can service other or new aspects of platform functionality, and it’s in a position to build and “ring-fence” the Public API service in the expectation of both heavy use and abuse being submitted through it.
Similarly: the Private APIs can be engineered more simply to act like domestic light-switches: to be used in limited ways and at human speeds; it turns out that this can be important for matters like privacy and safety.
Third parties benefit from Public APIs by having a guaranteed set of features to work with, proper documentation of API behaviour, confidence that the API will behave in a way that they can reason about, and an API lifecycle-management process which enables them to make their own guarantees regarding their work.
The shortest summary of the lawsuit that I have heard from one of its ardent supporters, is that the lawsuit:
[…] seeks immunity from [the Computer Fraud and Abuse Act] and [the Digital Millennium Copyright Act] [for legal] claims [against third parties or users] for automating a browser [to use Private APIs to obtain extra “value” from a website] and [the lawsuit also] does not seek state mandated APIs, or, indeed, any APIs
(private communication)
To make a strawman analogy so that we can defend its accuracy:
Let’s build and distribute motors to flick lightswitches on and off to make strobe lights, because what’s the worst that could happen? And we want people to have a fundamental right to do this, because Section 230 says we have such a right. We won’t be requiring any new switches to be installed, we just want to be allowed to use the ones that are already there, so it’s easy and low-cost to ask for, and there’s no risk to us doing this. But we also want legal immunity just in case what we provide happens to burn someone’s house down.
In other words: a return to the ways of the early 2000s, where scraping data and poking undocumented Private APIs was an accepted way to hack extra value into a website platform. To a particular mindset — especially the “big tech is irredeemably evil” folk — this sounds great, because clearly Meta intentionally prevents your having full, automated remote control over your user data on the grounds that it’s terribly valuable to them, and their having it keeps you addicted, so it helps them make money …
And you know what? To a very limited extent I agree with that premise — or at least that some of the Facebook user-interface is unnecessarily painful to use.
E.g. I feel there is little (some, but little) practical excuse for the heavy user friction which Facebook imposes upon editing the “topics you may be interested in receiving adverts about”; but the way to address this is not to encourage the proliferation of browser plugins (of dubious provenance regarding privacy and regulatory compliance, let alone uncertain behaviour) which manipulate undocumented Private APIs.
Apart from any other reason, as alluded above, Private APIs are built in the expectation of being used in a particular way — e.g. by humans, at a particular cadence and frequency — and on advanced platforms like Facebook they are engineered with those expectations enforced by rate limits not only for efficiency but also for availability, security and privacy reasons.
If you start driving these Private APIs at rates which are inhuman — 10s or 100s of actions per second — then you should and will expect them to either be rate-limited, or else possibly break the platform in much the same way that flicking a lightswitch at such a rate would break that lightswitch or bulb.
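The standard engineering answer to inhuman rates is a rate limiter. A minimal token-bucket sketch (my own illustration, not any platform’s actual implementation) shows how an API endpoint refuses calls that arrive faster than its design cadence, instead of breaking:

```python
import time

class TokenBucket:
    """Allow a few actions per second (human cadence); refuse inhuman bursts."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens refilled per second
        self.capacity = burst          # maximum saved-up tokens
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # the caller gets an HTTP 429, not a broken platform


# A human flicking the switch occasionally is fine; 100 flicks in a
# tight loop mostly get refused.
bucket = TokenBucket(rate_per_sec=1.0, burst=3)
results = [bucket.allow() for _ in range(100)]
assert results[:3] == [True, True, True]   # the initial burst is allowed
assert results.count(True) <= 4            # thereafter, refusals
```

Driving the endpoint harder doesn’t make it go faster; it just raises the refusal rate, which is exactly the property a Private API built for human speeds relies upon.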
With this we can describe the error in one of the proponent’s claims: We aren’t requiring any new [APIs] to be installed, we just want to be allowed to use the ones that are already there — but if the Private API is neither intended nor capable of being driven at automated speeds then either something (the platform?) will break, or else there will be loud demands that the Private APIs be re-engineered to remove “bottlenecks” (rate limits) to the detriment of availability and security.
But if you will be calling for the formalisation of Private APIs to provide functionality, why are you not instead calling for an obligation upon the platform to provide a Public API?
Private APIs are not Public APIs, and Public APIs may demand registration
The general theme of the lawsuit is to demand that any API which a platform implements — even undocumented Private ones — should be legally treated as a Public API, open for use by third-party implementors, without any reciprocal obligation that the third-party client obtain an “API Key” to identify itself, or abide by particular behaviour or rate limits.
In short: all APIs, both Public and Private, should become “fair game” to third-party implementors, and the Platforms should have no business distinguishing between one third party and another, even in the instance that one or more of them are malicious.
This is a dangerous proposal. Platforms innovate new functionality and change their Private API behaviour at a relatively rapid speed, and there is currently nothing to prevent that; but if a true “right to use” for a Private API becomes somehow enshrined, what happens next?
Obviously: any behaviour which interferes with a public right-to-use is illegal, so it will therefore become illegal to change or remove Private APIs — or at the very least any attempt to do so will lead to claims of “anticompetitive behaviour” and yet more punitive lawsuits. The free-speech rights of the platform will be abridged by compulsion to never change APIs, or to support legacy-publicly-used-yet-undocumented APIs forever more.
I don’t want to keep flogging this horse, so I am just going to try and summarise in a few bullets:
Private APIs exist to provide functionality to directly support a platform; they are implemented in ways which reflect their expected (usually: human) modes of use, they are not publicly documented, they can come and go, and this is normal and okay
Public APIs exist to provide functionality to support third-party value-add to a platform; they are documented and offer some form of public “contract” or guarantee of behaviour, capability, and reliability. They are often designed in expectation of automated or bulk usage.
Private APIs do not offer such a public contract; they are not meant to be built upon other than by the platform itself. They are meant to be able to “go away” without fuss, but if their use is a guaranteed “right” then how can they ever be deprecated?
If third parties want to start using Private APIs as if they were Public APIs then the Private APIs will probably need to be re-engineered to support the weight of automated or bulk usage; but if they are going to be re-engineered anyway, why not push for them to become Public APIs?
If some (in-browser) third party tools claim to be acting “for the public good” then presumably they will have no problem in identifying themselves in order to differentiate themselves from (in-browser) evil cookie-stealing malware and worms; but to differentiate themselves would require use of an API Key and a Public API — so why are the third-party tool authors not calling for the necessary Public APIs?
Just because an academic says “I wrote a script and I think it will work and that I [or one of your users] should be allowed to run it against your service without fear of reprisal even though [we] don’t understand how the back end system will scale with it” — does not mean that they should be permitted to do so willy-nilly, whether against Facebook or against your local community Mastodon instance.
This is not something I was expecting, or ever imagined I would write; I’ve just heard the news.
This is a tremendous loss for us all.
Professor Ross Anderson, FRS, FREng
Our dear friend and treasured long-term campaigner for privacy and security, Professor of Security Engineering at Cambridge University and Edinburgh University, Lovelace Medal winner, died suddenly at the family home in Cambridge overnight.
Heather Burns on Twitter: “This piece … on the Russian digital surveillance system over 540 million teenagers’ accounts, which is presented as “suicide prevention” but is really political surveillance for the Kremlin, reads like a safety tech vendor’s best sales pitch.”
Note to policymakers: if your vision for keeping young people safe online involves the same kind of technical infrastructure which is being used to manage an actual genocide, you may wish to scrap your vision and start again. https://t.co/WYRtxANMUT
We’re in the middle of a perfect storm for rollback of the “open web” and burgeoning online surveillance
Looking at fallout of the KOSA hearings today — and subsequent commentary — I remain optimistic for the development of social technology & communication but I’m beginning to think the open web may basically “Do a Yahoo!” and fade, largely because of our self-appointed privacy, safety and national-security activists.
We are living at an unfortunate confluence of several movements in civil society and politics:
people who believe that online speech is directly comparable to physical harm
privacy activists who killed cookies to protect us from GAFAM “tracking”, thereby wiping out competition in advertising, imperilling small business, and encouraging some of the largest platforms (Chrome, iOS) to simply spy on us instead
failing and minor platforms which — given the death of cookies — may perceive the KnowYourCustomer™ elements of safety-demanded “age verification” as a potential parachute for their advertising revenue
political and legal activists who demand “data localism” because they believe that data protection hinges greatly upon lawyers having local CEOs and preventing seizure of servers, perhaps more than it does upon preventing intrusion, scraping and hacking
security services, sitting in the middle of this hurricane, trying to make it whirl faster because making all internet activity attributable and trackable is food for their existence
We are in for a rough few years. There will be losses. The “app” ecosystem will likely take a big — possibly majority — chunk out of the “open web” as users demand features which are more easily built without the abstraction of traditional web/web-like services.
trurl: command line tool for URL parsing and manipulation
One software thing I built at Facebook was called Host — basically a PHP library to manipulate website hostnames without error-prone regular expressions, bad assumptions and “hunting for dots”. It saved a lot of potential problems and a moderate amount of CPU (0.1%+?) and I can see the same thinking here.
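I haven’t published Host, but the same idea is easy to sketch in Python (my sketch, not the Facebook library): treat the hostname as structured data rather than hunting for dots with regexes.

```python
from urllib.parse import urlsplit

def hostname_labels(url: str) -> list[str]:
    """Split a URL's hostname into its dot-separated labels."""
    host = urlsplit(url).hostname   # parsed + lowercased for us
    if host is None:
        raise ValueError(f"no hostname in {url!r}")
    return host.split(".")

def is_subdomain_of(url: str, parent: str) -> bool:
    """True if url's host equals parent, or ends with '.' + parent.
    (A real implementation also needs the Public Suffix List; this doesn't.)"""
    labels = hostname_labels(url)
    parent_labels = parent.lower().split(".")
    return labels[-len(parent_labels):] == parent_labels

assert hostname_labels("https://www.Example.COM/path") == ["www", "example", "com"]
assert is_subdomain_of("https://cdn.example.com/x", "example.com")
assert not is_subdomain_of("https://notexample.com/", "example.com")
```

The last assertion is the classic regex bug this approach avoids: a naive “ends with example.com” string test happily matches `notexample.com`, whereas label-wise comparison does not.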
British man acquitted over London-Spain flight bomb hoax | …Snapchat leaking messages to security services & supporting KOSA? Not a good combo for user privacy | HT @rebeccamkern
Snapchat must* be surveilling their non-encrypted chats (i.e. all of them, but they travel over HTTPS for privacy) & triggering on sensitive words, either on-server or on-client, reporting to law enforcement who then over-react … PLUS they announced support for the illiberal & misconceived KidsOnlineSafetyAct.
The two, combined, are not a great indicator for how they view user privacy.
A Spanish court has cleared a British man of public disorder, after he joked to friends about blowing up a flight from London Gatwick to Menorca […] A key question in the case was how the message got out, considering Snapchat is an encrypted app. One theory, raised in the trial, was that it could have been intercepted via Gatwick’s Wi-Fi network. But a spokesperson for the airport told BBC News that its network “does not have that capability”. In the judge’s resolution, cited by the Europa Press news agency, it was said that the message, “for unknown reasons, was captured by the security mechanisms of England when the plane was flying over French airspace”. The message was made “in a strictly private environment between the accused and his friends with whom he flew, through a private group to which only they have access, so the accused could not even remotely assume… that the joke he played on his friends could be intercepted or detected by the British services, nor by third parties other than his friends who received the message,” the judgement added. It was not immediately clear how UK authorities were alerted to the message, with the judge noting “they were not the subject of evidence in this trial”.
[*] if the cause is not Snap themselves then their transport security is broken and that’s an even bigger story, being either a weakness in the app or an undocumented man-in-the-middle HTTPS backdoor implemented by authorities in the airport wireless network
Previously
Scoop for @politico– @Snapchat is the first social media platform to support the Kids Online Safety Act. This comes as CEO Evan Spiegel joins the heads of Meta, TikTok, X and Discord next week in a @JudiciaryDems hearing on child sexual abuse material. https://t.co/PTKLQpqtHP
Is anybody working on algorithmic, engagement-led feed generation for Mastodon?
Serious question. One reason I still visit & use Twitter is: there are people in other time zones whose fediverse content is basically unseen by me, since they post at times when I’m parenting/asleep and so are buried under a chronological timeline.
Mostly they also post to Twitter which mostly automatically solves that problem for me.
I remember sometime around 2008 – or whenever it was that “information overload” was fashionable to complain about – reading a tweet from somebody saying “there is so much traffic on Twitter that I can no longer read every tweet” [presumably of people that they followed]
It would be good for Mastodon to start addressing that.
Via @tychotithonus, a novel idea: maybe it’s about time we started talking honestly about what had to be done to combat Y2K, to defuse the disinformation about it
Smart idea:
The hardest part about refuting Y2K disinfo is how many problems were fixed quietly, in part to mitigate risk of litigation (negligence, etc.). People have stories they can’t tell.
At this point, I think enough years have passed that a formal amnesty – to encourage companies to disclose just how bad some of the problems were – would be in our historical best interest.
What the history of OpenBoot, Phrack, Mudge & Solaris can teach us about the wisdom (or not) of Apple’s building their iPhone security debugging-backdoor-NSA-hack thing
In the days before people really, really, cared about security — when it was more amazing that mainstream computers worked at all rather than that they offered falsifiable guarantees about privacy and integrity, and most of all in the days before hackerdom decided that it would be great if all the world’s computation ran on “…surely 640Kb is enough for anyone?” glorified MS-DOS personal computers rather than on architectures specifically designed to carry the weight of “big data”… back in those days there was the concept of a monitor.
By monitor we don’t mean VDU nor LCD screen, but instead that what you considered to be your entire computer operating system was something which could be paused, inspected, poked, amended, restarted or halted, all by a little parasitic computer system which probably polled the device tree and booted it up in the first place. The consequence of the monitor was that — beyond being a mere “boot loader” — you were essentially running your entire operating system kernel under a live debugger on a 24×7 basis.
This “debugger” was the monitor; sometimes it was separate hardware, sometimes it was just a firmware-level subsystem with which you could interrupt your operating system at any point, and call back into it. At Sun Microsystems (in particular, but much the same was available elsewhere) the monitor evolved into a complete and flexible little solution called OpenBoot, which subsequently became a PCI standard (it is/was(?) even in MacOS) and it was massively powerful.
Unfortunately: with great power comes great responsibility, which (per the first paragraph) people were not really aware of, yet.
Fire up the trusty OpenBoot system via L1-A and get the pointer to the cred structure via:
  ok hex f5e09000 18 + l@ .
  f5a99858
  ok go
Now, get the effective user id by:
  ok hex f5a99858 4 + l@ .
  309
  (309 hex == 777 decimal)
  ok go
Of course you want to change this to 0 (euid root):
  ok hex 0 f5a99858 4 + l!
  ok go
Check your credentials!
  Alliant+ id
  uid=777(mudge) gid=1(other) euid=0(root)
tl;dr — press some keys, type a magic incantation in Forth and you become “root”
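Decoded: the Forth words are a 32-bit peek (`l@`) and poke (`l!`) at fixed offsets into the proc/cred structures. A sketch of the same logic, with memory faked by a dictionary and the addresses taken from Mudge’s example:

```python
# Mudge's OpenBoot trick, restated: 'l@' is a 32-bit peek, 'l!' a 32-bit
# poke. Physical memory is faked with a dict; addresses/offsets follow
# his example session.

memory = {
    0xF5E09000 + 0x18: 0xF5A99858,  # proc structure holds a pointer to cred
    0xF5A99858 + 0x4:  0x309,       # cred + 4 = effective uid (0x309 == 777)
}

def peek(addr: int) -> int:          # OpenBoot: addr l@
    return memory[addr]

def poke(addr: int, value: int):     # OpenBoot: value addr l!
    memory[addr] = value

cred = peek(0xF5E09000 + 0x18)       # follow the pointer to cred
assert cred == 0xF5A99858
assert peek(cred + 0x4) == 777       # euid=777 (mudge)
poke(cred + 0x4, 0)                  # overwrite the euid with 0...
assert peek(cred + 0x4) == 0         # ...and you are now root
```

No exploit, no bug: just the monitor’s designed-in ability to read and write live kernel memory, used for a purpose nobody had ring-fenced against.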
Let’s just say that OpenBoot was a very powerful and essential medicine… but the provision of that power caused security side-effects/issues that were not going to go away in any short period of time. An excellent little white paper from GIAC provided a synopsis and context a few years later, in 2001.
The technique of elevating user privileges by manually editing system runtime memory is an exploit that can be used to subvert all operating system security measures. This vulnerability is not operating system platform specific and exists in all computer hardware that utilizes a programmable firmware component for hardware control and bootstrapping procedures. This paper will explain this vulnerability as a class of exploit and utilize the SUN Microsystems’ OpenBoot programmable ROM (PROM) and Solaris as a technical example.
Speaking as one of the people who had to clean up the mess: we/Sun Microsystems should have done a lot more to mitigate the ability of people to get at this powerful medicine; this issue was significant amongst others which drove Sun’s internal security community to create and force the adoption of the “Secure By Default” initiative, and to formalise customer provision and promote adoption of the Solaris Security Toolkit which (amongst many other configuration changes) locked-down several different routes by which the OpenBoot monitor could be exploited.
From the perspective of 2023: this all should have happened 5, perhaps 10 years before Mudge’s posting, but there was neither the corporate will — nor customer will/expertise — to address the matter at that time.
Operation Triangulation: The last (hardware) mystery | …if this turns out to be an NSA-enabling backdoor, Apple’s security reputation will be toast
Our guess is that this unknown hardware feature was most likely intended to be used for debugging or testing purposes by Apple engineers or the factory, or that it was included by mistake. Because this feature is not used by the firmware, we have no idea how attackers would know how to use it.
Why I’m not even slightly scared about the future | …good read + a thought-provoking observation from Femi Oluwole; I wonder why power may be afraid of TikTok & Social Networks?
Criminals will start wearing extra prosthetic fingers to make surveillance footage look like it’s AI generated and thus inadmissible as evidence
I’m sure the NCA would agree that it’s obviously necessary to ban all silicone prosthetics immediately, and of course there would be absolutely no downsides to doing so.
“Could there be an internet where Tesco, Amazon, Netflix, BBC, airlines, banking etc work well but there are major changes elsewhere?” | child-safety activists ask for a read-only internet
In a sense this is one of the scariest things I’ve read, because it demands removing interactivity and freedom of the user’s voice from the internet; we would be permitted retail and other “consumer” services, and denied anything which might enable user-to-user communications on the grounds that it might harm children, or footballers, or similar.
It’s doubly ironic because the author — child-safety activist John Carr — is running and writing on an independent blog, and one can only wonder whom he asked for permission to do so.
Could there be an internet where Tesco, Amazon, Netflix, BBC, airlines, banking etc work well but there are major changes elsewhere because the public and Governments get fed up of the criminal and other forms of abuse linked with various interactive elements? I think there could.
“Suffice it to say that everyone in possession of a copy of the LAION-5B images has hundreds if not thousands of instances of CSAM” | …so that’s 0.0001% of the content, then
So David Thiel at Stanford has posted a much-reported paper/story which tells us that the dataset which drives Stable Diffusion and a bunch of other AI systems, has scraped:
hundreds if not thousands of instances of CSAM (and a much larger number of instances of NCII more broadly)
…and it struck me to ask “how many images are there in LAION-5B so we can get a percentage?”
It turns out that the number of images in LAION-5B is five billion – hence the 5B:
LAION-5B was released in early 2022 by a German nonprofit that has received funding from several AI startups. The dataset comprises more than 5 billion images scraped from the web and accompanying captions. It’s an upgraded version of an earlier AI training dataset, called LAION-400M, that was published by the same nonprofit a few months earlier and includes about 400 million images.
So if we generously interpret “…if not thousands…” to mean “five thousand” then some simple maths tells us that this is 0.0001% of the content, or literally “one in a million”.
This is the “needle in a haystack” ballpark – again, literally, if a heavyweight darning needle weighs 1 gram, then one million needles would weigh 1000kg, and the largest 4x4x8 haybales max-out at 2000lb / a little over 900kg.
So there can be more than 9x more mouse poop in the flour which makes your bread, than there generously is CSAM in the LAION-5B dataset.
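For the sceptical, the arithmetic above is trivial to check (taking the generous figure of five thousand images, and one gram per needle):

```python
# Sanity-check the proportions quoted above.
csam_estimate = 5_000            # generous reading of "if not thousands"
laion_5b_size = 5_000_000_000    # images in LAION-5B; hence the "5B"

fraction = csam_estimate / laion_5b_size
assert fraction == 1e-6                        # literally one in a million
assert round(fraction * 100, 10) == 0.0001     # i.e. 0.0001 percent

# The "needle in a haystack" scale: a million 1-gram needles...
needles_kg = 1_000_000 * 1 / 1000
assert needles_kg == 1000                      # ...weigh a metric tonne,
assert 2000 * 0.4536 < needles_kg              # more than a 2000 lb haybale
```

(The 0.4536 is simply kilograms per pound; a 2000 lb bale is a little over 907 kg.)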
“But this is all guesswork on your part / One image is one too many…”
The numbers are all above. Feel free to nitpick. Pick your own percentages. The FDA acknowledges that some poop in food is unavoidable, and the unstated goal of “Zero CSAM in a scraped dataset” will probably likewise be unachievable. Thiel himself acknowledges:
While it’s not surprising that a crawl of the public internet will contain some CSAM, there’s no reason to go gather data on that scale without appropriate safeguards. The project that seeded the LAION sets made some efforts to filter content with CLIP, but it didn’t do enough.
I wish that I could be as optimistic as @ciaranmartinoxf regarding the eventual wisdom of the British state regarding end-to-end encryption, but I cannot…
There will have to be at least 2x changes of Government before what Ciaran is asking for below can happen; the first will be an ouster of the Tories, which is necessary because they are fuelling the Home Office mindset (NB: not the other way around) that “The Tech Companies Must Be Brought To Heel” in the most authoritarian way possible, because they have a confused understanding of how social media is all of us, mediated; they recognise that the unwashed public having a voice is a bad thing for them, but they believe that the middlemen can/will be the ones to fix it.
The problem is: Labour are in the same position but for mirror reasons. They whine about billionaires and “surveillance capitalism” and channel Ciaran’s second tweet, re-interpreting it as “the role of Government is to create new and different ways to protect the most vulnerable [demographics]” which – being literally a statist party – to them also means “tech interventionism” and trying to stop technology rather than trying to improve humans.
We are in thrall to politicians who are trying to find levers to pull in pursuit of protecting people, rather than educating them towards invulnerability.
The only question is how much time is wasted before the state accepts the reality of basic modern communications security, & works out new & different ways to protect the most vulnerable in this new secure reality that users across the world demand 2/2https://t.co/fJ6YeGW4My
Halley’s Comet reaches Aphelion, is on its way back | props to @JohnSimpsonNews
I saw the comet in 1986 – my first year of studying Astronomy at UCL – and although it wasn’t a visual feast, it was amazing to be even passively observing something so rare and with such a tail and tale of historical importance.
It would be nice to see it again, but I, too, have to admit that the timing is unlikely to work. Cross fingers we’ll all be well enough to do so.
At 1 am GMT this morning Halley’s Comet reached its perihelion—the farthest end of its orbit—and turned to come back towards Earth. It’ll be here in 2061, so if you were born in 1980 or later you’ll have an evens chance of seeing it. Me, not so much. pic.twitter.com/WGLmVHmjjP