ian

@ian@www.ianbrown.tech


ian, to random

Twitter co-founder and former CEO Jack Dorsey caused a minor social media ripple last week when he left the Bluesky board… a project he founded and initially funded in his Twitter days. He’s now given a (slightly confused) interview explaining why.

He tells the interviewer he wants social media curation-algorithm choice (me too!). But he’s left the one major platform (Bluesky) which provides it. 🤷🏻‍♂️

He left because Bluesky is providing moderation tools some users wanted. And it’s too focused on the (slick, Twitter-like) app layer. But how else do you get the critical mass of users to make the underlying protocol worth connecting to? There were endless complaints from non-nerds that Mastodon/the Fediverse was just too complicated for non-techies to understand and use.

He thinks the Bluesky Public Benefit Corporation is too slick a business. How else do you attract continued funding, without relying on flaky billionaires? 🤷🏻‍♂️ 🤷🏻‍♂️ Also: he’s a wannabe cypherpunk, who I don’t think has learnt much from that movement’s experiences with “freedom technologies” in the 90s/00s 😵‍💫

He also has many confused thoughts about X/Twitter since he left (it’s all good, apparently.) I’m not sure he’s much of a loss to Bluesky.

https://www.ianbrown.tech/2024/05/10/founder-jack-dorsey-leaves-bluesky-because-reasons/

ian, to random

You have to analyse every Apple announcement through the lens of how the company will use it to maintain its market power and attack regulation. So, will Apple’s promised Rich Communication Services (RCS) support make iMessage fully interoperable at least with Google’s Messages? What would the most grudging compliance with Chinese 5G regulations look like?

Google apparently makes RCS support ubiquitous regardless of carrier support (carrying it over IP), as well as via specific telco gateways. Will Apple do the same, or push individual telcos to enable RCS support on their networks? (Many already do.)

Apple won’t support Google’s end-to-end encryption extension, but will instead work to standardise end-to-end encryption in RCS itself. How long will that take?

Trade body GSMA is responsible for the RCS standard. Telcos, unlike Internet standards developers, have in the past been all too open to developing backdoored encryption standards for mobile communications. Will Google and Apple be able to override this here?

I haven’t tried digging out a good translation of the relevant Chinese 5G regulations, but they are allegedly the source of Apple’s change of mind on RCS support. Supporting it within a single country of course does not mean support anywhere else in the world. Many (most?) of the DMA gatekeepers are trying to limit DMA benefits to their EU users (and in Apple’s case withdrawing them once a user leaves the EU for 30 days!)

https://www.ianbrown.tech/2024/04/30/1905/

ian, to random

Former UK Prime Minister Tony Blair is STILL maddeningly naive about technology, declaring it “apolitical” even while telling the growing number of governments his “non-profit” advises tech will “change everything” 🫠

This unusually challenging Sunday Times interview with Blair from last weekend notes: “Some will roll their eyes at this perma-polished architect of the third way — the man whose embrace of globalisation, deregulation and high immigration arguably contributed to the 2008 financial crisis and even Brexit — returning to evangelise about technology’s power” 🙄🙄

His “institute” is financially supported, of course, by people whose wealth is built on tech, like Oracle billionaire Larry Ellison. And as interviewer Oliver Shah also notes: ‘After Kissinger died last November, Blair said he had been “in awe of him”, adding: “If it is possible for diplomacy, at its highest level, to be a form of art, Henry was an artist.”’

Worryingly for the 🇬🇧, Blair is still obsessed with ID cards (now in “digital” form.) They can be used to track everyone everywhere and sort out illegal immigration, apparently 😱 ‘Blair seems drawn to the model of benevolent dictatorship.’ He certainly does 🤢

https://www.ianbrown.tech/2024/04/24/1889/

ian, to random

Prof. David Erdos has shared his latest (excellent) research “showing i) little UK GDPR enforcement, ii) worrying gap with formal law expectations & iii) limited accountability for this.”

A less polite version would be: the 🇬🇧 government has demonstrated how a law on the books it dislikes (the General Data Protection Regulation) can be undermined by the appointment of supine or actively hostile Information Commissioners. (As prime minister, Margaret Thatcher was against its predecessor, the Data Protection Directive, from the start; not much has changed.)

I hope the European Commission is not going down the same route with the Digital Markets Act’s Art. 7 (on NIICS interoperability), which it was hostile to from start (early 2020) to finish (enforcement). Legislators learned from the GDPR that it is too easy for national regulators to be deliberately undermined by governments looking to attract technology firm investment (see also: Ireland and Luxembourg). The Commission therefore has a central enforcement role. So I’m especially disappointed by the flimsiness of its finally-published decision not to designate iMessage as a DMA gatekeeper NIICS. It hardly justifies the “exceptional” non-designation decision (Art. 3(5)), or “manifestly call[s] into question” the quantitative tests it meets [1]. I wonder if Meta now feels slightly foolish to have obeyed that provision in (somewhat) good faith 🫠

I still remember the jaw-dropping moment the new 🇬🇧 Information Commissioner in 2009 told a law conference (just about his first public appearance) that he didn’t think data protection law should apply to the private sector. (He previously ran the “self-regulatory” Advertising Standards Authority.) It’s fortunate indeed for GDPR enforcement that it contains rights of private action, so effectively taken up by Max Schrems. Meanwhile, the Commission’s lack of legal action to force some member states to properly implement the legislation, its enchantment with mass surveillance/data retention, and some of its adequacy decisions, are much less impressive than the Court of Justice’s judgments in Schrems’ two cases.

I was reminded last week, talking to a Big Tech competitor, that these much smaller firms have to be extremely cautious about upsetting a company they may rely on for key resources; and the Commission has spent most of its time preparing for DMA enforcement talking to those two groups (gatekeepers and their competitors). So perhaps Schrems’ None of Your Business, or something similar, will have to take up the rights of the individuals the legislation is ultimately supposed to help 🤷🏻‍♂️ Fortunately the DMA also contains rights of private action, as well as the ability of organisations to take representative actions (thanks to campaigning by consumer and digital rights groups in its final stages). As with the Schrems I and II cases, these apparently small issues can ultimately have enormous global impact [2].


[1] Where does the DMA talk about the relative intensity of use of one core platform service versus another? This provides two of the three reasons for the decision! Who cares if iMessage for Business is lightly used, given it’s likely iMessage itself is used by many microbusinesses, very few of whom I imagine were part of the “corporate users of iPhone to whom the Commission reached out during the market investigation”? Really, the EC didn’t even bother with a large-scale survey, or demand data from Apple?

I also heard from an impeccable source Apple threatened to withdraw iMessage from the EU if it had been DMA-designated. The EC should not be rewarding such blackmail, even if it was highly likely to be a bluff.

[2] For now, we might have to rely on technology and philanthropy to improve messenger interoperability, such as this great project: a cross-platform, memory-safe OpenMLS library to enable interoperable, end-to-end encrypted messaging (E2EE) in multiple clients, combining “Matrix’s decentralized and federated infrastructure with Signal’s low metadata footprint.” 🎯

What’s happening with TikTok in the US is a strong reminder of the vulnerability of centralized platforms to censorship and surveillance. The Open Technology Fund notes Signal “provides a high level of metadata protection, but is centralized and thus easily censored. In addition, Signal cannot efficiently provide E2EE for large-group communications.” I hope Signal will move in this direction over time, as well as towards interoperability with other platforms, whether implementing its own protocol (with metadata guarantees) or the IETF’s open Messaging Layer Security standard.

https://www.ianbrown.tech/2024/04/23/1874/

ian, to random

Over 30 European police forces have (yet again) attacked the increasing deployment of end-to-end encryption (such as on Meta’s three platforms WhatsApp, Messenger and soon Instagram.)

This is how powerful policy stakeholders (like law enforcement and big business) often win arguments. They never, ever give up, repeating the same arguments ad nauseam — over decades if necessary — regardless of any evidence which emerges 🫠

Even intelligence insiders have acknowledged that, contrary to scare stories about the spread of encrypted “dark spaces”, the widespread use of connected tech has made this century a ‘Golden Age of Sigint’ (signals intelligence/surveillance).

The law enforcement statement makes the same “binary choice” error they accuse others of. “Companies will not be able to respond effectively to a lawful authority.” To do what? “Nor will they be able to identify or report illegal activity on their platforms.” Wildly inaccurate. “As a result, we will simply not be able to keep the public safe.” 💩 This would be a much more productive debate if police chiefs would acknowledge even the smallest amount of nuance, rather than shroud-waving.

What types of systems can be built which protect vulnerable people and privacy? Meta claims it is doing this; where are the independent evaluations of their and other companies’ claims? Shouldn’t the UK govt make use of the sterling research work it has funded by examining this question?

European police have gathered evidence from tens of thousands of encrypted mobile phones (using EncroChat) which has led to thousands of prosecutions (with important human rights and evidential quality questions raised.) What lessons can we learn from that?

The distinguished and sadly recently deceased Prof. Ross Anderson published two powerful analyses of these issues in the last 18 months alone. And in February, the European Court of Human Rights determined:

Weakening encryption by creating backdoors would apparently make it technically possible to perform routine, general and indiscriminate surveillance of personal electronic communications. Backdoors may also be exploited by criminal networks and would seriously compromise the security of all users’ electronic communications. The Court takes note of the dangers of restricting encryption described by many experts in the field. (par 77)

Societies would be much better off — rights respected, criminals investigated and vulnerable people protected — if the policing and intelligence organisations pushing this agenda would learn a little from their almost 50 years of failing to get end-to-end encryption banned 🫠

https://www.ianbrown.tech/2024/04/22/1852/

ian, to ai

Forthcoming reports for the EU by two former Italian prime ministers, “Super” Mario Draghi [1] and Enrico Letta [2], are likely to be very influential on the next European Commission and Parliament. Here are some potentially far-reaching tech regulation-related comments they’ve made so far.

Former central banker, academic economist and Goldman Sachs employee Mario Draghi is focusing on industrial consolidation in defence, energy and telecommunications, and suggests “scale is also essential for developing new, innovative medicines, through the standardisation of the EU patients’ data, and the use of artificial intelligence, which needs all this wealth of data we have – if only they could be standardised.” [1] This is likely to supercharge the European Commission’s #DataSpaces project, including the just-agreed Health Data Space, and data strategy more broadly.

Draghi wants “a new common regulatory regime for start-ups in tech.” Watch out for sweeping exemptions from the General Data Protection Regulation (GDPR), then later pressure to widen them to other businesses. We would also likely see a storm of lobbying against rules on “killer acquisitions”.

Draghi suggests the EU’s public High Performance Computing network “could be used by the private sector – for instance AI startups and SMEs – and in return, the financial benefits received could be reinvested to upgrade HPCs and support an EU cloud expansion.”

He appears to look admiringly at 🇺🇸/🇨🇳 oligopolies: “To produce more investment, we need to streamline and further harmonise telecoms regulations across Member States and support, not hamper, consolidation”, claiming “investment per capita is half of that in the US” (Prof. Tommaso Valletti responded that the investment claim is “factually untrue”.)

Meanwhile, it is reported that academic political scientist and career politician Enrico Letta “will use his report to argue that Brussels must use the next five years to pursue the integration of national markets for financial services, energy and telecoms. He will also call for EU merger rules to be changed to allow for more market consolidation.” [2]

Letta concludes his interview: US President “Trump 2 will be different from Trump 1… The single market of the beginning was for a small world, now we need a single market with teeth for a big world.” Let’s hope instead for Biden 2, (much) more open to cooperation with 🇪🇺 in his industrial strategy (the door is already ajar).


I hope civil society will be ready for this onslaught of “competitiveness” justifications for reducing antitrust enforcement and protections for individuals in the EU. Letta and Draghi are distinguished men, but let’s just say they come from one particular (centrist, technocratic, white Italian male 😉) perspective in the wide range of politics which has shaped the European Union since its creation.

The GDPR was agreed in very specific (Edward Snowden-boosted) circumstances, and was by then “the most lobbied law in EU history”. Re-opening it to a further storm of business and national security/law enforcement lobbying would potentially be a disaster for human rights, as positive as narrower strengthening (eg centralised Brussels enforcement against Big Tech, DMA-style) could be. Two factors might protect the Digital Services Act and Digital Markets Act: they were passed much more recently; and (at least so far) they have had a much bigger impact on non-European firms. The same can be said of the controversial “General-Purpose AI” rules in the AI Act.

That said: as Letta concludes, “The single market has long been plagued by national disregard for EU rules, haphazard enforcement, and resistance from capitals to centralising regulatory powers.” Ireland and Luxembourg’s approaches to GDPR enforcement could not illustrate this better (alongside the UK’s pre-Brexit). I wonder if the narrow, procedural GDPR harmonisation reform underway could be widened to include more centralised enforcement, perhaps by the European Data Protection Supervisor 🤔 (and without “clarifications” of the scope of national security/law enforcement exclusions, particularly on data retention and international data transfers).

https://www.ianbrown.tech/2024/04/17/1804/

#AI #BigTech #CloudComputing #DataSpaces #EUHDS #GDPR #HPC

ian, to random

Both the UK’s Competition & Markets Authority and the European Commission’s Competition DG are looking for “digital” experts for very senior management roles. Unfortunately I think it’s a mistake to combine deep subject expertise and large-scale management in one role, as very few people have experience of both. The EU position is less of an issue, as it involves managing a small-ish team of subject specialists, and has now closed for applications, although I wouldn’t be surprised if it’s reopened a second time (here’s more on the EU approach). But the CMA role has “an expected team size of up to 200 colleagues” 🫣

My experience of the UK civil service is that seniority (and even vaguely half-reasonable salaries) only comes with managing very large groups of people. This is a disaster for highly technical subject areas. I had junior colleagues with PhDs at the same level as new graduates, on salaries that barely enabled them to live in London with their parents 😱

Tech firms are much better at separating out expert and managerial roles, while paying both appropriately. This is something governments are going to have to learn to do if they want to regulate the digital world effectively 🧐 (To be fair, the CMA has employed a number of digital experts already.)

The new EU AI Office is a good example of more focused mechanisms for bringing technical expertise into policymaking. The European Centre for Algorithmic Transparency is another.

Less useful in my experience are technical advisory boards. I was on the UK Information Commissioner’s Office’s for years, but it achieved little. It’s hard to tell how effectively the Technical Advisory Panel under the 🇬🇧 Investigatory Powers Act is working due to its secrecy/spookiness/need for top secret clearance, although I did participate in a useful one-day workshop it ran (they even published a summary of our discussions).

I also have broader thoughts from my time as a civil servant on making better use of academic expertise!

https://www.ianbrown.tech/2024/04/05/1777/

#AIAct #DSA

ian, to random

This morning, I’ve been giving evidence to the European Parliament’s Internal Market and Consumer Protection Committee on the Digital Markets Act, which the committee led on. Alongside Epic Games, we discussed the provisions requiring “gatekeeper” tech firms (specifically, currently, Apple, Google + Microsoft) to enable users to install apps from outside the gatekeeper’s own app stores. These were my speaking notes:

  1. The success of the DMA third-party app/app store provision (§6.4) is critical to the whole DMA project regarding mobile phones, which are the main means of Internet access for many Europeans and globally. 90% of EU Internet users access the Internet via a mobile device, compared to 31% using a desktop PC [1]
    • Not just due to the high rates of commission currently charged, but also as a means for gatekeepers to impose terms on app developers (so far as these are not explicitly prohibited by the DMA) and censor content (like Jon Stewart’s squashed podcast with US FTC chair Lina Khan). US Department of Justice: “Rather than respond to competitive threats by offering lower smartphone prices to consumers or better monetization for developers, Apple would meet competitive threats by imposing a series of shapeshifting rules and restrictions in its App Store guidelines and developer agreements that would allow Apple to extract higher fees, thwart innovation, offer a less secure or degraded user experience, and throttle competitive alternatives.” [2, p.3]
    • It is obviously upstream of other DMA provisions, such as §6.7 on access to OS/virtual assistant hardware/software features (which should be provided “free of charge”).
    • Android already allows 3rd-party app stores, side loading, automatic updates of stores and side loaded apps, and progressive web apps (PWAs), so this is a provision which will require more behaviour change from Apple. However, UK CMA found Alphabet still had 90%+ of UK downloads from app stores in 2021 [3, p.92]. CMA: “alternatives face material barriers such as indirect network effects and Google’s agreements which lead to the pre-installation and prominent placement of the Play Store” [3, p.119].
  2. Good to see these provisions prioritised by the European Commission (EC) (Executive Vice President Margrethe Vestager’s comments to press, and this morning?) EC investigating whether Alphabet/Apple still obstructing app developers from steering customers to offers outside app stores, and Apple not providing meaningful choice on defaults and preferences.
  3. Was extremely worrying Apple initially announced it would not enable genuine side-loading [4].
    • Question about the English- vs German-language versions of the obligation: “or” vs “und”… [4].
    • Good to see Apple is changing its position on this (was “pressed to do so” according to VP Apple Legal Kyle Andeer at compliance workshop (Christoph Schmon reported on LinkedIn)).
    • But the requirements for developers to be enrolled in the Apple Developer Program for two+ years and to have an app with 1m+ first annual installs in the EU in the previous year [5] do not seem proportionate.
  4. Scare screens [3, p. 113] and limits on automatic updates when outside the EU for more than 30 days [6].
    • Importance of EC attention to user experience details.
    • For protection of overall digital environment, gatekeepers absolutely should not be allowed to cut off security updates in order to shore up their market power.
  5. Alongside §6.4 is the importance of §§5.7+6.6+6.7 for use of web apps. Again, Apple rowed back from its initial disabling of PWAs within the EU. But it is concerning there doesn’t (yet) seem to be a plan/timetable to enable PWAs to be run on alternative web browsers/engines. CMA: “Development and usage of web apps is substantially lower than native apps and this is reinforced by restrictions on the functionality of web apps within Apple’s ecosystem, which also undermine the availability of web apps on Android.” [3, p.120]
  6. Apple’s requirement for “notarisation” might well be justified from security/privacy perspective, but in the medium term EC should look for options to enable cross-stakeholder consensus on what those checks should contain, and who can do them. See eg UK code of practice [7] (NB no. 4: Keep apps updated to protect users.)
    • Why no competition for app security/privacy validation to a consensual standard, and/or user choice over who they should trust on this? Recall macOS lets users override even the minimal check for an app signature.
    • Allows Apple to impose its own organizational restrictions and potentially exposes a competitor’s trade secrets to Apple. It also (as we see with Spotify) allows them to keep the approval process in limbo. Maybe EC should compare approval KPIs to ensure fairness.

I agree with Alba Ribera Martínez: “some of the solutions proposed by Apple may meet the threshold of not being blatantly contrary to the DMA’s goals of contestability and fairness but there are still many tenets of the gatekeeper’s technical implementation of the regulation that remain elusive and conflictful.” [5]. And while Alphabet has been (much) more open to date than Apple with third-party apps and app stores, we will see how the broad range of relevant DMA rules affects the market share of apps downloaded through its Play store on Android.

The DMA app store and related provisions take forward the important work of the EU with its 2015 Open Internet regulation, and will be equally important in ensuring contestable and fair markets in the growing range of industry sectors where apps play an important role [8].

References

  1. Eurostat, Digital economy and society statistics – households and individuals, December 2023 https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Digital_economy_and_society_statistics_-_households_and_individuals#Devices_used_to_connect_to_the_internet
  2. USA et al. vs Apple Inc., Case 2:24-cv-04055, US District Court for the District of New Jersey, 31 March 2024 https://www.justice.gov/opa/media/1344546/dl?inline
  3. UK Competition & Markets Authority, Mobile ecosystems market study final report, 10 June 2022 https://www.gov.uk/cma-cases/mobile-ecosystems-market-study#final-report
  4. Ian Brown, Does the DMA require direct app downloads? Data protection and digital competition blog, 5 March 2024 https://www.ianbrown.tech/2024/03/05/does-the-dma-require-direct-app-downloads/
  5. Alba Ribera Martínez, Apple’s DMA Compliance Workshop – The Power of No: Breaking Apart the Bundle? Kluwer Competition Law Blog, 19 March 2024 https://competitionlawblog.kluwercompetitionlaw.com/2024/03/19/apples-dma-compliance-workshop-the-power-of-no-breaking-apart-the-bundle/
  6. Apple, About alternative app marketplaces in the European Union, n.d. https://support.apple.com/en-us/118110
  7. UK government Department for Science, Innovation & Technology, Code of practice for app store operators and app developers (updated), 24 October 2023 https://www.gov.uk/government/publications/code-of-practice-for-app-store-operators-and-app-developers/code-of-practice-for-app-store-operators-and-app-developers-new-updated-version
  8. Christopher T. Marsden and Ian Brown, App stores, antitrust and their links to net neutrality: A review of the European policy and academic debate leading to the EU Digital Markets Act, Internet Policy Review 12(1), January 2023 https://policyreview.info/pdf/policyreview-2023-1-1676.pdf

https://www.ianbrown.tech/2024/04/03/my-evidence-to-the-european-parliament-on-the-dmas-third-party-app-app-store-provisions/

ian, to random

Sigh. For the first time (I’m ashamed to say), as I waited to fly out of Brunei, I tried to update Wikivoyage, which is still pretty limited on less-visited places (like the capital, Bandar Seri Begawan). I had some pretty useful information to add on the few cafes open during the month of Ramadan. But the airport IP address was blocked for edits. I couldn’t log in with my Wikipedia account. I couldn’t create a new “local” account. I gave up! 🤷🏻‍♂️ https://pbs.twimg.com/media/GJjqmumaoAA1oZ6.jpg

When I got home, I tried again. Thanks to Wikimedia’s volunteer “stewards” for helping me investigate this further. It’s worse than I thought. The Foundation has blocked anyone using Virtual Private Networks, including Apple’s Private Relay service, from editing pages — even when logged in 😬 It’s easier to block wiki-vandals if they can’t easily hop from one IP address to another. But it means anyone using a VPN to protect their privacy — including Tor and Private Relay — is collateral damage.

I’m not willing to switch off Private Relay and clear my web browser cookies every time I want to make a small wiki-tweak (and I still had problems doing that while trying to make this single edit). That could leave close to 1bn Apple subscribers having difficulty editing any Wikimedia sites 🫣

There is already some internal Wikimedia debate on this. Ironically, by definition, it excludes those affected, as pointed out by at least one of the talk contributors 😂😢 I was told there are mechanisms for “well-established” Wikimedia accounts to bypass the block, but by definition this won’t work for the vast majority of Internet users. (One study found “Tor users make contributions to Wikipedia that are just as valuable as those made by new and unregistered Wikipedia editors. We also found that Tor users are more likely to engage with certain controversial topics.”)

What a shame. Wikivoyage has a lot of potential, and is very useful for popular destinations. But it’s never going to become a globally-useful resource with so much friction for casual contributors 😢

https://www.ianbrown.tech/2024/03/31/1741/

ian, to random

I hesitated even in linking to this preposterous article. But its definition of ‘end-to-end’ encryption as requiring ‘identical endpoints’ is not one I’ve seen before (and my PhD title literally includes the phrase!) So two users of gpg and PGP aren’t exchanging E2EE messages? Any web browser talking to any web server over TLS isn’t getting E2EE? The Internet Engineering Task Force is wasting its time in developing and testing standards for interoperable E2EE communications?
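To illustrate the point with a minimal sketch of my own (not from the article, and with illustrative names throughout): any two independent implementations of the same standardised primitives (here X25519 key agreement, HKDF and AES-GCM, via Python’s cryptography package) can exchange end-to-end encrypted messages. Nothing in the definition requires identical software at the endpoints, only a shared protocol.

```python
# A minimal sketch (my illustration, not from the linked article) of why E2EE
# depends on a shared protocol rather than identical endpoints. Any two
# implementations of X25519 (RFC 7748), HKDF (RFC 5869) and AES-GCM that
# follow the same conventions will agree on these bytes, whatever software
# produced them -- just as gpg and PGP, or any TLS client and server, interoperate.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(own_private, peer_public):
    """Derive a shared AES-256 key from an X25519 key agreement."""
    shared_secret = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo e2ee session").derive(shared_secret)

# "Alice" and "Bob" could be running entirely different programs; only
# public keys and ciphertext cross the wire, so a relaying server (or any
# eavesdropper) sees no plaintext.
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()

nonce = os.urandom(12)  # AES-GCM needs a unique 96-bit nonce per message
ciphertext = AESGCM(derive_key(alice, bob.public_key())).encrypt(
    nonce, b"hello Bob", None)

# Bob independently derives the same key and decrypts.
assert AESGCM(derive_key(bob, alice.public_key())).decrypt(
    nonce, ciphertext, None) == b"hello Bob"
```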

https://www.ianbrown.tech/2024/03/10/end-to-end-security-does-not-require-identical-endpoints/

ian, to random

Android, iOS and Windows have all been designated as gatekeeper operating systems (OSes) under the Digital Markets Act. Amongst other obligations, this means:

The gatekeeper shall allow and technically enable the installation and effective use of third-party software applications or software application stores using, or interoperating with, its operating system and allow those software applications or software application stores to be accessed by means other than the relevant core platform services of that gatekeeper…

Digital Markets Act Article 6(4).

While I initially read this as requiring both direct app downloads and third-party app stores, I can see it can be read as requiring only one of these — as Apple has chosen to interpret this paragraph. This would be an unfortunate interpretation for the “fairness and contestability” of app markets.

Curiously, the German-language version uses an “and” rather than “or”. The Act’s (German) parliamentary rapporteur has confirmed his view this provision requires BOTH third-party app downloads and app stores:

It is clear ✔️ Not only that it‘s a „und“ but it‘s also economically clear that using and AppStore includes the use of the Apps. @EU_Competition @vetager @Apple

— Andreas Schwab @ASchwab.bsky.social (@Andreas_Schwab) February 14, 2024

Let’s see if the “close scrutiny” of Apple’s DMA compliance promised by Competition Commissioner Margrethe Vestager addresses this point 🧐 Could the English-language Act be clarified with a corrigendum (probably too controversial)? If not, I guess the obligation could be “updated” with a delegated act by the Commission following a market investigation (Art. 12). Purposive interpretation by the Court of Justice might (ultimately) also do the job 👩‍⚖️

https://www.ianbrown.tech/2024/03/05/does-the-dma-require-direct-app-downloads/

ian, to random

More from the ongoing saga of the side dish! On Friday, European Commissioner Margrethe Vestager gave a speech to the College of Europe where she hit back at unfavourable comparisons of the EU’s approach to digital market competition with that of the antitrust hipsters in Washington DC. Sadly, I found it disappointing.

Yes, the Commission’s “classic” competition enforcement (TFEU Art. 102 on abuses of a dominant position) is evolving. But the “narrow focus on price” Vestager attacks is a straw man. It was the ‘competition is a side dish to industrial and trade policy’ comment by her chief official that sparked the controversy at the conference where this started; there, the US side focused much more on the big picture, making antitrust a holistic part of a whole-of-government approach. (Ironically, most of the rest of what Director-General for Competition Olivier Guersent said was entirely sensible.)

Vestager is no doubt hemmed in by the distribution of competences with her fellow commissioners — but she is after all Executive Vice President for A Europe Fit for the Digital Age!

It’s good the Google and Amazon cases she mentions were brought. But if “an ‘effects-based’ approach is better tailored to market realities than a formalistic approach” — what impact have they so far had more broadly on digital (and other) markets? 🫠

https://www.ianbrown.tech/2024/03/04/1708/

ian, to random

Elon Musk’s @X could face a raft of new European Union rules that place curbs on the behavior of some of the world’s largest technology firms – the Digital Markets Act #DMA: https://t.co/o3ZRX2SP84

— Samuel Stolton (@SamuelStolton) March 2, 2024

Exciting news indeed from Brussels. Apparently X has told the European Commission it meets the criteria to be designated as a gatekeeper under the Digital Markets Act 🤩🍿. If so, then we REALLY need the DMA Art. 7 interoperability mandate expanded to social networking services, and aggressive enforcement of those existing provisions which would partly enable this.

I’m curious to know why X has done this, given its current market cap (according to Fidelity) is $12.3bn, about one-seventh the size required to be automatically designated a DMA gatekeeper 🤔

No self-designation is required where firms meet the qualitative tests in DMA Art. 3(8), which I’ve long argued X does (as has Germany’s current competition minister):

  1. Very significant scale (if not €75bn market cap) ✅
  2. Certainly 10k+ biz users reaching customers, and 45m+ EU end-users ✅
  3. HUUUUGE network effects ✅
  4. HUGE global scale (by far the biggest “public square” SNS) and Musk has ambitions for scope (his “super-app” plans) ✅
  5. Business user or end user lock-in (we see from the struggle for other SNSes like Mastodon and Bluesky to expand in that market) ✅
  6. A conglomerate corporate structure — just look at Musk’s various interlocking interests and potential integration between them 🤔
  7. Other structural business or service characteristics 🤔

It is slightly difficult to parse the process of designation in Art. 3 where a firm which manifestly does not meet the quantitative criteria in paragraph 2 nonetheless notifies the Commission 🤣 But I think the next step is the Commission may adopt a decision (Art. 16) to open a market investigation under Art. 17 (although it may use its investigative powers before that). It then should “endeavour” to complete the investigation within 12 months, adopting a decision advised by the member states (Art. 50(2)).

Art. 3(8) continues: “In carrying out its assessment…the Commission shall take into account foreseeable developments in relation to the elements listed…including any planned concentrations involving another undertaking providing core platform services or providing any other services…or enabling the collection of data.”

Yes, I am available for DMA market investigation consulting 😉

https://www.ianbrown.tech/2024/03/03/1693/

#DMA

ian, to random

With X/Twitter, Elon Musk seems to be singlehandedly illustrating the exciting antitrust concept of Significant Non-transitory Decreases in Quality while having limited impact on X usage (thanks to network effects). Maybe even enough to overcome the European Commission’s concerns of practicality 🤔

Mastodon still seems to be struggling to achieve liftoff, despite significant migration of specific communities (especially academics). While they interact with each other this hasn’t generated large enough cross-network effects for wider growth.

Bluesky with its various recommendation feeds seems to be making individual posts go a lot more viral (judging by the like/repost stats), but I don’t know if that will ultimately prove more important to achieving critical mass.

I’m looking forward to some very large measurement studies which tease out the dynamics and drivers in usage of these types of social networking service 🤓 (let me know if I missed any!). You could even compare individual users across services where they give usernames.

I’m frustrated but not entirely surprised it has proven just too difficult to synchronise the large-scale exit from X apparently necessary to reboot a sufficient competitor. Hence my continued focus on interoperability requirements for gatekeeper platforms like X, FB & LinkedIn. (The latter two already designated under the #DigitalMarketsAct; the former could be, I think, under the qualitative designation process. X certainly isn’t worth €75bn these days 🤭)

I’m not the only frustrated X user:

Why am I back on here? Simple. The alternatives have failed. I was badly informed. And just reading passively felt wrong.

— Thomas Rid (@RidT) February 21, 2024

Nor LinkedIn! (I killed my FB account many years ago in disgust at its privacy policies.)

A friend suggests a user survey to illustrate the ongoing drop in quality (hardly “Small”, the word usually attached to the concepts of SSNIPs and SSNDQs). And indeed advertisers (and the ever-increasing group of ex-advertisers) could be surveyed too! Musk is f*cking BOTH sides of his 2-sided market! 🤬

https://www.ianbrown.tech/2024/02/22/1654/

#DigitalMarketsAct

ian, to random

I finally had time to watch the very well-organised AI tech summit held by the US Federal Trade Commission on 25 January. You can watch the full 4½ hour video here. I’ve tried to note some key points from each speaker below.

FTC CTO Stephanie Nguyen: FTC Office of Technology now has 12 technologists working on cases/investigations and engaging on policy and horizon scanning research.

Today’s summit is to understand competition in the AI tech stack: hardware/infrastructure, data and models, and consumer applications.


FTC Chair Lina Khan: We’ve seen how these AI tools can turbocharge fraud, entrench discrimination and enhance surveillance.

Will this be a moment of opening up markets and unleashing technologies, or will a handful of dominant firms lock up these possibilities for good?

When you concentrate production, you concentrate risk, as we see today with Boeing and many other large corporations whose market power masked decline of internal capacity.

With Web 2.0, aggressive strategies solidified platform dominance while locking in damaging business models on privacy, journalism and children’s mental health.

FTC is launching inquiries into investments/partnerships by large AI firms, eg Microsoft/OpenAI.

Model training is emerging as a feature that could incentivise surveillance but cannot come at the expense of customers’ privacy and security.

Privacy violations are fuelling market power, enabling firms in turn to violate consumer protection laws.

We are focused on aligning liability with ability and control, eg robocall investigation looking upstream to VoIP providers.

Remedies should address incentives and establish bright line rules on data, eg face recognition and location data.

FTC workshop and report on creative works in generative AI lays out guardrails on protecting fair competition.


Panel 1: AI & Chips and Cloud

Tania Van den Brande, Ofcom Director of Economics: UK cloud is very concentrated towards AWS and Microsoft, and customers are struggling to switch, given egress fees and difficulty of reengineering for multiple cloud infrastructures and moving gradually. Discounting structures are problematic, discouraging multiple cloud usage. CMA is now conducting cloud market inquiry and will include AI in that.


Dave Rauchwerk, former semiconductor founder and tech entrepreneur: semiconductor startups are competing against hyperscalers, which are now building their own chips (MS, Amazon, Tesla). This is a further barrier to entry, alongside access to capital (startups are competing with the largest companies in the world, which can also maintain surveillance of their innovation).

Very close partnership required for success with larger firms (Nvidia has worked closely with TSMC since the 1990s). About 5000 VCs are investing in AI startups, but only 300 in chip companies.

Dominant cloud firms are becoming a monopsony for the AI semiconductor firms, which could limit innovation in the functionality exposed to applications over time.

Real innovation and specialisation is possible at the chip layer, but the cloud companies aren’t buying them.

Intel is two companies — chip designer, and chip manufacturer in “fabs”/foundries. Because it’s vertically integrated it doesn’t have the incentive to help rival chip designers, and can monitor what they’re doing. The US needs a national pure play foundry, like TSMC.


Prof Ganesh Sitaraman, Vanderbilt Law School: AI tech stack: app -> model -> cloud -> chips. Lower layers show increased concentration (Nvidia, TSMC, ASML) — at chip layer, with national security concerns (Taiwan). Firms can preference their own vertically integrated business lines, discriminate between customers, and raise prices/decrease quality. And this can deter smaller firms who don’t have the ability to deploy innovative apps across a global ecosystem. Hyperscalers can copy innovations and give them preferential treatment.

If governments get too dependent on large firms they can become “too big to prosecute” (like too-big-to-fail banks earlier this century.)

Potential solutions: structural separation, to prevent self-preferencing and other harms from vertical integration; non-discrimination rules, on prices, T&Cs, self-preferencing; transparency on T&Cs; interoperability rules.


Corey Quinn, chief cloud economist at The Duckbill Group: People tend to miss how much work it is to train large models. Amazon just spent $65m on one training run. Amazon makes its own chips but is using Nvidia GPUs which cost $30k each. Nobody knows how Nvidia is allocating its limited supply, but it certainly helps when customers have deep historical links with the firm.

Market has now tipped and the centralisation risk to resilience is massive.

These cloud companies already use the language of monopolists. The cost of entry to large-scale AI is already massive.

Egress fees as such are not objectionably large. The problem is firms want to move large quantities of data to compute facilities, and large egress fees obstruct that.

We have an Nvidia monoculture now. They are the major bottleneck, followed closely by their cloud customers. We should treat them all like utilities. In the short term more transparency over GPU distribution would help.

It’s extremely difficult and time-consuming for firms to move from one cloud provider to another.


FTC Commissioner Rebecca Slaughter: We are still dealing with the fall-out from the relaxed regulatory approach to the era of Big Data, adtech and social media/commercial surveillance. Despite early warnings about privacy and consolidation, regulators and legislators targeted only the most egregious conduct at first. And now markets have consolidated into extraordinarily large companies, the once-vibrant US arts and journalism sectors are in crisis, and disinformation and material damaging to teens’ mental health has proliferated. We have the knowledge and experience to see the AI era play out differently.

FTC is studying whether AI investments lead to a heavily concentrated market, even as the deals are structured to avoid merger inquiries.

Consumer protection rules are also important. Honest marketing claims are deeply pro-competitive.

AI models can use consumer data in ways that entrench inequalities and access to opportunities.


Panel 2: AI & Data and Models

Cory Doctorow, sci-fi author and EFF Special Adviser: copyright law is not a great framework for dealing with AI data issues. It neglects the structure of many creative industries: monopsonies (5 publishers, 3 labels, 2 adtech firms). Giving artists more money won’t work in these conditions, as it will be taken by their employers. Instead we need labour and privacy law.

AI investors are being pitched on automation and reducing headcount, not on augmenting services and human capabilities.

EFF talks about Privacy First, a potential coalition for a federal privacy law with a private right of action, much broader than AI. But specifically we’ve seen AI systems memorising then regurgitating highly personal information. Privacy law would provide many remedies for AI problems.

We need to think about data not through a property regime but rather through how to avoid harms to stakeholders, eg displacing creative workers, producing grotesque privacy invasions such as non-consensual pornography, and mining people’s data to make inferences adverse to their interests. We describe the most valuable things in the world — people — without property language.


Jonathan Frankle, chief scientist at Databricks: there is huge diversity in AI business models, from OpenAI/Anthropic/Cohere/Midjourney/Adobe access to hugely expensive models, to helping firms train open source models on their own data. Frankle’s experience in dealing with many input firms, such as cloud providers, is that these markets are incredibly competitive.

Competition is not sufficient to have good outcomes. We are rapidly moving from an open, research-like approach to corporate R&D to a closed, “competitive intelligence” situation. There is a lot of regulatory/legal uncertainty, so there is an incentive for firms to be secretive about what data they are using to reduce risks. There is strong pressure to get to market, which makes it harder to get things right. Data curation is one of the biggest costs for training.

The term “open source” brings baggage. Better: access to models, and transparency. Access: do you have the model weights, to manipulate and work with yourself — like Llama but not GPT-4? Transparency: do you know how the model was built, what data was used to train it, details of hyperparameters — not true of Llama 2? Both together = open source.

Making models freely available in this way has pros/cons. Pro: nobody mediates access to model. You can fully customise it, whether as a hobbyist, researcher, or large firm. You can build on the (very expensive) work of others. Great for science. You control your own fate, models won’t change under you. You can serve it yourself and know all the inputs. These are benefits of access.

Transparency part is more complex. Firms so far have been generous — is this sustainable?

Also has consequences for startups — discourages particular types of investments? Much more complex: the risks of giving people control over an artefact. Finally: this is not binary, it’s a whole design space.

We need centralised, shared, publicly funded resources for improving AI safety.


Amba Kak, Executive Director at AI Now Institute: data quality (high levels of curation, feedback, niche datasets like healthcare and finance, assurances of accuracy and diversity) and scale are acting as barriers to entry. Big Tech firms have a big advantage from the last decades of commercial surveillance, and near-unlimited capital to invest to make datasets more robust. Will these advantages port to the so-called AI startups they are investing in? How will large tech firms leverage their relationships with publishers and the media to maximise access and exclusivity? This isn’t unique to foundation models; getting data for fine-tuning models is also becoming more difficult. These data advantages are very self-reinforcing. Sam Altman has said “personalised” access will be the next phase of OpenAI, while giving the firm a huge advantage over competitors in access to data.

Data minimisation is the key principle, more important than ever in the age of AI, not less.

Even today’s open source firms are operating in a highly concentrated market where they benefit from network effects. SMEs will need the same protections against eg self-preferencing as closed source users.

Huge danger that AI and innovation is perceived to require lax privacy rules. The opposite is true. Data minimisation isn’t new — the lesson from the GDPR is not allowing too much room for interpretation (eg is behavioural advertising a legitimate business purpose?)

Scale/speed as proxies for progress are too limited. What about eg impact on the environment? Who decides/shapes what counts as innovation for the public good? One way forward is to go back to the drawing board and have a much more broad-ranging conversation dominated by public not narrowly private interests, rather than be passive recipients/subjects of the tech trajectory.


Stephanie Palazzolo, The Information: a new group of startups is trying to build non-transformer models (as alternatives to transformer models such as GPT and Claude).

Getting funding: investors looking at talent (eg former Google, top US colleges), whether startups are competing with OpenAI (even potentially, eg following OpenAI’s developer day announcements), how close they are to market. Capital depends on whether you can strike deals for valuable data, pay for chips… Early stage investors care about growth; later-stage care about cash generation and margins. Much harder for startups to generate cash due to entry costs.

Sustainability of open source models is important for startups. Difficult to imagine open source developers and users can compete against the bleeding-edge, largest models from Google/OpenAI/Anthropic. And we need a lot more funding of academic labs, whether on data or chips. Compare the number of GPUs Meta is buying vs Carnegie Mellon University’s leading lab — a huge difference.


FTC Commissioner Alvaro Bedoya: We shouldn’t let all the media hype and attention being paid to generative AI/LLMs distract us from the fact that other forms of automated decision making are today having a much bigger impact on people’s lives. Focuses his remarks on bias in these latter systems, and the FTC’s Rite Aid case on biased facial recognition: https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without

We need to appreciate what’s at stake. These systems affect our basic ability to live our lives with dignity, with fairness, to get the healthcare we need, apartments we rent, jobs we apply for…

Algorithms are not an excuse. Firms need to ask hard questions about how systems work, how they affect people they are used against…

Success would look like people controlling technology, not the other way around. People feeling in control of tech, knowing when it’s being used to make decisions about them, why those decisions were made, knowing their remedies. And competition-wise, we use tech which proves itself the best in the marketplace on its merits — products that work, which people like, not just because it’s put out by a $1tn company.


Panel 3: AI & Consumer Applications

Conrad Kramer, AI startup: primarily startups need access to models to build products. Consumers typically interact with a product which embeds a model. Firms can access an existing service; train their own (needs lots of resources and expertise); or fine-tune an open model. Open source models are lagging a little behind on quality. They are cheaper to acquire but still need compute resources for inference and, if needed, fine-tuning/retraining. But they are usually completely transparent on data sources, which is useful in building a better product.

Model evaluation metrics are rudimentary — standard questions and known answers. They need a lot of qualitative understanding. It’s really hard. Correctness is domain-dependent; text generated for communications needs attention to precision and accuracy as well as tone.

We currently see an explosion of companies, some of which are training models for a fee. Or you can download the weights for a model for free from Hugging Face and run it on your own computer, but this needs a powerful machine — so better to run it on a cloud server, on behalf of users. There are providers of this service, which are competing on compute cost. A minimal sketch of the download-and-run route follows.
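
To make the “download the weights and run it yourself” route concrete, here is a minimal sketch of my own (not the panellist’s) using the Hugging Face transformers library; the model ID is a hypothetical placeholder for any open-weights model on the Hub, subject to its licence.

```python
# A minimal sketch (mine, not the panellist's) of running an open-weights
# model locally with the Hugging Face transformers library. The model ID
# below is a hypothetical placeholder; substitute any open-weights causal
# LM from the Hub whose licence permits your use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/open-model-7b"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)     # downloads and caches
model = AutoModelForCausalLM.from_pretrained(model_id)  # needs a powerful machine

inputs = tokenizer("Open-weights models let you", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)   # inference runs locally
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same script runs unchanged on a rented cloud GPU, which is the trade-off described above: the weights are yours either way, but the compute has to come from somewhere.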

Excited for the potential to improve people’s lives, eg auto-filling forms and other rote/repetitive tasks to let people focus on more human activities. Is concerned about privacy, people’s control over their data.

Some products give users meaningful privacy controls (like iPhone microphone access for apps). The best systems keep data close to you, ideally on your own device, and give you control over when/how it’s used.

It is possible to build AI products which obey the law, protect users’ privacy, are safe — in terms of current harms, not x-risk. Thinks startups which innovate on ways to provide better privacy and safety will ultimately succeed.


Karen Hao, journalist: consumers are excited to use these tools to unlock their creativity, like getting ideas from ChatGPT, or using StableDiffusion to generate concept work, like building plans or poster design. Parents want to engage with kids educationally, have an interactive storytime. But there is also a huge amount of risk, especially on lack of transparency: ambiguous/deceptive marketing, and obfuscation. LLMs have a problem with hallucinations but big providers paper over them, eg Nadella claiming they are just like better search, OpenAI partnering with legal assistive services. People often don’t even realise they are dealing with an AI-based system.

Original AI Safety definition by Anthropic co-founders is not related to privacy, security, fairness, economic impacts or military applications… it is about rogue AI, existential threats… but in the public domain, “safety” means something completely different. So firms are now using it in that way in their marketing, while still causing harm. (There was a NeurIPS conference panel on this last year). AI developers meanwhile are focusing their safety concerns on AGI. Hao recommends paper on Concrete Problems in AI safety, Revisited by Deborah Raji https://arxiv.org/pdf/2401.10899.pdf

We need to really question what companies say, not only in terms of AI safety and marketing, but the ways they frame what is good for us. It doesn’t feel like we’re living in a democracy right now if a company just gets to decide… OpenAI launched ChatGPT almost on a whim, and now we’re living in this new era and have to grapple with that. I don’t feel like any of us had any democratic agency over that. We should be demanding more of these firms than retroactive excuses.


Ben Winters, Electronic Privacy Information Center: recently co-authored report on AI harms [Generating Harms: Generative AI’s Impact & Paths Forward]. Looked at social as well as individual harms. Sources of these harms: widespread availability of these chat tools; enabling harassment and impersonation; increasingly opaque data collection (thanks to lack of US federal privacy law); environmental impact; data security risk of all this data maximisation; labour manipulation, theft and displacement; discrimination such as entrenchment of discriminatory stereotypes; market power and concentration.

AI industry has a “man behind the curtain” problem, with needless overcomplication. Focus on x-risk is just a distraction for legislators from current harms, while industry is not doing basic things like putting the transparency burden on the company rather than on the consumer, who cannot possibly understand these systems even after spending half a day investigating. Audits, impact assessments… but also the norm of respecting customers and valuing not just the expensive data you can buy from the NY Times, but everybody’s data.

Nothing is inevitable (that AI will be everywhere; that we face existential risk). This doesn’t have to be as complex as the largest companies want. We need to reassure consumers, regulators, legislators they can understand it and push back. And other laws do exist… civil rights, consumer protection, fair competition… we also need federal privacy law, and law which bans unconscionable uses of AI.


Atur Desai, US Consumer Finance Protection Bureau: CFPB is doing a lot related to AI. In reality, complex models have been used in consumer financial markets for a long time, eg in credit scoring. So a robust set of federal laws exist. Eg companies must provide accurate explanations of why they have denied a credit application (no matter how complex their model). CFPB has issued a notice for information to credit brokers. Doing a lot of work on capacity building internally, with a technologist programme embedding data scientists, ML experts etc. in enforcement teams.

Breaking the law should not be a company’s competitive advantage. We need ways to encourage whistleblowers where that is happening.

AI is an amorphous marketing term, describing sometimes very simple and sometimes very complex models. “Safe AI” is a murky mishmash of words. Deceptive marketing laws exist and CFPB is enforcing them already against algorithmic systems.


Henry Liu, Director of FTC Competition Bureau: Outsize market power can distort the path of innovation. FTC has already taken enforcement action, such as against Broadcom and Nvidia, to protect chip competition. FTC will have better powers to require firms to produce information.


Sam Levine, Director of FTC Consumer Protection Bureau: technologists are part of dozens of enforcement actions and really important for the work of the Commission. In engaging with AI we must learn from how we failed to fully deal with the Web 2.0 era. Privacy self-regulation was a serious error, and industry did not make privacy a priority. The Bureau published an AI report in 2022 and guidance in 2023. We have now required algorithmic models trained on illegally acquired data to be deleted. We have launched action against voice impersonation fraud. We have made clear firms cannot retain children’s data indefinitely, especially to train models; or use models that harm consumers. We are using every tool to protect the public from emerging harms.

https://www.ianbrown.tech/2024/02/20/ftc-tech-summit-on-ai/

ian, to random

Cristina Caffarra’s annual competition-fest today in Brussels (Antitrust, Regulation and the Next World Order) was as speaker- and content-packed as ever. As well as much discussion of the eagerly anticipated deadline for Digital Markets Act compliance in five weeks, it was a fascinating look beyond narrow antitrust policy to competition policy linkages with industrial and trade policy (with lots of AI on the side).

The video should be freely available soon, but until then you can read my notes below. Mille grazie Cristina and all the speakers!

Rebooting the Next Commission

Andreas Schwab MEP (DMA rapporteur): post-EP-elections, resilience will become a more important part of Single Market rules, eg in telecommunications and energy. This will need member state (MS) investment at borders (interconnection).

Olivier Guersent (European Commission DG Competition): state aid control is essential to protect the Single Market, given the different resources of the member states. The EU will match US IRA funding for well-researched projects. So far there has been only one case.

Competition policy is a side dish for industrial policy, sectoral policies… How do you use competition to make these policies more effective? Eg EU halved cost of wind turbines through open procurement.

Industrial policy funding cannot rely on MS funding. Needs to happen at EU level.

Resilience means diversification, not 100% reinsurance.

Big Tech platforms are becoming like essential utilities, unavoidable trading partners, and the EU has been trying to deal with their strategies for 35 years: they try to protect the source of their power in their core market, and to use this power in upstream and downstream markets. The entrenchment and sophistication of these practices are increasing, and so need more sophisticated analysis. DG COMP had a very strong case on self-preferencing against Amazon, which dropped its iRobot acquisition rather than test it in court. He doesn’t know if they would accept Facebook’s acquisition of WhatsApp today, but that was 8 years ago. Companies got more sophisticated along the way, as did DG COMP. Look at Booking/eTraveli for current thinking on ecosystems.

AS: the first DMA designations were the easiest. More complicated are cases like free e-mail, free cloud, AI and voice assistants, but we still needed to create a sound structure where new companies can enter.

OG: agreed. There is and will be a learning curve (especially with just 40 people; more are needed). First we must satisfy legal obligations. But being ahead of the curve is even more important: being future-proof. We want the designated companies to comply effectively. We see lots of bundling and tying. Continued Article 102 and national activity will be essential; the European Competition Network (of regulators) has many more resources and may have to take some cases forward first. AI is the same, and will be coordinated via the ECN.

Pricing vs power

Luigi Zingales (University of Chicago): dispersal of power was at the heart of antitrust development in the mid-20th century. But the rise of the consumer welfare standard was a combination of bad economics and good marketing, fuelled by powerful interests. The pendulum of enforcement has swung too far away, imperilling freedom. AI does not have a bright future, as it begins in an already concentrated industry. Underlying any antitrust policy are important political choices. The first should be freedom: to choose between different products; to change jobs without retaliation; and to speak without fear of consequences. This needs competition policy, not just antitrust, and needs technology as well as law. Europe should align with India on its public tech stack.

Tommaso Valletti (Imperial College): Europe 30 years ago imported the “more economic approach” from the US, but the economic consultancies have become concentrated; there is little innovation; and the business model is to make money by protecting the much bigger rents that bigger businesses have.

Economic practice has narrowed onto a few models which don’t work in practice, ignoring advances in scientific knowledge elsewhere, even within economics. Mergers have largely been approved on claimed efficiencies, but academic analysis shows prices went up in 55% of approved cases. And the profession has become insular, not interested in new topics. Do economists have anything to say about power (of the largest companies to influence politics through lobbying, revolving doors…)?

Andreas Mundt (German Bundeskartellamt): German antitrust has always been about power, not price. For the first time we might see things “moving in the digital economy” thanks to the DMA. Agencies have had great cases to make markets fairer, but not contestable. Competitors for the first time seem to believe things are improving to a certain extent. Big Tech has the data, the computational skills and the financial resources to benefit from AI — Europe is again lagging behind in investment, which doesn’t make it easy for competition. The Meta court proceedings in Germany have already taken five years, and this is the environment in which we try to enforce competition law. Is this the way forward? How will the DSA and AI Act affect smaller firms, the European startup scene? None of this is about price. The digital economy has an artificial price — data — and we still haven’t found the real way to deal with that question. We need to create an atmosphere for entrepreneurship and innovation in Europe, to keep our wealth and do something for consumer welfare. We need to reinstall the freedom to compete on the merits, with equal chances and democracy in the economy. Big Tech have political as well as economic power. We need very strict and rigorous competition enforcement in this world.

Rebecca Slaughter (FTC): there is no apolitical enforcement of competition law. Non-intervention is a policy choice. Economics doesn’t provide neutral analysis — you see this with competing economists in court cases arguing opposite conclusions from the same data. It can be one of a number of tools. The FTC has created its office of technology and is building its data science and other analytical capabilities.

Enforcers must learn from the past. FTC does competition and consumer protection, which in the past have been treated separately but should not be. Competition is not a side dish to industrial policy but underlies all the work of government; President Biden created a whole-of-government approach. Ex post enforcement is hard: cases take a long time and getting corrective action is very, very challenging. We need a forward-looking view of how markets can be built competitively, which is why AI is such a current topic.

Gina Cass-Gottlieb (ACCC): Australian government has set up a merger task force review — ACCC says laws are no longer fit for purpose. Task force is using micro data held by national statistical agency to show current regime only gives partial visibility of mergers (around ¼), which tend to be made disproportionately by the largest firms. ACCC proposes legal reform is required, and merging parties should have to demonstrate there will not be a significant lessening of competition.

LZ: AI challenge is huge and we need to promote competition, not only ex post, and do it fast.

AM: has AI already developed into something competition agencies cannot see properly? This is why we need to be vigorous with merger control to avoid dealing with all this mess afterwards in lengthy abuse proceedings which are so hard to win. In the past these were a niche part of our work but today they are the focus. We cannot deal with the digital industry where cases last 8, 9, 10 years, when the issue will be long gone.

TV: AI needs hardware (Nvidia), cloud (3 firms), data (Microsoft, Google…)

RS: FTC is extremely focused on deterrence to have market-wide impact eg injunction on healthcare data advertising (IQVIA), a hospital merger in N California, pesticide manufacturers, multiple pharmaceutical mergers, Illumina, use of facial recognition technology, Amazon, data brokers’ use of sensitive data, child privacy rules, rulemaking on junk fees, data security and commercial surveillance…

Are the Courts Likely to Listen?

Marc Van Der Woude (General Court, EU Court of Justice): it’s difficult to change the law, as judges must preserve legal certainty. From the 1980s to 2004 there was a market-opening phase, when regulation needed to be developed, including Article 85 of the Treaty on restrictions of commercial freedom and a need for exemptions, eg on the restructuring of the chemical and telecoms industries. This created a deadlock before the national courts due to the EC’s lack of resources, leading the ECJ to come to a more liberal interpretation of Article 101: the more economic approach. This led to fewer cases and fewer exemption cases. Currently, there is an enforcement problem. The court’s rules of procedure are not fit to deal with eg the Intel case, but it’s also the conduct of the parties. Second problem: is the system responding to societal needs, such as environmental standards? Three signs of change: courts are becoming reluctant to put such a focus on an effects-based approach, shifting to the object; a changed approach to exemptions; and the much more explicit rules of the DMA.

Marcus Smith (UK Competition Appeals Tribunal): courts should not encroach on the legitimate policy choices of regulatory agencies, such as what to investigate; what penalties to impose. There is a policy question on the role of the market in a mixed economy, which lies at the heart of competition law. But policy considerations should not apply in the application of law. The court’s understanding of economics is informed by expert evidence.

Market power dysfunctions

Doha Mekki (US DoJ): antitrust law is not co-extensive with IO economics; there is 100+ years of antitrust law to build on. Some economic models are not well suited to current economic facts, such as on vertical mergers. DoJ staff have done a great job of explaining to courts why large firms sometimes should not be able to accrue more power via mergers, such as the Penguin Random House/Simon & Schuster case. In JetBlue/Spirit the judge found the merger “does violence” to competition, understanding the impact on the cost-conscious consumer.

Isabella Weber (University of Massachusetts Amherst): energy, food, essential raw materials and transport firms experienced massive price and profit spikes. These are largely commodity sectors where there is market power but firms are not setting prices. Downstream there are firms with price-setting power; these cost shocks act as a kind of coordinating mechanism to increase prices to protect profit margins — a form of passive collusion, even if perceived as more acceptable by consumers (“excuseflation”).

Gabriel Zucman (Paris School of Economics): in the 1950s, the effective US corporate tax rate (taxes actually paid as a share of pre-tax profits) was about 50%; today it is down to about 20%. Much of this reflects rising evasion, eg profit-shifting, and a deep failure of enforcement.

Jan Eeckhout (UPF Barcelona): Europe has its own Big Tech. ASML is an absolute monopolist growing very fast. AI concentration comes from a concentrated supply chain, which is becoming much longer globally, making it easier for companies to create bottlenecks and exploit their position. And this type of equipment requires capital spending that universities and even governments cannot match.

AI might reduce the college premium (which has grown from 50% to 100%, ie graduates now earn double rather than 1.5 times non-graduate wages), even for non-routine cognitive jobs. But if there is much further increase in concentration, profits will go up, with higher compensation for superstars and increasing inequality in income and wealth. Competition policy can restore efficiency and redistribution through a reduction in the superstar compensation coming from market power.

Florian Ederer (Boston University): IO economics is dogmatic in the questions it studies and the methods it admits, and exclusionary towards people studying certain types of question. Eg the immense focus on short-run prices and markups: it’s the proverbial lamppost illuminating one small area, excluding much more interesting questions such as privacy, innovation, etc.

Large asset management firms are the largest owners of several direct competitors — will they discourage competition? We now have evidence they do, in airlines, consumer goods and pharmaceuticals, and on innovation and entry as well as prices/markups (although not in breakfast cereals!). Such common owners don’t have a strong incentive to push for price competition and innovation, and make their CEOs’ compensation less performance-sensitive. This can lead to a deadweight loss of 2-5% from a redistribution of consumer surplus now accruing to higher profits — which partly benefits richer consumers who also hold shares, but not others.
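
[IB: not said in the talk, but the standard formalisation of this common-ownership incentive in the literature (the O’Brien–Salop profit weights) is a useful reference. Under proportional control, the manager of firm $f$ effectively maximises

$$\pi_f + \sum_{g \neq f} \kappa_{fg}\,\pi_g, \qquad \kappa_{fg} = \frac{\sum_i \beta_{if}\,\beta_{ig}}{\sum_i \beta_{if}^{2}},$$

where $\beta_{if}$ is investor $i$’s stake in firm $f$. Fully separate ownership gives $\kappa_{fg}=0$, ie ordinary profit maximisation; overlapping index-fund stakes push $\kappa_{fg}$ towards 1, softening the incentive to compete on price and innovation.]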

Industrial policy

Nathan Lane (Oxford University): well-designed industrial policy takes account of concerns with efficiency and preserving competition.

Ufuk Akcigit (University of Chicago): every country has its own specifics, so using micro data to understand them is important. For countries far from the tech frontier, openness to FDI and talent migration works; at the frontier we have a less good understanding. Human capital is the most important thing to prioritise, but most industrial policy debates are about subsidising firms. Market power is a major issue. New players in markets have stronger incentives to innovate; dominant players become defensive (evidence from Italy: patent quality goes down, the number of former politicians employed goes up). Business dynamism is declining too in the US: new entrants used to be about 15% of the market, now down to 7%. The job reallocation rate is falling significantly. Markups and market power are increasing while the labour share of revenues is decreasing. The number of inventors (on patents) more than doubled relative to the population in the 2000s, and there has been a more than 50% shift towards incumbents: inventors now look for jobs at large firms rather than founding startups.

Heather Boushey (White House Council of Economic Advisers): industrial strategy is thinking about what is made in the US and how it’s made, and about its impact on national competitiveness, good jobs, national security… The first pillar is making smart public investments in specific industries and left-behind places, paying attention to equity and small businesses, to mobilise private investment; the second is empowering and educating workers; the third is open, fair and competitive markets: a competition policy enabling investment in strong industries rather than national-champion firms (eg using technical standards for car-charging plugs to enable interoperability and facilitate entry). Competition analysis is now being applied to new regulation, alongside cost-benefit analysis.

Rene Repasi MEP: industrial policy is not an ugly word in Europe, unlike in the US. Following the next EU elections the headline will be competitiveness (not the green deal). This cannot be about deregulation or subsidies for national champions, but about states intervening to ensure markets follow public interests. In competition policy, state aid control is about subsidies and antitrust is about efficiency; there needs to be more focus on innovation and R&D. For digital markets we need new challengers, protecting startup ideas to let them grow rather than be squashed by incumbents; and a change of mindset in universities, letting researchers research rather than write funding proposals.

UA: (patent) productivity goes down when inventors move from startups to large firms, even though wages go up sharply. The small business sector has been shrinking for a long time, and the pandemic didn’t help; these businesses are fragile. Their employment share has been declining since the rise in interest rates, and they are increasingly using credit cards for financing.

HB: focus on innovation benefiting the American people and the economy at large. Good university research doesn’t automatically get out into the wider economy. Will new industries create jobs across the country, also for non-graduates? Does competition benefit workers and their families, through labour markets and opportunities for small businesses?

RR: we need the competitiveness debate to continue until the election, stimulated by the Draghi and Letta reports in March, to popularise these ideas. In the DMA, the DSA, reform of the merger regulation, cases pending at the CJEU… we have bits and pieces going in the right direction. Industrial policy is a progressive idea about how we can make markets work.

Connecting Trade, Competition and Industrial Policy

Katherine Tai (US Trade Representative): trade and antitrust face several similar issues driving their evolution. The globalisation/free-trade paradigm was built on maximising efficiency as an end in itself, with the benefit of low prices, meaning a race to the bottom: cutting costs, exploiting people and planet, and focusing only on consumers. We need to think of individuals as workers as well as consumers, and consider public-interest factors, not just what is good for the biggest US firms.

What is digital trade? It began as e-commerce chapters in trade agreements, where technology was an enabler of traditional trade, so negotiators looked for rules to liberalise digital flows to facilitate trade flows. But that approach doesn’t work today, because eg liberalising the flow of data has a big impact on privacy and concentration, and previous US positions on eg localisation and source code also need further development.

The US is not the only country looking at how trade and antitrust policy can democratise opportunity. KT has recently talked to South Africa about its similar approach.

Franziska Brantner (German State Secretary for Economic Affairs): our objectives are efficiency, yes, but also innovation and resilience/security. There are trade-offs, eg between government support on price and support for national producers. There are questions on data monopolies: eg Teslas in Europe can export data, but Mercedes in China are not allowed to; this is a competition issue.

James Hodge (South African Competition Commission): S Africa has proposed ending the WTO moratorium on digital tariffs, because it lets every digital transaction go through Ireland without any tax. Few Big Tech companies make any significant investment in S Africa, facilitated by the zero-tariff rule. The digital economy is very broad, and local entrepreneurs are often then acquired by global-north firms, which could lead to an even bigger digital divide and gradually concentrates markets. We see the same global merger creep in other markets, such as food. There needs to be more international cooperation on antitrust.

(Some) competition economists complain it’s very difficult to assess consumer welfare alongside other public interest factors, but trade does it all the time.

KT: “bigness” is distorting, unfair, bullying. Trade negotiators are concerned with movements between markets while competition enforcers are concerned with national markets; but trade needs to think more about antitrust between markets. Ricardo’s idea of comparative advantage doesn’t translate to the real world: one problem is that it would encourage two countries to each build monopolies and trade with each other.

JH: out of apartheid came a highly concentrated economy excluding most people, so capitalism could only be acceptable in S Africa with the competition requirements in the post-apartheid constitution.

FB: Ricardo did not pay attention to security/resilience.

A problem for the EU is that trade policy is an EU competence, industrial support a national one, and competition a mixed competence. The next Commission has to work hard on bringing these together, and on creating a better partnership worldwide on a more equal footing, cooperating more with countries such as S Africa and Brazil. The justice question of how we divide the gains of capitalism will be key to democracy.

KT: democracies need to empower their people to participate in their political systems, to maximise their potential and have access to economic opportunity for themselves and their children/grandchildren. Democracies have so much in common on political and economic inclusion.

The great reordering

Stephanie Yon-Courtin MEP: since 2019 the EU has realised competition policy and industrial strategy are compatible; the pandemic, the Russian invasion of Ukraine and the US IRA have been quite a shock, requiring a renewal of concepts and new conversations on innovation and global competition, infrastructure investment such as telecoms, and completion of the Single Market.

The EU has made a lot of progress with legislation like the DMA and DSA, and needs to deliver more, eg designating AI as a DMA Core Platform Service. Global approaches are needed for tech giants and AI.

The big question is who will be in charge after the next EU and US elections.

Barry Lynn (Open Markets Institute): we have an opportunity in 2025 to put in place a radically new economic system, the open and cooperative liberal world we have dreamed of — but only if we accept that monopoly platforms are crushing the free press today, amplifying the disinformation subverting our democracy today, and that catastrophic concentration in China and Taiwan threatens the global economy. Europe needs a vision of how to address all these threats; its absence increases the threat of more significant global conflict. There is likely to be another Biden administration — how will the EU meet their vision?

We need to dream of a much better world. Neoliberalism is a language that restricts how we see the world and disguises power.

Rohit Chopra (US CFPB): we’ve seen a learned helplessness from regulators around the world over the last 40 years, watching from the sidelines. But these markets are shaped by rules of competition, and the private sector should want government to ensure a race to the top, not the bottom; supporting innovation and humans broadly, not just a clique at the top. Tech conglomerates are entering finance in ways that are pivotal for every central bank and financial regulator. Libra showed central banks and regulators were unprepared. Fortunately Meta failed, but there are now new ways Big Tech companies want to create currencies and payment systems, which will have significant national security implications. Regulators can’t sit around and study this for 10 years; they need to act. AI is just another example where we can either let a couple of inventors throw their discovery into the world and see what happens, or take an agency-by-agency approach to make sure it doesn’t turn into a disaster.

For the first time, we’ve seen the classic monetary-policy approach to inflation being challenged by data showing corporate profits contributing to it, raising the question of the role of competition in ensuring price stability. We have a lot to do, sector by sector.

Both the US and Europe realise we are creeping towards a path where three or four foundation models collect our geolocation data, with impacts on our credit ratings or the job ads we see; that is something for all the US regulators to confront. Will Big Tech companies become private governments, eg by creating their own currencies?

Fireside with US Assistant Attorney-General Jonathan Kanter

Jonathan Kanter (US DoJ): we are opening up the conversation about antitrust to the people, farmers, small businesses… everyone the laws were originally created to protect. Those people use words like power and democracy (not specialised economic terms).

The new merger guidelines are about the rule of law — enforcing the statutes based on their text and case law. The economy and the way people conduct business have changed and law enforcement must reflect this.

The JetBlue/Spirit judgment talked about how the merger would affect not just those two companies, but all the other firms in the market, and their customers.

AI monopolistic practices can be in the chips, in the datasets, in the development of algorithms, in the platforms for distribution, in the APIs… We have invested heavily, including in our own technologists, to ensure we have what it takes to enforce the law. AI can and will be used in lots of different parts of business, lots of different industries, in lots of different flavours. We need to dig in so we can have a sophisticated approach to how we think about these issues. These markets have massive feedback effects, so the danger of them tipping, becoming dominant chokepoints, is perhaps even greater than in other types of markets — with massive impact on society. Where there are violations we need to take action, and we have a number of active investigations.

Antitrust enforcement is an essential part of competition policy, but the latter is something which should be considered and evaluated across the government.

There’s a huge enthusiasm in the wider public — eg university students — to talk more about antitrust, as they feel it’s so critical to a free society.

DoJ’s ambition is to enforce the law faithfully, to bring cases, to put forward enforcement policies which resonate with and protect the public. Markets have changed hugely in 30 years and we have to deal with them as they are now, not then.

Antitrust as agent for change

Sarah Cardell (UK CMA): we have to be relentless in advocating the case for competition, always grounded in the reality of the situations facing people and businesses in their daily lives; laser-focused in our choices of work; and not stay in a competition bubble.

Innovation is a key driver of change, including productivity and growth, and competitive markets are essential for it, which is why so many agencies are focused on digital markets, ensuring monopolists are not squashing disruptive innovators. Robust merger control is critical, as in Meta/GIPHY and Adobe/Figma. Highly sceptical of arguments that robust merger control deters investment and innovation. New powers in new legislation are important.

CMA cannot singlehandedly address root causes of inflation, but can make sure markets like food are as competitive as they can be, that people can get the best deals possible in those markets.

Regulators need to look ahead as well as deal with older issues, but if they wait until they know everything it will be too late. Eg generative AI review. Increasingly working with other antitrust agencies as well as parallel regulators like data protection and online safety, and using their research to inform policy development in these neighbouring areas.

The Digital Markets Unit should have its new powers by the end of this year, so using them will be the top priority. Need to keep a focus on the future.

Benoit Coeuré (French AdlC): competition enforcers pride themselves on using new tools, taking effective action and pushing boundaries into new domains like sustainability and labour markets; but everywhere free trade has almost entirely lost popular support. Competition has perhaps been spared for serendipitous reasons (the cost-of-living crisis), but inflation is down, interest groups are back and they have big megaphones; and the geopolitics are against us, with drives for “moated castles”. We need to show fellow citizens how competition fits into the broader policy framework and reinforces other policy objectives, like “pro-competitive” industrial policy; and to connect competition and regulatory policy, as a mix of solutions is needed for many problems. Some issues seen in cloud services can be addressed with antitrust; some better with contract or consumer protection law (like opacity in contracts); interoperability needs regulation and market standards. We have a close relationship with CNIL, which in the past has deferred part of GDPR enforcement to platforms; AdlC wants CNIL to minimise enforcement concerns when enforcing GDPR. And we need to better connect competition and trade policy. The Foreign Subsidies Regulation is beautiful in theory, but the Commission is under-equipped to enforce it.

BC wants the next Commission to deliver what has already been promised: DMA implementation and enforcement (should it be narrow and limited, or a dynamic tool covering cloud services, AI etc.? If the DMA doesn’t cover these, they will come back to antitrust, which might not be the most efficient route). There needs to be a new compact between industrial and competition policy, and merger control might need some tidying up.

Nuno Rodrigues (Portuguese Competition Authority): competition enforcers can support innovation, the green transition (eg electric mobility needs a dense charging network) and worker mobility (it has sanctioned no-poach agreements). Access to inputs will be key for AI: foundation models, cloud computing, data. Digital markets break the territorial links familiar from traditional markets; cooperation between enforcers is essential, including to deal with Big Tech’s outsized bargaining power.

The EU has avoided damaging tit-for-tat retaliation on trade, introducing instead the Foreign Subsidies Regulation and the Chips Act.

We need implementation; advocacy to firms and the market; coordination with the ECN; and enforcement.

John Newman (former Director US FTC): FTC’s Illumina order had a powerful narrative about life-saving technology and how important competition was to it developing. The Amazon and Google adtech complaints show incumbents wielding the power they have where they have already won that race. Agency heads should spend less time with CEOs and more with ordinary workers.

Aviv Nevo (Director of FTC Bureau of Economics): how do we translate high-level goals into day-to-day work? The merger guidelines are an example of doing that. They contain new tools; sharpen old tools; but also change the narrative. Where were courts not buying the way issues were previously put? And what is needed to look forward? The guidelines stress the statutes’ use of probability, not certainty. They talk about platforms and dynamic competition; harm in the labour market; and entrenchment and extension of dominant positions through mergers (all new). Sharpening: a change of narrative on vertical mergers, which is intuitively harder to convey.

Conversation with US FTC Chair Lina Khan

Lina Khan (US FTC): Americans are increasingly connecting problems in their day-to-day lives with competition policy decisions made in Washington DC. LK has spent a lot of time outside DC hearing about the impact of business practices on customers and workers, eg the incursion of private equity into healthcare, leading to burnout for doctors and reduced quality of care. This helps the FTC prioritise work and track changes in real time.

As the social web emerged, policymakers mostly decided to step back, allowing predatory and damaging business models to emerge and dominant firms to build and strengthen their moats against competitors. LK wants to learn from those experiences and avoid these missteps a second time; the FTC’s reaction to AI tools is an opportunity to do better. The emergence of new technologies can be an inflexion point, and the FTC now has a team of technologists to look “under the hood” of AI, to ensure it has an accurate understanding and can get ahead of potential problems.

For the next four years: we have made a lot of progress but it feels like we are just getting started, on litigation, ensuring antitrust protects consumers as well as workers, finalising the rule on non-competes…

Building up large volumes of personal data can reinforce monopolies, which in turn makes it easier for those firms to collect more data. The FTC’s approach has been to draw red lines around very sensitive data, such as stopping resale or reuse of geolocation or health data.

For natural monopolies, policies such as designating common carriers, nondiscrimination rules, interoperability obligations… might be necessary, sometimes as complements to antitrust.

AI Act, DSA and DMA Implementation

Roberto Viola (DG CNECT, European Commission): managing expectations is important with the AI Act and the DMA/DSA. This is a dynamic path (definitions can evolve, systemic risks can be adapted), and stakeholders, regulators and the scientific community will work together. If something goes wrong it can be corrected. It takes time to deploy. The largest high-risk models will not face hundreds of regulators, but one AI Office (even though it works with an ecosystem of national regulators), which will hire 100 top experts (for interest, if not salary). It will work hand in hand with the DSA and DMA regulators. Then national conformity assessment will take place on high-risk products.

So far, very large models have been produced by very large companies, so size is a proxy for market power. But the AI Act does not directly look at market power.

When we see generative AI being used to enhance the offering of search engines, then it’s a search function. So it is likely many services will come under existing DMA and DSA definitions. But the DMA can be expanded if necessary, adding new services [and obligations/prohibitions].

The DMA is much closer to telecoms regulation than people think, especially on interoperability. The latter is based on reference offers, and then it’s the job of regulators to look at these 20,000 pages to assess the details. This is ultra-technical and ultra-specific to the company’s technology.

[Image: Rana Foroohar interviews Prof. Erik Brynjolfsson (https://www.ianbrown.tech/wp-content/uploads/2024/01/Screenshot-2024-01-31-at-16.10.43-1024x732.png)]

AI Awakening — making it Pro-Human Despite the Tech Industry?

Daron Acemoglu (MIT): similar innovations can develop in very different directions: nitrogen fixing is essential to fertiliser, but similar processes produce explosives. We are at the cusp of significant changes in technology thanks to AI; that can be pro-human, but right now the tech ecosystem is pushing in a disempowering direction: tasks are automated at breakneck speed, data is collected without any guardrails, and a small elite can use algorithms to exercise power over other humans. We need to tackle both economic power and persuasion power; the tech industry has sold a narrative that AI will be used for the good of society, helped by the media sector in many places.

Relentless automation and the monetisation of data are creating significant problems, but these could be addressed by tax policies. Many countries subsidise capital and tax labour; we should try to equalise this. And a digital ads tax could enable other business models: ads are the lifeblood of current digital technologies and drive a very pernicious model of expropriating and monetising data.

Erik Brynjolfsson (Stanford University): we are entering a time of radical uncertainty, especially around AI, the most powerful technology we’ve had. There are uncertainties around productivity gains (which could take decades to play out); industrial concentration (can smaller models be as powerful and valuable?); and the trade-off between automating and augmenting human workers. Both approaches can increase productivity, but technologists place far too much emphasis on automation. So much more can be achieved through augmentation, which also gives a widespread distribution of benefits. This means changing tax policy, management practices, and how we conceive of what the technology can do. Augmentation is also much more likely to be adopted organisationally, if all parties benefit.

Policy should encourage lots of competition in smaller models, and enforce/encourage standards and interoperability. There’s a gap between the private incentives to silo people and the public interest. Interoperability enables the benefits of scale and of networks, but also of competition. But it doesn’t happen organically.

The European grand regulation project

Filomena Chirico (European Commission DMA Task Force): the DMA is meant to correct problems we have already observed. Compliance implies change is happening. Status quo is not what we expect.

The DMA is opening holes in gatekeeper services that new innovators can go through and build services to give users new choices.

Alberto Bacchiega (European Commission DG Competition): what will be important is how the market (businesses and users) reacts to changes made for the DMA: effective compliance, not just on paper. Some compliance solutions we will only know about when we see them working. Some that have been proposed we don’t think comply with the law, and we will need to take action on those relatively quickly. The EC has already organised a public workshop after 7 March for gatekeepers to explain their compliance measures; the EC will listen and then “very quick[ly]” take action (but there is no formal deadline).

Johnny Ryan (Irish Council for Civil Liberties): the stakes here are incredibly high, so incremental change is not what is called for. Competition is not a “side dish” (a remark this morning by Olivier Guersent).

Gatekeepers should fear regulators, not be having “nice conversations”.

Francesca Bria (former president of Italian National Innovation Fund): I am more optimistic after today on EU and US cooperation on strategic economic reform to better serve people. Europe needs an industrial policy that’s forward-looking because we are too dependent on Big Tech firms. We don’t have a European tech stack which aligns with our values, our democratic principles — with chips, with cloud, with AI, with data… Is the EU requiring interoperability, open standards, ethics and privacy by design when it gives out subsidies? We are trapped between the US private sector and Chinese big state models. We need public digital infrastructure and institutions. How about urban data to fight climate change, kept in a data trust owned by citizens? What about public participation? What about social media manipulation leading to polarisation? Public interest does not mean state control. We need infrastructures to mobilise people with public returns, allow political participation where the data is kept as a digital commons.

How can public investment funds ensure the successful startups they fund aren’t just bought up by private equity and sovereign wealth funds?

Amba Kak (AI Now Institute): the EC and US governments are planning multi-billion-euro investments in public AI resources (also the UAE, India…), but we should ensure a narrative of “regulation kills innovation” does not take hold. And what is the connection between these investments and the concentration of power? The narrative is one of “democratising AI”, but there’s a question of scale to contend with: Big Tech firms are spending orders of magnitude more. And the scope/vision seems to be on industry terms: the US plan was originally for cloud procurement; the current version looks at compute and data credits and other ways for AI firms to contribute, but big firms like OpenAI and Anthropic are being given a big say in the innovation path.

The Biden Executive Order is clear the answer to foreign monopolies is not to tolerate domestic ones.

Joanna Bryson (Hertie School of Governance): for most problems you do not need these giant AI models (which hallucinate cleanly and fluently when you ask for a prediction from too little data).

The EU does have AI development commensurate with its economic size; only the US is an outlier there. Google uses the world’s talent, uses the world’s data, has fibre optic cables wrapping the world — we would call this infrastructure. Why aren’t they regulated as a utility? It’s essential infrastructure.

Brando Benifei MEP (AI Act rapporteur): big AI developers have tried to write their own rules for the most powerful models; they should not be left to self-regulation.

The AI Act needs standards, so will take time; but no-one else in the world is making their voluntary schemes mandatory, with fines and an AI Office to enforce them [IB: China?]

We need international cooperation to pursue some safety objectives for the most powerful models (general-purpose AI with systemic risk).

https://www.ianbrown.tech/2024/01/31/notes-from-the-next-world-order/
