remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The overwhelming message that emerges from these books, ironic as it may seem, is a newfound appreciation of the collective powers of human creativity. We rightly marvel at the wonders of AI, but still more astonishing are the capabilities of the human brain, which weighs 1.4kg and consumes just 25 watts of power. For good reason, it has been called the most complex organism in the known universe.

As the authors admit, humans are also deeply flawed and capable of great stupidity and perverse cruelty. For that reason, the technologically evangelical wing of Silicon Valley actively welcomes the ascent of AI, believing that machine intelligence will soon supersede the human kind and lead to a more rational and harmonious universe. But fallibility may, paradoxically, be inextricably intertwined with intelligence. As the computer pioneer Alan Turing noted, “If a machine is expected to be infallible, it cannot also be intelligent.” How intelligent do we want our machines to be?"

https://www.ft.com/content/32f6a003-e5b4-442a-9a5d-37bdc1c6d392?desktop=true&segmentId=7c8f09b9-9b61-4fbb-9430-9208a9e233c8#myft:notification:daily-email:content

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #Privacy #DataProtection #AIRegulation: "AI today is both old and new. The technologies branded as "AI" today are actually old technologies that are working more effectively given vast increases in data and computing power.

It is important to avoid "AI exceptionalism" — treating AI as if it were so unique that we are unable to see how its privacy problems are often outgrowths of existing privacy issues. The privacy problems associated with AI largely revolve around practices privacy laws have long dealt with, such as the collection and processing of personal data. To be effectively addressed, these privacy problems should be tackled holistically, not just in the context of AI. Rarely is there a magic line separating privacy issues in AI from those in the digital age generally.

AI can increase existing privacy problems, add dimensions and complexities to them, or remix them. Merely addressing AI is like trying to remove weeds without digging up their roots."

https://iapp.org/news/a/a-regulatory-roadmap-to-ai-and-privacy/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #EU #AIAct #AIRegulation: "5 flaws of the AI Act from the perspective of civic space and the rule of law

  1. Gaps and loopholes can turn prohibitions into empty declarations
  2. AI companies’ self-assessment of risks jeopardises fundamental rights protections
  3. Standards for fundamental rights impact assessments are weak
  4. The use of AI for national security purposes will be a rights-free zone
  5. Civic participation in the implementation and enforcement is not guaranteed"

https://edri.org/our-work/packed-with-loopholes-why-the-ai-act-fails-to-protect-civic-space-and-the-rule-of-law/

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

: "For the last three years, AlgorithmWatch has worked in coalition with a broad range of digital, human rights and social justice groups to demand that artificial intelligence (AI) works for people, prioritizing the protection of fundamental human rights. We have put forward our collective vision for an approach where “human-centric” is not just a buzzword, where people on the move are treated with dignity, and where lawmakers are bold enough to draw red lines against unacceptable uses of AI systems.

Following a gruelling negotiation process, EU institutions are expected to conclusively adopt the final AI Act in April 2024. But while they celebrate, we take a much more critical stance, highlighting the many missed opportunities to make sure that our rights to privacy, equality, non-discrimination, the presumption of innocence and many other rights and freedoms are protected when it comes to AI. Here’s our round-up of how the final law fares against our collective demands." https://algorithmwatch.org/en/ai-act-fails-to-set-gold-standard-for-human-rights

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "The UK’s AI economy remains narrow, larger on paper than in its footprint in our society. Those advantages it does enjoy over its European peers are precarious and in certain respects are being eroded by underinvestment. And the shape, pace, and direction of AI development in the UK is dictated not in Westminster or Whitehall, but overwhelmingly in the boardrooms and pitch decks of Silicon Valley.

This is at least in part because of our attachment to the founding myth of British AI policy: that of the arms race. Arms race narratives are implicitly linear, positioning individual states as able to influence the pace but not the direction of economic development and technological change. They take for granted that increased support for UK firms will lead to the UK becoming a global leader in AI development, and that achieving this position will—by virtue of “winner-takes-all” dynamics and the putative tendency of wealth to “trickle down”—deliver sustained value for the public.

The arms race offers a fantasy of independence that masks deeper structural dependence on a paradigm of AI development led by, and wholly dependent on, funding and infrastructures provided by Silicon Valley. In this sense the question we started with from Ian Hogarth is misframed: it is not clear to what extent DeepMind ever represented a truly “independent entity,” given how intertwined its early history was with US venture capital and how wedded its aspirations were to the existing Silicon Valley model."

https://ainowinstitute.org/publication/a-lost-decade-the-uks-industrial-approach-to-ai

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "A draft treaty to protect human rights, democracy, and the rule of law, agreed at the Council of Europe (CoE) on Thursday (14 March), leaves it up to countries to decide how to include the private sector in the development of artificial intelligence (AI).

The exemptions for the private and defence sectors have been a key matter of contention in the negotiations on what has been called the world’s first international treaty on AI. The document claims to ensure that the technology does not hurt human rights.

In the latest iteration of the convention, countries are left to address how they will ensure the private sector will be in line with the treaty, according to information made public by three individuals with knowledge of the matter."

https://www.euractiv.com/section/digital/news/council-of-europe-ai-treaty-does-not-fully-define-private-sectors-obligations/

TechDesk, to ai
@TechDesk@flipboard.social avatar

Associated Press reporter Jesse Bedayn looks at why we should all care about the rapid advancements of artificial intelligence, from the technology's impact on job seekers to people simply looking for an apartment.
https://flip.it/qIlyYG

#AI #TechNews #ChatGPT #AIRegulation

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #AIRegulation #FDA: "- An ‘FDA for AI’ is a blunt metaphor to build from. A more productive starting point would look at FDA-style regulatory interventions and how they may be targeted at different points in the AI supply chain.

  • FDA-style interventions might be better suited for certain parts of the AI supply chain than others.

  • The FDA model offers a powerful lesson in optimizing regulatory design for information production, rather than just product safety. This is urgently needed for AI given the lack of clarity on market participants and structural opacity in AI development and deployment."

https://ainowinstitute.org/publication/what-can-we-learn-from-the-fda-model-for-ai-regulation

remixtures, to uk Portuguese
@remixtures@tldr.nettime.org avatar

: "‘The Government should be given credit for evolving and strengthening its initially light-touch approach to AI regulation in response to the emergence of general-purpose AI systems. Ministers are right to acknowledge that AI is already causing harm in many everyday contexts and poses a broad range of risks to society. The Government’s work to build in-house expertise on AI through the establishment of the central AI risk function and the AI Safety Institute, as well as its development of standards on algorithmic transparency and AI management, are promising first steps. However, much more needs to be done to ensure that AI works in the best interests of the diverse publics who use these technologies.

‘We are concerned that the Government’s approach to AI regulation is ‘all eyes, no hands’: it has equipped itself with significant horizon-scanning capabilities to anticipate and monitor AI risks, but it has not given itself the powers and resources to prevent those risks or even react to them effectively after the fact. While an uplift in regulatory funding is welcome, £10 million falls well short of the hundreds of millions of pounds per annum that we allocate to safety in other critical industries."

https://www.adalovelaceinstitute.org/press-release/statement-on-uk-ai-regulation/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #OpenAI #AIGovernance #AIRegulation #ParticipatoryDemocracy: "The company wanted to find out whether deliberative technologies, like Polis, could provide a path toward AI alignment upon which large swaths of the public could agree. In return, Megill might learn whether LLMs were the missing puzzle piece he was looking for to help Polis finally overcome the flaws he saw in democracy.

On May 25, OpenAI announced on its blog that it was seeking applications for a $1 million program called “Democratic Inputs to AI.” Ten teams would each receive $100,000 to develop “proof-of-concepts for a democratic process that could answer questions about what rules AI systems should follow.” There is currently no coherent mechanism for accurately taking the global public’s temperature on anything, let alone a matter as complex as the behavior of AI systems. OpenAI was trying to find one. “We're really trying to think about: what are actually the most viable mechanisms for giving the broadest number of people some say in how these systems behave?” OpenAI's head of global affairs Anna Makanju told TIME in November. “Because even regulation is going to fall, obviously, short of that.”"

https://time.com/6684266/openai-democracy-artificial-intelligence/

remixtures, to uk Portuguese
@remixtures@tldr.nettime.org avatar

#UK #AI #AIRegulation: "Ministers have been warned against waiting for a Post Office-style scandal involving artificial intelligence before stepping in to regulate the technology, after the government said it would not rush to legislate.

The government will acknowledge on Tuesday that binding measures for overseeing cutting-edge AI development are needed at some point – but not immediately. Instead, ministers will set out “initial thinking for future binding requirements” for advanced systems and discuss them with technical, legal and civil society experts.

The government is also giving £10m to regulators to help them tackle AI risks, as well as requiring them to set out their approach to the technology by 30 April.

However, the Ada Lovelace Institute, an independent AI research body, said the government should not wait for an impasse with tech firms or errors on the scale of the Post Office scandal before it acted."

https://www.theguardian.com/technology/2024/feb/06/dont-wait-for-post-office-style-scandal-before-regulating-ai-ministers-told

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #GenerativeAI #ChatGPT #AIRegulation: "The year 2023 marked a new era of “AI hype”, rapidly steering policy makers towards discussions on the safety and regulation of new artificial intelligence (AI) technologies. The feverish year in tech started with the launch of ChatGPT in late 2022 and ended with a landmark agreement on the EU AI Act being reached. Whilst the final text is still being ironed out in technical meetings over the coming weeks, early signs indicate the western world’s first “AI rulebook” goes someway to protecting people from the harms of AI but still falls short in a number of crucial areas, failing to ensure human rights protections especially for the most marginalised. This came soon after the UK Government hosted an inaugural AI Safety Summit in November 2023, where global leaders, key industry players, and select civil society groups gathered to discuss the risks of AI. Although the growing momentum and debate on AI governance is welcomed and urgently needed, the key question for 2024 is whether these discussions will generate concrete commitments and focus on the most important present-day AI risks, and critically whether it will translate into further substantive action in other jurisdictions."

https://www.amnesty.org/en/latest/campaigns/2024/01/the-urgent-but-difficult-task-of-regulating-artificial-intelligence/

itnewsbot, to machinelearning
@itnewsbot@schleuss.online avatar

At Senate AI hearing, news executives fight against “fair use” claims for AI training data - Enlarge / Danielle Coffey, president and CEO of News Media Alliance; Pr... - https://arstechnica.com/?p=1995191

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #AIRegulation #GenerativeAI #EU #AIAct: "The EU is also working on another bill, called the AI Liability Directive, which will ensure that people who have been harmed by the technology can get financial compensation. Negotiations for that are still ongoing and will likely pick up this year.

Some other countries are taking a more hands-off approach. For example, the UK, home of Google DeepMind, has said it does not intend to regulate AI in the short term. However, any company outside the EU, the world’s second-largest economy, will still have to comply with the AI Act if it wants to do business in the trading bloc.

Columbia University law professor Anu Bradford has called this the “Brussels effect”—by being the first to regulate, the EU is able to set the de facto global standard, shaping the way the world does business and develops technology. The EU successfully achieved this with its strict data protection regime, the GDPR, which has been copied everywhere from California to India. It hopes to repeat the trick when it comes to AI."

https://www.technologyreview.com/2024/01/05/1086203/whats-next-ai-regulation-2024/

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

#EU #AI #AIAct #AIRegulation #Surveillance: "Indeed, the European Parliament’s own press release on the landmark law admits that “narrow exceptions [exist] for the use of biometric identification systems (RBI)”—police mass surveillance tech, in other words—“in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and for strictly defined lists of crime.” As part of that exemption, the law allows police to use live facial recognition technology—a controversial tool that has been dubbed “Orwellian” for its ability to monitor and catalog members of the public—in cases where it’s used to prevent “a specific and present terrorist threat” or to ID or find someone who is suspected of a crime.

As you might expect, for groups like Amnesty, this seems like a pretty big blind spot. From critics’ perspective, there’s no telling how law enforcement’s use of these technologies could grow in the future. “National security exemptions—as we know from history—are often just a way for governments to implement quite expansive surveillance systems,” said Satija."

https://gizmodo.com/eu-ai-act-government-surveillance-pope-microsoft-1851092738

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #AIGovernance #AIRegulation: "Establishing institutions that will “set norms and standards” and “monitor compliance” without pushing for national and international rules at the same time is naive at best and deliberately self-serving at worst. The chorus of corporate voices backing nonbinding initiatives supports the latter interpretation. Sam Altman, the CEO of OpenAI, has echoed the call for an “IAEA for AI” and has warned of AI’s existential risks even as his company disseminates the same technology to the public. Schmidt has invested large amounts of money in AI startups and research ventures, and at the same time has advised the U.S. government on AI policy, emphasizing corporate self-governance. The potential for conflicts of interest underlines the need for legally enforceable guardrails that prioritize the public interest, not loosely defined norms that serve technology companies’ bottom lines.

Taking the IAEA or IPCC as models also risks ignoring the novelty of AI and the specific challenge of its regulation. Unlike nuclear arms, which are controlled by governments, AI capabilities are concentrated in the hands of a few companies that push products to market."

https://www.foreignaffairs.com/premature-quest-international-ai-cooperation

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

#EU #AI #AIAct #AIRegulation #GPAI #OpenSource: "The effect of this Frankenstein-like[5] combination of tiered obligations and a limited open source exemption is a situation where open source AI models can get away with being less transparent and less well-documented than proprietary GPAI models. This creates a strong incentive for actors seeking to avoid even the most basic transparency and documentation obligations to use open licenses while violating their spirit. It is hard to imagine that this is what the EU legislator intended. As the text of the Act is still undergoing technical clean-up, there is still an opportunity to better align the provisions dealing with open source AI. The best way to do this would be to limit the open source exception to AI systems while relying on the tiered approach to structure the obligations for GPAI models."

https://openfuture.eu/blog/a-frankenstein-like-approach-open-source-in-the-ai-act/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

: "More broadly, several regulatory approaches under consideration are likely to have a disproportionate impact on open foundation models and their developers, without meaningfully reducing risk. Even though these approaches do not differentiate between open and closed foundation model developers, they yield asymmetric compliance burdens. For example, legislation that holds developers liable for content generated using their models or their derivatives would harm open developers as users can modify their models to generate illicit content. Policymakers should exercise caution to avoid unintended consequences and ensure adequate consultation with open foundation model developers before taking action."

https://hai.stanford.edu/issue-brief-considerations-governing-open-foundation-models

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

: "After a 36-hour negotiating marathon, EU policymakers reached a political agreement on what is set to become the global benchmark for regulating Artificial Intelligence.

The AI Act is a landmark bill to regulate Artificial Intelligence based on its capacity to cause harm. The file passed the finishing line of the legislative process as the European Commission, Council, and Parliament settled their differences in a so-called trilogue on Friday (8 December).

At the political meeting, which set a new record for interinstitutional negotiations, the main EU institutions had to go through an appealing list of 21 open issues. As Euractiv reported, the first part of the trilogue closed the parts on open source, foundation models and governance."

https://www.euractiv.com/section/artificial-intelligence/news/european-union-squares-the-circle-on-the-worlds-first-ai-rulebook/

Narayoni, to technology
@Narayoni@mastodon.social avatar

At the root of the fragmented actions taken by lawmakers worldwide is a fundamental mismatch: AI systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace. As a result, companies like Google, Meta and Microsoft have been left to police themselves as they race to create and profit from advanced AI systems. Even in Europe, perhaps the world’s most aggressive tech regulator, AI has befuddled policymakers.

https://www.nytimes.com/2023/12/06/technology/ai-regulation-policies.html

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #AIRegulation #AIEthics: "The key idea is to require AI developers to provide documentation that proves they have met goals set to protect peoples' rights throughout the development and deployment process. This provides a straightforward way to connect developer processes and technological innovation to governmental regulation in a way that best leverages the expertise of tech developers and legislators alike, supporting the advancement of AI that is aligned with human values.

This approach is a mix of top-down and bottom-up regulation for AI: Regulation defines the rights-focused goals that must be demonstrated under categories such as safety, security, and non-discrimination; and the organizations developing the technology determine how to meet these goals, documenting their process decisions and success or failure at doing so.

So let's dive into how this works."

https://www.techpolicy.press/the-pillars-of-a-rightsbased-approach-to-ai-development/

remixtures, to ai Portuguese
@remixtures@tldr.nettime.org avatar

#AI #EU #AIAct #AIRegulation: "EU negotiators involved in the AI Act are suggesting a tiered approach to curbs on foundation models, so that the level of regulation is gradual, matching the impact of the models. That means only a small number of companies will come under its scope, allowing others to grow before being hit by regulation. And as foundation models are still evolving, the new rules should leave room for adjustment as the technology changes.

It is not surprising that the final phases of the negotiations are trickiest — this is always the case. Those worrying that the tensions may spell the end of the AI Act seem unaware of how legislative battles unfold. For all EU stakeholders, a lot is at stake and the world is watching to see what law the EU ends up voting for. Let’s hope political leaders manage to agree on a future-proof set of rules. With big tech power should come big responsibility, no matter whether French, American or German engineers built the system."

https://www.ft.com/content/99ff5c84-ece7-43bf-bed1-2edb30d4d6df

remixtures, to Bulgaria Portuguese
@remixtures@tldr.nettime.org avatar

#EU #AI #AIRegulation #AIAct: "The double thinking of EU governments on these issues betrays the lack of substance to their arguments.

The recent US Executive order on AI, for example, showed that it is indeed possible to include law enforcement and national security agencies in the scope of AI rules.

Our governments must be willing to put their money where their mouth is, ensuring that state uses of AI are subject to the same reasonable rules and requirements as any other AI system.

And in the small number of cases where the use of AI has been shown to be just too harmful to make safe, we need prohibitions.

Without this, we will not be able to trust that the EU’s AI Act truly prioritises people and rights."

https://www.euronews.com/next/2023/12/04/eu-governments-hypocrisy-is-on-full-display-over-dangerous-police-ai
