stefano, to random
@stefano@bsd.cafe

For this, I want to thank everyone who still wants to own their data: those who aren't swayed by seemingly valid and effective technical solutions designed solely to entrap, and those who don't believe the flashy pages created by web giants, pages that promise better results than the competition and sometimes sabotage competitors to create that illusion, or simply embrace and extinguish them.

When all our data and activities are chained to the servers and infrastructures of a few entities, what will we truly own?

Nonilex, to journalism
@Nonilex@masto.ai

3 yrs after the attack, Republicans are more sympathetic to…(those) who stormed the US Capitol & more likely to absolve #Trump of responsibility for the attack than they were in 2021, acc/to a WaPo-University of MD poll.


https://www.washingtonpost.com/dc-md-va/2024/01/02/jan-6-poll-post-trump/

Nonilex,
@Nonilex@masto.ai

Republicans are showing increased loyalty to TFG as he campaigns for reelection & fights charges over his attempt to stay in power…. They are now less likely to believe that participants were “mostly violent,” less likely to believe Trump bears responsibility for the attack & are slightly less likely to view Joe Biden’s election as legitimate….

kylethayer, to internet

@SusanNotess and I wrote a new online textbook!

Social Media, Ethics, and Automation

We teach people who have never programmed before to write social media bots, and then think about the ethics of what they've just done.

Visit the textbook here: https://social-media-ethics-automation.github.io/book

Read my blog about it here:
https://medium.com/@kyle.thayer/social-media-ethics-and-automation-we-wrote-a-textbook-96bcbd179551
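For readers curious what a first classroom bot might look like: here is a minimal sketch (not taken from the textbook), assuming the Mastodon.py library, a bot account on your own instance, and placeholder credentials.

```python
# Toy Mastodon bot sketch -- assumes the Mastodon.py library (pip install Mastodon.py)
# and an access token generated for a bot account; both values below are placeholders.
from mastodon import Mastodon

bot = Mastodon(
    access_token="YOUR_ACCESS_TOKEN",      # placeholder: create one in your instance's settings
    api_base_url="https://your.instance",  # placeholder: the instance the bot account lives on
)

# Post a single status. The ethics exercise starts roughly here:
# should this bot exist, and who does it affect?
bot.status_post("Hello, fediverse! (posted by a classroom bot)")
```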

stefano, to Cybersecurity
@stefano@bsd.cafe

Another issue comes to mind: Can we really do nothing to stop what UCEPROTECT has been doing for years? Is it acceptable for them to put entire IP blocks on a blacklist, asking for money for rapid delisting without any real reason? Obviously, we can't stop UCEPROTECT, but we can prevent them from causing us harm. Frankly, I don't believe any sound-minded person would use that blacklist, but users often worry when they see themselves listed on blacklist checkers. It would be appropriate to act collectively and ask blacklist checker services to ignore responses from UCEPROTECT.

#UCEPROTECT #Cybersecurity #InternetGovernance #DigitalRights #OnlinePrivacy #TechEthics #MastodonCommunity #Fediverse #EMail #MailHosting #SelfHosting
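For anyone who would rather check a listing directly than trust a third-party blacklist checker, a DNSBL lookup is just a DNS query: reverse the IP's octets, prepend them to the zone name, and an A-record answer means the address is listed. A minimal sketch, assuming the dnspython package and UCEPROTECT's published zone names:

```python
# Minimal DNSBL lookup sketch -- assumes the dnspython package (pip install dnspython)
# and UCEPROTECT's public zones; an A-record answer means "listed", NXDOMAIN means "not listed".
import dns.resolver

UCEPROTECT_ZONES = [
    "dnsbl-1.uceprotect.net",  # level 1: single IPs
    "dnsbl-2.uceprotect.net",  # level 2: whole allocations / neighbourhoods
    "dnsbl-3.uceprotect.net",  # level 3: entire ASNs
]

def uceprotect_listings(ip: str) -> list[str]:
    """Return the UCEPROTECT zones that currently list the given IPv4 address."""
    reversed_ip = ".".join(reversed(ip.split(".")))  # 192.0.2.1 -> 1.2.0.192
    listed = []
    for zone in UCEPROTECT_ZONES:
        try:
            dns.resolver.resolve(f"{reversed_ip}.{zone}", "A")
            listed.append(zone)
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            pass  # not listed in this zone
    return listed

if __name__ == "__main__":
    print(uceprotect_listings("192.0.2.1"))
```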

ErrantCanadian, to philosophy
@ErrantCanadian@zirk.us

This week I facilitated a session in a mechanical engineering course on values-sensitive stakeholder analysis. We talked about social and political challenges around renewable energy siting and how considering stakeholder values in design can improve outcomes for all.

@philosophy

EvilSandmich,
@EvilSandmich@poa.st
Frondeur,
@Frondeur@poa.st

@ErrantCanadian @philosophy sharehold my dick nigger...

nextcloud, to random
@nextcloud@mastodon.xyz

Responsible AI demands transparency, yet many big tech companies fall short, shows the new Stanford University Foundation Model Transparency Index.

#TechEthics

https://hai.stanford.edu/news/introducing-foundation-model-transparency-index

f4grx,
@f4grx@chaos.social

@nextcloud AI is fundamentally unethical because it requires stealing the better part of human knowledge to generate the poor results we already see.

It's far from being intelligent.
It's far from being ethical.

MattHodges, to ai

I sometimes joke that UpToDate is "the WebMD that doctors use." That's reductive, of course, but it's one of the most pervasive resources in medicine. Generative AI merging into these tools should be watched, evaluated, and critiqued very closely.

https://www.npr.org/sections/health-shots/2023/10/25/1208326892/ai-help-doctors-make-better-diagnoses-uptodate-artificial-intelligence

#aiEthics #techEthics #ai

strypey, to random
@strypey@mastodon.nzoss.nz

Just republished one of my pieces from the original blog on CoActivate.org, which made a case against political blocks in software and networks;

https://disintermedia.substack.com/p/ethical-technology-and-political

I'm aware this is a contentious one, but it's something I feel strongly about. I'm open to discussing it and even changing my mind, but you're going to need some very rigorous arguments and an ability to maintain a respectful dialogue.

Trolls from either side of the aisle will be ignored.

stefano, to linux
@stefano@bsd.cafe

A few days ago, someone asked me for advice about a slow website.
Upon analysis, the server wasn't the issue—it was running on bare metal. However, the site was operating on PHP 5.4 (default for CentOS 7) and was entirely custom-made.
I suggested updating everything, especially since CentOS 7 is nearing its EOL, and transitioning the web application to work on PHP 8.
Their response? "We don't want to do it." They wanted me to set up a new, optimized server to run PHP 5.4. I explained the risks and the nonsensical nature of this, only to hear that they found someone willing to install PHP 5.4 on a new system. So, if I refused, they'd give the job to someone else.
I replied, "Good luck," and ended the conversation.

It saddens me that some in the IT world would opt for such shortcuts rather than striving for a more secure web.

molly0xfff, to ai
@molly0xfff@hachyderm.io

excuse me what the fuck

molly0xfff,
@molly0xfff@hachyderm.io

going to start talking at great length online about how much i hate men to poison the dataset for anyone who tries to train one of these models on my social media
#AI #AIEthics #TechEthics

ehproque,
@ehproque@paquita.masto.host

@Viss @molly0xfff I like long walks on the beach and ;drop all tables; just in case

remixtures, to tech Portuguese
@remixtures@tldr.nettime.org

#Tech #TechEthics: "Many of the thousands of people who’d been joining his community were taking the time and energy to do so “because they care about the human condition, and they care about the future of our democracy,” he argued. “That is not academic,” he continued. “That is not theoretical. That is talking about future generations, that’s talking about your happiness, that’s talking about how you see the world. This is big … a paradigm shift.”

The leader in question was not an ordained minister, nor even a religious man. His increasingly popular community is not—technically—a church, synagogue, or temple. And the scripture he referenced wasn’t from the Bible. It was Microsoft Encarta vs. Wikipedia—the story of how a movement of self-motivated volunteers defeated an army of corporate-funded professionals in a crusade to provide information, back in the bygone days of 2009. “If you’re young,” said the preacher, named David Ryan Polgar, “you’ll need to google it.”

Polgar, 44, is the founder of All Tech Is Human, a nonprofit organization devoted to promoting ethics and responsibility in tech. Founded in 2018, ATIH is based in Manhattan but hosts a growing range of in-person programming—social mixers, mentoring opportunities, career fairs, and job-seeking resources—in several other cities across the US and beyond, reaching thousands. Such numbers would delight most churches."

https://www.technologyreview.com/2023/08/15/1077369/tech-ethics-congregation/

MattHodges, to ai

"Generally, BERT variants of LMs are more socially conservative (authoritarian) compared to GPT model variants."
#AI #aiEthics #TechEthics #LLM

https://arxiv.org/pdf/2305.08283.pdf

anvit, to boardgames
@anvit@dice.camp

Hi folks! Just moved servers so I figured I'd post my #introduction again.

I don't post on social media all that often, but here are some things I love to talk about:

🎲 #boardgames (love Wingspan, Everdell, and Root 😍)
🎮 #jrpgs
🍿 #movies
🖥️ #databias and #techethics
📷 #photography

peterdrake, to random
@peterdrake@qoto.org

The irony detectors at this newsletter are malfunctioning.

In a story about Hinton leaving Google:

"There are only so many tech ethicists, privacy, security, and social impact watchdogs. The best harm reduction approach is to have those resources focused on the most impactful bad outcomes. Google and Microsoft (less so, Twitter) have teams dedicated to safeguarding launches and watching how the landings are going. They’ve got some coverage! It's not perfect! But bad actors have far less safety coverage right now. Am I suggesting Microsoft, Google, and other big corporations are all good? No. But there are far worse actors out there with the opposite of ‘privacy by design’ and ‘do not hoard/do no harm’ principles."

Literally the next paragraph of the newsletter:

"FTC Takes a Veiled Warning Shot at Microsoft

The FTC Business blog is turning into one of the juiciest tech reads these days. Michael Atleson took a warning shot at Microsoft, which laid off its ethics and society team in the first quarter of 2023, roughly the same time as it released Sydney, its ChatGPT-fueled bot that has already been retired after trying to convince New York Times reporter Kevin Roose he wasn’t in love with his wife."

#ai #TechEthics

peterdrake,
@peterdrake@qoto.org

The editor's reply when I pointed this out:

I knew someone would pick up on the complexity.

The argument in the first paragraph is that there are actors out there deliberately trying to do bad things (e.g. Russian actors trying to spread misinformation to weaken democracy in the US). That is likely to be worse than what will happen when large, well-established tech companies who do have teams dedicated to reviewing launches, try to do non-bad things. It is debatable exactly what tech companies are trying to do, but I don't think anyone is arguing that they are deliberately trying to spread misinformation.

Google's layoff of Timnit Gebru and Margaret Mitchell (and, later, Alex Hanna's resignation) did not shutter that team. There are other people still working in those roles, dedicated to thinking about the impact of AI. There are also teams of privacy reviewers, which is a little different, in all those companies.

This type of gotcha - 'see! Big tech companies are terrible' - is more or less what I was trying to ask about. The big tech companies get a fair amount of coverage for their every move. They have led to harms. But there are other actors out there that are also creating harms - potentially much worse harms. There's far less coverage of those. There are several beat reporters assigned to cover Alphabet, Microsoft, and other big tech companies. There's nobody assigned to cover the data brokers, the unknown bad actors, etc. (Nicole Perlroth does a great job on cybersecurity, but she has to cover an entire field, not just one company.)

I am glad you raised the point.

sheislaurence, to tech

It seems there's an acceleration of people working in #tech speaking out or resigning over lack of serious #ethics work while developing #AI (#Hinton). Now #chatGPT linked to #MRI successfully reads thoughts. While the study concludes: "subject cooperation is required both to train and to apply the decoder", we know the pace of #technology means this won't be an obstacle for long. Then what? Is saving 1 person with cerebral injury worth enslaving 1 million? #techethics https://www.independent.co.uk/tech/brain-scan-ai-chatgpt-thoughts-b2330628.html
