@dredmorbius Is this the alt-right Reddit thing?

The HN comments are sad. They have no clue how this thing works, or what "free speech" is for (as opposed to gratuitous verbalisation of hate), and yet these are the people who build the Facebooks and Twitters and YouTubes.

@dredmorbius Many comments I see there miss the point w.r.t. free speech vs. weaponised speech. E.g. the top comment begins "Voat was founded as a neutral free-speech platform.", when in fact these platforms are never neutral, nor are they ever concerned with speech or freedom. They are purpose-built tools for the dissemination of misinformation, hate, and conspiracy, for a terrorist-imperialist agenda. The top answer to that comment says that if Reddit et al. didn't expel the shittiest of the shittiest (...)

@dredmorbius they wouldn't end up in places like Voat and get radicalised, whereas Voat and Gab and whatever are places where already radicalised people expressly go to enjoy the many sorts of hate found there.

The fourth comment from the top asks "can we discuss x" and "can we discuss y", where x and y range from BS to actual controversy; and yes, we can discuss anything, so long as we're nuanced enough.

It's not all the comments in there, but a lot of them are unaware of these social aspects (...)

@dredmorbius of the internet and, by extension, webdev. It's kinda telling of how schools fail at teaching devs ethics and the social aspects of software.

I can kinda imagine how detached from actual internet phenomena any ethics courses in CS or CE programs might be. IMO these people should also study recent social scholarship regarding the internet as part of their curriculum. When I've asked around in the past, they either just didn't do any ethics at all or it was some formality.

cw: WMD / genocide

@cadadr There is of course some work in the field, though it's not given the emphasis it deserves. Several commentators have noted that CompSci, unlike physics, has not yet had its Hiroshima moment (Ex-Googler and G+ architect Yonatan Zunger, trained in physics, among them).

That notion itself may be flawed: the consequences of computer-based moral failure rarely arrive as blinding insights of unignorable magnitude, airdropped with precision, creating tens of thousands of martyrs and witnesses at once.

The same conflict which birthed Little Boy also snuffed the souls of 6 million Jews (and others) with less haste but punch-card precision, tabulated and enumerated by IBM, under contract, operating in and for Nazi Germany, recording data with serial numbers, some of which remain tattooed on the arms of survivors.

And yet computer science is almost wholly unaware of its Holocaust past.

1/

There's William J. Rapaport's Philosophy of Computer Science, still in development, which includes chapters on ethics, and on ethics in AI specifically:

cse.buffalo.edu/~rapaport/510.

@cadadr

3/

There is the startlingly prescient writing of Internet (or proto-Internet) pioneers such as Paul Baran (co-inventor of packet-based networks), who wrote at RAND in the 1960s on issues of ethics, morality, and social responsibility.

rand.org/pubs/authors/b/baran_

(Those writings are now published free of charge online at my request.)

@cadadr

4/

Another problem is that computer science, or rather, computer practice, is not, and possibly has never been, a specific profession with dedicated training, certification, and a career track.

Computers are more like phones, or cars, or jackets: nearly everybody has one, many people own or use several, and they're ubiquitous and part of virtually all work, entertainment, social engagement, and government. Phones and cars are computers these days; jackets may soon be.

If ethical training is required, it needs to be universal. Or simply cultural, akin to religion in pervasiveness, if not necessarily in methods or structure.

Even within the tech sector, non-CompSci graduates in advanced roles (Zunger, myself) are the norm, if not the majority.

@cadadr

6/

@cadadr Peter G. Neumann's Risks Digest (SRI/ACM) often revolves around ethical issues, though that's not a core focus. I thought PGN had written on the topic (and he very likely has), but I'm not finding references readily.

en.wikipedia.org/wiki/Peter_G.

en.wikipedia.org/wiki/RISKS_Di

catless.ncl.ac.uk/Risks/

7/


@cadadr And a late add: Ethics of AI / University of Helsinki

The Ethics of AI is a free online course created by the University of Helsinki. The course is for anyone who is interested in the ethical aspects of AI – we want to encourage people to learn what AI ethics means, what can and can’t be done to develop AI in an ethically sustainable way, and how to start thinking about AI from an ethical point of view.

ethics-of-ai.mooc.fi

8/
