@dredmorbius Is this the alt right reddit thing?
The HN comments are sad. They have no clue how this thing works, what "free speech" is for (as opposed to gratuitous verbalisation of hate), and these guys make Facebooks and Twitters and Youtubes.
More or less.
@dredmorbius Many comments I see there miss the point w.r.t. free speech vs. weaponised speech. E.g. the top comment begins "Voat was founded as a neutral free-speech platform.", when these platforms are never neutral, nor are they ever concerned with speech or freedom. They are purpose-built tools for the dissemination of misinformation, hate, and conspiracy, in service of a terrorist-imperialist agenda. The top answer to that comment says if Reddit et al. didn't expel the shittiest of the shittiest (...)
@dredmorbius they wouldn't end up in places like Voat and get radicalised, whereas Voat and Gab and whatever are places where already radicalised people expressly go to enjoy the many sorts of hate found there.
The fourth comment from the top asks "can we discuss x" and "can we discuss y", where x and y range from BS to actual controversy; yes, we can discuss anything, so long as we're nuanced enough.
Not all the comments there are like this, but a lot of them are unaware of these social aspects (...)
cw: WMD / genocide
@cadadr There is of course some work in the field, though it's not given the emphasis it deserves. Several commentators have noted that CompSci, unlike physics, has not yet had its Hiroshima moment (Ex-Googler and G+ architect Yonatan Zunger, trained in physics, among them).
That notion itself may be flawed: consequences of computer-based moral failure rarely arrive as blinding insights of unignorable magnitude, airdropped with precision, creating tens of thousands both of martyrs and of witnesses.
The same conflict which birthed Little Boy also snuffed the souls of 6 million Jews (and others) with less haste, but punch-card precision, tabulated and enumerated under contract by IBM, operating in and for Nazi Germany, recording data with serial numbers, some of which still remain tattooed on the arms of survivors.
And yet computer science is almost wholly unaware of its Holocaust past.
There's William J. Rapaport's Philosophy of Computer Science, still in development, which includes chapters on ethics generally and on ethics in AI specifically:
There is the startlingly prescient writing by Internet (or proto-Internet) pioneers such as Paul Baran (co-inventor of packet-based networks), writing at RAND in the 1960s on issues of ethics, morality, and social responsibility.
(Those writings are now published free of charge online at my request.)
Another problem is that computer science, or rather, computer practice, is not, and possibly has never been, a specific profession with dedicated training, certification, and a career track.
Computers are more like phones, or cars, or jackets: nearly everybody has one, many people own or use several, they're ubiquitous and part of virtually all work, entertainment, social engagement, and government. Phones and cars are computers these days, jackets may soon be.
If ethical training is required it needs to be universal. Or simply cultural, akin to religion in pervasiveness if not necessarily methods or structure.
Even within the tech sector, non-CompSci graduates in advanced roles (Zunger, myself) are the norm, if not the majority.
Relatedly, that William J. Rapaport book Philosophy of Computer Science mentioned earlier in the thread looks very interesting.
Reading through the table of contents is weirdly reinvigorating my interest in thinking about my philosophical interests as being a kind of computer science, even as I am actively distancing myself from the day-to-day work of software engineering.
@vortex_egg @michel_slm @dredmorbius @cadadr I think when I was at Uni, the CS department was transitioning to join the College of Engineering. It was controversial at the time as the accredited traditional engineering roles didn't see the CS degree as requiring the same rigor.
One thing done to appease this was that CS adopted a lot of requirements that weren't needed for its own accreditation but were required for the other engineering programs.
Suddenly, CS students in Eng Ethics!
@vortex_egg @michel_slm @dredmorbius @cadadr When I took my engineering ethics courses, it was very obvious to see there was one group of students who treated it as a joke, something funny, or generally inapplicable. That group was exclusively Comp Sci students. The rest of the class was learning about both traditional engineering and computer-caused disasters with and without loss of human life. All of which was uninteresting to that group.
I've long wondered about various computing folk using engineering titles without an engineering education, as you describe, but also without regulatory accountability like licensing or meaningful certification. Maybe some of those US AGs in on the antitrust suits should roll up on some of the firms with these sorts of positions and ask for credentials ....
for all the grief directed at the idiosyncrasies and inadequacies of the free software movement, it has made and held open space for talking about ethics in computing, where earlier fulminations faded, failed, or failed to build any legacy.
the ethical source efforts have made a run at the problems, recently, hearkening explicitly back to some of those earlier but fruitless efforts
@dredmorbius Thanks a lot! You've given me some nice reads too. FWIW I'm fully an autodidact too.
I like that last analogy of yours, but then there's the fact that e.g. anyone can use a level, but not everyone is allowed to build a wall with it, or supervise the building of a wall.
Tho ofc whether or not CS students or new entrants to computer practice get ethics training is more of a tangentially useful nice-to-have than a silver bullet, because under wild global capitalism (...)
@dredmorbius it's naive optimism to expect that most people will quit or refuse to implement when faced with something like Facebook or Reddit (esp. when major cesspools like r/WPD or r/T_D remained far longer than they ever should have). After all, capitalism likes two kinds of people best: detached entrepreneurs unbothered by the human suffering they may create, and 9-to-5 wage slaves too tired and vulnerable to make the connection between what they do and its greater effects. (...)
@dredmorbius And these things like Voat and Gab are created outside even that sort of capitalism, by networks of fascism that attract the expressly bad people and survive on the hate economy where the creators of these things are under the patronage of little hate lords.
This is inevitably one of those global policy problems of the third millennium AD where, as nations outlaw one thing, the next is birthed in another (tho the US has been welcoming enough to these terrorists for a long (...)
All we can hope for now is that the business of "bad apples" fails, despite the help of those who have a vested interest in society going berserk.
That Hiroshima moment vs. IBM mention has me thinking, esp after having watched https://www.youtube.com/watch?v=RCRTgtpC-Go
Evil brews slowly before our indifference. We never actually _had_ the Hiroshima moment. Genocide's but a fancy name we found post factum for an eternal disease of the human soul.
Maybe we need to _heal_ as a species. IDK 😞
@cadadr Peter G. Neumann's Risks Digest (SRI/ACM) often revolves around ethical issues, though that's not a core focus. I thought PGN had written on the topic (and he very likely has), but I'm not finding references readily.
@cadadr And a late add: Ethics of AI / University of Helsinki
The Ethics of AI is a free online course created by the University of Helsinki. The course is for anyone who is interested in the ethical aspects of AI – we want to encourage people to learn what AI ethics means, what can and can’t be done to develop AI in an ethically sustainable way, and how to start thinking about AI from an ethical point of view.