cw: WMD/ genocide
@firstname.lastname@example.org There is of course some work in the field, though it's not given the emphasis it deserves. Several commentators (among them ex-Googler and G+ architect Yonatan Zunger, who trained in physics) have noted that CompSci, unlike physics, has not yet had its Hiroshima moment.
That notion itself may be flawed: the consequences of computer-based moral failure rarely arrive as a single blinding event of unignorable magnitude, airdropped with precision, creating tens of thousands of martyrs and witnesses at once.
The same conflict which birthed Little Boy also snuffed out the souls of 6 million Jews (and others) with less haste but punch-card precision, tabulated and enumerated under contract by IBM, operating in and for Nazi Germany, recording data with serial numbers, some of which remain tattooed on the arms of survivors to this day.
And yet computer science is almost wholly unaware of its Holocaust past.
One initiative of which I'm aware is led by Moshe Vardi, at Rice University in Texas.
There is also the startlingly prescient writing of Internet (or proto-Internet) pioneers such as Paul Baran (co-inventor of packet-based networking), who wrote at RAND in the 1960s on questions of ethics, morality, and social responsibility.
(Those writings are now published free of charge online at my request.)
Another problem is that computer science, or rather computer practice, is not, and possibly never has been, a specific profession with dedicated training, certification, and a career track.
Computers are more like phones, or cars, or jackets: nearly everybody has one, many people own or use several, and they're ubiquitous in virtually all work, entertainment, social engagement, and government. Phones and cars are computers these days; jackets may soon be.
If ethical training is required, it needs to be universal. Or simply cultural: akin to religion in pervasiveness, if not necessarily in methods or structure.
Even within the tech sector, non-CompSci graduates in advanced roles (Zunger, myself) are the norm, if not the majority.
@email@example.com Peter G. Neumann's Risks Digest (SRI/ACM) often revolves around ethical issues, though that's not its core focus. I thought PGN had written on the topic directly (and he very likely has), but I'm not finding references readily.
@firstname.lastname@example.org And a late add: Ethics of AI / University of Helsinki
The Ethics of AI is a free online course created by the University of Helsinki. The course is for anyone who is interested in the ethical aspects of AI – we want to encourage people to learn what AI ethics means, what can and can’t be done to develop AI in an ethically sustainable way, and how to start thinking about AI from an ethical point of view.
On the internet, everyone knows you're a cat — and that's totally okay.