Now that my initial rage has settled a bit, a slightly more composed post on #chatGPT. Luckily by now a bunch of folks have said it better than I could.
It's colonial: https://hachyderm.io/@avdi/109467735532593848
It's going to be a disaster for search engines: https://dair-community.social/@willbeason/109455955446761302
It will lead to dangerous misinformation that only an expert can distinguish: https://mastodon.social/@tommorris/109460645976102033
It's inherently conservative, since it can only work off of, and reproduce, information from the past: https://mastodon.social/@martinvanaken/109461755435315412
The Turing test is no longer a useful metric: https://mastodon.social/@intelwire/109457359779504693
Finally, some of my own thoughts. I firmly believe that the value that #chatGPT and similar models deliver to society is negative. Being able to generate, at the press of a button, plausible- and smart-sounding content that might be wildly but subtly incorrect, with no citations, no sources, and no indication of provenance, is a recipe for poisoning the information environment.
I see programmer types concluding that this will "replace programming" and "make us obsolete". I find it shocking that folks have such a low view of their own profession. The worst, most dangerous code isn't code full of bugs. It's code that "works", but that is incorrect in ways that are pernicious but not obvious.
The job of a programmer also isn't to reproduce some algorithm; it's to understand and model the world, to understand and anticipate user needs, and to collaborate on and shepherd a code base over a prolonged period of time. No language model will do these things.
To quote someone who's very dear to me: "PEOPLE AREN'T USING THEIR BRAINS". Please, use your brains, and be grateful for all the things they can do that a pattern-matching model can't.
@plexus This reminds me of that story for which the punchline was "...but it compiles!"
@plexus I am undecided about whether this is a positive or negative development. In any case, it is impressive despite all its flaws. For me, as a university professor, it is tricky: not only because students can cheat with it, but also because some students may be put off by the fact that the machine can answer questions they can't, in particular programming questions (but much more besides, like writing proofs in e.g. Agda, or even in natural language).
@plexus I'm oscillating a bit between "oh gosh, in a few years they'll have taken the lead" and "yeah, the dumbness will show, eventually", but at any rate, I found this exchange rather perplexing. I told it about a fragment of an obscure programming language I had made up on the fly, and it appeared to make good progress towards learning it. No idea how far it would have gotten if the conversation hadn't broken down. https://blog.hespere.de/a-first-course-in-oerfzelang
@plexus Maybe the suspicion that any plausible text could be synthetic will force many to use their brains. Maybe this is what Bernard Shaw called homeopathic education.