Briefly, my concern with ChatGPT
My last post, a sketch about the weariness of tech debunking, got a few more eyes than intended, and brought folks wanting to engage on "Why is ChatGPT bad?". This is my why.
Meta: I’ve suddenly got a bunch of new subscribers and I’m still mentally adjusting to this, so… welcome if you are new here, be warned I’m expressly not a single-concern person so I won’t be offended when you realise that I talk about a bunch of things and hastily unsubscribe. To my regulars: I promise the next one won’t be ChatGPT-related!
It’s tough to quickly summarise my core problems with ChatGPT[1]. In many ways, it is qualitatively more of the same weaponised nonsense we see in SEO link farms, social media adversarial bots, and email spam. It’s an incremental development, not a breakthrough. And yet.
My issue with ChatGPT is not a strictly technological one. It is a concern with how the economics of the new tools will combine with existing forces to break many of the systems I care about.
I tried to summarise it in a reply to a comment on my last post, but I figure it’s worth putting out as a quick general line of thinking.
1. Tech is a lever for scaling effort - we already have farms of people in impoverished but educated countries writing copy, bolstering website chatbots, managing fake profiles, calling my grandparents to try to scam them. I expect this labour to change to include funneling ChatGPT into similar work, at 100x scale.
2. The mistaking[2] of "plausible nonsense" for "information" will cause this tool to be mis-deployed by institutions in so, so many places, eventually polluting the very source these tools draw from.
3. In fact, the perception alone of ubiquitous nonsense will serve to further undermine public trust in communication. Think how existing institutional processes for accepting job applications, tenancy applications, academic essays are already paranoiac and computer-driven.
4. Only people who sit above the algorithmic line will be (relatively) free from these effects
5. ... and far fewer people will sit above the line than imagine they will.
6. ... specifically, the really rich, the owning class. Not millionaires, but hundred-millionaires up. Do you own your own utility company or are you mates with someone who does? If not, you probably aren’t exempt.
7. ... the owning class always chooses to use machines to break labour power - see Jacquard looms and CNC machines, even if the machine costs more and works more poorly.
I speculate the following:
Projects with processes that accept public submissions will break[3] in short order: journal submissions, Wikipedia, open source projects, government community engagement.
Non-public submissions processes, like job applications, tenancy applications, academic work, will be made to suck even more than they already do - layers of “AI detection” will be added with brutal false positive rates.
Institutions will close up phone lines, force people into chats almost totally unbacked by human operators, and tune these systems not for enablement but for KPIs that suit them. For instance, why would an insurer not want to dial in a percentage target of claim attempts that it allows through?
We’ll see the counter-rise of DoNotPay-alikes acting as clients for these broken systems. Nonsense vs Nonsense. ELIZA vs Zippy the Pinhead.
Very strange and unexpected changes to our language will occur as we try to navigate our way through these hostile systems and retreat to magical thinking to try to maximise our chances of success.
Oh boy, a lot of murderously bad software is going to get written and pushed out as GitHub Copilot and similar tools are forced upon the remaining developers to boost their “productivity”.
And then... the new LLMs will get trained on all the crap that the old LLMs filled the internet with. The Bullshit Singularity approaches.
I appreciate that this is a dense, running dash through the shape of my concerns. For a better article, might I recommend you read Dan McQuillan’s article, We come to bury ChatGPT not to praise it.
I’d love to know whether you agree with my reasoning, or think I’ve missed an obvious trick.
Thanks again for the read. I have many, many Non-ChatGPT drafts in the pipeline, but I had to strike while the iron is hot.
[1] By which I mean, broadly, all of the Large Language Models (LLMs).
[2] This “mistake” is deliberately being cultivated by people who know better.
[3] Whether the “break” will be obvious, as in the process being turned off, or subtle, as in the process being adapted to work in ways that exclude or hurt groups of people caught up in the defence.
I think people are underestimating the sheer volume of quasi-meaningful nonsense that this machine is going to put into the world. You pointed out in your article the issues already being seen by Clarkesworld and similar, but that’s really the tip of the iceberg. Chasing down these magazine submissions for a pittance of pay is the first and most obvious incentive, but the list of grifts is just going to keep growing. I can easily foresee a future that renders the internet all but useless as chatbots constantly query and answer their own bilge, flooding SEO-optimised sites.
Your point about the use of machines to break labour power is the thing that really, really worries me, though. Companies have almost no incentive to keep paying more money for a human, and labour is already strained by market deregulation and fallout from the pandemic.
It's very worrying, and I'm currently writing a series that's arguing my own concerns which are almost directly aligned with yours.