4 Comments
Feb 28, 2023 · Liked by Daniel Zollinger

If you are worried about the Pascal quote ("apparently") I can put your mind at ease: it is genuine, and a pretty good translation of a passage from letter 16 of the "Lettres provinciales".

"[...] mes Lettres n'avaient pas accoutumé de se suivre de si près, ni d'être si étendues. Le peu de temps que j'ai eu a été cause de l'un et de l'autre. Je n'ai fait celle-ci plus longue que parce que je n'ai pas eu le loisir de la faire plus courte."

[My Letters were not accustomed to following one another so closely, nor to being so long. The little time I have had is the cause of both. I have made this one longer only because I have not had the leisure to make it shorter.]

Though to be terribly pedantic, that letter is from 1656; the Provinciales were sent from 1656–57. To prove I'm not a chatbot, full reference: Blaise Pascal, seizième lettre, 4 décembre 1656, in Œuvres complètes, texte établi et annoté par Jacques Chevalier, Paris, Gallimard, coll. « Bibliothèque de la Pléiade », 34, 1962, p. 865.

author

I was, a little, yeah... mostly because when I went looking for the sentiment I immediately found that it is another one often misattributed to Mark Twain. Thank you! 🙏🏼


I understand the concern about human content being decimated by machine content, but comparing this endless wave of AI to the failure of Crypto is idiotic.

The two things don't intersect; at best, generative art models have emerged to kick NFTs out of the boat for good. Real humans have always had the free choice to write "plausible nonsense" anywhere. Bearing that in mind, what is your real concern? I would say it is the quantity and ease of creating this type of content right now, but AI is just a tool, and who gets to choose how to use the tool? We do, the human nerds behind the screen.

We deal with spam and scams in our emails on a daily basis, all spread through programmed actions, but nobody has ever tried to end the programming languages that make this possible, because it doesn't make sense; they are not the real villain of the story.

Targeting AI models is not targeting the real villain.

author

So I disagree with a few of your contentions here, but I want to focus on two things we seem to agree on: that "who gets to choose how to use the tool" is a central question, and that the technology on its own is not the real villain.

My core issue with ChatGPT is not a technological one - in fact, if anything, I like procedural generative tech. I'll see if I can summarise my concerns.

1. Tech is a lever for scale - in the same way that the low cost of spam forced the centralisation of email, the reduced cost of generating industrial levels of nonsense will force centralisation elsewhere.

2. The mistaking of "plausible nonsense" for "information" will cause this tool to be misdeployed in so, so many places, eventually polluting the very source these tools draw from.

3. In fact, the perception of ubiquitous nonsense will serve to further undermine public trust in communication. Think how institutional processes for accepting job applications, tenancy applications, and academic essays are already paranoiac and computer-driven.

4. Only people who sit above the algorithmic line will be (relatively) free from these effects...

5. ... and far fewer people sit above the line than imagine they will...

6. ... specifically, the really rich, the owning class. Not millionaires, but hundred-millionaires up.

7. ... and this class *always* uses machines to break labour power - see Jacquard looms and CNC machines - even if the machine costs more and works more poorly.

This is an economic and societal argument, really, not a technological one. We live in a capitalist hellscape, and LLMs etc. will be used to break, for example, Wikipedia and open-source submissions. It's Clay Shirky's "Here Comes Everybody" in reverse.
