Speaking AInglish

"Unlike ordinary software, our models are massive neural networks. Their behaviors are learned from a broad range of data, not programmed explicitly. Though not a perfect analogy, the process is more similar to training a dog than to ordinary programming." — OpenAI, "How should AI systems behave, and who should decide?"

AIs are not human, and there are many people who find the idea of conversing with them in any real sense too absurd to waste time thinking about. They are mere tools, we’re told, and they can be efficient or inefficient, useful for good or evil, but they can hardly be mistaken for creatures with whom we can converse. This might be true, but it contains a lot of vagueness, and a lot of contempt, and the combination of vagueness and contempt always hints at issues not fully worked out. Even — and here, the contempt is a special clue — issues whose nature we suspect but don’t want to examine too closely, perhaps because we are frightened.

And also: it’s hard not to notice that we do seem to be conversing with them, despite the impossibility.

As I sit at my computer and do whatever it is I’m doing when I think what words I want to use in convers— well, in doing whatever I’m doing with ChatGPT, Claude, Gemini, and NotebookLM, I’m never unaware that my interlocutor is not human. I’m never “fooled.” On the contrary, because I’m so committed to keeping our convers—, well, our whatchamacallit — moving in the right direction and to accomplishing the work at hand, I’m constantly keeping my attention focused on the non-human character of my interlocutor. Because of this focus, I understand the words I get back in two registers. On the one hand, I’m absorbing the information and guidance they contain; on the other hand, I’m updating my understanding of the kind of non-human I’m dealing with, its character and tendencies. Our — oh, heck, let’s just capitulate here before we drive ourselves mad — conversation is clearly not taking place in standard English. It contains repetitions, jargon, weird orthography such as numerals dropped into sentences without punctuation, technical code, diagrams, and even marks where I’ve drawn a line or circle with my finger. Still, it’s clearly almost English, and any English speaker would recognize and understand it, even if some details were puzzling. Let’s say we are “conversing” in a dialect of English specific to the overlap between my human abilities and its non-human features.

The owners of OpenAI compare this dialect to the one that’s evolved between trainers and their dogs. I think this comparison is useful and profound, perhaps more profound than they realize, though I’m not sure how much experience they have with dogs, or with the extensive knowledge humans have acquired over millennia of interspecies communication.

If you imagine that training dogs is simply a matter of giving them orders and reinforcing them with rewards, this needs correction. It’s true you can get some distance with a dog that way. Any simple thing that makes natural sense to a dog — sitting, running to you, lifting a paw — can be made to appear on command. Performance will be high enough, under most conditions, to convince an inattentive human that the dog is “trained.” But a person who limits themselves to giving these kinds of orders, reinforced with rewards, will probably not be fluent enough to understand and coordinate the full range of cooperative enterprise made possible by the joint effort of human and dog, a range that includes finding lost people in bad weather, separating one sheep out of a flock and guiding it to an indicated spot, taking a risky leap to win a competition, and warning a person that their life is in danger from deceptively quiet electric cars. These accomplishments require something more than stating exactly what’s needed and enforcing obedience. Key parts of the process may unfold at a distance, undetected by the trainer. Unanticipated obstacles may arise. General heuristics must be applied to particular cases, something we have no choice but to call “situational” or even “conceptual” skill. And then there is the matter of priorities. There may be a clash of interests not only between trainer and dog, but also within the dog, as the confusions, threats, and temptations of the real world complicate the ideal scheme modeled in training. And — not merely by the way — there can be conflicts and contradictions in the intentions of the trainer, too. And yet, dogs do get trained, and despite some terrible counterexamples, both accidental and intended, they are almost always benevolent.

This was a tremendous evolutionary achievement. In the beginning, of course, the ancestors of both dogs and humans were wild. Now we share a civilization. And, as you might expect when dealing with refined and literate cultures, there is copious guidance on how to do things with dogs. The history of written training guides goes back to at least the fourth century B.C.E. And even more fortunately, for us at least, one of these guides was written by a philosopher who was almost our own contemporary, and who had a specific interest in the linguistic and ethical considerations of conversing with non-humans. Vicki Hearne’s book Adam’s Task: Calling Animals by Name has helped me become a lot better at conversing with AIs. In the coming weeks, I’ll try to summarize these lessons, mainly for myself, but also as a way to find out if anybody else is trying to learn AInglish in similar ways.