Nic Rowan is managing editor of The Lamp and a fellow in the Robert Novak Journalism Program through The Fund for American Studies.
About a decade ago, the editor David Samuels wrote a brilliant profile of Ben Rhodes for the New York Times, focusing on the foreign policy advisor’s relationship to Barack Obama. Everyone in the White House agreed, Samuels reported, that Rhodes and the president had achieved a “mind meld.” It wasn’t that the former did all the thinking for the latter. Rather, Rhodes had comprehended Obama’s mind to the point that his thoughts were the president’s thoughts. Sometimes, he admitted, it was hard to tell “where I begin and Obama ends.”
Something similar is happening with people who habitually use large language models as a writing aid. The effect is only just becoming noticeable. Much has been said in the last year about the extent to which many Americans have, seemingly without hesitation, integrated artificial intelligence into their lives: College students frequently consult it when writing papers, advertising companies enlist it to produce copy, and even governments have begun experimenting with it in composing laws. At the same time, however, little has been said about how the frequent use of A.I., and, indeed, even frequent exposure to texts composed by L.L.M.s, fundamentally alters the way these same people think, speak, and act. I have come to think of this unspoken phenomenon as the A.I. mind meld.
By this phrase I don’t mean anything futuristic and certainly nothing optimistic. I only wish to point out that the texts we read and those we write—or, as the case may be, only sort of write—form our minds to a much greater extent than many of us care to admit. In the next few years I expect many changes, perhaps only marginal at first, to the way that most people interact with language. The end result of the A.I. mind meld is not illiteracy, as many more hysterical critics fear, but instead a sort of sub-literacy, where words are still written and read, but in a way that strips them of their worth.
There is a long literary tradition of such mind melds that stretches back well before artificial intelligence of any sort. It used to be confined to plagiarism. One fairly inventive example, which I have found helpful in understanding the justifications for A.I. composition, comes from Tobias Wolff. In his quasi-autobiographical novel Old School, Wolff writes of a character who wants so badly to merge his own voice with Hemingway’s that he copies out the author’s short stories on a typewriter and begins speaking in his cadence. He soon finds this a useful exercise and performs it with every author he admires, until, almost by chance, he plagiarizes a full short story. When confronted with this fact, he still attempts to claim some authorship: “I couldn’t reconcile what I knew to be true with what I felt to be true.” Of course, what he felt to be true was, in a limited way, actually true: By copying out his favorite authors, by thinking in their phrases, he had succeeded in merging his mind with an existing body of work. But, like those who write with L.L.M.s, he never moved on to the next step in authorship, which is to decouple oneself from influence, momentarily, and compose one’s own writing.
Not long ago I experienced something similar in my own life. I was working on Another Project with a recent college graduate. My portion of the work was to edit his writing. Everything seemed to be going along swimmingly. He filed at a reasonable time, and I allotted a long weekend to make my changes. But when I sat down at the computer, I was baffled. What he had turned in, while more or less covering our area of study, was somehow completely smooth and contentless in its expression. There was no sense to any of it, and yet it wasn’t exactly nonsense either. I tried to edit, to impose some structure, to undergird the pile-up of S.A.T. adjectives with solid facts—but it was no use. His writing was amoebal. To cut any of it was to cut it all.
I then attempted to read the work aloud. This proved disastrous. What roughly made sense on the page sounded like gibberish when spoken. By this point, I had become suspicious, so I plugged a few paragraphs into an A.I. detector—several detectors, just to be safe—and sure enough, the results came back positive. No human being could have produced this work.
And yet the more I talked to my (soon-to-be former) colleague, the more I suspected that a human being could have produced this work, and, for all I knew, perhaps one had. After all, my colleague’s syntax and word choices in his emails, his texts, and even in his speech were not all that different from those of the L.L.M. He, too, spoke of “key points” and “actionable items,” not to mention “sustainable models.” When I talk to other young people in a professional context—or, as is often the case, overhear their job interviews while I work at the university library near my house—I am often treated to a similar show. If these instances are any guide, the brightest members of my generation want “to dive into their work”; “to boldly navigate” its “landscape”; “to enhance,” “to emphasize,” “to revolutionize” their “industries” with “hard-hitting solutions,” often all at once.
These phrases are not new. This is the tired patois in which corporate handbooks were written fifteen years ago—the exact material from which L.L.M.s aggregate many of the words they use to produce text. It is strange to hear so many people my age (and younger) speaking in this tongue as if it were natural. Many of them have never worked in an office, and few have had formal exposure to the Human Resources apparatus that made this language—and the persistently peppy tone in which it is expressed—ubiquitous in American professional life. But all of them are regularly exposed to A.I.-generated text—in the classroom, on social media, even, I am told, in religious services. And, if the anecdotal evidence is to be believed, most of them use A.I. argot to express themselves in these contexts as well. Two years ago, the technology was novel; today, its use is accepted, even expected.
It would be unfair to claim that the A.I. mind meld is confined narrowly to the twenty-five-and-under set. Everyone, myself included, is affected by it in some way. This is just how these things work. In his profile of Rhodes, Samuels notes that once the advisor had achieved his mind meld with Obama, he made sure that everyone else following the White House merged their minds with his, too. It was not a difficult task: All he had to do was supply the words, and the press corps would repeat them as if they were original. Pretty soon everyone was repeating the same phrases in the same way. “They literally know nothing,” Rhodes bragged, and with his help, most knew even less.
I foresee something similar occurring when the A.I. mind meld is complete. More pessimistic observers will say this has already occurred. The thing works like a boa constrictor: as more people use L.L.M.s and come to rely on them, the range of expression narrows, especially when the A.I. of the future is itself trained on A.I.-written texts. The future of language will be relentless, groundbreaking, and elevated.