A Categorical Terminological Imperative

I’ve recently dabbled some in AI discussions, for a writing project, and here’s a thing I’ve found out: lots of people really aren’t very careful about the concepts they use. I’ve two related things to say here. The first is: I think we’ll have a really tough time dealing with how we think about AI because our language seems tailor-made to confuse us. The second is: I’m hereby prescribing to everyone writing on AI the use of the categorical terminological imperative, which is:

Use only those concepts whereby you can at the same time will that they apply everywhere.

Hey, have I lost you already? Sorry. I mean to say the following: if you trawl especially the more optimistic writing on AI, you’ll very frequently encounter really complicated concepts such as “know,” “understand,” “intelligence,” “better,” “feel,” and so on used as though it were self-evidently correct to say, for instance, “ChatGPT-4 knows what you ask it.” On the one hand, I think that’s an artifact of the deficiencies of our language: it has no pithy name for what it is that an LLM is and does. We recur to “know” because it’s a short descriptor of internal processes we are familiar with: correct response to stimulus, where even “correct” often merely designates “plausible” rather than “true.” We say “understand” because we’d like to put a name to the automatism by which input is translated into output. But, not to put too fine a point on it, none of these terms really grasps the algorithmic nature of AI, unless you really want to say that “understanding” is purely pattern recognition and “knowledge” is a good name for millions of data manipulations that translate the input “Who was the 45th President of the United States” into “Donald Trump,” or indeed the question “What color jacket would suit me?” into “brown.” The issue here is less that AI does not know, or does not understand, or indeed that it does not feel: the issue, rather, is that the easy equivalencies these terms draw between what happens in the black box of AI and what happens, for instance, in the far less black box of my own brain are too facile.

So: we need a categorical terminological imperative. You only get to say “AI knows” when you’re actually willing to say that when you know and when AI knows, there’s really no difference. If you say “AI knows,” you must be okay with genuinely understanding the processes of AI knowledge and of your own knowledge as the same; in simple terms, you must be okay with understanding all knowledge as stochastically computational, or, conversely, with misunderstanding AI’s “knowledge” as something that is not. Both of these conceptions are, it seems to me, at least difficult to sustain.

In simple terms: when you say “know” in relation to entities which may not be adequately described as “knowing” anything, explain what you mean when you say “know.” If you mean “know” in a specific way, one not easily covered by our more common ways of understanding the term, then define it for your purposes. I realize that sometimes, when writing for one of those flimsy little popular surveys, there’s not the space to do so, or so you think. But those are exactly the cases in which being careful with your concepts matters most. If you throw the idea that “AI knows stuff” and “understands what you want” at an audience often enough, that audience might no longer notice that these words may not really apply. The categorical terminological imperative, like that other imperative, is most useful in the everyday.