Impossible Things

On a project to clean up human behavior.


Stanley Fish is the presidential scholar in residence at New College, Florida. He is author most recently of Law at the Movies: Turning Legal Doctrine into Art (Oxford, 2024).


Way back in 1979 I wrote an essay against blind submission, the policy of removing from articles submitted to learned journals the identifying marks of the author’s name and his or her academic affiliation. The idea was to facilitate a disinterested and objective assessment of the piece made independently of the distracting factors of disciplinary prestige and previous publications. Why not let the submission speak for itself without the unfair and irrelevant advantage it might have if it were read through the lens of the author’s body of work? In that way, it was said, the editors would be judging the article’s intrinsic merit—the merit it has as a stand-alone object without a pedigree—rather than responding to the halo of borrowed merit conferred on it by its predecessors.

The simplest and most devastating response to the “intrinsic merit” argument is that looking at a piece of academic work apart from the professional networks within which it emerged is not a possible thing to do. Should you somehow succeed in setting aside or discounting all the circumstances within which the text before you was produced, what would you see? The answer is nothing, or, to be more precise, nothing whose meaning could be pinned down. You would see a collection of words embedded in a syntax, but independent of the contexts you have excluded—biography, disciplinary history, professional influence—there would be no way to assign a specific significance to those words. You would be in the position of someone who found a message in a bottle or a manuscript in your files with no title page. How would you read such an orphaned text? You would read it as the work of someone with an imagined or hypothetical profile, someone interested in this or that question with a history, a member of this or that school, a participant in this or that controversy. In short you would be hazarding a series of guesses in an effort to put back all the “extraneous” information provided by the missing title page. Only then would what you are looking at have a shape to which you could put interpretive questions like “What is it arguing?” or “Where does it fit in?” Absent that speculative contextualization, you wouldn’t have an “it.” Reading the words themselves without any speculative assignment of authorship won’t get you anywhere, or—and it is the same thing—it will get you everywhere, for the meanings that might be attached to words and sentences divorced from an intentional anchor constitute an infinite set; you can always ask what this would be saying if Kant wrote it, or Wordsworth wrote it, or Elvis wrote it. 
There is no natural end to the exercise; you could go on forever, trying out this or that hypothetical author, and never get any closer to determining the text’s meaning. Blind submission is not only a bad idea; it is an unrealizable idea. It can’t be done; you can’t read a text as if it came from nowhere and no one. And if you think you’re doing it, what you’re really doing is substituting for the actual, excluded interpretive contexts the interpretive contexts you have been forced to fabricate in the absence of the real ones. That can hardly be called objectivity. Indeed, if you want to arrive at an objective account of a text, the last thing you should do is remove from the field of interpretation everything that might guide it. I should add that there is no such thing as intrinsic merit; there is only the merit that might accrue to a writing when it is seen as a specific, particular disciplinary effort.

I am not saying that implementing blind submission will have no effect, only that it will not have the effect claimed for it, the effect of making the process purer and more objective. It might have the effect of improving the chances of scholars from out-of-the-way schools; they would no longer be in open competition with well-established names. And it might have the effect of increasing the percentage of publications authored by minorities; implicit bias triggered by non-Anglo-Saxon names would be negated. But these would be political changes, not changes, necessarily, in the overall quality of what got published. Do you want to democratize scholarship by shaking up the mix (a possible objective, but not an academic one)? Then blind submission might well be your cup of tea. Or do you think that scholars with track records will generally have more to offer than newcomers to the game no matter how smart they may be? Then blind submission will seem to you to be a social-justice experiment of which we should be wary because it elevates diversity over insight and illumination. Either way, there will be advantaged and disadvantaged constituencies, but in neither case will you have an advancement in the direction of a general fairness or an across-the-board regime of equal respect. That’s not going to happen—if it did the result would be universal acceptance and the elimination of judgement. (It’s like giving a prize to everyone for just showing up.)

Supporters of blind submission often point to orchestra auditions and wine tasting as validations of the policy. If those who audition perform behind a screen, their race, ethnicity, age, and gender will not influence the assessment of their performance; and if the vintage the tasters taste is without a label, assessing its quality will not be influenced by knowledge of the vintner or of a particular vineyard. But this is to compare apples with oranges. Evaluating journal submissions is an interpretive act that involves (among other things) an identification of the contested issue, knowledge of the disciplinary conversation centering on that issue, an awareness of the key participants in that conversation, and a sense of the history of that conversation and its current state. Evaluating a musical performance requires only an ear (and of course some ears are more acute than others) and evaluating a wine requires only a palate (and of course some are more discerning than others). Determining what something sounds like and determining what something tastes like are not the same as determining what a text means and whether its meaning is one we approve. One is a sensory judgement and the other is a cognitive judgement. Now, it might be said that hearing and tasting are not themselves experiences independent of social/cultural/political circumstances. An observant Jew who, after eating what he thought was hamburger, is told it is ham, or who, after hearing a piece of music he enjoyed, is told it was by Wagner, might have a bad taste in his mouth and a discordant sound in his ears. Intrinsic or pure taste and intrinsic or pure hearing might be as much chimeras as the intrinsic merit of journal submissions.

Blind submission is a small instance of a larger, perennial project, the project of abstracting away from human perspectives in order to get a clear view of human productions, especially verbal productions. In what follows, I want to line up some other versions of that project in a list one might find in a Borges text: legal realism, textualism, corpus linguistics, critical legal studies, anti-professionalism, interdisciplinarity, consciousness raising, transparency, artificial intelligence. I will be arguing that each of these, in somewhat different but finally similar ways, offers the same false hope that data untethered from established contextual moorings can serve as the basis for a grounded normative regime.

Legal Realism

In the case of legal realism, the relevant data are the atheoretical, simply observational descriptions of lawyers and judges at work as opposed to the formal abstract vocabulary those same lawyers and judges often deploy. The thesis is that the formal vocabulary—talismanic words such as justice, fairness, and equity and purported entities such as due process, corporations, property rights—has little or nothing to do with how decisions are actually made. The items in that vocabulary are the content of what Karl Llewellyn called “paper rules,” rules that supposedly govern legal process, but are just the empty honorifics (Felix Cohen derides them as “transcendental nonsense” in the title of a famous article) legal actors like to invoke when they are asked for an account of what they are doing. What they are really doing is governed by what Llewellyn terms “real rules,” the rules—and Llewellyn is quick to say that they are not really rules at all but, and here he follows Holmes, “statements of likelihood”—one can extract from closely observing “the actions of the courts.” If it is to be accurate and helpful, the act of observation and the description that follows from it must be performed with as little recourse to abstract categories as can be managed. “To classify,” says Llewellyn, “is to disturb,” and what you disturb is the “data under observation,” which, ideally, should be observed “in as raw a condition as possible.” That is, if you want to understand the springs of legal action, describe in the plainest of language and without the distorting imposition of categorical concepts what legal actors do. The raw data of legal performance should be used as a control on the categories rather than the other way around. 
We should discover, Llewellyn declares, whether the “traditional categories really cover the most relevant of the raw data,” and if they do not we should “undertake such modifications in the categories as may be necessary.” Data first and foremost, categories only when you can’t avoid them.

The telltale word in Llewellyn’s directive is “relevant.” By what measure is relevance to be assessed? How do we distinguish between the relevant data and the irrelevant data? The answer—and it is the only one available—is by a measure derived from one of the very categories whose deployment he urges us to avoid: relevant in relation to equity, or relevant in relation to due process, or relevant in relation to unjust enrichment, and so on. Insofar as the data are to be useful at all—are to be more than an aggregation—they must be seen from the angle of some formal legal concern. If no such concern presides over their collection and assessment, the data will just sit there, inert, and nothing can legitimately (fudging and interpretive leaps of faith can always be performed) be done with them. Here a distinction John Searle offers in his classic book Speech Acts between brute and institutional facts might be helpful. Imagine, Searle says to us, a description of the actions of football players that takes no account of the point of the enterprise the players are engaged in. Two men take four steps backwards. Another runs horizontally across the field. Others face one another in a crouch and then try to knock one another down. The description could go on forever and become ever more detailed, piling brute fact upon brute fact, but at no point would the accumulated detail amount to anything, amount to a motivated (as opposed to arbitrary) assignment of significance or meaning. If, however, you introduce or recall what any action performed in a football game is aiming at—to advance the ball toward the goal line or prevent such advancement—everything every player does immediately acquires a significance and a meaning and is open to assessment. Was that the right thing to do? Did it work? Brute facts have given way to institutional facts. 
And while it is always possible to detach institutional facts from the practice that gives them shape and intelligibility and thus return them to the status of being brute, you can’t go in the other direction. You can’t go from brute facts massively assembled to a specification of their significance without adding or presupposing the conceptual category or categories the brute facts neither contain nor imply. Mere data gathering—if such a thing is possible, and in a moment I will argue it is not—is interpretively impotent.

My point is supported by an analysis Felix Cohen performs in his essay “Transcendental Nonsense and the Functional Approach.” After he excoriates the abstract and (he believes) empty vocabulary of traditional jurisprudence, Cohen turns to explaining and extolling the “functional approach,” which will, he claims, revolutionize in a good way any area of study. One of his examples is religion, which, he says, should be studied by paying attention not to doctrinal tenets or “theological propositions” (the equivalent in religious practice of mysterious notions like due process and laches) but to the way people who have self-identified as religious act. You ask not what religion is, but how it works—how, as he puts it, “does it serve to mold men’s lives, to deter from certain avenues of conduct and expression, to sanction accepted patterns of behavior?” In effect you conduct what we now know as a Pew research survey. Where do religious people live? What foods do they eat? What movies do they watch? What schools do their children attend? What sports do they play? To what political parties do they belong? You could ask such questions from now to the Second Coming and the answers you receive will certainly tell you something, but they won’t tell you anything about religion as a form of commitment to which people adhere because they believe, for example, that God appeared to Moses in a burning bush or enabled the same Moses to part the Red Sea or sent His only begotten Son to live on earth and be crucified so that He could take upon Himself the burden of all our sins and open up the way to an eternal life beyond mortal death. The fact that persons whose beings are informed by such beliefs congregate in certain parts of the country or gravitate toward certain professions is of sociological interest (another name for legal realism is sociological jurisprudence) but is of no religious interest. 
While you can reason from beliefs held by strong religionists to some of the behaviors believers engage in, you can’t go in the other direction, from the survey of raw behaviors (they eat here, they live there, they buy this) to an understanding of what motivates them, to an understanding of what religion is or how it structures the very perceptions of its adherents. Cohen looks forward to the day when the anecdotal knowledge of judicial behavior will be succeeded by detailed indexes presented in “scientific fashion.” But no matter how comprehensive the index (we would now call it a database), there will never be a moment when its accumulated weight and density generates an interpretation of its items.

It would be a mistake to oppose the bare facts yielded by Pew-type surveys to the facts that emerge from the angled perspectives of formal jurisprudence and traditional religion. Sociology—the study of observed behaviors—is itself a perspective, and the facts it delivers are as angled and theory-laden as any others. The “raw data” celebrated by Llewellyn, Cohen, and other realists do not exist, and therefore any claim that one is resting on them is spurious. The point was made early on by Roscoe Pound, who observes that the “new realist . . . seeks the pure fact of fact”; but, says Pound, he will never encounter that chimera because “significant facts must be selected” and “What is significant will be determined by some picture or ideal . . . of the subject of which it treats.”

In short, you cannot point to something as something—you can’t even perform the minimal act of looking—without already having assumed, and thus having been operating within, the norms and formal vocabulary of the practice (law, literary criticism, sociology, whatever) of which it is an instance. It follows then that “nothing is more unreal,” as Pound puts it, “than to conceive of the administration of justice . . . as a mere aggregate of single self-sufficient determinations.” Almost seventy-five years later Martin Stone reaches the same conclusion when he rejects the conception of a practice as “a disordered heap of instances.” Instances must be instances of something, and therefore there cannot be any such thing as “simple observation” of “independently given data points,” a “theoretically innocent . . . description of what lawyers do.” What lawyers do is what lawyers—not electricians or preachers or anthropologists or football players—do, and therefore any action of theirs that is described will owe its shape and significance to the general activity within which it is intelligible. To make the now familiar point again: getting rid of all traces of that general activity will leave you with nothing but a collection of disaggregated and empty items.

Textualism

Both blind submission and legal realism fail because the object each urges us to attend to—intrinsic merit in one case and the raw data of lawyerly performance in the other—does not exist, and the object does not exist because it can only be spied by abstracting away from the full context of its production and that is not a possible thing to do. Were you to succeed in this impossible endeavor—peeling off the layers of the onion, as it were—you would not have clarity and objectivity; you would have nothing. The same is true of textualism, a method of legal interpretation which counsels us against recourse either to authorial intention or to legislative history, and insists that we focus on the words themselves. Intention, say textualists like Antonin Scalia, is an interior phenomenon; you have to guess at it, and it is all too easy to find in that black box the meaning you prefer. And while legislative history can tell us something about the opinions of legislators in the course of their deliberations, it is a less reliable source of meaning than the words they finally wrote. Of course as everyone knows, words can have more than one meaning, and therefore textualism is in need of a constraint that will narrow the range of possible meanings. It finds one in the notion of “original public meaning” (O.P.M.), the statistical determination of the mark-meaning correlations—this word is understood by the relevant audience to mean that—in place at a particular time. Faced with a text written at that time, the interpreter can apply the code of O.P.M. and calculate what the text means without going outside of it.

Well, yes and no. The interpreter can indeed perform that calculation, but it will only yield the text’s meaning, as opposed to some meaning, if O.P.M. is the code the speaker is deploying. If the speaker is deploying another code—and nothing prevents him or her from doing so; the choice of code is always the speaker’s—O.P.M. can still be determined, but the determination will be of no interpretive significance; it would be as if you used the dictionary of dialect X to decipher a text written in dialect Y. We need to know prior to any interpretive effort what language the text is written in, and O.P.M.-style textualism can only be saved if the text includes an identification of its code, if the text, that is, somehow announces, “I have been composed in dialect X, not Y.” But, as Larry Alexander has been proclaiming for decades now, texts do not announce their codes, the identification of which must come from the outside, from some independent-of-the-text specification of intention to speak in one language rather than another. Nor would it do any good to require that texts come tagged with the appropriate code because we would then have to ask what code the tag was written in, and we’d be right back in the soup.

Consider the perfectly straightforward text “You have no consideration.” If it is uttered by a lawyer to his client, it means that an element of his breach-of-contract claim is missing. If it is uttered by a wife to her husband, it means that he thinks only of himself. The text alone won’t tell you which it is. Only an external identification of authorial intent or of a controlling set of institutional conditions (the lawyer’s office or the marital bedroom) will establish that this and not that language is presiding over the communicative situation. And it follows that there is no base or core meaning of “consideration,” no meaning that emerges after you’ve set aside or bracketed all the circumstances that are supposedly extraneous or accidental in relation to the core you claim as a linguistic anchor. And here our discussion rejoins my earlier examples. Like blind submission and legal realism, textualism centers on an object—not intrinsic merit or judicial activity innocently observed, but words in and of themselves—that is not only unavailable but does not exist. Not only can textualism not redeem its promise to guide and constrain interpretation; it can’t even get started because its starting point—the bare text, the equivalent of raw data and of journal articles birthed by no one—can never come into view. It is hard, to say the least, to put your faith in a method that cannot take its first step.

Corpus Linguistics

The same argument applies to textualism’s digital variant, corpus linguistics. Corpus linguistics might be characterized as a deep form of textualism. Its method is to assemble a massive database—of cases, legislative history, or, in the humanities version called “digital humanities,” poems, novels, lyrics, play texts—and then begin to put questions to it with the aid of sophisticated search engines. Like standard textualism, its goal is to establish a relationship between the observed physical features of a text and the specification of meaning. The difference is that the patterns upon which corpus linguists focus exist at a level of depth and frequency no mere human could discern. As humans our attention spans are limited, as is our capacity to process huge reams of information. Therefore any conclusions reached by our acts of observation would be based on an inadequate sample. But, say corpus linguists, if we bring together a massive and comprehensive compilation of information with a search engine capable of detecting patterns no one of us—or indeed any group of us—is capable of seeing, we will have results that might be termed, at least relatively, objective.

But what can you do with these results? The answer is nothing, or nothing legitimately. Suppose you want to know what the phrase “keep and bear arms” in the Second Amendment means. Or suppose you want to know whether Title VII’s prohibition against discrimination “because of sex” includes a prohibition against discrimination based on sexual orientation. Or suppose you want to know whether the phrase “public use” in the Fifth Amendment’s “takings clause” refers only to a use directly related to public purposes (such as the construction of a highway) or could be read more liberally to include uses by third parties (e.g., developers) that are arguably of benefit to the public. Well, if you are a corpus linguist you might try to determine by statistical analysis what “bear arms” was ordinarily understood to mean in the late eighteenth century. And you might employ the same statistical methods to determine whether in 1964 the words “gender” and “sex” appeared together in ways suggesting that one encompassed the other. And you might attempt to discover, again by statistical analysis, whether the restricted sense of “public use” was more common than the extended sense. Of course, the phrase “statistical analysis” is too general a designator and oversimplifies; at the present moment, the field is structured by any number of disputes, including a dispute about whether frequency or dispersion (both technical terms in the literature) is the best measure of common usage. But even if the matter were settled and we had a clear path to the determination of common usage, the next interpretive step could not be taken unless it were independently established that the framers or the statute-makers had bound themselves to the code of common usage. Perhaps they had some other code in mind when they wrote, and if they did it would be irresponsible and productive of a fiction to determine what they meant by reasoning from a code they were not deploying. 
Time for more historical investigation, not of lexical patterns, but of what the authors had in mind. After all, statistical patterns of usage, however identified, did not write the statutes and constitutional clauses. Human beings with intentions and the power to choose whatever mark-meaning correlations they prefer did, and once the full force of that truth is taken into account, corpus linguistics, like the textualism of which it is a variant, is shorn of its supposed explanatory power.

Critical Legal Studies

Corpus linguistics, textualism, polemical legal realism, and blind submission are all epistemological projects. They want to get things right. They want to know with certainty what something means, and each offers a method that will, it is promised, get to the heart of the matter by eliminating extraneous and obfuscating factors. Other projects that urge a similar casting-off are not merely epistemological, but frankly political. Critical legal studies (C.L.S.), for example, was animated by a worry—no, a conviction—that the processes of adjudication and interpretation currently in place are inflected and infected by base motives, primarily the motive of a ruling political/economic class to shore up its power and privilege. Where legal realists urged the jettisoning of traditional legal rules (Llewellyn’s “paper rules”) because they were empty, proponents of C.L.S. (known as “Crits”) added to the indictment the accusation that those same rules are tools in the effort to maintain an illegitimate political and cultural hegemony. What makes these tools useful is their indeterminacy, that is, their failure to generate a specific set of interpretive outcomes; they are thus normatively weak (to say the least) and can be made to point in any direction a skillful manipulator desires. Because the solemn-sounding incantations of jurisprudential discourse have no direction of their own, they are the perfect (because empty) vehicle of a content—an agenda—that rather than announcing itself (as in “the goal is to further corporate interests”) presents itself as the inevitable product of doctrinal inevitability. It just so happens, we’re supposed to believe, that when the law is rigorously followed and its hallowed vocabulary is set in motion the result will be favorable to big business and male hegemony. 
Law schools that teach doctrine as if it were coherent, logic-driven, and apolitical are, according to Duncan Kennedy, in the business of “ideological training.” Students are being prepared “for willing service in the hierarchies of the corporate welfare state.” And Mark Kelman tells us that professors of law, or at least some of them, know that the legal arguments they are teaching won’t hold up under interrogation: “All the fundamental, rhetorically necessary distinctions collapse at a feather’s touch. . . . Law professors are, in fact, a kiss away from panic at every serious, self-conscious moment in which they don’t have a bunch of overawed students to kick around.”

The chief distinction that must be seen through and collapsed is the distinction between law and politics, a distinction that is necessary if law is to be considered an “impartial third” that does not side with either party in a controversy, but provides a disinterested judgement of the opposing claims. Law, in this traditional view, cannot be interested in outcomes; it can only be interested in the rigorous unfolding of its own procedures; any outcome those procedures generate is legitimate. It is this picture of law—basically the picture of liberal rationalism—that the Crits declare to be a cheat and a scam that can only be maintained if the law’s political bias in favor of the status quo is hidden or camouflaged. That is the work of law schools. Teachers, Kennedy complains, “convince students that legal reasoning exists and is different from policy analysis.” In short, teachers convince students that law is autonomous, that there is something called law that is more than various interest groups jockeying for political/economic advantage and using the shaky, ramshackle edifice of legal reasoning to support their unannounced causes.

This is where C.L.S. and legal realism at once meet and diverge. They meet in the conviction that the edifice is supported by nothing and supports nothing; it’s transcendental nonsense. But where legal realists then turn to social science and say, “Let’s look at the facts of legal practice directly and without the distorting lens of an abstract vocabulary,” Crits extend the critique to legal practice as a whole, which is in their view complicit at every level in the conspiracy against the public performed by the powers that be. The job the law currently performs is the maintenance of “status hierarchies . . . founded, at least in significant part, on sham distinctions,” the very distinctions that are the content of legal reasoning as it is currently formulated. And the job Crits assign themselves—in essence a job of deconstruction—is to “unfreeze the world as it appears to common sense as a bunch of more or less objectively determined social relations and to make it appear . . . as it really is,” as Robert Gordon has put it, a landscape structured by self-interested forces that hide behind the mask of legal neutrality. If we rip the mask away, we shall see that the hierarchies common sense now presents to us as natural are in fact constructed by malevolent political/economic agendas, and we will then be able, Peter Gabel and Jay Feinman tell us, “to take control over the whole of our lives, and to shape them toward the satisfaction of our real human needs.”

So that’s the C.L.S. program: demystify and remove the structures of perception, classification, understanding, and evaluation that impose themselves on us, and then . . . And then, what? This should by now be a familiar question for the readers of this essay. What remains when the categories of thought and action within which we routinely move and have our inauthentic being are delegitimated and discarded? And the answer should be familiar too. What remains is nothing, because in the absence of such categories—of any demarcations of the world that identify possible paths of negotiating it—thought has no direction either to maintain or move beyond, is nowhere and so can go nowhere.

But like all theorists, Crits are committed to going somewhere, and where they are committed to going is hinted at in Gabel and Feinman’s invocation of “our real human needs.” What exactly are real human needs, and what are the unreal human needs they are opposed to? Real human needs in this polemic are needs that exist beyond or to the side of the manufactured needs that oppressive and illegitimate agendas create, as capitalism can be said to create the need for eighty-inch T.V. sets and off-road S.U.V.s. Real human needs are the needs we have by virtue of just being human, needs we experience before we experience the false needs foisted on us by alien and alienating cultural/institutional pressures. The key value in this picture is authenticity. “Existing legal thought,” says Gabel, “helps to maintain the alienated character of our current social situation.” Because we fall into the roles demanded of us by categories of self-presentation we did not choose, we act in an inauthentic manner. Gabel offers the example of a bank teller whose every gesture and word is pre-scripted and insincere; she “affects a cheerful mood” and suggests that she is “glad to see Gabel,” though, in fact, she is just “playing the role of being a bank teller, while acting as if her performance is real.” Gabel in turn performs the role of a cheerful customer with an equal “artificiality.” In concert each “withdraws” from a true self “and adopts a false self.” No genuine contact is made. In the legal realm, this same kind of alienation is produced by a regime of rights that encourages individuals to think of themselves as discrete silos without any genuine connection or obligation to others. The rights-cosseted self is concerned only to protect his or her territory from external incursions—“I’m alright, Jack”—and has no impetus to engage in communal cooperation. “Rights talk” leads us to “represent each individual . . . as being a passive locus of possible action rather than as in action with others.” The result is a “collective passivity,” which contributes to the maintenance of the status quo and its built-in inequalities.

Is there a remedy? Alan Freeman and Elizabeth Mensch think they have one. Fashion a new kind of community “where relationships might be just ‘us’—you, and me, and the rest of us—deciding for ourselves what we want, without the alienating third of ‘the state.’” Deciding for ourselves? And what “selves,” exactly, would be doing the deciding? One presumes selves without institutional or professional or commercial or educational or political attachments. But there are no such selves for the same reason that there is no intrinsic merit or innocently observed lawyerly performance. To be a self is to be located in a pre-given network of possible roles in relation to which choices and actions are intelligible and performable.

Performances are real. I might be a father, a Republican, a worker in the mechanical trades, a churchgoer, a patriot, a passionate partisan of an athletic team, a believer or non-believer in climate change, a city dweller or a rural recluse, a high-school dropout or the recipient of a Ph.D. Each of these associations and the thousands I did not list point me in some potential direction or other. A being inclined in no direction or affiliated with no project would have nothing inside it, no reason to move here or there, no route to the making of a decision because no measure for weighing evidence and opposing propositions would be available. Neither the unencumbered self (a phrase of Michael Sandel’s) nor the decisions that self supposedly makes are conceptual possibilities. Authentic selves with authentic (not socially imposed) needs, just you and me, join the chimeras of intrinsic merit, raw data, pure social fact, just what lawyers do, and words in and of themselves as participants in the hopeless project of purifying human actions by getting rid of everything that is human (i.e., political, angled, situated) about them.

Anti-professionalism

Two other authenticity-based versions of that project are anti-professionalism and interdisciplinarity. Anti-professionalism is the general stance of which blind submission is one byproduct. If the standard submission process is an obstacle to the identification of intrinsic merit, isn’t this true, necessarily, of the entire machinery of professionalism—the vast apparatus of colleges, departments, dean’s offices, provost’s offices, search committees, budget committees, tenure committees, ranking systems, local and national conventions, officially recognized journals, discipline-awarded prizes, certificates of accreditation, advanced and more advanced degrees—an apparatus whose primary business, it would seem, is to replicate itself? Professions are routinely accused of existing for their own sake rather than that of some ideal (the delivery of medical service or the dispensing of justice or the celebration of poems); instead, we are told, they serve themselves and work to protect and extend the gatekeeping power they guard jealously and, indeed, zealously. One critic of professions, Burton Bledstein, speaks darkly of “arrogance, shallowness,” of “abuses” committed by “venal individuals who justify their special treatment and betray society’s trust by invoking professional privilege, confidence, and secrecy.” For Bledstein the flourishing of the profession replaces health, justice, and aesthetic excellence; this betrayal is perpetuated by educational institutions designed to consolidate and restock entrenched hierarchies, institutions where neophyte practitioners (the words are those of lawyers and law professors lamenting their experience) “become accultured to an unnecessarily limiting way of seeing and experiencing law and lawyering, a way which can separate lawyers . . . from their sense of humanity and their own values.”

Once again we see a distinction between authentic human values and the values manufactured and imposed by special-interest agendas that substitute themselves for the core human interest they claim to promote. But what are those authentic values? Where do they come from? How do you get access to them? Well, they are what you find when what Theodor Adorno calls the “prevailing realm of purposes” and Max Horkheimer calls the “categories which rule social life” are dismantled, piece by piece, to reveal—what? If, as Adorno, Horkheimer, Herbert Marcuse, and Jürgen Habermas repeatedly tell us, the prevailing realm of purposes flows into everything, including our efforts to dislodge it through the exercise of so-called critical thinking, how do we even begin this dismantling? If the realm is really so all-pervading, how could we even see it, surely a preliminary to its undoing? We can’t, because what Roberto Unger terms the “background plan,” rather than being something we can think about, is what we think within. We can’t step to the side of it or view it askance, and so the dream of getting away from it is a non-starter.

Now, the background plans of professions, the realms of prevailing purposes within which practitioners act, are not generally totalizing; they do not fill every nook and cranny of everyday life; but they do fill every nook and cranny of the professional lives of those who self-identify as lawyers or judges or literary critics or historians. If you are one of these, you live and move and have your being in what Thomas Kuhn calls a paradigm and Wittgenstein calls a “life world” and I call an interpretive community; and the paradigm or life world or community furnishes you both with the possible courses of action you might contemplate and the resources for prosecuting them. When you enter the practice’s space (and this is precisely the complaint of anti-professionalists) you know without reflection what tasks there are to be completed, the tools you bring to that task, the protocols presiding over that task, the objections you have to the work of others in the field, the responses you might make to criticisms of your performance. All this and more is given to you by the profession’s prevailing norms; and also given to you are the paths of critique that you might go down should you wish to challenge those norms. That challenge can be mounted, but it will take the shape allowed it by the very entity—vision, framework, background plan, prevailing realm—it is challenging. And if the challenge is successful, if something in the existing order is changed or even expelled, the prevailing realm will indeed prevail, although in altered form.

So I have come to my usual conclusion. Sweeping away the structures and protocols that preside over and configure a professional practice would result not in a purer form of that practice but in its disappearance. You can’t just do law or literary criticism; those activities only exist in a form defined and constituted by the formal categories and procedures that mark professional membership. (There are independent scholars, but their work follows the norms of the professional community of which they are not officially members.) To be sure, neither the shape nor the content of those categories and procedures is fixed; my account is not a recipe for the status quo. It is always possible, though not inevitable (an effort is required), to step back from the practice of which you are a member and reflect on the divergence of its present state from the ideal it is supposed to realize. You can then act in ways designed to bring the practice closer to the ideal as you understand it. What is important to note, however, is that this kind of reflection—in which all of us engage at times—does not issue from a special muscle of the mind that stands apart from all contexts but can be brought to bear, like a powerful searchlight, on any context. There is no general capacity of reflection; there are only particular acts of reflection that have been provoked by the perception of a specific disparity between what the practice promises and what it is currently delivering. Reflection always has the shape allowed and demanded by its object.

So, for example, as a legislator you might observe in a moment of reflection that persons detained by the police who are ignorant of their rights are in danger of surrendering them unaware; and in response to that danger you might write into law a requirement that police rehearse those rights before interrogating detainees. This of course is a real-world example and it has undoubtedly made a difference; but the difference is an adjustment within the prevailing realm of purposes, not its dismantling. This is what I call anti-professionalism with a small “a”: a feature of the professional landscape is subject to critique, but the critique is partial, and even when it gives rise to remedial action, the landscape remains largely as it was. Nor is it the case that a small, local act of critique can be the beginning of a cumulative process—critique piled upon critique—in the course of which more and more “defects” are removed and we are brought ever closer to the promised land of a practice purged of imbalances. Every imbalance redressed creates an imbalance in turn, not a diminishing of the entire quantity of imbalances. The requirement of a Miranda warning, critics point out, impedes police efficiency and can lead to the release of wrongdoers who are then free to continue in their criminal ways. When you tinker with a structure that is already in place in a way that arguably improves things over here, you will always, at the same time, be creating difficulties and problems over there, in some other corner of that structure’s space. Anti-professionalism with a small “a” can rearrange the institutional map in ways advantageous to this or that constituency, but it cannot, even in the fullness of time, produce a map that shows only advantaged constituencies, that exhibits no disproportions, no inequalities, no unfairness, with everyone and every point of view acknowledged and accorded respect. 
But that is the Eden promised by Anti-professionalists with a large “A,” who tell us that piecemeal remedies—a little bit here, a little bit there—will not do the job because the entire structure from the background plan down to the smallest detail is disfigured by inequities. Not renovation but wholesale demolition followed by total reconstruction is required for a new, fully equitable beginning. Everything must go. But if everything goes, no action in any direction, positive or negative, is possible. Without a realm of prevailing purposes in place no initiative recommends itself over any other. Indeed, without a realm of prevailing purposes in place, no initiative is even thinkable. Strong anti-professionalism, like textualism, lawyering without categories, and getting down to the “real you,” is a non-starter.

Interdisciplinarity

Interdisciplinarity is anti-professionalism’s cousin and it also comes in two forms, one commonplace and theoretically uninteresting, the other theoretically ambitious and impossible to perform. Commonplace or opportunistic interdisciplinarity involves looking to other disciplines for materials and models that might help solve a problem in the discipline you call home. If I’m a literary critic trying to figure out what a poem means, I might look to linguistics or anthropology or the history of science or psychology to see if there is something I might appropriate to assist my interpretive effort. “Appropriate” is a word carefully chosen: it means to take something from someone else’s cupboard and make it one’s own. The material or model you borrow has a particular significance and value in the discipline in which it grew. Your interest in it, however, is in the role it might play in your project, which is not a project at all in its native soil. So you pluck it out and make it grist for your mill, and when it has done the work you hoped it would do, when it takes you a bit further on the road you were traveling, you discard it or put it back on the shelf. (I apologize for the profusion of metaphors.) The important point is that throughout this process it is the perspective of the appropriating discipline that rules. This kind of interdisciplinarity doesn’t expand the discipline’s boundaries or call them into question; it just adds to the discipline’s effectiveness as defined in its own terms.

The more ambitious version of interdisciplinarity regards those terms as disastrously limiting, as the putting on of blinders, as the deliberate ignoring of the fact that disciplines and disciplinary objects do not frame themselves but become visible by virtue of a diacritical relationship with what they are not. Rather than looking at the isolated object or text or the narrow discipline in which it seems to be located, we must, says Robert Scholes, “make the object of study the whole intertextual system of relations that connects one text to others . . . the matrix or master code . . . the cultural text.” Now it is certainly possible to display on a map the whole intertextual system of relationships that carves out the spaces in which particular objects and professional projects emerge. But once you have done that, what have you got? What are you looking at? What happens when the angle of sight is widened to take in the entire panorama in relation to which the object along with the discipline that houses it is merely a dot in a constellation of dots? You no longer see the object; it has disappeared into the totality of the conditions that ultimately produce it. A strong (as opposed to an opportunistic) interdisciplinarity promises a richer understanding of the object whose properties are not its own and belong to an ever-expanding matrix; but in fact strong interdisciplinarity precludes any understanding of the object from which it abstracts away. You only see the forest, not the trees. If I want to figure out something about Paradise Lost, I turn to the tradition of literary epic, to Milton’s other writings, to his biography, to the texts he is known to have read, to the areas of knowledge (theology, cosmology, music, medicine, the history of war, the classics) he was expert in. I don’t surround Paradise Lost with these materials as if their mere contiguity to Milton’s life were explanatory. 
I look for evidence of the contribution they may have made to the task he had in mind (“to justify the ways of God to men”) when he took pen to paper, and I build a discipline-specific argument. That is, I stay focused and I don’t turn my eyes toward the myriad interconnections that link everything to everything else. If I did, the focus would be lost and so would any opportunity to know anything about the poem. In a critical account of the “Law and . . . ” movement, the movement that looks at law through the prism of other disciplines, the legal theorist Ernest Weinrib makes the same point: “Nothing is more senseless than to attempt to understand law from a vantage point entirely extrinsic to it.” “Senseless” is perhaps too harsh a word to describe strong interdisciplinarity, which pursues the entirely legitimate task of looking at the big picture. The mistake is to think that you can move from the big picture to a clearer view of everything it soars above; no, what it soars above is what it leaves behind, what is no longer seen.

That mistake is often politically motivated. For the apostles of strong interdisciplinarity, the narrowness of disciplines is not merely an epistemological error. It is part and parcel of a concerted effort to keep us apart and closeted so that we won’t see what is really going on. “Specialized academic disciplines,” Ben Agger complains, contribute to “overall domination by refusing a view of totality desperately needed in this stage of world capitalism, sexism and racism.” Meanwhile, cultural studies, in the words of Patrick Brantlinger, “aims to overcome the disabling fragmentation of knowledge within the disciplinary structure of the university, and . . . also overcome the fragmentation and alienation in the larger society which that structure mirrors.” So if we come to see that disciplinary boundaries are artificial, drawn with proprietary and exclusionary motives (let’s keep this little fiefdom to ourselves and refuse entry to outsiders) and designed to prevent us from participating in a common humanity that is obscured by illegitimate and malevolent distinctions, we will extend that academic lesson beyond the academy and begin the work of building a better, more capacious, freer society—with better, more capacious, and freer selves thrown in as well, as a kind of bonus. When congealed categories of description and evaluation are loosened and exposed as contingent, revisable, and dispensable rather than fixed and natural, the mind freed from their grip becomes a progressively better instrument. Cultural studies or strong interdisciplinarity not only promises a purification of the tools of knowledge; it promises a purification of the knower, who will no longer be confined to limited horizons but will grow into full and inclusive perception.

Consciousness Raising

The name of this progressive enlargement of mind is consciousness raising, and it is one more item in the list of impossible things I have been cataloguing. Consciousness raising is a concept central to many discourses—Marxism, feminism, Frankfurt School dialectic, critical theory, Habermas’s universal pragmatics, deconstruction, critical social science. In any of its versions, consciousness raising begins by declaring that our sense of what is possible is determined by deep structures—of education, political systems, entrenched social attitudes—of which we are unaware. In our unawareness we believe that our present condition is inevitable and deserved and so we live lives of resignation and obedience. The remedy is to surface these structures so that we can see the extent to which our complacent relationship to them hinders, if not precludes, any effort to understand our true situation and pursue ways of changing it. The goal, then, is liberation, as Brian Fay once put it:

a state of reflective clarity in which people know which of their wants are genuine, because they know finally who they really are, and a state of collective autonomy in which they have the power to determine rationally and freely the nature and direction of their collective existence.

The means to this goal include extended communal discussions in which women, gays, blacks, Latinos, and others tell one another their stories of being dominated, marginalized, and oppressed, and resolve, in the words of Howard Beale in Network, not to take it anymore. So women realize the degree to which they live in a world saturated with male norms. Those living in poverty learn that their economic disadvantage is owed not to their personal inadequacies, but to predatory financial practices that put them behind the eight ball from the very beginning. Racial minorities learn that the disparity between their numbers and their representation in boardrooms, faculty meetings, and upper management is less a matter of aptitude and ability and more a matter of quotas enforced by invisible and unacknowledged filters. And in the same vein, a thousand other lessons are waiting to be learned. And that’s the rub. Each of these lessons—each of these revolutions in awareness—is local. That is, it occurs in a specific context where a particular grievance is being articulated and a particular set of remedial actions is being urged. And these twined activities—becoming aware of a systemic wrong and resolving to do something about it—are performed against a background of hierarchies and discriminations of which we are not at the moment aware and in the absence of which we could not continue. We can’t take a step forward without having something to stand on, some structure of norms and protocols already in place that issue from an angled perspective of the very kind we are trying to leave behind. And if we remove or deconstruct that perspective, another one, equally angled, will already have slipped beneath our feet and constituted an unexamined underpinning that will have to be deconstructed in turn. Awareness—like reflection, for they are closely related—is not the name of a capacity that can be expanded until it overwhelms the field of consciousness. 
Consciousness will always be made up of a mixture of awareness and unawareness, of openness and closure. The content of these two categories can and certainly does change—the light shining here will always leave a dark corner over there—but the change will result not in a consciousness ever more aware and more open, but in one differently closed. You can’t raise consciousness generally by just adding bits of enlightenment together as if it were a quantity always capable of being augmented; all you can do, and it is quite a lot, is reconfigure consciousness so that some—not all—inequities previously invisible to those who performed and suffered them are made obvious and become candidates for redress. It is often said that the object of consciousness raising is the diminishing or even the elimination of false consciousness. But all consciousness is false if the measure of the true is a way of thinking that does not rest on a substratum of unexamined assumptions and unnoticed exclusions. There will always be a remainder, and when we turn our attention to that remainder, the very act of doing so will create another.

Transparency

The appeal of consciousness raising is the appeal we have met before, a method by means of which we can detach ourselves from fallible ways of seeing and being and approach the world and its tasks shorn of the liabilities that come along with being finite, situated, interest-laden creatures. One very up-to-date version of that method is the ethos of transparency. Transparency—if it were possible—would be that state of perception in which things are seen directly and clearly without the obscuring, obfuscating, and distorting effects of screens. Screens can take many forms: practices of secrecy (such as passcodes or “partners only” meetings), fact hoarding, need-to-know-basis policies, materials stamped “CLASSIFIED,” impenetrable technical jargon, confidentiality agreements, claims of privilege, selective curation (by redaction and executive summary), reverence-inspiring stage scenery (like the curtain which when drawn reveals the Wizard of Oz to be an unimpressive little old man). Those who employ screens often claim benign motives: we want to protect national security; we don’t want to overload the public’s attention; too much information indiscriminately broadcast lessens the efficiency of an operation; public disclosure of one’s feelings and emotional life can lead to the loss of reputation and the rupture of sustaining friendships. But in recent decades, these considerations have been outweighed by the polemical rhetoric of transparency—openness rather than smoke-filled rooms, the public’s right to know, the priority of accountability, the imperative to detect corruption (we can’t let them get away with it behind closed doors), and, above all, the democratizing of information necessary to a free society populated by free citizens.

Free to do what? The answer is implicit in the key assumption informing the ethic of transparency, the assumption that the more data we have, the clearer will be our understanding both of where we are and where we should be. We will look at the body of ever-proliferating and unfiltered data, and it will tell us what there is to know; we can go from there to the question of what to do. This is what we might call the theology of data. You shall meet the data face to face and the data shall make you free. The data so imagined cannot be selectively assembled or arranged, for that would mean that the direction which the uncurated information was to provide would be replaced by the direction preferred by those who filtered it, and we as data consumers would be at their mercy. It is necessary to emancipate the data from those who would control it so that we ourselves can be emancipated from any control and range freely. (Again, the purification of the tools of knowledge is at the same time the purification of the knower.) The problem is—and by now you can probably anticipate what I’m going to say—data uncurated and unfiltered doesn’t have any direction; it points nowhere and anywhere. When you are confronted with a lot of stuff that just keeps coming and has no shape or innumerable shapes, you need help. You are not enlightened, you are paralyzed; you are overwhelmed. And in today’s data and information culture, “help” comes in the form of private companies such as Facebook that do for us what we cannot do for ourselves, but in a way that advances their interests, not ours.

In a series of essays and books, Clare Birchall explains the sequence. First there is the tsunami of unbounded and random data that must somehow be utilized and managed. But the skills required are not ones most people possess. Enter the “unelected mediators,” entrepreneurs with powerful algorithms who ask us “to buy back . . . the data”—supposedly—“made available to us, for us, now in a digestible market form.” But of course the digesting has already been done and in our relative incompetence we can only accept it, thus realizing a lesson Birchall attributes to Gilles Deleuze, for whom “new forms of emancipation are simultaneously forms of control.” So control—supposedly circumvented by unfiltered data—comes back unacknowledged and with disastrous consequences. Because data manipulators of course do not advertise themselves as such, they are able to prosecute their ends under the cover of what appear to be impersonal, objective statistics. In the process, we become what Birchall calls “data subjects,” more in thrall than we were before we bought into this latest promise of liberation from ourselves, from the state of being human.

The transparency promise, like the others I have discussed, is not redeemable, and those who claim to have redeemed these promises perform a sleight of hand. They do something that can’t be done—assess the merit of journal submissions from which all identifying professional marks have been removed, determine the meaning of a text independently of the author’s intention or the history of its production, specify what lawyers and judges do by bracketing the disciplinary concepts and abstract aspirations within which they act, reveal the genuine self by divesting it of everything that makes it identifiable as an entity, generate insight and direction from raw, unbounded statistics—and they do that something by bringing in through the back door everything that has been sent away up front. You guess at the authorship of an orphaned text and so restore, but in a fictional form, the credentials you have discarded; you presuppose without being aware of it the disciplinary norms you are rhetorically pushing away; you invent a role—the authentic self—and then fill its empty form with vague feel-good lefty sentiments; you tout the clarifying power of data but hide the interests that have configured it for purposes never announced and perhaps not even known. You get back what you had loudly exiled, but you get it back in a wholly unaccountable form. Angled, interested agents are plying their trade, but because they have convinced you and often themselves that they are just following the text, or merely observing behavior or bravely speaking up for true human agency or simply reporting the data, they remain largely invisible (look, Ma, no hands!) and are able to get away with almost anything.

That’s the bad news. Is there any good news? Is there anything that can be done in the face of a data revolution that has already largely succeeded and made us its victims? Birchall has a suggestion. We should, she says, “position ourselves in relation to data (rather than being positioned by it).” We are positioned by data, she explains, when we take our cues from an ever-growing base of data and perform in conformity to the statistical preferences it is said to display. What is missing—what is always missing—is any guidance about what those preferences mean and any inquiry into the question of whether or not we want to be ruled by them. We become what Birchall terms “economic nomads” who follow and adapt ourselves to “fluctuating capital streams and cultural/consumer trends” as the data reveal them. In this guise we are required to be ever open to the next stream and the next trend; this is our identity and the definition of virtue, to be perpetually open. We forget that we are more than the sum of our observable behaviors and that we can, at any time, resist the flow and change direction in accordance with the moral and political ideals to which we are antecedently committed. “Good governance,” Birchall says, “involves more than open data”—more, that is, than the display of statistical patterns to whose tune we dance. Good governance requires choices and actions rooted in a vision the data do not contain. Positioning ourselves in relation to data rather than being positioned by data means favoring political over technical solutions to our problems. Instead of asking in a supine manner what the data directs us to think and to do—generally, to go with the flow—we ask, in our present situation and given our goals, whether we should be uncritically open to and directed by data. This is a political rather than a technical question because the data do not drive it, although data might be the component of an answer. 
And that answer, Birchall acknowledges, might involve acts of withholding and closure if we judge that the demand for transparency issues from a suspect source or is likely to have unhappy results. Indeed, she concludes, we might even turn to “experimenting with secrecy rather than jumping on the bandwagon of transparency.” (In fact transparency and secrets are not incompatible, for once the transparency agenda gets working, the sheer bulk of inert data obscures—makes secret—what is relevant.) Once you start talking this way, transparency is no longer an absolute value; it becomes a strategic option we might or might not employ. Rather than being a universal solvent, transparency is demoted to the status of a tool; openness is no longer an end in itself guaranteed to produce benign outcomes; “Be ye open” is not the master rule all progressives must follow. “Be open sometimes, but at other times, not.” Like reflection, awareness, and consciousness raising, openness cannot be a general policy because it has no content, no built-in direction; and any content or direction it seems to possess will have been borrowed from a confining context of the kind from which it claims to free us. It’s always openness with respect to this or that situational problem, openness as defined and given shape by a background of practice; it’s never just openness (just as it’s never just reflection or awareness); there is no such thing.

Artificial Intelligence

Transparency and openness thus add themselves to the list of panaceas that promise to lift us above human frailties to the point where vision and action are purified and perfected. The promise, as I have noted repeatedly, is not fulfilled and never could be. The appeal, however, persists. The lure of a method that will transform us, usually by excising parts of us, is perennial and powerful, and there is never a lack of prophets and hucksters eager to sell us the latest good-for-all-our-ills elixir. At the present time the most durable of these lures is called artificial intelligence. The goal of artificial intelligence (A.I.) is machine simulation of human intelligence. Questions surround the project, which has been around since the mid-1950s (although its antecedents go back to Plato).

What exactly is human intelligence? How does it exhibit itself? What test will indicate whether simulation has in fact been achieved? One influential answer to the last question was proposed by the mathematician Alan Turing in the form of his famous test. Suppose you are putting questions to two entities, a human being and a computer. You don’t know which is which because they are both placed behind a screen or in another room. If, after a period of questioning and answering, you cannot tell which of your interlocutors is a person and which is a machine—if there is no detectable difference in performance—it is warrantable to conclude that both are exhibiting intelligent human behavior.

A famous counterargument was mounted in 1980 by John Searle, who proposed this hypothetical:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing test for understanding Chinese, but he does not understand a word of Chinese.

In short, the man in the room has performed an operation and the operation was successful, but he had no understanding of what he was doing; he was not engaged in conscious human activity. Following the program’s instructions, he could manipulate Chinese symbols without knowing the meaning of the symbols or consciously participating in the project (asking and answering questions) of which they were components. Those outside the room who are on the receiving end of the sequence are justified (although mistaken) in believing that the man in the room speaks Chinese, but all he has done is mechanically implement a program that simulates the behavior of a Chinese speaker but in no way matches it. The man in the hypothetical, Searle observes, is acting just like a computer—is a computer—performing calculations that generate an output without understanding either the semantic content of the symbols or the reasons driving the manipulation of them. The Turing test may have been passed, human behavior has been mimed, but it has not been duplicated. Computers, as Hubert Dreyfus explains in What Computers Can’t Do (from 1972, as it happens), are existentially dumb, as dumb as I would be if, after my car had broken down, I were listening to a voice on the telephone telling me to find a cylindrically shaped object in a hole (a socket) and then (I would be following steps in a formal sequence) to pull that object out and put another one in without “knowing” in any interesting sense of the word what I was doing or why. The car might get started, but from my point of view, which would be the point of view of the executor of a program (in short no point of view), I might as well have said “abracadabra” five times to achieve the result I still would not understand. As I carried out the sequence, I would not be a thinking conscious being but the mechanical extension of the programmer on the phone.
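The logical point of Searle’s room can be caricatured in a few lines of code: a program that passes out “correct” symbols by pure lookup, with nothing anywhere representing what the symbols mean. This is only an illustrative sketch; the rule book and its entries below are invented placeholders, not anything taken from Searle.

```python
# A caricature of the Chinese Room: the "rule book" is a lookup table
# pairing incoming symbols with outgoing symbols. No part of the program
# models what any symbol means. (The entries are invented examples.)
RULE_BOOK = {
    "你好吗": "我很好",       # "How are you?" -> "I am fine"
    "你会说中文吗": "会",     # "Do you speak Chinese?" -> "Yes"
}

def room(question: str) -> str:
    """Follow the instructions: match the incoming symbols and pass out
    whatever symbols the book pairs with them."""
    return RULE_BOOK.get(question, "不知道")  # default: "I don't know"

print(room("你好吗"))  # the room "answers" without understanding
```

The program, like the man in the room, can pass a small-scale test of conversational competence while understanding nothing; the entire performance is symbol manipulation.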

The distinction between a computing being and a human being is the subject of Dreyfus’s influential and controversial book. Computers, Dreyfus explains, are not embodied and they are not in a situation. That in fact is the claim made for them; they transcend, or exist at a more basic level than, situations. At that level the facts they can pick out—various formal regularities—are always the same; relevance is not a quality or feature computers can discern or measure, although relevance can be built into a program that instructs you, for example, never to pay attention to words that begin with the letter X or always to pay attention to objects within five feet of you. (Just such a program with a set of built-in relevancies underlies the project of self-driving automobiles, of which more in a moment.) Human beings, Dreyfus argues, do not do relevance that way, by adhering to an advance formal specification of what must be attended to and what can be ignored. Rather, because “the human world is pre-structured in terms of human purposes and concerns,” that structure—which is not something the human being grasps, but something he or she inhabits—is always and already foregrounding various objects, events, and paths of inquiry. It is these objects, events, and paths of inquiry that the situated human agent will register, not as an act of will but as a constituent of his or her situational perception. Given the concern embedded in, and constitutive of, any social context (i.e., to win a game or solve a problem or decide on a course of action), relevance is pre-given and need not be sought by a search engine. It is a feature of human situations that the relevancies appropriate to them can change, and change suddenly, when new information emerges or unanticipated events occur. When that happens, the situated agent will respond on the wing, taking cues from wherever he or she finds them, re-seeing, on the spot and without formal calculation, what matters and what does not. 
At such moments, Dreyfus observes, “genuine thinking” of the kind we recognize will be occurring: “the challenge, the productive confusion, the promising leads, the partial solutions, the disturbing contradictions, the flash appearance of a stable solution . . . the structural changes brought about by the pressure of changing total situations, the resemblance discovered among different patterns.”

In recent decades, the possibility of machines simulating this kind of on-the-fly learning has been attached to programs based on neural networks rather than on explicit formal definitions and sequenced operations. Such programs can, in a sense, learn: initial state conditions are matched with goals and then, by a trial and error method—it’s called deep reinforcement learning—the best path to reaching the goal is determined by adding pieces of technology to deal with unanticipated difficulties. Let’s say the goal is driving a car safely. The program that implements that goal will be equipped with sensors, lasers, and cameras that will recognize and register the movements of other vehicles, interpret road signs, detect traffic lights and pedestrians, and adjust speed and direction, and it will do these things not in a fixed order but in response to changing driving conditions. (The totally irritating voice that tells me “be careful of the right lane” as I drive my 2004 Thunderbird is a low-level example.) This is no doubt a significant achievement, but is the program operating with the reflexes we associate with a skilled driver who, in the words of Synopsys Automotive Solutions, relies “on subtle clues and non-verbal communication like making eye contact with pedestrians or reading the facial expressions and body language of other drivers to make split-second judgment calls”? I suspect not; indeed, my suspicion is that driverless cars will never perform as well as or better than human drivers. The reason is simple: while it is possible to add technological fixes when a previously uncontemplated glitch comes to our attention, the number of glitches is not finite and is constantly increasing, especially in open-ended and complex environments like the environment of automobile driving. No list of glitches can ever be complete, which means that the technology will always run behind the difficulties it has not yet detected.
(This is obviously less true of a relatively bounded environment like the environment of a room you are trying to vacuum.) Deep reinforcement learning can never catch up with the fact that, as Ragnar Fjelland has put it, “real world problems take place in a changing world.” Human beings don’t have to catch up by adding an item to a list of variables; when a change occurs they immediately register it in a way that changes, ever so slightly, the shape of consciousness, and then they go forward within a slightly altered structure of purposes and concerns. Colin Allen makes the point succinctly: “A conscious organism—like a person or a dog or other animals—can learn something in one context and learn something else in another context and then put the two things together to do something in a novel context they have never experienced before.” John Searle is more optimistic than I am about the possibility of a fully successful driverless car, but when he asks himself about the philosophical implications of such a vehicle, he answers that there aren’t any. This is because the self-driving car (at whatever level of perfection) is an engineering achievement, not an achievement of consciousness. The complex actions the program performs are not actions of which it is aware. It literally doesn’t know what it is doing any more than a light switch (a low-level computer) knows that it is turning a light off or on.
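The trial-and-error learning described above can be sketched in its most elementary form. What follows is not a driving system or a deep network but a minimal tabular reinforcement-learning loop on an invented toy task (an agent on a line of five positions must learn to reach the rightmost one); it shows the bare shape of the "initial state, goal, trial and error" procedure and nothing more.

```python
import random

# Minimal trial-and-error learning (tabular Q-learning) on a toy task:
# an agent on positions 0..4 must learn to reach position 4.
random.seed(0)
N, GOAL, ACTIONS = 5, 4, (-1, +1)        # actions: step left or right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != GOAL:
        # explore occasionally, otherwise exploit what has been learned
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N - 1)    # move, clamped to the line
        r = 1.0 if s2 == GOAL else -0.01  # reward only at the goal
        # standard Q-learning update (learning rate 0.5, discount 0.9)
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy should prefer +1 (rightward) moves.
policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N - 1)}
print(policy)
```

Everything the program "knows" lives in the table Q; a change in the world outside the five positions and two actions built into it is, by construction, invisible to it, which is the limitation the surrounding paragraphs describe.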

Let’s concede, nevertheless, that computers can do many things—calculate, match, recognize patterns, play world-class chess, win at Jeopardy!, vacuum rooms, and parallel-park cars. But they do those things in the absence of strategies, desires, wonderings, curiosities, in the absence of all the motives (the list is endless) that move human beings to act, and you can’t reason back from the bare precipitates of those motives—from the numbers, frequencies, and collocation patterns a search engine can reveal and track—to the motives themselves, to the gestalt or “life world” within which shapes appear and are instantly and without reflection intelligible. You can’t sever the data from the circumstances of their intentional production and think that by looking at outputs so severed you can bootstrap yourself up to the richness of the circumstantial field. You can count, calculate, and compute till the cows come home, but you will never be a millimeter closer to the protean nature of the human ability to make sense. “Computers,” Fjelland says, “are not in our world,” and there is no way to go from the computer world—the world of calculation and formal manipulation—to our own, no way, that is, to get from computer-generated outputs to the human reasons and purposes computers cannot possess. (Searle says often that the computer scientists can’t get from the syntax to the semantics.) Computers, Fjelland argues, lack both prudence and wisdom in the Aristotelian sense, that is, “the ability to make right decisions in concrete situations” and “the ability to see the whole.” Because computers, let me repeat, are not in a situation (it is after all their claim not to be), there is no “whole,” no general framework, no background plan marking out in advance what it is appropriate to pay attention to, which hangs over and gives direction to their operations.
Of course, markers of appropriateness could be built into a program, but once changing circumstances outrun what was built in—and they always will—the program will be obsolete.

As Daniel Dennett explains, deep learning programs are “parasitic on the intelligence of the utterers of all the words they’ve vacuumed up and analyzed the probabilities of.” In 2022, LaMDA, a Google chatbot, was declared “sentient” by a computer engineer who had been engaging with it. LaMDA is certainly capable of participating in any number of conversations about any number of things. In Dennett’s vocabulary, it is good at counterfeiting human behavior and deceiving (not intentionally of course) its interlocutors. But as Richard Carrier pointed out, the program does not in any sense understand the responses it gives to questions: it just takes the words in a question and searches in a vast database for other words or phrases that appear often in conjunction with them. If you ask it to find a toilet in a residence, Carrier observes, and it just looks for “statistical associations between the two words, never at any time assessing any models of what toilets and residences even are, then it is not aware of either toilets or residences. It doesn’t know anything about those things. All it ‘knows’ is statistical associations between words.” Genuine knowing and being aware are forever beyond the chatbot no matter how sophisticated the machinery. Hence Fjelland’s conclusion that “computer power cannot—and should not—replace human reason” with something less flexible and resourceful.
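Carrier’s point about purely statistical “knowledge” can be made concrete with a sketch. The toy program below counts how often word pairs co-occur in a tiny made-up corpus and then “answers” a query by returning the word most strongly associated with it; the corpus, the stopword list, and the whole procedure are invented for illustration and are vastly simpler than anything in a real language model, but the epistemic situation is the same: nothing but pair counts, no model of the things the words name.

```python
from collections import Counter
from itertools import combinations

# Sketch of "knowing" only statistical associations between words:
# count co-occurrences in a (made-up) corpus, then "answer" by picking
# the word most associated with the query word. No model of toilets or
# residences exists anywhere in the program, only pair counts.
corpus = [
    "the toilet is in the bathroom",
    "the bathroom is down the hall",
    "every residence has a bathroom",
    "the kitchen is next to the hall",
]

cooc = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for w1, w2 in combinations(sorted(words), 2):
        cooc[(w1, w2)] += 1
        cooc[(w2, w1)] += 1

def associate(word: str) -> str:
    """Return the non-stopword that most often co-occurs with `word`."""
    stop = {"the", "is", "in", "a", "to", "has", "every", "next", "down"}
    candidates = {w2: n for (w1, w2), n in cooc.items()
                  if w1 == word and w2 not in stop}
    return max(candidates, key=candidates.get)

print(associate("toilet"))
```

Asked about “toilet,” the program produces a plausible-looking answer (“bathroom”) by counting alone, which is exactly why the output, however fluent, licenses no inference that anything has been understood.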

But of course that has been the project of all the agendas I have surveyed—to “perfect” the human by reducing it to what is measurable about its behaviors. Everything important is left out, and it is left out deliberately in the conviction that by abstracting away from the messy contingencies of human action we can arrive at what at bottom really matters: “intrinsic merit”; purely textual meaning; the pure fact of fact or raw data; the authentic, unencumbered self; professional activity divorced from the norms of any profession or discipline; objects seen from the vantage point of many disciplines or no disciplines; consciousness free of angles, interests, or investments; life seen and experienced directly without the interposition of screens, filters, or gatekeepers; the mind exercising its powers of calculation unaffected and unclouded by ambitions, affections, goals, fears, and life-plans. What I have been arguing is that you’re not going to get there, although you can hazard an approximation by stigmatizing and exiling considerations that you think muck up human reasoning—you can, as Habermas does in the construction of his ideal speech situation, exclude (or claim to exclude, there’s no way he could do it) instrumental and self-serving arguments—and then proclaiming that the impoverished form of life you have arrived at is superior to the layered and contingent fullness you have left behind in your brave new world.

The Clear Fountain

Why in the wake of so many failures is this project of cleaning up human behavior ever renewable? What is it that all these projectors are seeking? The short answer is “purity,” and we can see what it comprises and why it will not be achieved by looking at two traditions of thought, the Platonic and the religious. In the Phaedrus and elsewhere, Plato speaks of the human soul as having moved away from a happy prior state when it gazed on reality itself. Having declined into mortality, the soul’s memory of its former state is tenuous at best, and it is only by a great effort that it can rise above “earthly perceptions,” separate itself from “the busy interests of men,” and begin once again to “approach the Divine,” which is “without shape or color, intangible.” In the Christian tradition, the same story is told in a single verse of the Bible: “Now we see through a glass, darkly; but then face to face: now I know in part, but then shall I know even as I am known.” To know as you are known is to eliminate entirely the distance between the knower and what he seeks to know. You don’t know about; you know in the deepest sense possible, where knowledge becomes identity; the desiring self is no more because desire has been so satisfied that the self as a distinct entity is no more; it has been absorbed into the Reality it sought; it has no boundaries that mark it off from anything else; it is not framed against a background on which its identity depends; it frames itself, and because it frames itself, it is not seeable from the outside; it is invisible, “without shape or color, intangible.” That, however, will be “then.” Now, the glass is still dark and the hope of seeing clearly absent the lens of fallen humanity is just that, a hope: “Faith is the substance [not yet here] of things hoped for.”

So the desired state—of being without state, of just being, all essence no accident—is literally unimaginable. But there is a poem which, while not imagining it, does succeed in dramatizing its unimaginability. Its author is Andrew Marvell and its name is “On a Drop of Dew.” The title itself promises more than the poem delivers. When you say you are going to write “on” something, the expectation is that the something in question will come into focus. But it is the continual effort of the drop of dew to avoid coming into focus, to avoid being the object of some other agent’s seeing. As readers we never quite spot the dew; we first meet it in the line “shed from the bosom of the morn.” “Shed” names an activity that has already occurred. Wherever we get to, the drop has already been there and left. If it does pause, our effort to fix it in our sight is blocked by its refusal to be framed by anything but itself: “Round in itself incloses / And in its little globe’s extent, / Frames as it can its native element.” In order to further ward off capture, the drop is in constant movement: “Restless it rolls and unsecure, / Trembling lest it grow impure.” “Unsecure” means not tied to anything, not at rest in a way that suggests satisfaction with its current habitation. The drop would grow impure if it accommodated to its (temporary) surroundings, and so it trembles in anticipation of a heavenly return. 
Midway through the poem the drop is analogized to the human soul, “that ray / Of the clear fountain of eternal day.” Like the drop, the soul “does upwards bend,” “Remembering still its former height,” and as it remembers it resists earthly pleasures and “Shuns the sweet leaves and blossoms green.” The shunning is its entire action; to do anything else would be to risk attachment, and so in addition to trembling (an entirely internal motion) it performs an impossible movement: “Every way it turns away.” One would think that to turn away in one direction is necessarily to turn in another. But this turning somehow—we are not told how—turns away from all directions and never draws close to anything except itself: “So the world excluding round.” “Loose and easy,” that is floating entirely free, the drop/soul is “ready to ascend.” But this readiness, this incredible equipoise by which it maintains a separation from everything earthly, cannot be indefinitely maintained: the danger of settling back into imperfect being is ever present, and ascent cannot be actively sought lest the drop/soul risk being an agent with designs. Instead it can only wait until it is inhaled by the object of its desire: “Such did the manna’s sacred dew distill”—that is, evaporate—and “dissolving, run / Into the glories of th’ almighty sun.” Victory, in an act of self-effacement that leaves behind everything impure.

It is a glorious vision, and I blame no one for celebrating and desiring it, but unfortunately it is incapable of being realized in the vale of tears we call human life, and it certainly won’t be realized by any of the proffered paths to salvation which I have dared to call impossible things.

This essay originally appeared in the Christ the King 2023 issue of The Lamp.