Article

Practical Nihilism

Philosophy Department, University of Utah, Salt Lake City, UT 84124, USA
Philosophies 2023, 8(1), 2; https://doi.org/10.3390/philosophies8010002
Submission received: 5 September 2022 / Revised: 15 December 2022 / Accepted: 19 December 2022 / Published: 27 December 2022

Abstract

Nihilism about practical reasoning is the thesis that there is no such thing as practical rationality—as rationally figuring out what to do. While other philosophers have defended a theoretically oriented version of the thesis, usually called “error theory”, a case is made for a fully practical version of it: that we are so bad at figuring out what to do that we do not really know what doing it right would so much as look like. In particular, much of our control of instrumental (or means-end) rationality is illusory, and we are almost entirely incompetent at managing the defeating conditions of our practical inferences—that is, of knowing when not to draw an apparently acceptable conclusion. If that is right, then instead of trying to reason more successfully, we should be trying to make failure pay.

1.

What would—what should—you do, if it turned out you had no reasons for doing anything at all? Call this nihilism about practical reason, or practical nihilism for short, and in due course, I will say something about that terminological choice.
Now, you might think I am setting up an unanswerable question. However, I am after a version of the problem that does not turn on alleged metaphysical or semantic impossibilities, but is, rather, practical all the way down: the worry I want to motivate is that we are very, very bad at constructing good arguments for courses of action—maybe so bad at it that there is no point in even trying.1 So to sidestep the unanswerable question, I want to explore a case for practical nihilism that has consequences for how we have to manage our lives, individually and collectively, even if it is not entirely right, but only percentagewise right. And I do mean explore: at a number of points, I am going to anticipate results of in-depth investigations that no-one has conducted, so that we can think about what would follow if they turn out in ways I will describe. Thus while the arc of argument I will sketch will be clear, it will come with acknowledged gaps.
Nihilism is an extreme position, but it is worth remembering that the point of arguing for or against an extreme position is not always to convince people to accept or to reject it.2 Here I will try to convince you that if the version of nihilism I will be developing is so much as percentagewise true, we need to reconfigure and reorient our theorizing about practical rationality: to anticipate, we will have to focus not just on improving our success rate, but on making failure pay. But I will also be trying to put my finger on what seems to be the most recalcitrant obstacle to successful practical deliberation; to foreshadow, that will be the defeasibility of pretty much all of it.
In the practical reasoning literature, the mainstream view is that means-end reasoning is legitimate and well-understood.3 In any case, other and more controversial modes of practical argumentation depend on it: once you have drawn conclusions as to what is, say, worth doing, you still have to do it, which entails identifying and then going ahead with means to your ends. So the obvious way into a nihilist position is to take means-end reasoning off the table, and I will supply two rounds of argumentation to the effect that what we field as instrumental rationality just does not work. In between the laps, I will consider the bearing of the argument on a widespread view in philosophy of logic, that we settle on what the laws of logic are by converging on a ‘reflective equilibrium’; in retrospect, this will turn out to be an unsupportable approach.
As we proceed, we need to remain aware not only that my opening question looks unanswerable on its own terms, but that philosophical theorizing is itself practical, in that one makes strategic and tactical decisions about how one’s position is to be constructed and supported. When these decisions are not made thoughtfully, the ensuing theories and arguments for them are—as any practicing philosopher can confirm in his own experience—worthless. If nihilism about practical reasoning is correct, then a nihilist should be intellectually crippled when the time comes to articulate and defend it. Since I am developing a form of nihilism, that worry sticks to me: if my conclusions are correct, they are likely to be self-undermining.4
So there is going to be a certain amount of early-Wittgensteinian kicking away of ladders, and it will be entirely in order if that prompts second thoughts. But I am going to ask you to postpone them until the picture I want to draw is fully visible.

2.

Why might one be dubious about our competence as means-end reasoners? Let us just say that humans and their protohuman predecessors have been around for several hundred thousand years. For as far back as we know about, the connection between sex and childbirth has been understood, even if the workings of the process were not (Generation of Animals [5]). And until roughly the turn of the twentieth century, childbirth was quite dangerous for the mother; the circumstances encountered by Semmelweis were not entirely typical, but he recorded death rates that occasionally went well over 20%, and even when physicians with a handwashing problem were not involved, a 3–5% chance of dying in childbirth was par for the course.5 So now, consider the following pair of practical syllogisms (one for the boys and one for the girls):
  • GIRLS
    • Sex is very dangerous—if I do it, I could die.
    • I don’t want to die.
    • I won’t have sex.
  • BOYS
    • Sex is very dangerous—if I do it, I might kill the woman I do it with.
    • I don’t want to kill her.
    • I won’t have sex.
As Aristotle once pointed out, the conclusion of a practical syllogism is an action. For all of human history and prehistory, anyone able to reason their way through these practical syllogisms, all the way to executing their conclusions, was deleted from the gene pool. So there has been very strong selection for cognitive widgets that prevent these conclusions from being drawn. Now, evolution often finds more than one solution to problems like these, and we can anticipate some of them: people who want to have sex more than they want to live; men who do not particularly care if the women they are having sex with live or die; people who intellectually accept the abstinent conclusion, but who fail to act on it—who act, as philosophers say, akratically; self-deception about risks and outcomes; externally enforced inability to act on the conclusion (as when women are given no choice about whether to have sex); a tendency to neglect small probabilities; a subject-specific inferential blindness. Moreover, one solution is evidently a generic inability to construct and manage inferences of this type. Against this background, it would be surprising if anyone were able to conduct deliberations taking the form of a practical syllogism routinely, reliably, and successfully.
I don’t think this just-so story amounts to a decisive reason to believe that we are incompetent instrumental reasoners. After all, sometimes there is a premium on getting your practical inferences right, and we should expect natural selection to be sensitive to those occasions also.6 Rather, it’s a warm-up consideration, one which suggests that we should not take our competence at practical reasoning for granted: that maybe we’re built broken.
Once we take it to be an open question, we can start to consider what an open-eyed investigation of it would look like, and to anticipate its results. Evidently, a good place to start would be domains where success is clearly defined, where carefully designed studies are used to determine what works and what does not, and where practitioners are required, as part of their professional training, to learn both the cumulative results of such studies, and what it takes to make a result believable. Medicine is a good example of such a domain, and here we can get a sneak preview of what I think the outcome is likely to be. Recall that, until the early twentieth century, going to a doctor was more likely to get you killed than to cure you; we remember treatments such as cupping and bleeding, but the menagerie of bizarre, pointless, often quite dangerous and thankfully mostly forgotten medical procedures was enormous.7
In retrospect, the problem was not exactly that those earlier physicians got the factual premises of their means-end reasoning wrong, but rather that they did not bother to ascertain effectiveness. It is not just the doctors: when we engage in what seems to us to be instrumental reasoning, typically, the quality of the allegedly factual premise seems scarcely to matter to us.8 We say otherwise, but we behave as if any bit of free association would do.9
If this is a representative sample, then what we will see is this. On their own, human beings are, by and large, genuinely bad means-end reasoners, and for the most part, they do not notice it. They tend to act in the blithe conviction that they know what they are doing—that they are taking means to an end—but they scarcely ever take the trouble to verify or ensure that their conviction is correct: that the announced means really are means. On the rare occasions this happens, it becomes clear that on all those other occasions they were acting with such a complete disregard for effectiveness as to make it hard to believe that they care what is a means to what. But if you act without caring what is a means to what, you are not instrumentally rational—at most, you are lucky if you get what you want.10 The instrumental reasons, on those other occasions, were—and here is a technical term—bubbameises. (From the Yiddish, literally, grandmother stories.)
That is likely to strike readers as a very big claim—too big to support satisfactorily in even a longish paper.11 That is right, but again, I am anticipating what seems to me to be a likely outcome of such an investigation, so that we can think about what the upshots are if it is even mostly correct. And already we can see that I am not suggesting we should expect complete nihilism: effective means-end reasoning is possible; it is just that I am guessing it will turn out to be the exception rather than the rule.12 Normally, people do things they say (and even think) are effective ways of achieving their ends, but there is not much of a presumption that they are. However, I acknowledge that when it matters, we can sometimes find out what works—if we are willing to spend the money (and the time, and devote the focused attention required) to do it.13
Let us glance at one more domain, just so we have more of a sense of what this looks like when it’s real: in the 1950s and early 1960s, the American space program had a problem building rockets that did not blow up on launch. When Kennedy made putting a man on the moon into a national priority, and when, over a period of some years, between 2% and 5% of the federal budget was devoted to the objective, techniques for carefully mapping out the means-end dependencies were introduced, personnel were assigned to make sure that the boxes on the PERT chart were being checked off in the proper order, and, lo and behold, steps were taken that were not only said to be directed toward the goal, but actually accomplished it. Because not many industries have the sort of resources to throw around that the medical sector and the United States government have had at their disposal, this is infrequent.
I am going to proceed on the assumption that the track record will show that when instrumental reasoning works reliably, it is provided with this sort of external structural reinforcement. (Let me reemphasize that we are exploring a hypothesis about how our checking will turn out; please suspend your disbelief for the course of the exercise.) This is not exactly scaffolding: a scaffold is temporary, meant to be removed once the permanent structure is completed, and we normally think of the building inside the scaffolding as a great deal more sturdy than the scaffolding itself. So I will use a different shorthand, and refer to these expedients as cognitive exoskeletons. In medicine, we have the studies, the training of practitioners, and—inefficient though it is—the ever-present threat of lawsuits. In project management, we have devices like Gantt charts. You can reliably get to your coffee date because your driving has been proceduralized, at enormous cost: the roads have been laid out, the streets have literally been made legible, rules have been produced for operating a vehicle on them, drivers have to pass tests showing that they have memorized the rules, and enforcers are out on the street penalizing noncompliance.14
We have very recently—just over the last decade or so—stumbled across a way to leverage our investment in cognitive exoskeletons: once the rails that means-end inference can reliably traverse are in place, we can sometimes outsource (and automate) the reasoning itself. Continuing the example, our network of roads and the rules for using them (in this case, with the further addition of a constellation of orbiting radio beacons) make it possible to delegate the turn-by-turn directions to a navigation device. If it proves often enough to be the case that when a domain has been reinforced so as to support reliable instrumental inference, the reasoning can be outsourced, this ought to have important consequences for the way that we assign responsibility for executing and correcting means-end reasoning.
If all that is right, however, we have not established a nihilist conclusion: we are bad at means-end reasoning, but we nonetheless have a conception of what would count as doing it right, and even though we mostly do not manage it, sometimes we do. We do have an interim lesson to pocket: the way to live with being much worse at instrumental rationality than we think is to build rails that the all-too-weak trains of thought can run along. We will return in a few moments to the question of why we are such weak reasoners.15

3.

Let us take a break to consider some consequences for a widely shared understanding of how to theorize about logic and rationality. What is sometimes called the method of reflective equilibrium is bound to have occurred to readers who are natives of the analytic tradition; they may take it to be quite generally what serves as warrant for a principle of rationality. If appeal to the method is not short-circuited, it may seem to preempt the line of argument we are exploring.16
The idea is that we begin with views both about inference rules and about the validity or correctness of particular inferences, and we work back and forth between them, adjusting the rules to match our views about the particular inferences, and dropping or adding inferences when they are required by the rules, until we arrive at a “reflective equilibrium”. As Nelson Goodman, the originator of the notion, put it, “a rule is amended if it yields an inference we are unwilling to accept; an inference is rejected if it violates a rule we are unwilling to amend”. For instance,
How do we justify a deduction? Plainly, by showing that it conforms to the general rules of deductive inference…
[but on the other hand:]
Principles of deductive inference are justified by their conformity with accepted deductive practice. Their validity depends upon accordance with the particular deductive inferences we actually make and sanction. If a rule yields unacceptable inferences, we drop it as invalid. Justification of general rules thus derives from judgments rejecting or accepting particular deductive inferences.17
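Read procedurally, Goodman’s dictum describes an adjustment loop, and it may be useful to see it laid out as one. The sketch below is purely illustrative: modeling rules as predicates over inferences, treating amendment as deletion, and the two judgment calls (`reject_inference` for what we are unwilling to accept, `keep_rule` for what we are unwilling to amend) are stand-ins introduced here for illustration, not anything Goodman supplies.

```python
# A minimal, purely illustrative sketch of Goodman's back-and-forth.
# Rules are modeled, crudely, as predicates over inferences: rule(i) is
# True if the rule sanctions inference i, False if i violates it; and
# "amending" a rule is simplified to dropping it. The two judgment
# calls are hypothetical inputs standing in for our considered
# verdicts; Goodman gives no recipe for computing them.

def reflective_equilibrium(rules, inferences, reject_inference, keep_rule):
    """Mutually adjust rules and particular inferences until neither changes."""
    changed = True
    while changed:
        changed = False
        # "A rule is amended if it yields an inference we are unwilling to accept..."
        for rule in list(rules):
            yields_bad = any(rule(i) and reject_inference(i) for i in inferences)
            if yields_bad and not keep_rule(rule):
                rules.remove(rule)
                changed = True
        # "...an inference is rejected if it violates a rule we are unwilling to amend."
        for inference in list(inferences):
            if any(not rule(inference) and keep_rule(rule) for rule in rules):
                inferences.remove(inference)
                changed = True
    return rules, inferences  # a (narrow) reflective equilibrium
```

Since each pass only discards items, the loop terminates; what it leaves unresolved is the case in which a rule we are unwilling to amend keeps yielding inferences we are unwilling to accept, and that is exactly where the method requires one of the two verdicts to give way.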
Since that time, other philosophers have chipped in and insisted that we have to take other parts of our overall view into account as well; this gets called broad reflective equilibrium, as contrasted with the narrow variety introduced by Goodman.
Although most philosophers seem to be unaware of it, Goodman also gave an argument for proceeding this way. Pointing out that reflective equilibrium is how we do go about legitimizing inference rules, in both the deductive case and in the inductive case that he so famously went on to discuss—that we have used the method of reflective equilibrium to get results we accept—amounts to a reflective-equilibrium argument for reflective equilibrium: our treatments of one inference rule after another conform to the reflective-equilibrium method of arriving at conclusions in logic, and so the reflective-equilibrium rule is in reflective equilibrium with our judgments about its instances. His approach should have been unsurprising; had Goodman gone about arguing for his proposal in any other way, it would have amounted to a pragmatic contradiction.18
If I am right about the way we came to construct cognitive exoskeletons for our means-end reasoning, something is badly wrong with this picture. Although we have a sometimes-implementable conception of what means-end rationality should be, we cannot rely on ordinary judgments to the effect that in one or another case an action is being correctly chosen as a way of attaining an already chosen end. Those judgments—in a philosophers’ way of speaking, those ‘intuitions’—are quite often, perhaps almost entirely, worthless.
When it comes to practical logic, we have had the view that means-end reasoning is a legitimate form of practical inference. And we endorse many instances of means-end reasoning, in which we are natively quite confident. However, when we closely investigate these instances, one after another they turn out to be—almost always, whenever we are in a position to check, and whenever we have invested the care and effort required to check—instances of going through the motions of means-end reasoning: something that looks rather like a children’s make-believe version of an adult activity, rather than the activity itself.
Imagine that a rule of deductive inference in which we had a deep confidence—say, universal instantiation—were discovered to be like this. In the vast majority of instances in which we thought we had applied the rule, a careful second look showed either that we had not, or that deploying it somehow had given rise to a bad inference. In the reflective-equilibrium picture, this is to cut the support base out from under the inference rule. I do not know what would happen to our confidence in the deductive inference rule in that event, but notice that we seem not to have lost our in-principle confidence in instrumental reasoning. On the reflective equilibrium account, the emergence of the scientific method (and the application of statistical thinking) should have brought about a crisis of confidence in the entire notion of instrumental rationality. Instead, we redoubled our efforts, and after enormous investment, have apparently managed to structure some domains in such a way that means-end reasoning about them is, finally, likely to get our choices more or less right. (We now take care to approve drugs only when there is reason to think they are safe and effective.) But our commitment to the inference pattern outlasted what was suddenly revealed to be a confirmed track record of near-complete incompetence that is as old as the human race.
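For the record, here is the deductive rule invoked in that thought experiment, in its textbook formulation:

```latex
% Universal instantiation: from a universal claim, infer any of its instances.
\forall x\, \varphi(x) \;\vdash\; \varphi(t) \qquad \text{for any term } t
```

The point of choosing it is that it is about as secure as deductive rules get; the thought experiment asks what we would say if even a rule like this one lost its supporting instances.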
Goodman’s metaargument for his preferred mode of theorizing about rationality was unsound. Evidently we do not accept inference rules just when they are in reflective equilibrium with our judgments of legitimacy about their instances. Instead, we were so invested in means-end reasoning that, even though there were not enough supporting instances to matter, we stuck with instrumental rationality until we had invented environments in which we could plausibly claim that it worked.

4.

If we are considering the possibility that, when it comes to instrumental rationality, there is much less than meets the eye, then what are we seeing? We had better provide an account of all that activity—activity which has struck so many philosophers as successful means-end reasoning, and seemed to make sense of the success humans collectively manage in coping with their environments. The first round of such an account is evidently going to have to be a distinction: between the structured activity and its correct interpretation.
In the Foucauldian rendering of Nietzschean genealogy, the technique looks something like this. In the institution or practice you are investigating, you first identify an underlying structure that stays more or less constant over time. Then you show that, over its history, one interpretation has succeeded another. For instance, over the last couple of hundred years, we have locked people up in prisons. Early on, the reason given was to punish the prisoners—to make them suffer to an extent commensurate with their misdeeds. Then that came to be considered uncivilized, and the reason criminals were imprisoned was to deter further crimes. Then deterrence came to be considered ineffective, and the point of prisons was alleged to be rehabilitation. Come the 1980s, prisons were supposed simply to get criminals off the streets. The underlying institution remains constant; the interpretations come and go; as you rehearse the genealogy of the institution, you are supposed to stop believing that any of the interpretations have much to do with what is happening on the ground.19
Suppose that we have convinced ourselves that the vast preponderance of means-end reasoning has been too shoddy really to be means-end reasoning. The Foucauldian approach gives us one way of understanding what we are seeing. As in the genealogies, we can identify an underlying structure, in this case one that has been recently characterized by some of the Anscombians: a sequence of activities in which first one thing is done, then the next thing, then the next… and finally the last one.20
Quite likely the ability to follow the steps that make up a sequentially structured activity to its termination point is an old, powerful, and deeply entrenched cultural acquisition. We have become very, very good at it, in something like the way that, as Marshall McLuhan memorably remarks, we have more recently become very good at reading without moving our lips [28] (pp. 82–86). Over the ages in which humans have been traversing action sequences, various interpretations have been given to this structure. Of course, in the recently especially popular instrumental interpretation, earlier steps in the sequence of actions are selected in order to make the final step possible. But not all interpretations present the final step or finish line as a criterion of choice for the previous steps.
When our ancestors celebrated the same cycle of holidays, year after year, and celebrated each holiday with the same sequence of banquets, prayers, processions and so on, they did not optimize the rites in order to arrive more quickly and efficiently at the conclusion of the final rite. And we do not have to look to the distant past to find this alternative interpretation: year after year, at the Passover Seder, my family read its way through the Hagada, performing the same steps, in the same order, as in previous years; there was no suggestion that the earlier steps were performed in order to make possible the final step, which was the point of the whole activity.21
Moreover, we often enough encounter what, even to us, looks to be confusion about the most plausible interpretation of such sequences, as when ritual activities are implausibly given an instrumental-rationality presentation. For instance, in the United States, recently hired professors will have been encouraged, and perhaps required, to attend what they were told was a ‘faculty orientation’. These events are described as means to an end; they are put on in order to ensure that new faculty know what they need to start their job. As anyone who has actually attended one of these quickly realizes, the declared end does not control the inclusion or exclusion of the presentations which constitute the orientation: it does not function as a selection criterion for the stages of the activity. The real explanation of the ritual is mimetic; that is, peer institutions put on faculty orientations, and the orientations at other institutions have such and such components, so this institution does the same—even though the activity serves no independently identifiable purpose.22
Shouldn’t we have the attitude towards sequences of activity with termination points that Foucauldians have towards prisons? Here is something that human beings do. The interpretation of a sequence of steps, on which the final step is the criterion of correctness of previous steps, and is used to select them, is only one among many. These interpretations come and go; don’t believe the patter about what the function of the activity is.
One way, then, to make sense of the widespread ineffectiveness of means-end reasoning is that human beings just do produce stepwise-structured sequences of action; they just do present these (sometimes) as having means-end structure and justification. Now, we have learned how to build cognitive exoskeletons around some of the sequenced activity so as to make locally realistic the current story about how it works and why it happens; that is a little like finding that someone has finally figured out how to get a prison to serve whatever its most recently declared purpose happens to be. The claim under consideration is not that what is alleged to be means-end reasoning cannot ever be what it purports, but rather that we should not take the widespread practice, and the stories we tell about it, at face value. This puts a further question on our agenda: why did human beings keep producing means-end interpretations of their activities, before the supporting infrastructure was put in place, and why do they now keep on doing it, for activities they are not supporting in the requisite manner? What are the bubbameises doing?

5.

Allow that we are very, very bad at means-end reasoning, unless it is exoskeletized. (Call this the Exoskeletizing Claim.) Then the takeaway lesson seems to be: more cognitive exoskeletons. Of course, if the Exoskeletizing Claim were 100% true, and if we have to reason instrumentally to design and construct working infrastructure, then we would be stuck: at the outset, your reasoning about infrastructure cannot itself be conducted within the requisite infrastructure, and we would have to live with unmitigated nihilism. But if it is mostly true, then it is clear where our efforts should be concentrated. And you might find it hard to believe that the sort of investigation I have been imagining will show the Exoskeletizing Claim to be even mostly true; after all, don’t we get by in our day-to-day lives, and doesn’t our getting by show that our explanations of what we are doing must make sense often enough?
If I am right, things are worse than we have seen so far, in a way that entails that cognitive exoskeletons cannot be the final takeaway—even if that does turn out to be part of it. To explain why, I need first to distinguish two aspects of getting a means-end inference right. There is, first of all, determining one or more effective ways of achieving an end: call this raw effectiveness. If a firm’s objective is to lower its production costs and thus increase return on capital, and if stretching its supply chain across the Pacific Rim, looking for the lowest-cost components, does that, then that way of proceeding is the (or a) correct conclusion, as far as raw effectiveness goes.
And then there is defeasibility management. Means-end inference is defeasible, that is, for just about any normally satisfactory bit of instrumental reasoning, the conclusion may be defeated by some further consideration. Let us put to one side the ways that the embedded causal reasoning can be aborted: where you are wrong about what gets the job done, because you fail to pick up on the defeating conditions of the causal connection.23 Even with raw effectiveness assured, we notice that, say, although Alvernon Way really is, as the GPS is telling you, the fastest way to get to your destination, it means driving by the bar where your passenger was once badly beaten up; you do not want to upset him, and you choose a different route. The breast reconstruction your surgeon is recommending will leave you with something that looks like a breast, but it will strip out the muscles underneath; as a photographer, you have to come out of the procedure able to lift cameras, tripods and so on, and so you ask him for an alternative. Solid-fuel rockets got the Space Shuttle into orbit, but if the Shuttle had flown as frequently as initially planned, there could well have been significant depletion of the ozone layer.24 A firm’s outsourcing strategy may indeed make it more efficient, but the conclusion might properly be defeated by many further considerations: the supply chain may be vulnerable to disruption (for instance, as when flooding closes down a supplier in Thailand); the firm may lose the ability to design its products from the bottom up; it may be unable to control the quality of the components it is purchasing as closely, and so damage its brand; national security may be at stake…and it is clear that one could continue this list indefinitely.25 The open-endedness of the list is characteristic: for any list of potential defeaters for a given inference, you can go on to think up more, and they will end up being as substantively different from each other as you like.
Defeasible inference is contrasted with deductive inference: when the reasoning is deductively valid, the truth of the premises guarantees the truth of the conclusion; when the reasoning is defeasible, the correctness of the premises and of the inference pattern warrants drawing the conclusion only modulo defeating conditions. If you are actually about to act on it, your confidence in the conclusion of a stretch of means-end reasoning should not outrun your ability to monitor and assess its defeating conditions; after all, if you act on the conclusion of a defeated means-end inference, you will have chosen foolishly.
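Schematically, and borrowing the snake-turnstile notation that the nonmonotonic-logic literature uses for defeasible consequence (the notation is imported here for convenience; nothing in the argument hangs on it):

```latex
\begin{align*}
\text{Deductive: } & P_1, \ldots, P_n \models C
  && \text{(monotonic: } P_1, \ldots, P_n, D \models C \text{ for any further premise } D\text{)}\\
\text{Defeasible: } & P_1, \ldots, P_n \mathrel{|\!\sim} C
  && \text{(nonmonotonic: the warrant for } C \text{ may lapse when a defeater } D \text{ is added)}
\end{align*}
```

In these terms, raw effectiveness is a matter of the premises and the inference pattern being in order; defeasibility management is the matter of telling whether, here and now, some defeater D is in play.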
How confident should we be in our ability to detect relevant defeaters? It is hard to be sure, because we do not have an independently specifiable measure of success; accordingly, we do not have the sort of carefully controlled studies that make modern medicine go around; and we do not have the sort of contrasting cases we described earlier on, in which the demands have been shown to be met. Instead, I will give a somewhat roundabout argument, intended to make it plausible that we are as bad at defeasibility management as we are natively at assessing raw effectiveness.

6.

Back when I had a cat, every now and again, someone used to unfairly impugn her motivations: she was quite affectionate to people, and I got told that it was only because I was feeding her. The implication was that my cat was calculating: she wanted to be fed, she had figured out that in order to be fed, she had to act affectionate, and so she was acting affectionate. Now, this line of explanation exhibits a deep confusion about how evolution solves problems. It is true that house cats are often enough affectionate to humans because humans feed them; but the force of that because is that, for a great many feline generations, cats that got along better with humans were better fed, better sheltered, and more likely to survive and reproduce than cats that did not. As a result of this history of selection, current house cats are likely to be equipped with dispositions that help them to get along with humans. But that does not mean that evolution has turned over to the cats the job of understanding and acting on the means-end connection for themselves; that is just too demanding, and, from the as-if point of view of evolution, not nearly reliable enough.26
We’re like cats, and I am going to take the liberty of continuing to personify evolution while I make this point. There are a great many things that evolution needs us to do: eat and reproduce, to list just two of the most-discussed. These are complicated activities, involving a great many steps, and these steps are often sequenced, that is, one step must first be taken in order for the next step to be possible. One solution would be to entrust to us the task of calculating what the sequence of steps has to be, and then the execution of the plan. The other would be to construct a series of triggers, where the action that ensues on the trigger normally eventuates in circumstances that provide the trigger for the subsequent step. Reproduction is clearly enough managed this way: when people have sex, they often enough are not doing it because they have calculated that this is the best way to put themselves in the position of changing diapers in the middle of the night for a year or three. Eating is pretty clearly managed this way also; people have a very hard time staying on diets, and the obvious reason is that the decision to eat something here and now is normally disconnected from larger planning processes. (By the time you realize that you are over your calorie quota for the day, the brownie is already gone.)
The first of these ways of getting things done is too demanding for evolution to rely on it. Generally, when an activity lies on what from the point of view of natural selection is a critical path, we should not assume that the activity is controlled by an exercise of practical deliberation. As Nietzsche put it once [25] (Genealogy 2:16), this is the weakest and most fallible part of our cognitive equipment. Rather, the most central of our activities will (normally, often enough: evolution typically solves problems in more than one way) bypass the process of making decisions on the basis of reasons. These activities will happen of themselves, under the radar, and appear to the agent who performs them as a series of faits accomplis. And this partially explains how it is that we can be as bad at means-end reasoning as I have suggested, and still make it through a life: the activities that really matter for getting through a life are not performed by us (not as philosophers think of performing an action: we are not functioning as autonomous agents, acting on the basis of decisions that we have made, and producing actions that are consequently full-fledgedly attributable to a person), and they are not under the control of our means-end planning.
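To fix the structural contrast just drawn, here is a toy rendering of the two architectures: entrusting the agent with the calculation, versus a chain of triggers in which each action’s normal upshot arms the next. Everything in it (the names, the naive backward-chaining planner, the dictionary representation of the causal map) is invented for illustration; it is not offered as a model of actual cognition.

```python
# Toy contrast between the two ways of getting a sequenced activity done.
# All names and data structures here are invented for illustration only.

def plan_then_execute(goal, state, causal_map):
    """(1) Entrust the agent with the calculation: derive the whole
    sequence of steps from the goal, then execute it. Demanding, because
    it requires a correct map of what brings about what.
    causal_map: outcome -> (action producing it, that action's precondition)."""
    steps = []
    need = goal
    while need in causal_map:            # naive backward chaining
        action, precondition = causal_map[need]
        steps.append(action)
        need = precondition
    for action in reversed(steps):       # earliest step first
        state = action(state)
    return state

def run_trigger_chain(state, chain):
    """(2) No step is selected 'in order to' reach an end: each trigger
    fires on current circumstances, and the ensuing action normally
    leaves the world in a condition that trips the next trigger."""
    for trigger, action in chain:
        if trigger(state):
            state = action(state)
    return state
```

In the second architecture, nothing represents the terminus as an end that selects the earlier steps, and yet the sequence gets traversed; that is the under-the-radar profile just described.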

7.

Human beings multitask, and if the various activities in which a human is engaging are not going to frustrate one another, those ongoing activities have to be coordinated. The requisite sort of integration cannot generally be managed simply by prioritizing one activity over another. Even though other species of animal do seem to solve the coordination problem this way, and even though a good many philosophers seem to think we do it this way, the sort of activities we engage in are incompetently managed by waiting to take action until one or another of them becomes the most urgent. The trick of interleaving activities requires assessments or evaluations—Elizabeth Anscombe famously called some of these “desirability characterizations”—that specify how the activities themselves, or their aims, or any of various other related objects matter.27
For instance, some activities are important under the heading of regular maintenance; if you wait until your oral surgeon tells you that he will not operate before he sees that you have adopted a dental hygiene regimen, you're doing it wrong. Some activities have desirability characterizations that mean they require your full attention: examples might be working your way through a philosophy paper, and, in a rather different way, driving. If you multitask while you do either of these, again, you're doing it wrong, although for different reasons; getting from point A to point B is routine transportation, and anything you do routinely should not be done in a way that is likely to substitute a morgue for your destination; the point of working through a paper is to figure out a philosophical problem as best you possibly can. So these activities take up uninterrupted time. Still further activities are important in ways that require frontloading: education, say, is preparation, and has to come well ahead of time. And so, when you get up in the morning, you brush your teeth first thing; you go to school, even though what you are learning may not be put to use until years down the road; when you travel to school, you keep your mind on the road; when you work on the paper you are writing today, you set aside time to do only that.
What happens to this coordination scheme when the agent finds himself performing one of the under-the-radar actions, launched by some trigger in his surroundings? We can assume, without loss of generality, that the abruptly impinging activity is not a means to any end the agent already has, and that it competes for resources (time, if nothing else) with the agent’s other ongoing activities. If the coordination scheme is going to adjust on the fly, and absorb the new activity into the ongoing flow of action, a sheaf of assessments and evaluations will be required: a desirability characterization of the new stream of action itself, of its aims, and quite possibly of much else. Ex hypothesi, the new stream of action just happened; so it does not come with the needed assessments. So they will have to be generated.
When we look around, it is obvious that we come with cognitive equipment that solves this problem. The phenomenon is usually discussed under the label cognitive dissonance reduction. In the stereotypical cognitive dissonance experiment, subjects are manipulated into producing behavior that does not seem motivated by their current lights; they then adopt attitudes (often assessments or evaluations, but not necessarily) that rationalize the behavior. In this literature, the new attitude is sometimes thought of as a subconscious hypothesis that the agent comes to have about his own psychology.
For example, in an experiment conducted about two decades back, subjects who had arranged art posters in the order of their aesthetic preference were asked, of several pairs of images, which of each pair they would like to take home; then they redid the initial task. The intermediate selection task changed the ordering; it is as though subjects were reasoning: there must be some reason I picked this image from the pair; I must like (say) impressionism; I do like impressionism.28
The cognitive dissonance literature has been focused on changes to evaluations and preferences. But there is no reason to think that the assessments generated necessarily take the form ‘I like this,’ ‘I prefer that’; from the point of view of the earlier discussion, the psychologists have been investigating the generation of particularly thin desirability characterizations. Too often, you see someone who does not know how to solve an absolutely urgent problem, but who does know how to do such and such, going ahead with such and such, perfectly convinced that it is a way of solving his problem. I expect that a normal response to cognitive dissonance is calculative self-ascription, in which one may well come to exhibit a preference or evaluation of a goal, but in any case interprets one’s action or choice as instrumentally rational with respect to some goal. (Sometimes the goal is new; sometimes only the view about what is a way of attaining it is new.) That is, I am suggesting that the contents of the novel attitudes can be found to be somewhat richer than they are reported to be, especially with regard to means-end rationalizations. If this is correct, it begins to provide an account of the pervasiveness of the means-end interpretation of activities that are not plausibly tightly controlled for effectiveness: the interpretations are generated by cognitive dissonance reduction. Those bubbameises facilitate the task of integrating the interpreted activity with other ongoing activities being generated by the agent.

8.

You might think that the following two tasks involve more or less the same demands: for a prospective choice, select an action on the basis of one’s ends, and the available means; for an action one has completed, ascribe ends to oneself that provide a means-end rationalization of the action. However, it is clear that in prospect, control of defeating conditions is absolutely essential; whereas when it comes to constructing rationalizations in retrospect, control of defeating conditions does not matter: you have already performed the action, so it is too late to abort. What you did is water under the bridge, and it cannot be undone.
We understand our activities as means-end rational almost entirely in the latter manner; we are in the business of constructing bubbameises that rationalize something that we have already done anyway. There is no reason to expect the mechanism that produces these bubbameises to be sensitive to the defeasibility conditions of the inference. On the contrary, the actions it is there to rationalize in the first place are often enough actions for which there were decisive defeaters: when you are built to wake up in someone’s bed without having decided to end up there, that is in part because, if you had thought about it, there would have been conclusive reasons not to, from the point of view of your previous agency [39] (pp. 117–19, 243).
Even if we are confident in the correctness of our views about what is a way of bringing about what, our conclusions are actions, and we should go ahead with those only if we trust our control of the defeating conditions of our means-end reasoning. The preponderance of judgments that an action is instrumentally rational are produced in a way that is scarcely sensitive to potential defeaters; in fact, we can expect to be ignoring and overriding them. Our sense of when it is rational to act on such arguments is tied to the preponderance of those judgments—the turn to reflective equilibrium got that much right—and is thus deeply corrupt. We should have scarcely any confidence at all in the conclusions of our means-end reasoning.
As before, this is a plausibility argument, provided in lieu of the kind of investigation we were imagining earlier on: the one that would establish just how good we were at identifying relevant defeaters. We tend to think we are very good at this, perhaps due to our implicit comfort level with the concept: when I gave examples of potential defeaters of an inference, you understood immediately, and when I was explaining that they do not run out, and that you can always come up with more of them, doubtless you had no problem making up a few. But recognizing the particular defeaters in your own cognitive environment is a very different, and more demanding, matter.

9.

Let us pause to entertain a more optimistic possibility than the spirit of the argument so far would seem to accommodate. I began by allowing what seems to be a shared presumption, that instrumental reasoning is the most secure subdivision of our practical rationality: if we have to give up on that, we have to give up on practical deliberation across the board. But maybe it’s the other way around: maybe, if we are as bad as I have been suggesting at means-end inference, we had better make up for it by being quite good at other modes of reasoning. And if we seem to get along decently enough, those competences must be there and picking up the slack.29
We were discussing the way we acquire ends on the fly as cognitive dissonance reduction, but mightn't our ability to clean up the messes we have made, pick up new ends that organize what we are doing, and recover from where our failed means-end reasoning lands us count as a further form of practical reasoning? Indeed, John Dewey once worked out a view in this neighborhood, though not on the basis of the assumption that we are genuinely incompetent at our calculative inferences [40]. But if we are, won't the reasoning we need allow us to generate new ends, once we have inadvertently made our former ends unattainable, or once our bungling has made them undesirable, or once we have belatedly discovered their too-expensive side-effects?
Without dismissing the possibility out of hand, it would be a mistake to embrace it as our way out of the predicament we are considering. Let us recall some of the background to our discussion. Some time back, Christine Korsgaard responded both to error theorists—those philosophers who think that, as a matter of fact, there are no facts about what is good and bad woven into the fabric of the universe—and to moral realists that you still have to figure out what to do [41] (pp. 30–49, and esp. 34, 45). There is no point in worrying about either of those alleged states of affairs, because even if values are not real, your practical question, What to do? does not go away—and if values are real, there is still your question, What to do? which has not gone away. (So, do not just despairingly insist that values must be real; and likewise, do not just throw up your hands when you decide that they are not.) But now, deepening her point, so that it sticks to what she calls “procedural” (as opposed to “substantive”) realism: when you are trying to decide what to do, telling you that there is nothing that counts as doing that correctly—in particular, that there is no such thing as a good argument for doing one thing rather than another—also does not make your decision go away. By parity of reasoning, to conclude that you have to proceed on the assumption that there is a way to make decisions that is a right way—in other words, that from the practical point of view, nihilism is not an option—is too fast.30 You might be extraordinarily bad at figuring out what to do, and nonetheless your practical problem, of what to do and how to get through your life, will not go away; pretending that you are better at it than you are will not help. Rather than rushing to assure ourselves, in the manner of moral realists insisting that there must be the evaluative facts they envision, that there has to be a repertoire of effective forms of practical reasoning available to us, do not shy away from the question: What are we to do if there is not?
This is a good occasion to mark and motivate a couple of choices about exposition made throughout this discussion. Up until about the 1970s, ‘reason’—as in, a reason for doing something, not the formerly much-discussed mental faculty—was a low-key, very ordinary, and consequently usable term. Since then, too many philosophers have gotten their hands on it, and pretty much ruined it for subsequent use. However, if you have a reason for doing something, then, in the central cases anyway, you have an argument for doing it. And we analytic philosophers have invested a great deal of training in making inductees competent with the concept. So with occasional, largely rhetorical exceptions (e.g., at the very outset of this article), I have been working with the less damaged concept.
In the case of another term I have adopted—“nihilism”—while it also has a history, and is used to mean many different things, here is why I am opting to give it the sense it has been assigned in this discussion. Prior to its various theoretical articulations, it is typically evoked in youthful existential crises by thoughts like: Maybe there is no point in going on; maybe one has no reason for doing anything at all. And once we have deployed “argument” as the replacement for “reason,” an across-the-board inability to give arguments for one’s decisions turns up as the successor to: not having reasons for doing anything at all. This is why the focus of the practical nihilism I am exploring has been the lack of control of the basic practical inference patterns out of which arguments for doing one thing or another would be constructed.

10.

Bearing in mind that I have sketched how an argument for practical nihilism would go, but left the substantive and empirical investigation to be filled in, we are not sure to what extent nihilism is our predicament. If the conclusion we were gesturing at were 100% correct, there would not be much point in trying to work up advice; instead, recalling how far the argument indirectly relied on our competence with means-end inference, we would be reduced to wondering whether to unwind it, and our situation would become uncomfortably like that of enthusiasts for Wittgenstein’s Tractatus. But if the conclusion is only percentagewise correct, we can—very cautiously—ponder what steps to take.31
In the first round of argument, I suggested that there was a way out: unsupported means-end reasoning is rarely any good, but when properly exoskeletized, it lives up to its billing. Unsupported defeasibility management is, we can expect, rarely any good. Is there infrastructure we can put in place to make defeasibility management more reliable?
No doubt there is a certain amount of it. The Darwin Awards commemorate, typically, a failure to notice a defeating condition of an initially plausible inference: throwing an explosive will make the hole in the ice, but if you have a retriever along on your ice fishing expedition (the dog will fetch what you throw), it's not such a great idea. Perhaps the publicity these mistakes receive makes it less likely that others will duplicate the fatal blunders; in any case, we might make the approach more systematic, by compiling checklists of things to watch out for. The unfortunate individuals who fail to notice the defeaters function for the collective as expendable probes; as long as the information is aggregated and disseminated, the quality of defeasibility management should improve overall.
But realistically there is only so much of the problem you can manage this way. There are always more defeaters, and so the checklists quickly become unmanageable. In a world of specialists, the defeating conditions must often be couched in a vocabulary only a specialist can understand; you cannot even tell outsiders what to watch out for, and even if you could, the nonspecialists would not be able to identify the condition if they had it in front of them. And because the sacrifices can be much greater than an expendable individual, some defeating conditions need to be anticipated, even the very first time. When the first atomic bomb was about to be detonated, it occurred to Manhattan Project staff that they might trigger a self-sustaining nuclear reaction in the atmosphere; if they had, it would have extinguished all life on earth, and obviously, this is not something you want to find out about by making the mistake.32
Tests and simulations can be helpful. But testing is likely to find problems only if you know what you are testing for. The de Havilland Comet flew in flight tests and even carried passengers; because no one was expecting pressure changes in the cabin to produce metal fatigue, no one knew how long they needed the tests to run, and even when the planes abruptly started peeling apart in midair, it took longer than you would think to identify the problem [47] (pp. 173ff). Simulations are necessarily much less complex than the processes they model; for a simulation to catch a problem, it has to model the phenomena that produce it; you can count on it doing so only if you already know what features of the domain need to be included in the simulation. But recall that the problem to be solved is that of identifying defeating conditions; before they have been noticed, why should someone designing a simulation think to model the features that give rise to them?
And let us not forget that designing a cognitive exoskeleton is itself an exercise of, inter alia, instrumental rationality, and that there are indefinitely many potential defeaters for any particular way of doing it. If the process of designing the infrastructure does not take place in the not-yet-designed infrastructure, we should expect to do badly at it. In this cluster of problems, it is extraordinarily hard to find safe ground.33
Where we now have enough experience in building supporting infrastructure for means-end reasoning to think that doing more of it will make us better at identifying raw effectiveness over the long term, we should not take it for granted that we can improve our performance on defeating conditions in the same way. My sense is that we are much worse at producing guiderails for defeasibility management: we scarcely know how it is done, we are natively bad at telling whether we are doing well, and we do not even have good ways to determine how reliable we are.
The underlying problem is that we scarcely understand defeasibility at all. And that suggests a multipart and specifically philosophical agenda.
First, since each of us has somehow gotten through life long enough to be here and wondering about how we cope with defeasibility in inference, there is the question of how we do as well as that. It is not a very high bar, but all of us (who are still around, which is not everybody) have evaded the actually fatal defeaters for our practical inferences. Thus the first task is to account for our current level of performance.
Once we have that explanation, we need to think about what potential defeaters lurk for it: is doing as well as we have dependent on local or historically contingent background facts? Could the rug one day be pulled out from under our feet? I have already indicated in passing that specialization might well be changing our circumstances for the worse in this regard. The more things it is other people’s business to know, the more things you are unaware of, and are much more likely to overlook.
We need a much better principled characterization of defeasibility and the problem of handling it. The approaches we have seen up to this point—especially, attempts to extend familiar formal methods in logic to assimilate the phenomenon and, in computer science, supplementing planning methods with brute-force search—are a bad fit with the problem itself. We need a new and clearer conception of it.
And at that point, that is, once we better understand the problem we are trying to solve, we can—perhaps—start in on the design of cognitive exoskeletons for defeasibility management.

11.

But in the meantime, the likely extent of nihilism about practical reasoning ought to motivate a deep change in how we think about practical rationality. Suppose we are quite plausibly really bad at giving arguments for practical conclusions. The way we are used to responding to failures of rationality is to ask how we can fix them: the two-stage agenda has been, first, to identify the correct rules or procedures (a task for logicians and decision theorists), and then to ensure compliance (perhaps by making students take classes in logic, or in ‘critical thinking’).
But we are now considering the likelihood that this is not going to work, or anyhow, is not going to work very much. We can build cognitive exoskeletons for some of our means-end reasoning, but, for various reasons, not nearly all of it. We may be able to provide some support for defeasibility management, but for the foreseeable future, this is not a well-understood exercise. For the most part, we just cannot fix those failures of rationality.
If we cannot fix our failures, we should exploit them. If we are bad at means-end reasoning, and bad at identifying defeaters for it, we should be designing frames for it that allow the failures to drive better performance. We do not know how to do this very well, no doubt because it is not the way we are used to doing things, and so I cannot give a recipe for it.34 Instead, let me wrap up with a handful of examples of what has to be—if nihilism about practical reasoning is mostly right, and stays mostly right—the way forward.
Markets are supposed to weed out weaker firms. A firm is all too likely to be vulnerable because, as in our examples above, its instrumental reasoning was defective: perhaps it took steps that only seemed to be means to its ends; perhaps it missed relevant defeaters. The market does not diagnose the mistakes in the argument; instead, it leaves behind stronger firms—anyway, when they are not too big to fail. More generally, Taleb recommends betting against fragility, and, incidentally, predicting the future by assuming that the fragile elements of our present will not survive to become part of it [48] (pp. 390, 310). Under the hypothesis we are now entertaining, bets by individuals on the basis of arguments are not likely to work out well, but we could look for further institutional structures that express those generic bets and predictions—that have them, as it were, built in.
We often see both individuals and firms approaching their life challenges very similarly, with consequences that can be disastrous overall. I grew up in Flint, Michigan, in the 1960s, where every able-bodied male simply assumed that he would work on the assembly line, and accordingly did not bother to develop the skills needed for a Plan B; it was standard for the boys to drop out of high school on their sixteenth birthday. But when the factory closed down, the entire town was effectively unsupported all at once. The reason we did so badly in the 2008 financial collapse was not just that we had financial institutions that were too big to fail, but that they were all prone to fail at the same time, in the same circumstances, and in the same way: our regulatory environment gives rise to institutional monocultures. An environment that produces stumbles, and which then deploys those as occasions on which agents have to choose alternative strategies, can produce the social analog of biodiversity. And there is a secondary benefit when failure makes people or companies change strategies: people who switch fields, for instance, can bring their training in their former area to bear on problems in the new area. (Which is not to suggest they will be reasoning successfully about them.) So failure can produce unexpected synergies.
Taking up a further example, we have almost forgotten what it is like to let children make mistakes. In our current childrearing practices, they are never left unsupervised, they are constantly coached, and their environment is structured in ways that make only artificial success or failure possible. Our educational system corrects the artificial failures it recognizes solely by withholding promotions. But experiencing real failure teaches people how to recover from it; there is a good deal to be said for adults who, as children, learned how to bounce back.
We are not very good at our theoretical reasoning, either; our views about how the facts stand are riddled with contradictions, and contradictions are often—although not always—a symptom of inference done badly. But we have learned to make contradictions serve us as cognitive fuel: when we identify them, that triggers a search for the theoretical errors that gave rise to them, and when it is done properly, the improvement in one’s representation of the world that results is much more impressive than simply removing the contradiction. Mining our inferential failures has, over time, made our science deeper and more powerful.
We could aim to do the same with failures of instrumental rationality. Instead of developing checklists of defeaters, or menus of defaultly effective causal pathways, we could use our regrets as occasions to rethink how we do things, and (something that is often part and parcel of that enterprise, when it is done correctly) to reconsider our large scale picture of what matters.35
The discussion we are now concluding might serve as an illustration. It is, after all, our inability to get means-end reasoning right unassisted, along with uptake of the sort I have been attempting here, that puts us in a position to appreciate how shaky our hold on instrumental rationality is. That requires us to rethink how we do our decision-making. And that in turn helps us out with two of the most difficult requirements of philosophy: the second hardest thing in philosophy, which is not anthropomorphizing one’s fellow man, and the hardest thing, which is not anthropomorphizing oneself.

Funding

Thanks to the University of Arizona’s Freedom Center for hospitality and support, and to the University of Utah for a Sterling M. McMurrin Esteemed Faculty Award.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

I am grateful to Margaret Bowman, Allen Buchanan, Buket Korkut-Raptis, Jimmy Lenman, Loren Lomasky, Geoff Sayre-McCord, Michael Millgram, Maneesh Modi, Connie Rosati, Jan Schiller, and Eric Wiland for conversation, to Chrisoula Andreou, Sarah Buss, Phoebe Chan, Christoph Fehige, Matt Haber, Madeleine Parkinson and David Schmidtz for comments on earlier drafts, and to audiences at the “Why Hume Matters” conference at the Ashmolean Museum, organized by Oxford Brookes University, at the University of the Saarland, the Università degli Studi di Modena e Reggio Emilia, the University of Canterbury, the University of Bremen, the University of Amsterdam, Haifa University and the University of Bern.

Conflicts of Interest

The author declares no conflict of interest; the funders had no role in the writing of the manuscript or in the decision to publish it.

Notes

1
The contrast here is with the thesis presented and defended as a matter of fact: roughly, the claim that there are no facts about what is valuable, what one should do, and so on. The position has recently come to be called ‘error theory,’ and is usually attributed to Mackie [1], but the thought is much older: David Hume held that, given what it takes for mental states to have contents, there could not be anything that counted as an argument that you ought to do something. Chapters 6–8 in [2] provide a reconstruction of Hume’s nihilism and his attempts, in his History of England, to provide a substitute, appropriate to political and moral contexts, for arguments proper about what to do. Refs. [3,4] are recent and typical entries in the error theory debates.
2
There is a long and insufficiently appreciated history of philosophers who have taken stands on extreme views as ways of putting on display much more nuanced positions. Just for instance, when Aristotle argued for the Principle of Noncontradiction, he took as his foil an opponent committed to affirming all contradictions; those who deny the Principle, and do so thoughtfully, insist that some, but not all, contradictions are true (Metaph. IV 3-6, K (XI), 5, 6 in [5]). Aristotle is not making the crude mistake of straw-manning his opponent, but rather exhibiting a problem his more moderate opponents have, using the clearest, because most radical, case. Again for instance, Descartes develops an extreme form of skepticism, but not in order to convince you that there is no external world.
Christine Korsgaard [6] provides a good example of the way dismissals of nihilism figure into arguments for other positions. Her paper constructs a classic two-front argument, one which positions instrumentalism between nihilism about practical reasons (the more minimal bracketing position) and more ambitious views about reasons (the more maximal bracketing position, which could be Kantian, or could be a richer notion of prudence). She invites you to agree that any argument for stepping up from nihilism to instrumentalism can be converted into an argument for taking one more step, from instrumentalism to a position that accepts categorical reasons; conversely, she argues that any reason an instrumentalist can deploy against a Kantian or self-interest theorist is matched by a reason a nihilist can deploy against an instrumentalist. The upshot—the standard conclusion of a two-front argument—is that the bracketed position is not sustainable: you can slide up, you can slide down, but you cannot stop in the middle, at instrumentalism. Korsgaard, however, takes this to be an argument for a more ambitious view; the reason is that she thinks that the more minimal position—nihilism—is already known to be, from the practical point of view, not an option. If, as I am about to argue, it is an option, Korsgaard’s argument is incomplete.
3
Ref. [7] exemplifies this way of thinking. There are, however, exceptions; e.g., ref. [8] argues against the view that instrumental rationality is a matter of taking the means to one’s ends, and also denies that reasons that turn on one thing’s being a means to another make up a distinct category of reasons, marked by a distinctive force. However, Raz is not by any means a nihilist; he thinks that we have a great many practical reasons, derived from facts about what is valuable.
4
Moreover, parts of the argument to come will turn on functional characterizations of our cognitive equipment; the intelligibility and the warrantedness of such characterizations surely both depend on our grasp of and competence with means-end reasoning. (I am grateful to Christoph Fehige for pressing me on the issue.)
5
Ref. [9] is a careful overview; outside of lying-in hospitals, during the nineteenth century, anyway in England, normal maternal mortality rates seemed to be in the ballpark of half to three-quarters of a percent per birth. (My 3–5% estimate multiplies that out by the larger parity, or family size, typical of earlier periods.) But sixteenth and seventeenth century rates seem to have been higher, on the order of one-and-a-quarter percent per birth (p. 159). As Loudon remarks, “Until the mid-1930s a majority of women in their childbearing years had personal knowledge of a member of her family, a friend, or a neighbor in a nearby street who had died in childbirth” (p. 164). See also pp. 396f (which reports on mortality rates in a religious community that refuses medical care), p. 198 for mortality rates for British lying-in hospitals prior to 1880, and p. 16 for a summary of mortality rates in England and Wales 1850–1980.
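For readers who want the multiplication spelled out, here is a back-of-the-envelope version; the parity figure of six to seven births is my assumption, adopted purely for illustration. At a per-birth mortality rate p, the cumulative risk over n births is
\[ 1 - (1 - p)^n \approx np \quad \text{for small } p, \]
so p between 0.005 and 0.0075 and n between 6 and 7 yields a lifetime risk of roughly 3% to 5%, the ballpark of the estimate above.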
6
However, it does seem to me that the argument should make us skeptical about the effectiveness of safe-sex education campaigns; it was obvious, for the longest time, that unprotected sex could get someone killed, maybe you, and that did not stop precisely the human beings who ended up reproducing from having unprotected sex.
7
See [10] for a disconcerting sample, but not all the procedures were dangerous. Here is William Whewell [11] (vol. iii, p. 222), quoting Theophrastus on the procedures for gathering medicinal plants: “We are to draw a sword three times round the mandragora, and to cut it looking to the west: again, to dance round it, and to use obscene language, as they say those who sow cumin should utter blasphemies. Again, we are to draw a line round the black hellebore, standing to the east and praying; and to avoid an eagle either on the right or on the left; for say they, ‘if an eagle be near, the cutter will die in a year.’”
8
Here I am taking issue somewhat with [12], which is generally a valuable overview of the prehistory of modern medicine. Wootton agrees that “the real puzzle with regard to the history of medicine…is working out why medicine once passed for knowledge” (144), and “why doctors for centuries imagined that their therapies worked when they didn’t” (184). His explanations cover a range of no doubt contributing factors—“the way in which people identify with their own skills, particularly when they have gone to great trouble and expense to acquire them…the risk of pursuing new ideas” (251) “…the illusion of success, the placebo effect, the tendency to think of patients not diseases, the pressure to conform, the resistance to statistics” (149)—but miss what I am suggesting is likely to have been the element that allowed these factors to determine the outcome: that (like us) the physicians of the time simply did not care about means-end effectiveness, not enough to actually pay much attention to it, even when they were very concerned about the ends.
9
Even now, evidence-based medicine is still in its infancy, and, even now, trained physicians exhibit remarkable lapses. Just for instance, quite recently, injecting medical cement (polymethylmethacrylate or PMMA) into fractured bones was a widespread practice, but with no reliable research to back it up. When two carefully designed and unbiased studies showed it to be no better than placebo, trained spine surgeons were heard claiming at a professional conference—rather like participants in the sort of church service where one bears one’s testimony—that their personal experience showed the treatment had relieved the pain in patient after patient; in other words, their training in overriding cognitive illusions about effectiveness had not overridden their own cognitive illusions, and as ref. [13] (p. 38) observes, “despite the evidence, many specialists will not abandon the procedure.” (refs. [14,15]; ref. [16] is an editor’s retrospective, and ref. [17] is a larger followup study.) This procedure has since been in part replaced (in Europe, but not the US, where standards for demonstrating effectiveness are higher) by ‘coblation,’ a procedure that looks like a new-age fad rather than real medicine; its surprisingly high level of acceptance demonstrates once again how weak our instrumental reasoning abilities are.
10
Since the standard way of thinking in one wing of the practical-rationality literature has it that your means-end inferences are correct when the means chosen make sense given your beliefs about the expedients, there is a comparison that may help here. Suppose that I am very bad at adding and subtracting numbers between one and ten; for instance, I routinely add seven and five to get eleven (and perhaps, when you ask me, I tell you that seven plus five is eleven). Suppose that when it comes to larger numbers, I correctly execute the rules I was taught in grade school—“put down the 1 and carry the 1”—so that a sum like this one
157 + 275
is made out to be 421 (because 7 + 5 = 11, put down the one, carry the one; now 5 + 7 = 11 as well, with the carried one that’s 12, put down the 2, carry the 1; finally 1 + 2 + 1 = 4). If I mostly am making mistakes of this sort, if I do not seem to care to get the single-digit sums and so on right, will we still say that I am arithmetically competent?
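Because the example is algorithmic, it can be made completely concrete. Here is a minimal sketch in Python—the function names and the encoding are mine, for illustration—of the grade-school routine executed correctly over a defective single-digit table:

```python
def digit_sum(a, b):
    # The one mistaken belief: seven plus five (in either order) makes eleven.
    if {a, b} == {5, 7}:
        return 11
    return a + b

def column_add(x, y):
    # The grade-school rule, correctly executed: add column by column,
    # put down the low digit, carry the rest.
    xs = [int(d) for d in str(x)][::-1]
    ys = [int(d) for d in str(y)][::-1]
    carry, digits = 0, []
    for i in range(max(len(xs), len(ys))):
        dx = xs[i] if i < len(xs) else 0
        dy = ys[i] if i < len(ys) else 0
        total = digit_sum(dx, dy) + carry
        digits.append(total % 10)
        carry = total // 10
    if carry:
        digits.append(carry)
    return int("".join(str(d) for d in reversed(digits)))

print(column_add(157, 275))  # prints 421, where the correct sum is 432
```

The procedure is faithfully followed at every step; it is only the single-digit ‘motivating’ table that is off, and the answer comes out wrong anyway.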
The wing of the literature in question operates with a distinction between ‘motivating’ and ‘normative’ reasons inherited from [18]. So this is an occasion to question whether the distinction is well-motivated. Call the correct rules of arithmetic, the ones that should guide my calculations, my normative arithmetic reasons; we can imagine these systematized as a theory of normative arithmetic: perhaps what we were taught in grade school, or perhaps something fancier involving the Peano axioms. Call the mistaken views which I exhibit and that explain the digits I actually write down—in the example above, that 7 + 5 = 11—my motivating arithmetic reasons. It is obvious that there is no call for a theory of motivating arithmetic; that such a theory would not be part of mathematics; that such errors are not a special sort of arithmetic reason; and that anyone who went on to develop such a theory or an epistemological account of such reasons would be a crackpot. There is not a whole lot of difference between the crackpot enterprise and the somehow current notion that there are two sorts of reasons for action, the normative ones and the motivating ones, and that we need a theory of each.
11
I have found that a natural response is to start listing putatively effective activities. For instance, a correspondent tells me:
If I think about what people do in a day, most of it seems to stand in an intelligible relation to their avowed ends. They brush their teeth (and in the ways designed to remove food), take baths or showers, dress “for the weather.” They go to college because they rightly believe that this will improve their employment prospects. They seek out friendships because they rightly believe that this will contribute to their happiness (and they generally interact with their would-be friends in ways that make friendships possible), etc. When they eat food that is bad for their health, they generally know it, and if they do not feel bad about their “bad habits,” this is generally because they would rather pay the costs than forego the pleasures. The same can be said for most “bad decisions” (skipping class, drinking too much, spending hours “searching” the web, etc.) I know that you take yourself to have addressed this worry. But, again, you need to say more to convince me.
Leaving to one side the personal hygiene habits—these are due to public-health campaigns, and recall that modern medicine does investigate effectiveness relatively successfully—notice that the remainder of the entries have precisely the status of those premodern medical therapies: they are thought to be effective, and no one has really checked. It is irrelevant that someone thinks that he can list effective means without the requisite sort of verification: that I need to say more to convince people (that they go on believing that what they think is effective, is) is precisely the problem.
12
Now, in the medical domain, we have gotten good at counting successes and failures; perhaps we were helped out, at the beginning, by the ease with which the dead are counted. In most other domains, it is harder to count, and it is especially hard to estimate what proportion, weighted for importance, of our expedients are effective in these domains. We should not make the mistake of treating the difficulty as a rebuttal of the argument: where we can count, it turns out that we were ineffective (until we did start to count). And we were remarkably cocksure about our own effectiveness. It would be silly to suppose that the greater difficulty of counting warrants cocksureness elsewhere.
13
Returning briefly to a remark above, in order to find out what works, we need target states that we can specify clearly. Often we cannot specify the target states—for instance, we do not know how to measure education, mostly because we do not understand what it is. This obstacle is not our present topic, but it should not be underestimated. There is a smallish literature on the sort of thinking that takes us from underspecified ends to goals that are specified tightly enough to launch trains of means-end reasoning (see [19] for an overview), and there is another emerging literature on the pitfalls of metrics meant to serve as proxies for such targets [20,21]. However, these literatures do not examine the obstacles that loosely specified or hard-to-measure goals present to determining how much there is in the way of instrumental reasoning.
14
But can’t I just lift a cup to my lips in order to drink? And is that not effective but unsupported means-end reasoning? When simple bodily actions are effective, you are not normally able to articulate the detailed calculative structure of the action; from your point of view, it just happens. That is because evolution (and we are not just talking recent primate evolution) has developed and debugged the machinery over many millions of years. Bear in mind that I am not claiming there is nothing we can do effectively; we ought to suppose otherwise, if only because cognitive scientists are in the business of finding tasks that some of their subjects can figure out how to perform. (‘Some,’ because a task that all subjects, or none of them, can execute does not generally give you a publishable result.) When natural selection has had long stretches of time to get the mechanics of locomotion, say, or some routine social problem down, we should not be all that surprised when the machinery works. What we are considering here, however, are more elaborate, less repetitive, often culturally inflected, relatively ephemeral, and therefore relatively novel problems that evolution has not had a chance to solve by trial and error. We should not think that being good at the former sort of task means that we will also be good at the latter.
15
Since I have been invoking natural selection, let me take time out to do some compare and contrast, in this case with [22], where Sharon Street attacks realism as a way of clearing space for (what she understands as) a reflective-equilibrium constructivism. In her view, there is no Darwinian explanation for sensitivity to the sort of nonnatural evaluative facts beloved of the Moorean tradition, but it is unreasonable to suggest that we are entirely unaware of what is good and what is not. While I agree with the observation about natural selection, from where I stand, realism and antirealism in metaethics are two sides of a debate over, to put it a bit bluntly, invisible glows. It is misguided to endorse either position—or even to think there are intelligible positions to endorse [23] (Chapters 5–6).
What Street complacently pronounces to be the “implausible skeptical conclusion that our evaluative judgements are in all likelihood mostly off track” (122) seems to me not “far-fetched,” as she describes the claim a little later, but quite clearly the challenge facing anyone who takes ethics and moral theory seriously: they are wildly off-track, and the urgent question is what to do about it. Street does consider, under this heading, the possibility that “the tools of rational reflection [are]…contaminated”; she responds that “rational reflection about evaluative matters involves…assessing some evaluative judgements in terms of others…The widespread consensus that the method of reflective equilibrium, broadly understood, is our sole means of proceeding in ethics is an acknowledgement of this fact: ultimately, we can test our evaluative judgements only by testing their consistency with our other evaluative judgements, combined of course with judgments about the (non-evaluative) facts” (124). I do not belong to the alleged consensus; see [2] (pp. 7–10), for preliminary discussion of the shortcomings of reflective equilibrium; we will consider the method of reflective equilibrium in the coming section. Where Street wants to put to one side what she takes to be the unacceptable result that our evaluative reasoning is hopelessly corrupt, I am about to argue for it. Finally, Street’s focus is evaluation, primarily ethical or moral evaluation, rather than what the forms of legitimate practical inference are.
16
For an illustration, see the previous footnote.
17
Ref. [24] (pp. 63f), emphasis deleted.
18
I have had it suggested to me that the prevailing view is that reflective equilibrium is an explanation of the meaning of “valid inference rule.” While it is possible that Goodman is being misread in this way, first, I have never seen an argument for the claim so construed. Second, it would be a terrible theory of what “valid” means. And third, an appeal to such an understanding of the doctrine of reflective equilibrium would surely be an expression of the conviction, absolutely unbecoming in a philosopher, that it does not require any supporting argument.
19
That rendering is derived largely from [25] (Genealogy, 2:12–13); you can see the procedure, so understood, on display in [26].
20
See especially [27], which nicely observes that in ‘calculative’ action, you take one step after the next, until you arrive at the ‘end’—i.e., the last step of the sequence. However, Vogler endorses the means-end interpretation of the structure: e.g., when asked why you are taking a step, you can adduce the ‘end,’ the termination point of the sequence, as a reason; and if you do, it counts as an objection if taking the step will not bring about the end.
21
Even when activity is instrumentally structured, in that earlier steps are chosen in view of the final end, the means-end or calculative structure of the activity can have various functions. Adopting such termination points can be a volitional prosthesis, for instance, a way of overcoming procrastination [29]. Ref. [30] argues that long-term plans are often not there to guide activity over the long term—they will most likely be abandoned mid-way, and a self-aware agent understands that—but to frame and support choices in the here-and-now. (The Bowman-Lelanuja Thesis is a corollary: when it is clear up front that your long-term plan will not be executed all the way to the end, realism about the effectiveness of the farther-out steps of the plan is beside the point [31] (p. 92f). This may be a partial justification for the evident lack of concern with whether what one does will achieve what one says it will.) In general, we should not assume, even when activity is composed of component actions directed towards an end, that the point of that instrumental structuring is attaining the end.
22
One might cynically imagine that the bureaucrats are generating institutional irrationality as a side effect of their personal and entirely effective instrumental reasoning: e.g., the bureaucrats’ goal is being visibly busy, and a faculty orientation serves that end. Having myself been in the sort of meetings that generate these activities, I can testify that such explanations are only rarely in place.
23
I am not going to pursue this line of argument now, but notice that there is every reason to believe that we are terrible at this as well, and here is some indirect evidence. Investment fund managers attempt to do better than the index for their market segment. They make their choices on the basis of causal arguments whose raw-effectiveness cores normally seem to make sense. I am told that being right about 60% of the time on a continuing basis is very, very good. So if the problem is not after all that they cannot be bothered to find effective means, they must generally be very bad at picking out the defeaters for those causal arguments.
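A toy calculation, with numbers that are mine and purely illustrative, shows how little defeater-blindness it would take to produce that track record. Suppose the raw-effectiveness core of a manager’s argument is right 90% of the time, and that a pick succeeds only when none of k relevant defeaters goes unnoticed, each being overlooked independently with probability d. The observed hit rate is then
\[ 0.9 \times (1 - d)^k = 0.6, \]
and with k = 4 this requires d ≈ 0.10: missing each of a mere handful of defeaters one time in ten is already enough to drag otherwise sensible causal arguments down to the celebrated 60% ceiling.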
24
Ref. [32] (pp. 57–65) is an early treatment, and ref. [33] a somewhat defensive analysis.
25
Ref. [34] is the properly published part of a samizdat text that raised such national security concerns (the original coauthor, Akio Morita, declined to authorize an English translation of his contribution). The observation that prompted public uproar was this:
without using new-generation computer chips made in Japan, the U.S. Department of Defense cannot guarantee the precision of its nuclear weapons. If Japan told Washington it would no longer sell computer chips to the United States, the Pentagon would be totally helpless. Furthermore, the global military balance could be completely upset if Japan decided to sell its computer chips to the Soviet Union instead of the United States.
(p. 21)
26
For exposition of the logic of this point see, e.g., [35] (esp. at p. 406).
27
In the version of protohuman agency I am dismissing, urgency levels fluctuate, and the task that gets addressed is the one that feels most urgent at the time. Now, it is not that you cannot find the hardware behaving this way sometimes, but when it does, performance is liable to be poor. To take an extreme example, drivers whose jeeps break down in the Sahara will end up drinking the vehicle fluids before they die; thirst becomes that urgent [36] (pp. 53, 149–152, 188). They do this even though the motor oil, battery fluid, radiator fluid and so on are not water, will not assuage their thirst, and will in various other ways poison them. This is the failure to invoke a relevant desirability characterization, which here would be something on the order of, “drinking is desirable as a means of hydration.” Instead, the priority assigned to drinking is increased as hydration levels fall, and eventually it becomes the top priority. Although this is an especially dramatic example, not all of them are; think of how students end up pulling all-nighters. In general, managing priorities this way is such a bad way of resolving the action sequencing problem that we cannot get by on it across the board: and so we don’t.
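A sketch of the mechanism being dismissed may help; the model and its numbers are mine, invented for illustration. The agent simply takes whichever option currently feels most urgent, and nothing in the loop ever consults the relevant desirability characterization, that is, whether the ‘drink’ on offer is in fact a means of hydration:

```python
# Toy model of urgency-driven action selection (my illustration only).
def pick_action(options, hydration):
    # Thirst's urgency rises as hydration falls; the bump attaches to
    # anything classified as drinking, whether or not it would hydrate.
    for o in options:
        if o["kind"] == "drink":
            o["urgency"] = 10 * (1 - hydration)
    # No desirability check: just take whatever feels most pressing.
    return max(options, key=lambda o: o["urgency"])

options = [
    {"name": "drink the motor oil", "kind": "drink", "urgency": 0.0},
    {"name": "rest in the shade", "kind": "rest", "urgency": 2.0},
]
# At 10% hydration, drinking scores 9.0 and wins, though it will not help.
print(pick_action(options, hydration=0.1)["name"])
```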
This does not yet show that we do not use weights or strengths of desires to solve our scheduling problems in some more sophisticated way; but ref. [30] argues that the more sophisticated proposals succumb to a more complex relative of the problem at which I have been gesturing.
28
Ref. [37]; for a survey of older literature, see [38]. The phenomenon as I have just described it is familiar; this particular experiment was focused on the extent to which cognitive dissonance reduction happens in consciousness and depends on explicit memory; the results suggest, not particularly. The researchers are pushing back against the way of thinking on which we are observing reasoning conducted unawares, and suggesting that we should see cognitive dissonance reduction as merely a mechanism. And if we are as bad at instrumental reasoning as I have been suggesting, perhaps it had better be merely a mechanism: we could not manage the functionality if we had to figure out how to do it.
29
I am grateful to Madeleine Parkinson for pressing me on this point.
30
Full disclosure, I have been among those drawing that too-fast conclusion; see [42] (pp. 64–66, 175); so here I am disagreeing not only with Korsgaard but with my younger self.
31
When asking how bad we are at practical reasoning, it is natural to wonder: compared with what? Here the obvious point of reference is the picture of the ideally rational agent that has since been contested by the emerging bounded rationality literature. For readers interested in that contrast, a suitable representative, in the philosophical world, of the idealized view would be Donald Davidson [43] (Chapters 2–4), who argued that agents must be—as a condition of being interpretable—assumed to be very largely decision-theoretically coherent. For an entry point into the newer tradition, the psychological studies showing the idealized model to be untenable were launched by work collected in [44]—work that subsequently garnered a Nobel Prize for the surviving member of the collaboration.
32
You might think that we could make the checklists more easily searchable, by ordering them so that you find the most frequently encountered defeaters first. That will not do: as Taleb points out [45], defeaters that are so rare that we have not yet encountered them can also have impacts so devastating that we cannot afford to ignore them.
It is in any case hard to believe that checklists are how defeasibility is managed. Oaksford and Chater point out that
Prima facie in both memory and inference tasks, the richer the informational content of the materials used, the better performance seems to be. This seems mysterious on a classical account. To encode a richer stimulus should require more formulae to be stored in the declarative database, and so both storage and retrieval should become harder. On the classical view, resource limitations should become more acute with richer stimuli. However, the psychological evidence seems to indicate that in human memory, performance is not impaired but enhanced with richer stimuli. [46]
(p. 48)
33
The problem is not merely an abstractly characterized but uninstanced possibility. I will give just one example (which however requires a slightly lengthy explanation).
In order to improve the decisions made by institutional investors, the performance of fund managers is benchmarked. Investors want to be able to determine how much of the performance is attributable to a manager’s stock-picking skill, and how much to, e.g., sector allocation or risk assumption. So, almost inevitably, funds are confined to very tightly constrained classes of equities: say, small-cap growth stocks. (The fund’s performance can then be compared to the performance of other funds that are trying to do the very same thing.) The upshot is that a fund manager will be required to take positions that he understands to be against the interests of his investors: for instance, by selling a small-cap firm that is now doing so well it no longer counts as small-cap; or by buying stocks in a sector he expects to do badly, with the intention, not of making money for his clients, but of outperforming a downward-trending index, or of not exceeding a limit on cash holdings. The cognitive exoskeleton does what it is supposed to, but produces perverse behavior over a large part of the market.
34
As Taleb discovered [48], exploring a closely related topic, we are so little used to thinking this way that he had to invent a word for the contrary of “fragility”. However, although the topics are related, they are not quite the same: he is interested, among other things, in learning from mistakes in ways that permit better decision-making down the road, while I am focused on the sorts of case where you are not going to learn from your mistakes in any way that will enable better decisions later. See, e.g., p. 192, where Taleb observes that to learn from your mistakes, you have “to be intelligent in recognizing favorable and unfavorable outcomes, and knowing what to discard”; but we may well not be that intelligent.
35
A nihilist might think that the problem is not just that we are bad at instrumental rationality, but that we are no good at figuring out what matters, even with these sorts of prompts. If the latter but not the former were correct, then effectiveness itself might turn out to be the problem: if you persistently choose the wrong goals, the better you are at pursuing them, the worse off you are. In an amusing series of asides, refs. [49] (p. 14) and [50] (pp. 44, 66f) recommend a “Bureau of Sabotage,” and suggest that “eternal sloppiness [i]s the price of liberty.”

References

1. Mackie, J.L. Ethics: Inventing Right and Wrong; Penguin: New York, NY, USA, 1977.
2. Millgram, E. Ethics Done Right: Practical Reasoning as a Foundation for Moral Theory; Cambridge University Press: Cambridge, UK, 2005.
3. Olson, J. In Defence of Moral Error Theory. In New Waves in Metaethics; Brady, M., Ed.; Palgrave Macmillan: New York, NY, USA, 2011; pp. 62–84.
4. Streumer, B. Can We Believe the Error Theory? J. Philos. 2013, 110, 194–212.
5. Aristotle. Complete Works; Barnes, J., Ed.; Princeton University Press: Princeton, NJ, USA, 1984.
6. Korsgaard, C.M. Skepticism about Practical Reason. In Varieties of Practical Reasoning; Millgram, E., Ed.; MIT Press: Cambridge, MA, USA, 2001; pp. 103–125.
7. Dreier, J. Humean Doubts about Categorical Imperatives. In Varieties of Practical Reasoning; Millgram, E., Ed.; MIT Press: Cambridge, MA, USA, 2001; pp. 27–47.
8. Raz, J. The Myth of Instrumental Rationality. J. Ethics Soc. Philos. 2005, 1, 2–28.
9. Loudon, I. Death in Childbirth: An International Study of Maternal Care and Maternal Mortality 1800–1950; Oxford University Press: Oxford, UK, 1992.
10. Duden, B. The Woman Beneath the Skin; Dunlap, T., Translator; Harvard University Press: Cambridge, MA, USA, 1991.
11. Whewell, W. History of the Inductive Sciences, 3rd ed.; John W. Parker and Son: London, UK, 1857.
12. Wootton, D. Bad Medicine; Oxford University Press: Oxford, UK, 2007.
13. Prasad, V.; Cifu, A.; Ioannidis, J. Reversals of Established Medical Practices: Evidence to Abandon Ship. J. Am. Med. Assoc. 2012, 307, 37–38.
14. Buchbinder, R.; Osborne, R.; Ebeling, P.R.; Wark, J.D.; Mitchell, P.; Wriedt, C.; Graves, S.; Staples, M.P.; Murphy, B. A Randomized Trial of Vertebroplasty for Painful Osteoporotic Vertebral Fractures. N. Engl. J. Med. 2009, 361, 557–568.
15. Buchbinder, R.; Kallmes, D. Vertebroplasty: When Randomized Placebo-Controlled Trial Results Clash with Common Belief. Spine J. 2010, 10, 241–243.
16. Carragee, E. The Vertebroplasty Affair: The Mysterious Case of the Disappearing Effect Size. Spine J. 2010, 10, 191–192.
17. Firanescu, C.E.; de Vries, J.; Lodder, P.; Venmans, A.; Schoemaker, M.C.; Smeet, A.J.; Donga, E.; Juttmann, J.R.; Klazen, C.A.H.; Elgersma, O.E.H.; et al. Vertebroplasty versus Sham Procedure for Painful Acute Osteoporotic Vertebral Compression Fractures (VERTOS IV): Randomised Sham Controlled Clinical Trial. BMJ 2018, 361.
18. Smith, M. The Humean Theory of Motivation. Mind 1987, 96, 36–61.
19. Millgram, E. Specificationism. In Reasoning: Studies of Human Inference and its Foundations; Adler, J., Rips, L., Eds.; Cambridge University Press: Cambridge, UK, 2008; pp. 731–747.
20. Scott, J. Seeing Like a State; Yale University Press: New Haven, CT, USA, 1998.
21. Porter, T. Trust in Numbers; Princeton University Press: Princeton, NJ, USA, 1995.
22. Street, S. A Darwinian Dilemma for Realist Theories of Value. Philos. Stud. 2006, 127, 109–166.
23. Millgram, E. The Great Endarkenment; Oxford University Press: Oxford, UK, 2015.
24. Goodman, N. Fact, Fiction, and Forecast, 4th ed.; Foreword by Hilary Putnam; Harvard University Press: Cambridge, MA, USA, 1983.
25. Nietzsche, F. Basic Writings of Nietzsche; Modern Library/Random House: New York, NY, USA, 2000.
26. Foucault, M. Madness and Civilization: A History of Insanity in the Age of Reason; Howard, R., Translator; Random House: New York, NY, USA, 1988.
27. Vogler, C. Reasonably Vicious; Harvard University Press: Cambridge, MA, USA, 2002.
28. McLuhan, M. The Gutenberg Galaxy; University of Toronto Press: Toronto, ON, Canada, 1966.
29. Millgram, E. Virtue for Procrastinators. In The Thief of Time; Andreou, C., White, M., Eds.; Oxford University Press: New York, NY, USA, 2010; pp. 151–164.
30. Bowman, M. Are Our Goals Really What We’re After? Ph.D. Thesis, University of Utah, Salt Lake City, UT, USA, 2012.
31. Millgram, E. Pluralism about Action. In A Companion to the Philosophy of Action; O’Connor, T., Sandis, C., Eds.; Wiley-Blackwell: Oxford, UK, 2010; pp. 90–96.
32. Cicerone, R.J.; Stedman, D.H.; Stolarski, R.S.; Dingle, A.N.; Cellarius, R.A. Assessment of Possible Environmental Effects of Space Shuttle Operations; Technical Report; NASA Contractor Report CR-129003; University of Michigan: Ann Arbor, MI, USA, 1973.
33. Ross, M.; Toohey, D.; Peinemann, M.; Ross, P. Limits on the Space Launch Market Related to Stratospheric Ozone Depletion. Astropolitics 2009, 7, 50–82.
34. Ishihara, S. The Japan That Can Say No; Baldwin, F., Translator; Simon and Schuster: New York, NY, USA, 1989.
35. Tooby, J.; Cosmides, L. The Past Explains the Present: Emotional Adaptations and the Structure of Ancestral Environments. Ethol. Sociobiol. 1990, 11, 375–424.
36. Langewiesche, W. Sahara Unveiled; Pantheon: New York, NY, USA, 1996.
37. Lieberman, M.; Ochsner, K.; Gilbert, D.; Schacter, D. Do Amnesics Exhibit Cognitive Dissonance Reduction? The Role of Explicit Memory and Attention in Attitude Change. Psychol. Sci. 2001, 12, 135–140.
38. Cooper, J. Cognitive Dissonance: Fifty Years of a Classic Theory; Sage Publications: London, UK, 2007.
39. Griffiths, P. What Emotions Really Are; University of Chicago Press: Chicago, IL, USA, 1997.
40. Dewey, J. Theory of Valuation. In John Dewey: The Later Works, 1925–1953; Boydston, J.A., Levine, B., Eds.; Southern Illinois University Press: Carbondale, IL, USA, 1981; Volume 13, pp. 191–251.
41. Korsgaard, C.M. The Sources of Normativity; Cambridge University Press: Cambridge, UK, 1996.
42. Millgram, E. Practical Induction; Harvard University Press: Cambridge, MA, USA, 1997.
43. Davidson, D. Problems of Rationality; Clarendon Press: Oxford, UK, 2004.
44. Kahneman, D.; Slovic, P.; Tversky, A. Judgment under Uncertainty: Heuristics and Biases; Cambridge University Press: Cambridge, UK, 1982.
45. Taleb, N. The Black Swan, 2nd ed.; Random House: New York, NY, USA, 2010.
46. Oaksford, M.; Chater, N. Rationality in an Uncertain World; Routledge: London, UK, 1998.
47. Verhovek, S.H. Jet Age; Penguin: New York, NY, USA, 2011.
48. Taleb, N. Antifragile: Things That Gain from Disorder; Random House: New York, NY, USA, 2012.
49. Herbert, F. Whipping Star; Berkley Publishing: New York, NY, USA, 1971.
50. Herbert, F. The Dosadi Experiment; Berkley Publishing: New York, NY, USA, 1977.