RELATIVISM: COGNITIVE AND MORAL
There is only a perspective seeing, only a perspective “knowing.”
Relativism is an inherently controversial topic. The very word inspires polemics that are sometimes passionate and often hostile. Relativism seems to be a threat to intellectual certainties, on the one hand, and to moral seriousness, on the other. Here are just two examples. Pope Benedict, on the eve of his election, proclaimed that we are “moving toward a dictatorship of relativism which does not recognize anything as for certain and which has as its highest goal one’s own ego and one’s own desires.”2 And in his best-selling book, The Closing of the American Mind: How Higher Education Has Failed Democracy and Impoverished the Souls of Today’s Students, the late Allan Bloom wrote that “relativism has extinguished the real motive of education, the search for a good life.”3
Both these statements suggest that what their authors call “relativism” has already secured wide appeal, and both focus on moral relativism, which certainly does seem plausible and attractive to many people, even if they don’t use, or even reject, the label. What is it that causes those who denounce it such concern? In this book I shall try to clarify just what is at issue here. What exactly does a relativist assert, and what is distinctive about moral relativism? What is it about moral relativism that both attracts and repels? What is defensible in it and what should be rejected?
First we need to distinguish between relativism about knowledge, or cognitive relativism, and moral relativism, on which we will focus our attention.
Is what we can know determined by a world that is independent of us, or is it, in some sense, “up to us”? Immanuel Kant maintained that we cannot step outside the human standpoint—the circle of our own conceptions, theories, and reasonings—to a bare world as it is in itself, independent of them. Kant’s philosophy was built on this unnerving thought, but Kant sought to defuse the threat. He used “we” inclusively to mean all of us human beings, together with any other being that humans could understand. So “we,” in this inclusive sense, are all in the same boat with respect to knowledge and reason. Moreover, there is no cause for alarming uncertainty about what we can know and how we should reason. After all, the only knowledge available to us has to be intelligible to us. So it must be framed within the pregiven categories (such as space, time, persons, and objects in causal relations with one another) that shape our thinking and make it possible. And since we are rational persons, how we reason is not up to us but set by the requirements of Reason (with a capital R).
But Kant’s reassurances were gradually swept away and the thought became more unnerving. Friedrich Nietzsche made a first major breach by advancing what is sometimes called “perspectivism”—writing that there is “only a perspective seeing, only a perspective ‘knowing’”4—according to which what we know is guided, shaped, even constituted by our desires, our passions—in short, our interests. There is no “true world” that is really objective but unknown to us humans. There are indefinitely many possible perspectives from which knowledge is to be had, and there is no prospect of their being brought to converge within a true, comprehensive theory of the world.
This thought becomes fully relativist when the idea of perspectives is tied to particular groups within humanity. Now the idea is that potentially all our ideas and theories are to be seen as local cultural formations, rooted in and confined to particular times and places, and that there is no independent “truth of the matter” to decide among them. This may in turn suggest that we as human beings have no shared standards on the basis of which we can understand one another. Now there are multiple “we’s,” each with “our” own standards of truth, reasoning, and morality. The term we is no longer inclusive but contrastive: it picks out us as opposed to others. As this idea spreads, Bernard Williams writes,
[m]oral claims, the humane disciplines of history and criticism, and natural science itself have come to seem to some critics not to command the reasonable assent of all human beings. They are seen rather as the products of groups within humanity expressing the perspectives of those groups. Some see the authority of supposedly rational discourse as itself barely authority, but rather a construct of social forces.
In a further turn, reflections on this situation itself can lead to a relativism which steps back from all perspectives and sees them all at the same distance, all true, none true, each of them true for its own partisans.5
Not all relativists travel the full distance of this reckless and giddy journey. Those who do often insist on “the socially constructed and politically contested nature of facts, theory, practices and power.”6 The very phrase “social construction”—and, worse still, “the social construction of reality”—has, for a while, had an intoxicating effect on thinkers in various social scientific disciplines. The effect was not to refute social scientists’ theories and explanations or to unmask ways in which their findings can serve socially or politically powerful interests, but rather to undermine the very idea that scientific explanations are superior to others. So, for example, an archaeologist working for the Zuni Indian tribe, who believe that their ancestors came from inside the earth into a world prepared for them by supernatural spirits, writes that science “is just one of many ways of knowing the world” and that the Zuni worldview is “just as valid as the archaeological viewpoint of what prehistory is about.” Another archaeologist, Dr. Zimmerman of the University of Iowa, explicitly rejects “science as a privileged way of seeing the world.”7 And the anthropologist Renato Rosaldo views social scientists’ claims to “objectivity, neutrality and impartiality” as “analytical postures developed during the colonial era” which “can no longer be sustained”: they are “arguably neither more nor less valid than those of more engaged, yet equally perceptive, knowledgeable social actors.”8
The way for such assertions was prepared by, among others, three thinkers, who raised questions about the objectivity of science itself in its very heartland, namely, natural science. One was Paul Feyerabend, self-described “epistemological anarchist,” who famously wrote in Against Method that “science is much closer to myth than a scientific philosophy is prepared to admit. It is one of the many forms of thought that have been developed by man, and not necessarily the best. It is conspicuous, noisy, and impudent, but it is inherently superior only for those who have already decided in favour of a certain ideology, or who have accepted it without ever having examined its advantages and its limits.”9 A second was his fellow historian-philosopher of science Thomas Kuhn, whose enormously influential book The Structure of Scientific Revolutions10 challenged the standard textbook picture of scientific progress cumulatively evolving toward the truth, suggesting instead that science proceeds through a succession of “incommensurable” paradigms, seen as constellations of group commitments. And a third was Bruno Latour, who engaged in anthropological studies of scientists’ “laboratory life,” claiming, for example, that “nature” can never explain how a scientific controversy gets settled and proclaiming that “[i]rrationality is always an accusation made by someone building a network over someone else who stands in the way.”11 No space here, it would seem, for the role of factual evidence or of reasoning in settling disputes or advancing scientific knowledge. (Interestingly, Kuhn never licensed and both Feyerabend and Latour subsequently distanced themselves from the extreme relativist conclusions others have drawn from their writings.12)
The idea that facts, or indeed “reality,” are socially constructed is an intoxicating mix of three distinct ideas, as Ian Hacking has made clear in his book The Social Construction of What?13 Each of these ideas is heady enough, and the first step to sobriety is to consider the plausibility of each in particular cases. (There is a difference, after all, between claiming that, say, quarks are socially constructed and claiming that attention deficit disorder is.) The first is the idea of contingency: the thought that our explanatory theories could have been quite otherwise—so that, for example, there could have been an equally successful alternative physics in no sense equivalent to existing physics. The second is the idea of nominalism: the thought that our categories and classifications are not fixed by the structure of the world but by our linguistic conventions. And the third is the idea, sometimes called externalism, that we believe what we do, not because of the reasons that appear to justify what we believe, but because of factors such as the influence of the powerful or of social interests or of institutional imperatives or of social networks. This last idea lies at the origins of the discipline called “the sociology of knowledge.”
The classic founders of that discipline were reluctant to travel any significant distance down the relativist road. So Marx and Engels and later Marxists never supposed that their knowledge of history and the dynamics of capitalism was merely “local knowledge.” It was, they thought, scientifically warranted. Ideology, by contrast, was distorting and deceptive (as opposed to objective, truth-tracking) thinking, rooted in and serving class interests. Emile Durkheim, French founding father of sociology, and the Durkheimians likewise trusted the rules of sociological method to guide one to results warranted by adequate evidence and well-formed theories—including the result that cosmologies and ways of classifying the natural world reproduce features of the social structure and that our most basic categories are born out of social experience. Karl Mannheim, Hungarian founder of the sociology of knowledge, claimed that “the thought of all parties in all epochs is of an ideological character,” but he nevertheless repudiated “the vague, ill-considered and sterile forms of relativism with regard to scientific knowledge”14 and came to think that undistorted thought could be attained by “socially unattached intellectuals.” As Robert Merton, the distinguished American sociologist, observed, Mannheim’s view was that intellectuals “are the observers of the social universe who regard it, if not with detachment, at least with reliable insight, with a synthesizing eye”15 (a view hard indeed to square with the unremitting partisanship of intellectuals in the political and ideological battles of our times).
Various influences from different quarters subsequently impelled thinkers to speed further and faster down the road. From linguistics and anthropology came the so-called Sapir-Whorf hypothesis, according to which “[w]e dissect nature along lines laid down by our native languages … . [T]he world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds—and this means largely by the linguistic system in our minds.”16 So Sapir wrote: “The ‘real world’ is to a large extent unconsciously built upon the language habits of the group. The worlds in which different societies live are distinct worlds, not merely the same world with different labels attached. We see and hear and otherwise experience very largely as we do because the language habits of our community predispose certain choices of interpretation.”17
From anthropology (about which much more in the next chapter) came the notion of divergent cosmologies and ways of reasoning—a notion that has intrigued a succession of philosophers. So the French philosopher Lucien Lévy-Bruhl speculated that “the reality in which primitives move is itself mystical” and their reasoning “pre-logical.”18 Sir Edward Evans-Pritchard, the great Oxford anthropologist, in his study of Zande witchcraft, oracles, and magic, sought to scotch this idea, noting that tribal peoples, living close to the harsh realities of nature, cope and survive by observation, experiment, and reason and that their mystical thought and behavior is mainly restricted to ritual occasions. Evans-Pritchard contrasted mystical with commonsense and scientific notions and had no qualms about judging Zande witchcraft beliefs as mystical, unfalsifiable, and illogical. Magical beliefs formed a mutually supportive network riven with contradictions and so ordered that they never too crudely contradicted sensory experience.19 The British philosopher of social science Peter Winch boldly contested Evans-Pritchard’s assumption that in matters of witchcraft “the European is right and the Zande wrong”: the Zande were not seeking “a quasi-scientific understanding of the world,” and it was the European, “obsessed with pressing Zande thought where it would not naturally go—to a contradiction—who is guilty of misunderstanding, not the Zande.” Winch drew from this critique the relativist-sounding conclusion that “standards of rationality in different societies do not always coincide” and that “what is real and what is unreal shows itself in the sense that language has.”20 In this Winch was deeply influenced by the philosopher Ludwig Wittgenstein, who had also reflected on anthropological examples, notably Sir James Frazer’s classic The Golden Bough and, indeed, Evans-Pritchard’s study. 
So, Wittgenstein asked in On Certainty, is it wrong for “primitives” to consult an oracle and be guided by it?: “If we call them ‘wrong’ aren’t we using our language-game as a base from which to combat theirs?” But, Wittgenstein continues, aren’t we offering them reasons?: “Certainly, but how far would they go? At the end of reasons comes persuasion. (Think what happens when missionaries convert natives).”21 In a famous metaphor, Wittgenstein writes elsewhere that when I reach this point, justifications run out: “I have reached bedrock, and my spade is turned. Then I am inclined to say: ‘This is simply what I do.’”22
This debate continues. Most recently, two anthropologists have fiercely disagreed over the question of whether or not the Hawaiians who killed Captain Cook believed he was the embodiment of one of their gods. Gananath Obeyesekere is sure they did not, because they are as rational as we are. He is concerned to contest European “myth models” of the savage mind and opposes the idea of a “radical disjunction between the Western self and society and those of the pre-industrial world”: what “links us as human beings to our common biological nature and to perceptual and conceptual mechanisms that are products thereof” is “practical rationality.”23 Plainly, he maintains, the Hawaiians were capable of making the discriminations necessary to prevent them from mistaking Cook for a god. Marshall Sahlins disagrees. He disputes the relevance to the Hawaiians’ world of the appeal to what he calls “Western logic and commonsense.” This “Western” viewpoint “constitutes experience in a culturally relative way” and misleads us when we try to make sense of alternative cosmologies, epistemologies, and systems of classification, which are “completely embedded in and mediated by the local cultural order” and at odds with scientific classifications that purport “to be determined by things in and of themselves.” To apply our “commonsense bourgeois realism” to the interpretation of other cultures is “a kind of symbolic violence done to other times and other customs.” And so Sahlins proclaims: “Different cultures, different rationalities.”24
Another source of the relativist impulsion is American pragmatism, which viewed language as a tool and linked the idea of truth to what is useful to us in satisfying our needs. Pragmatism (alongside Wittgenstein) powerfully influenced the philosopher Richard Rorty, who sped down the relativist road with scarcely a glance behind. These influences are readily apparent in this much-quoted passage from Rorty. It is, he suggests, “pointless to ask whether there really are mountains or whether it is merely convenient for us to talk about mountains,” for “given that it pays to talk about mountains, as it certainly does, one of the obvious truths about mountains is that they were here before we talked about them. If you do not believe that, you probably do not know how to play the language games that employ the word ‘mountain.’ But the utility of those language games has nothing to do with the question of whether Reality as It Is In Itself, apart from the way in which it is handy for human beings to describe it, has mountains in it.”25 Rorty argued for abandoning the idea that we use language to represent the world and that truth registers a correspondence between what we say and how the world is. In consequence, we should abandon the whole vocabulary of truth, objectivity, rationality, and so on and replace it with talk of justification to and solidarity with relevant others. Knowledge is not an accurate representation of “reality.” Rather it is a belief that is justified to others and thus relative to the “grid” or framework that happens to prevail at any given time and place to determine what counts as relevant evidence. 
When Galileo defended the Copernican theory, on the basis of observations made with his telescope, against Cardinal Bellarmine, who appealed to the scriptural description of the fabric of the heavens, Galileo did not win the argument because his account was more “objective” and “rational,” or because the evidence compelled it, and certainly not because it was true. He won because his version of the movements of the planets was relative to one of the “educational and institutional patterns of the day,” namely, the pattern whose “rhetoric has formed the culture of Europe” and “made us what we are today.”26 In short, Galileo won because he played the game that won the day.
Rorty’s skepticism about the basis for scientific claims to objectivity and his focus on the “rhetoric” of science points to a final major source fueling contemporary relativism: that is, various contributions to the history and sociology of science itself. As I have indicated above, a key text here was Thomas Kuhn’s The Structure of Scientific Revolutions, in which Copernicus also plays a significant role by exemplifying the replacement of one “paradigm” by another. Kuhn’s claim is that “Copernicus’ innovation was not simply to move the earth. Rather it was a whole new way of regarding the problems of physics and astronomy, one that necessarily changed the meaning of both ‘earth’ and ‘motion.’”27 This illustrates what Kuhn called the “incommensurability” of competing paradigms and led him to the striking conclusion that “the proponents of competing paradigms practice their trades in different worlds.”28 The transition across incommensurable paradigms is not “forced by logic and neutral experience” but consists in a “transfer of allegiance” by individual scientists that can occur “for all sorts of reasons,” some of which “lie outside the apparent sphere of science entirely,” such as “idiosyncrasies or autobiography or personality” and even “the nationality or the prior reputation of the innovator and his teachers.”29
This last suggestion—that scientists can be motivated in their work by social factors external to science—was the guiding idea of the so-called strong program in the sociology of science. That program was “strong” because it suggested that such factors are what counts in explaining what scientists accept as good or successful science: it focused exclusively on the social determinants of scientific and even mathematical thought, and its standard-bearers forthrightly proclaimed their adherence to relativism. It is relativism’s opponents, they charged, “who grant certain forms of knowledge a privileged status, who pose the real threat to a scientific understanding of knowledge and cognition.”30 Knowledge, in their account, is “any collectively accepted system of beliefs,” and the task is to explain the causes of that acceptance, “regardless of whether the beliefs are true or the inferences rational”—by which, as relativists, they mean “without regard to the status of the belief as it is judged and evaluated by the sociologist’s own standards.” Faced with a belief whose acceptance is to be explained, the sociologist of scientific knowledge asks questions such as these: “[Is it] part of the routine and technical competences handed down from generation to generation? Is it enjoined by the authorities of the society? Is it transmitted by established institutions of socialization or supported by accepted agencies of social control? Is it bound up with patterns of vested interest? Does it have a role in furthering shared goals, whether political or technical, or both?”31
For instance, according to one well-known study, a seventeenth-century scientific controversy between Robert Boyle and Thomas Hobbes, in which Hobbes was soundly defeated, is presented as “an issue of the security of certain social boundaries and the interests they expressed,” and the authors draw the general conclusion that, in view of “the conventional and artificial status of our forms of knowing,” it is “ourselves and not reality that is responsible for what we know.”32 Indeed, “the compelling character of logic,” authors of this school claim, “such as it is, derives from certain narrowly defined purposes and from custom and institutionalized usage.” Its authority is “moral and social” and “the credibility of logical conventions” is “of an entirely local character.”33
Arguments such as these always encountered resistance and eventually led to the heated “science wars” of the 1990s in which the “realist” view that there can be objective scientific knowledge seemed to be challenged by a wide range of thinkers in various fields—cultural studies, cultural anthropology, feminist studies, media studies, comparative literature, and science and technology studies—all influenced by what is broadly labeled “postmodernist” thinking. Scientists rebelled at the idea of being viewed anthropologically as a tribe. They also balked at the suggestion that social factors—such as gender, sexual orientation, race and class, authority structures, peer-group acceptance, competition for prestige and funding, and “boundary work” to demarcate what is seen as scientific from what is not—were relevant to the explanation of what counts as scientific knowledge. These battles, in which each side correctly accused the other of misunderstanding, ignorance, and caricature, were not, however, just an academic squabble in which scientists confronted humanists and social scientists. They had a bearing on the contemporary rise of so-called creationist science and the claims of “intelligent design,” and on doubts about the scientific claims of global warming. As Latour remarked, “dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives.”34 The postmodernist critique could lend support to wider challenges to scientific authority, challenges that neither side in the science wars was disposed to endorse.
From the beginning, relativist ideas about knowledge have met with two kinds of resistance. One is at the level of philosophical argument, into which we will not enter here. There is an abundant philosophical literature aimed at refuting relativist arguments, in various ways: by showing them to be self-refuting (why is your argument for relativism not itself relative?); by contesting particular positions, distinguishing tenable claims from untenable conclusions that they supposedly entail; and by arguing directly for our capacity to arrive at beliefs about how the world is that are objectively reasonable, binding on anyone capable of appreciating the relevant evidence, regardless of their social or cultural perspectives.35
The other way of resistance is, so to speak, existential or experiential. It derives from the certainty, which is everywhere apparent, that, to use the Czech-British anthropologist-philosopher Ernest Gellner’s graphic phrase, a “Big Ditch” divides the modern from the premodern world. The Big Ditch for Gellner refers to “the idea that a great discontinuity has occurred in the life of mankind, the view that a form of knowledge exists which surpasses all others” in cognitive power.36 No one really doubts that science yields objective knowledge that enables us to predict and control our environment and that there has been massive scientific and technological progress, and no one really supposes that judgments of the cognitive superiority of later over earlier phases of science or of scientific over prescientific modes of thought are merely prejudices relative to “our” local conceptual or explanatory scheme. People across the world live many-layered lives that can combine magic, religion, and science in countless ways, but no longer in ways that preclude acceptance of the cumulating cognitive power of science. When people are ill, they can believe in miracles, prayer, and surgery. Creationists and religious fundamentalists take flu vaccines whose development presupposes the truth of Darwinism, fly in airplanes, and surf the Web on computers. Members of tribes who consult witch doctors seek cures in local hospitals when they can; and although countless people in modern societies hold innumerable weird and apparently irrational beliefs, they do so against the massive background of science-compatible common sense. Those who most loudly proclaim their antimodernism never reject the whole package. Antimodernism is a modernist stance; there is no route back from modernity. Therefore, all the arguments for cognitive relativism that we have been considering so far amount to interesting challenges, not a seriously disturbing threat, to confidence in science. 
Cognitive relativism can only be tangential to the way we live our lives. Its proponents are responding to an academic and not an existential question.
It is different with moral relativism—which is why many of those who are firm and confident in their dismissal of cognitive relativism are less firm and confident when it comes to moral relativism, and many even reject the former and embrace the latter. For instance, Ernest Gellner wrote: “I am not sure whether indeed we possess morality beyond culture, but I am absolutely certain that we do indeed possess knowledge beyond both culture and morality.”37 For what David Hume called our “natural belief” keeps the threat of cognitive relativism at bay. Virtually no one really doubts that science progresses toward an ever truer picture of the world and that its methods generally yield explanations that, while always fallible and revisable, are capable of being absolutely or objectively true, determinable by the facts of the matter and the way the world is, and not dependent on the idiosyncrasies of our particular, local worldview. Here, when divergent theories obtain, we confidently expect them to converge on ever more veridical accounts of how the world works. Some may have less confident expectations of the social sciences, or of some of their departments, but only because they contrast these, on various grounds, with what is surer, the soft or softer with the hard. Although the rest of us take their results on trust and accept their authority, we know that the institutions within which scientists work, the professional culture they internalize, and the methods they employ enable scientists to track the truth.
In matters of morality, there is no longer such a sense of security. As Bernard Williams has put it, in “a scientific inquiry there should ideally be convergence on an answer, where the best explanation of the convergence involves the idea that the answer represents how things are; in the area of the ethical, at least at a high level of generality, there is no such coherent hope.”38 Of course, many people are certain of their moral views and judgments, but they know that many others, no less certain, have different views and make different judgments, and both groups know that many others lack their very certainty.
For these reasons we think differently about scientific and moral “truth.” We distinguish between the truths delivered and deliverable by science and what are claimed or proclaimed to be truths of morality. And the thought that the latter might be relative or local is disturbing, as the American moral philosopher Thomas Scanlon has suggested, for three reasons. The first, and weakest, is that if people accept relativism, then they will lack the motivation to accept basic moral principles, even those forbidding such things as murder. But as Scanlon remarks, “I do not think that the spread of relativism would have much effect on the amount of violence in the world. The worst mass murderers have not been relativists, and many relativists accept, perhaps for varying reasons, the basic contents of ordinary morality.” 39 The second reason is that relativism threatens to deprive us of moral confidence: of the sense that we are right to condemn the actions of wrongdoers and to think that their victims are entitled not to be wronged. And the third reason is that relativism removes the sense of conflict between apparently conflicting moral judgments by suggesting that since they are relative to different standards, they do not really conflict. When we are internally torn by conflicting moral intuitions, it really is hard to believe that our sense that there is a conflict is an illusion. (If anything in moral judgment is objective, it is surely the reality of such conflicts.) When we make a moral judgment, we appeal to a moral norm we take to have a particular authority—an authority that excludes others from having the same authority.
Moral relativism is the idea that the authority of moral norms is relative to time and place. Norms are rules that indicate which actions are required, prohibited, permitted, discouraged, and encouraged. Norms, we typically say, are external to individuals, are “internalized” by them, and guide their behavior: they issue instructions to act or not to act. What distinguishes moral norms from others? As we shall see, this is disputable, and the dispute matters a great deal. Let us say, provisionally, that moral norms cover matters of importance in people’s lives, where they are faced with distinguishing right from wrong. Moral norms are directed at promoting good and avoiding evil, at encouraging virtue and discouraging vice, at avoiding harm to others and promoting their well-being or welfare. In general, moral norms are concerned with the interests of others or the common interest rather than just with the individual’s self-interest. They are also distinct from the rules of etiquette, law, and religion (though the conduct they require may overlap with what these require).
There are two ways of thinking about morality and moral norms. One can view them as an external observer, anthropologically or sociologically, seeing them as forming systems of morals or ethics, or codes of conduct, which vary from society to society, culture to culture, or even group to group. This is sometimes called a descriptive view of what morality is. In fact it involves both description and (causal) explanation: describing how moral norms function and explaining how they arise and how (by what mechanisms) they shape and influence people’s thought and behavior. So we may speak of different “moralities”—the morality or morals or ethics of the ancient Greeks, say, or of Homeric warriors, or of Athenian or Spartan morals in particular. We may speak of the morality of the Renaissance courtier or of the American frontier or of the antebellum American South or of medieval samurai or of the Aztecs, or of Christian or Islamic or Hindu morality, or even of Puritan or Salafist or Brahmin morals. Viewed this way, moralities can vary widely, with regard both to their foundations and to their central concerns. They can be religious or pagan or atheistic. They can focus on military prowess or tribal feuding or reciprocal gift-giving or knightly chivalry or family honor or caste distinctions or sexual behavior or money-making or spiritual purity. Furthermore, they can embrace all kinds of practices, including slavery, racism, aggressive war-making, and ruthless oppression; and they need not exhibit impartiality or universalistic attitudes.
The second way of thinking about morality allows us, on the contrary, to view it as excluding and condemning practices such as these last as immoral. Here one is viewing morality not as an external observer but practically—from inside the practice of morality, as a moral agent or participant. One views it from a first-person rather than a third-person standpoint. In occupying this standpoint, I consider what I and others should or ought to do, what is right and wrong, what is obligatory and what is prohibited, what is good and bad, what is valuable and worthless, and so on. Moral norms now appear as principles and rules that I see as applicable to myself and anyone else similarly situated. In this view morality is single, not plural, though it may be internally complex and indeed contain conflicting principles. It is the one morality in terms of which I now make judgments (whether or not I follow its dictates). It applies to my conduct and practices and it enables me to judge those of others. From this internal standpoint, moral norms are justified and justifiable by reasons I find to be compelling, and so they are seen as a guide to conduct for myself and for all relevantly similar moral agents. We may think that many societies lack many of these norms and that many societies, even most, perhaps including our own, are to be judged in the light of them to be morally defective or degenerate or backward. The point is that in this view of morality—the agent’s or participant’s view—moral norms are appealed to and seen as binding on oneself and on all relevantly similar moral agents similarly situated, and so it is sometimes called the normative view of morality.
But now an intriguing question arises: how far does reasoning reach? Do the compelling reasons that justify the moral norms by which we judge apply to all persons everywhere and at all times? If they do not, should they?
Relativists answer these last two questions in the negative. Adherents of what we may call “standard” moral relativism hold that our reasoning, and thus the applicability of our moral norms, does not reach beyond the bounds of whatever our morality is relative to—our culture, say, or religion or language. Bernard Williams, who has no time for this doctrine, wants, however, to stake out a place for what he calls the “relativism of distance” in respect of societies that are sufficiently distant from us, not in space but in time. He argues that “the language of appraisal—good, bad, right, wrong, and so on” is inappropriate when we are considering, say, “the life of a Bronze Age chief or a medieval samurai,” for these embody outlooks with which we are in merely “notional,” not “real,” confrontation. They are “not real options for us: there is no way of living them.”40 One can, writes Williams, “imagine oneself as Kant at the Court of King Arthur, disapproving of its injustices, but exactly what grip does this get on one’s ethical or political thought?”41 On the other hand, today “all confrontations between cultures must be real confrontations,”42 since other cultures are “within our causal reach.” After colonialism, when confronted “with a hierarchical society …, we cannot just count them as them and us as us: we may well have reason to count its members as already some of ‘us.’”43
Thoroughgoing antirelativists, by contrast, hold our reasoning to be universal in scope, reaching across space and time. Our judgments and principles apply to all relevantly similar moral agents. But antirelativists and relativists will differ about what makes them relevantly similar. From within the normative standpoint, antirelativists will answer: their common humanity—the inclusive rather than the contrastive “us.” These days they will speak the language of human rights. Relativists of the standard variety are or should be wary of such talk, while antirelativists like Williams, who want to keep a place for relativism regarding societies sufficiently distant in time, will confine such talk to the modern world.
Of course, as those taking the descriptive view of morality readily observe, there is considerable disagreement among moral agents or participants, including moral philosophers, as to which are the rationally justifiable and applicable moral rules and principles. Anthropologists, as Mary Douglas observes, “record many diverse social forms each venerating its particular idea of justice.”44 There is, it is true, very general agreement that certain sorts of actions, such as killing, deception, and breaking promises, are to be prohibited and that others, exhibiting a sense of fairness and loyalty and enhancing cooperation, are to be encouraged. But divergences set in once you ask under what conditions and in relation to whom these actions are to be respectively prohibited and permitted, and on what grounds. So, for instance, Kantians, utilitarians, believers in Natural Law, contractarians, perfectionists, liberals, and communitarians, not to mention adherents of different religions and of none, will give different accounts of the bases and content of moral norms. What is agreed by all those taking the internal normative view, however, is that moral norms are based on compelling reasons and binding on all who are relevantly similar and similarly situated.
Moral relativists begin from the observer’s standpoint, adopting the descriptive view of morality, observing that there is a diversity of morals. They are struck by what they see as the fact of moral diversity: by the observation that moral norms form systems of norms, or moral codes, which differ from one society or culture or group to another. This amounts to saying that there are divergent views about what constitutes good and evil, virtue and vice, harm and welfare, dignity and humiliation, and where individual and common interests lie. They are further struck by the thought that these divergences can be irreconcilable: that the moral disagreements revealed by these divergences may not be capable of being rationally resolved, that they can lead to incompatible judgments, even that they may be incommensurable, lacking a common moral framework or shared concepts and standards. Of course, some disagreements will be resolvable by clearing up vagueness or indeterminacy in how norms are expressed, and in some cases they will be due to factual disputes or logical errors. But moral relativists insist, against so-called moral objectivists or absolutists, that the disagreements that divide societies and cultures spring from irreconcilable moral outlooks. They will further insist that their view is only strengthened by the fact that the objectivists disagree among themselves about which objectivist theory is the right one and by the reasonable-seeming prediction that there is no prospect of that disagreement ever coming to an end.
From their observation of the fact of moral diversity and their view that this diversity is irreconcilable, moral relativists take the crucial step that defines full-fledged moral relativism. They hold that if the internal, participant’s normative view of morality is taken to be universally applicable, reaching across space and time, then it is untenable. There is, they claim, no unique viewpoint from which moral norms are rationally compelling and universally binding. They may say this because they hold that there is no point beyond a culture from which we can judge others in a way that is not relative to our own position. This was how the distinguished American anthropologist Clifford Geertz deviously defended relativism by criticizing “anti-relativism” for “placing morality beyond culture and knowledge beyond both.”45 And it was the no less distinguished French anthropologist Claude Lévi-Strauss’s point when he compared cultures to moving trains, reminding us that “for a passenger sitting by the window of a train, the speed and length of other trains vary according to whether they move in the same direction or the opposite way. And every member of a culture is as closely linked to that culture as the imaginary passenger is to his train.”46 They may say, like the Finnish philosopher-sociologist Edward Westermarck, who propounded ethical relativism at the London School of Economics, that since “there are no moral truths it cannot be the object of a science of ethics to lay down rules for human conduct” and that “the moral consciousness is ultimately based on emotions, that the moral judgment lacks objective validity, that the moral values are not absolute but relative to the emotions they express.”47 Westermarck’s view was that our moral intuitions are in fact emotional tendencies formulated as judgments, which are calculated to give moral values an objectivity they do not possess.
Moral relativists may say, with the cautious American philosopher Richard Brandt, while preserving “a healthy degree of skepticism about the conclusiveness of the inquiry,” that different groups “sometimes make divergent appraisals” though they have identical beliefs about the facts.48 They may agree with the more forthright Australian moral philosopher John Mackie, who thought that ethics is a matter of “inventing” right and wrong, that the radical differences in moral judgment that we observe “make it difficult to treat those judgments as apprehensions of objective truths.”49 The authority of moral norms comes, they may say, not from reason but from religious authority, say, or tradition or custom or convention, and its scope of application is local and time-bound. So we can only participate in morality understood in the descriptive sense—that is, in one morality or another—in this morality as opposed to others. In other words, the “first-person,” practically oriented perspective of the moral agent has to be rethought and reformulated. Moral judgments are always to be understood and expressed in a relativistic manner. Moral principles are only properly expressed when a “relativizing clause” is appended to them. In short, our judgments about right and wrong are not, as we have supposed, unqualified and absolute, but relative to our society or culture, or whatever group turns out to be the source of our moral framework.
So, for example, Mary Douglas, the anthropologist, criticizes the political philosopher Brian Barry for arguing that justice rests on principle, not convention. When Barry claims that systematic group discrimination and economic and social privilege based on birth are, even if universally accepted in a given society, unjust, Douglas comments that he “is expressing the legitimating principles of the conventions created to maintain a particular set of institutions, to wit, those of Western industrial society. Yes, for us, who have internalized the justice of these institutions, such inequality is clearly unjust.”50 And when Barry insists that if “someone can read a history of European settlements in Australia and the Americas, or a history of negro slavery, without admitting that he is reading a history of monstrous injustice, I doubt that anything I can say is likely to convince him.”51 Douglas sees this as comparable to a theologian’s justification of religious truth in mystic experience. The theologian too says, “[N]othing I say will convince him: the feeling is incommunicable.” It is, she claims, very hard to defend a substantive principle of justice as universally right without “appeal to religion, intuitionism or innate ideas.”52
This progression of thought is not, however, inexorable. You can accept descriptive relativism, acknowledging the diversity of morals in the descriptive sense—though, as we shall see in Chapter 3, this is no simple matter. You can then switch roles and, as a moral agent who participates in morality in the normative sense, view your moral principles as objective or absolute, and justified by reasons. But you will then have to take a view about moral pluralism—the alleged plurality of objectivist accounts of morals. You can deny such pluralism, as many do, holding that your morality is the One True Morality, and that the others are in error. You might be what philosophers call a “moral realist” and hold to “the idea that moral questions have correct answers, that the correct answers are made correct by objective moral facts, that moral facts are determined by circumstances, and that, by engaging in moral argument, we can discover what these objective moral facts are.”53 Or else you can try to find a way to reconcile moral objectivity and moral pluralism.
I said that moral relativists start out from the observation of moral diversity: the alleged fact of moral diversity raises the question of moral relativism. But it is important now to qualify this by noting that their conclusion, just stated, though motivated by the alleged facts of moral diversity, does not depend on the actual existence of such diversity. This mistake is often made by those who wrongly suppose that if moral norms or practices were shown to be cultural universals, such as, say, the prohibition of incest, this would tell against the case for moral relativism. But as Cook observes, the relativist will reply as follows: “A principle that is found to be accepted in all cultures is just as relative as a principle that is accepted in some but not all cultures, for (1) those that are universally accepted still prescribe conduct for only the members of presently existing cultures, and (2) it would be absurd to think that if a new culture evolved tomorrow, it would be morally inferior if it did not incorporate those principles that are now universally accepted.”54 Indeed, theoretically, even if much or all of human morality turned out to be shared in common, there would still be the potentiality of diversity emerging, and that shared morality would, in the moral relativist view, be relative to all currently existing societies. All that moral relativism requires to get going is the postulate of actual or potential diversity or both. And the relativity of morals does not mean the absence of universal acceptance, but rather the denial of universal applicability.
Of course, moral relativists, in denying the rational basis and universal applicability of moral norms, are faced with the problem of accounting for their undoubted authority. For these norms confront us with demands and requirements. As Durkheim argued, they are “social facts”—external to and independent of us as individuals and exerting a constraining influence upon us. Why, after all, do we obey moral rules and conform to moral principles and feel guilt and shame when we deviate from them? Here, then, is an answer: the source of moral authority is social. Are “morals” the same as the customs or mores of a given society? William Graham Sumner, the early American sociologist and author of Folkways, who coined the very word ethnocentric, wrote that “‘Immoral’ never means anything but contrary to the mores of the time and place,”55 and the cultural relativist anthropologist Ruth Benedict briskly remarked that “morality … is a convenient term for socially approved habits.”56 This answer, in one form or another, is, as we shall see in the next chapter, as old as the ancient Greeks and has continued to be advanced and disputed until today.
MORAL RELATIVISM. Copyright © 2008 by Steven Lukes.