Media Ecology Conference June 28, 2019: Panel 2.4.10
“Critical Moral Realism as an approach to thinking about,
and teaching, ethics in communication studies.”
By Wade Rowland
A brief biographical note: about two-thirds of my working life has been spent in print and television journalism; the remaining third has been in the academy, mainly at Ryerson, Trent, and eventually at York University. As a journalist, I found it crucially helpful to have a credible, time-tested framework of ethical standards within which to pursue my work: standards like objectivity, balance, fairness, truthfulness, and a special concern for the disenfranchised and the vulnerable.
As an academic, I wondered how journalists develop and share their values as a craft or a profession. Before journalism schools began proliferating in the 1980s, most of the teaching and sharing was done in newsrooms, and it was mostly done by example. In the newspaper world, style books were common, but policy books were rare. My 1967 copy of the Winnipeg Free Press style book (there was no policy manual) devotes a total of four sentences to what could be construed as ethical instruction:
The reporter’s job is to give the reader complete and accurate news in a form he can understand. The first essential is to understand the story yourself. The second is to write it so that the reader will understand it and will be tempted to read it. Clarity is essential. So is simplicity.
There are a number of tacit assumptions here, of course. In practice, this was interpreted to mean that good reporting should be transparent—the writing should not attract attention to itself—it is simply a conduit. Thus, “understandable,” but also tempting to the reader. Why tempting? Because it is assumed that in a world of responsible journalism, if it’s in the newspaper or the newscast, it is information of significance to the reader’s life, something he or she ought to know.
And that assumption, in turn, makes it clear that reporting—responsible journalism—is a public service. By definition then, a reporter’s job, first and foremost, is to serve the public, and this is the foundation for journalistic ethics. It is also the basis of the century-old division between “church” and “state” in journalism—between public service and the financial framework—the business—that makes the provision of service feasible; it places a firewall between editor and publisher. Reporters aren’t in the game to get rich—virtue is its own reward. But publishers are.
There is a certain nobility in this idea and it’s what draws many young and idealistic people to the profession. Uncovering the underlying thought can be a useful jumping-off point for exploring some basic moral issues. What is it that makes public service virtuous, and turning a private profit somehow less so? We can turn to authors like Karl Polanyi and C. B. Macpherson and R. H. Tawney and Karl Marx for instruction here. And how can we know what is in the public interest, and what is not? These are issues that take one right back to Socrates and his timeless questions: What is the right way to live? What is a good life? And even beyond that to: What is Good? And from there to Aristotle and his Politics, and so on.
During my stint as chair of ethics in media at Ryerson University’s journalism school I would tell new students what I had been told as a young newspaper reporter: when it comes to ethics, the buck stops with you. There is no professional association to appeal to, no government regulator. In most workplaces, no union either. “All the other courses you’re taking here are designed to make life easier for you when you get into the workplace,” I told them. “My role is to make your job more difficult.” (Which sounds more than a little pompous in retrospect!)
My approach to teaching and thinking about moral philosophy, broadly defined, has evolved from the formalism of deontology and consequentialism to something more grounded in everyday experience. The essays presented in my forthcoming book, Morality By Design, reflect the approach I’ve taken, with both graduate and undergraduate students of communication studies. My intent has been to go beyond issues of situational or applied ethics, to find principles and understandings that transcend day-to-day judgements and are applicable to almost any field of endeavour.
Of course, this intellectual territory—broadly, moral philosophy—has been occupied for millennia by the world’s religions, most of which have adherents in any typical lecture theatre or seminar room where I teach at York, where the large student population is famously multicultural and polyglot, even for a Canadian university. With that in mind, it seemed important for me to find a way to discuss morality and ethics without treading on anyone’s religious and/or cultural sensibilities.
The approach I arrived at, one that students apparently found stimulating and relevant to their lives, is critical moral realism, a flourishing branch of critical theory, a blend of scientific realism and metaphysics in which the basic, underlying concepts are in no way antithetical to religious belief. It provides an intellectually satisfying and philosophically rigorous response to moral relativism, and to notions of tolerance that are taught in most grade schools as an innocuous substitute for real moral judgment. What is attractive to me about realism as a pedagogical strategy is the ability to get at basic moral and ethical concepts while staying well within the realms of common sense and everyday knowledge and experience.
•
For me, and for most moral realists, questions surrounding moral judgment are mainly epistemological issues. Each of us arrives with the mental equipment necessary to distinguish good from bad, what philosophers as diverse as Noam Chomsky, John Rawls, Hannah Arendt, Thomas Nagel, and Zygmunt Bauman—to name just a handful of leading contemporary moral realist thinkers—often refer to as an innate moral sensibility, or an innate moral grammar. In fact, we distinguish the rare human beings who are born without this innate ability by defining them as psychopaths.
Clearly, then, healthy human beings share an ability to make moral judgments. In doing so, they are operating, unavoidably, within a body of information that can be called moral knowledge, and this places those judgements within the definition of epistemology. Thanks to our innate moral sensibility it is possible to know that something is right or wrong, good or bad. And the epistemological question then becomes, how can we have confidence in that knowledge? Can there be such a thing as a moral fact that shares a level of certainty equivalent to a scientific fact?
Academics in the past have spoken of scientific fact as being “mind-independent”: the existence of gravity, for instance, and the formula for calculating gravitational attraction between masses, would remain the case even in the absence of human intelligence. Nowadays, however, in the wake of quantum indeterminacy and the critical role of the observer in quantum physics, I think it is safe to say that few scientists would be willing to argue anymore for the existence of mind-independent fact.
The current realist consensus is that in science, and in moral thought, those facts which first of all seem to accurately describe reality as we experience it, and then successfully resist repeated attempts at falsification, are the ones in which we place the most confidence. These reliable facts have achieved a broad and deep level of agreement within their respective fields of interest. As Mary Midgley says: “Facts are data—material which, for purposes of a particular enquiry, does not need to be reconsidered….” In other words, a fact in any realm of human inquiry amounts to a discussion frozen in time, halted where a particular line of inquiry has come to a standstill for lack of further data or novel insight. She goes on to add that “the word ‘fact,’ in its normal usage, is indeed not properly opposed to value, but to something more like conjecture or opinion.”
In other words, the idea that fact and value reside in separate and different categories of knowledge is false. The notion of a dichotomy between fact and value, or more accurately, between moral and scientific fact, is dismissed by realists as fictitious. Fact and value are unavoidably “entangled,” simply because human experience, including observation and data collection and computation, is always, to some greater or lesser degree, value-laden. We humans are not robots, and cannot avoid the infiltration of issues of value into everything we do, even the design of the algorithms we write to govern the actions of our machines. Nor can scientific realities be excluded from thinking about values, if that thought is to have any practical, positive application to the world we live in. Why, then, do we continue to talk about fact/value distinctions? Hilary Putnam has this to say:
For one thing, it is much easier to say, “that’s a value judgment,” meaning, “that’s just a matter of subjective preference,” than to do what Socrates tried to teach us: to examine who we are and what our deepest convictions are and hold those convictions up to the searching test of reflective examination….The worst thing about the fact/value dichotomy is that in practice it functions as a discussion-stopper, and not just a discussion-stopper, but a thought-stopper. [Hilary Putnam, The Collapse of the Fact/Value Dichotomy and Other Essays, (Harvard University Press, 2002), 42.]
From the perspective of critical moral realism, basic value judgments such as “genocide is wrong” cannot be both true and false. They are either true (or false) for reasons offered, or for reasons not yet fully uncovered. For the moral realist, ethical claims are similar to statements like “the sky is blue.” They can be verified by real-world observation and experience, and demonstrated to be true or false. That is, they meet the same criteria of fact as material facts do. Assertions that slavery is defensible, or the handicapped ought to be destroyed at birth, or that adulterers ought to be stoned to death, or that animals have no right to humane treatment, are quite as false as the statement that water flows uphill. Their falsity is not just a matter of somebody’s opinion: it’s a fact.
It is worth pointing out, parenthetically, that no fact in the modern scientific consensus is more than about 450 years old, while consensus on moral issues can and often does stretch back to the beginnings of recorded history. It is also noteworthy that when shifts in moral consensus occur—when slavery is abolished, or women’s suffrage is enacted, or torture is outlawed, or the right of animals to humane treatment is recognized—they tend to have a much more significant impact on daily life than even a genuine scientific revolution like the displacement of Newton’s physics by Einstein’s relativity.
•
The claim, then, is that both moral and scientific facts are established by consensus, and the broader and deeper and longer-lasting that consensus is, the more confidence we can have in the related body of knowledge. When a significant shift in that consensus occurs, we do not expect to go back to earlier views, especially where moral knowledge is concerned. At the same time, however, the possibility of falsification is never completely extinguished.
To define “consensus” further: the truth or falsity of a statement, either moral or scientific, does not depend on how well it fits with some theoretical, a priori assumption that must be accepted on faith—though these may well lurk in the background, for instance as a belief in the real existence of good, or faith in the ability of reason to see deep into nature’s workings. The truth and reliability of these statements of fact depends, instead, on a network of prior knowledge linking the statement we want to verify with others of the same kind that have their own network of evidence, and ultimately to “the whole map of our experience and of the world which we believe to surround us.”
And so, in the end, the body of fact that makes up our total understanding of the world is like a vast crossword puzzle, in which having the correct answer to the clue for 1 Down may depend on knowing the correct answer to 3 Across, and so on until the puzzle is complete. Reality, though, is for all practical purposes infinite, and so the puzzle is never done. And as long as that’s the case it is always possible that an incorrect answer will be discovered deep in the matrix, forcing reconsideration of every other related answer, in principle up to and including the whole puzzle.
Finally, Hilary Putnam makes a salient point in arguing for the factual nature of moral knowledge, via the pragmatist philosopher John Dewey:
If it is possible to do science without supposing that one needs a metaphysical foundation for the enterprise, it is equally possible to live an ethical life without supposing that one needs a metaphysical foundation. …. As John Dewey urged long ago, the objectivity that ethical claims require is not the kind that is provided by a Platonic or other foundation that is there in advance of our engaging in ethical life and ethical reflection; it is the ability to withstand the sort of criticism that arises in the problematic situations that we actually encounter…. [Putnam, op. cit. 94]
If the moral realists are correct, as I believe they are, then it is crucial that moral fact be integrated into the formal matrix of knowledge so that the two kinds of fact—both tested sources of knowledge—can be made explicitly supportive of one another, creating a more robust and resilient structure, one that is less likely to conceal catastrophic flaws, one that reliably reflects reality in all its complexity.
The realist/consensus view does not, however, exclude the possibility of the transcendent, or the universal, as foundational to moral thought. I think that there is an empirically sound route to making a connection between consensus views of good, and the current state of our understanding of the makeup of the universe. In a nutshell: In moral thought, the most difficult question is the oldest and most basic—what is the nature of good? Moral realism posits that good is that to which our innate moral sensibility is attuned, a statement which would be a tautology without some further description of good. When pressed, then, moral realists will usually say that good is a feature of the world whose existence, much like gravity’s, is clear to us through everyday experience but which has so far eluded any complete and final definition.
To accept that good is part of the primordial order of things, folded in, like gravity, is to accept that humanity must in some way be aligned with or touched by good; that good is implicated in our make-up and constitution. How could it be otherwise? Just as we are susceptible to gravity and other natural phenomena, we must in some sense be influenced by good, and shaped by it just as physical objects in space cannot avoid the influence of gravity.
Certainly, we are able to sense good and to know it when we see it. We have various names for this ability: conscience, or moral sense, or our moral compass, or the moral impulse within us. Biologist Marc Hauser concludes in his landmark study, Moral Minds: “We are endowed with a moral acquisition device. Infants are born with the building blocks for making sense of the causes and consequences of actions, and their early capacities grow and interface with others to generate moral judgments. Infants are also equipped with a suite of unconscious, automatic emotions that can reinforce the expression of some actions while blocking others. Together, these capacities enable children to build moral systems.”
Aristotle said something very similar of the moral sense: “Neither by nature, then, nor contrary to nature do the virtues arise in us; rather we are adapted by nature to receive them, and are made perfect by habit.” Eight hundred years later St. Augustine was grappling with the mysteries of this faculty when he concluded that morality is the product of charity, which is the pondus—the gravitational force—of love that attracts us to “that which we ought to love.” Immanuel Kant in the 18th century confessed to being awed by two things: “the starry heavens above me and the moral law within me.”
Here’s a little parable that may help to explain this, based on the scientific understanding that like everything else in the universe, we humans are products of the Big Bang; we are composed of the same basic materials as the stars. Imagine, then, that you are at home reading a book; you feel thirsty. You get up and pour yourself a glass of water and drink it. The water drains into your stomach, and from there seeps into your bloodstream, into cells, eventually reaching your brain. There, the water begins to think.
Questions arise: are we the universe thinking? Put more broadly, is consciousness itself an inherently moral phenomenon, as Aristotle and Augustine suggest? Further, is consciousness in some sense oriented toward the good?
•
Because communication specialists in general, and journalists in particular, are likely to find themselves employed by corporations, I think it is important, as a practical matter, for them to have some understanding of corporate culture as a moral and ethical environment. The obvious starting point here is the ontology of the corporation itself, and that leads to a discussion of the origins of the liberal capitalist market and its Enlightenment roots in the moral philosophy of Spinoza, Smith, Jevons, the Utilitarians and other early economic theorists, all of whom were in some part moral thinkers.
Bit by bit, one court case at a time, market capitalism fundamentally transformed itself during the first two-thirds of the twentieth century. The metamorphosis was below the radar of all but specialist observers, but it may be the ultimate triumph of rationalist economic thinking. The transformative change is the ascendancy of the modern business corporation in the domestic markets of the world’s wealthiest nations, and in the international markets they share.
As wealth and market power were increasingly consolidated in the hands of fewer and fewer enormous corporations, the troubling disconnect between the canonical assumption of consistently self-interested behaviour by market participants and their often idiosyncratic behaviour in practice was dramatically narrowed. Business corporations reliably behaved the way theoretical rational economic agents were supposed to. Today, these financial and industrial behemoths enjoy surprising privileges and exert enormous influence over every aspect of our lives. And yet, as the daily news attests, they, and their senior managers, routinely behave with shocking, seemingly sociopathic irresponsibility. How are we to account for this?
The term “modern business corporation” has a specific meaning that needs to be defined. First of all, it does not include the small to medium-sized corporations owned and operated by their founders or their successors as family concerns or partnerships. Numerically, these are by far the majority in the world of corporate business. What the term is intended to signify is the large business corporation that is no longer privately held, but listed on stock exchanges and owned by a large number of shareholders, a group that typically includes other corporate institutions such as pension funds, mutual funds, insurance companies and the like.
Management positions in the modern business corporation are occupied almost exclusively by university-trained professionals whose mandate is to serve the interests of the shareholders. Serving the shareholders is taken to mean one thing: maximizing the return on their investment. And this in turn means maximizing corporate profit.
I have called these entities cyber-corporations to draw attention to their novelty, and because at their core they are essentially machine entities that regulate their own operations through feedback from their environment. They are, in other words, cybernetic mechanisms, and within them human “management” is confined to narrowly prescribed roles that are delineated by the algorithms governing the corporate machine’s legal and financial existence.
The large business corporation of today had its genesis as a straightforward legal resource for the accumulation of capital. As the booming capitalist economy of Europe spun its colonizing web across continents and oceans to Africa, Asia and the New World in the 16th and 17th centuries, the potential for realizing vast wealth through trade and commerce was undercut by the enormous risks involved in such far-flung ventures. A solution was found by lawyers, entrepreneurs and politicians in the joint-stock corporation, in which individual investors were sheltered from the full impact of financial disaster brought on by, for example, the loss of a ship, or the extermination of a colony or trading outpost by disease or hostile indigenous peoples. A tool developed mainly in Britain, these corporations operated under royal charters and often served vital foreign policy interests such as colonization.
In these new “limited-liability” corporations, investors’ exposure to liability for unpaid loans, lawsuits, and other claims against the corporation was restricted, by law, to the amount of their personal investment. That is, no single investor could be called to account for financial loss or indebtedness incurred by the corporate entity, beyond what he or she had invested. An individual investor thus could look to the potential of enormous returns, at relatively low risk. This amounted to a way of socializing risk while privatizing profit, because it frequently fell to outside interests, usually state governments, to deal with the fallout of financial disaster. But it spurred the rapid development of capitalism and its markets by making possible the accumulation of large pools of private capital which were managed by trained professionals.
During the late 19th century, business corporations sought and were granted other privileges beyond limited liability. In a succession of court cases in the U.S. and Britain they were freed from restrictive charters that had limited them to the single, specific enterprise for which they were incorporated, freeing them to deploy their capital as they pleased. They successfully petitioned the courts for formal legal recognition of their ancient, unofficial identity as “persons” in matters of law and regulation. Then, leveraging their statutory personhood, they began an ultimately successful campaign in American courts to gain essentially the same human rights protections and remedies as are enjoyed by human beings under the Bill of Rights.
As the 20th century drew to a close, corporations had gained access to the protections of the First, Fourth, Fifth, Sixth and Seventh Amendments, which cover rights to free speech, freedom from unreasonable searches and searches without warrants, freedom from double jeopardy, and the right to trial by jury in both criminal and civil cases. These rights were extended internationally using the leverage of international trade agreements and institutions.
The capstone of the transformation came in 2010 with the U.S. Supreme Court’s ruling in the Citizens United case, which overturned federal legislation restricting corporate spending during elections, on grounds that limiting the spending of money to support a cause amounted to restraining free speech, which is protected under the First Amendment. The result has been to exacerbate an already serious problem of corporate money determining election outcomes, and it would take a constitutional amendment to overrule the Court.
The divorce of ownership from management functions, the single-minded goal of maximizing profit, the freedom to engage in any field of business, and the ability to challenge government authority from behind the shield of human rights statutes, have combined to make of the modern business corporation something unique in history. Designed initially as a tool for making money, it has evolved into a highly complex legal entity that is essentially robotic in character. Its goal remains the same—maximizing return on shareholder investment—but its power and influence have been enormously enlarged. And not just its influence on the outside world, but on its workers and managers as well.
Over the past half-century, sophisticated management techniques and new surveillance technologies have combined to impose internal controls regulating employee behaviour at every level, from the shop floor to the C-suite. So effective are these tools that today it seems impossible to avoid the conclusion that the corporate entity per se manages its managers, confining them to modes of behaviour that are defined entirely by the instrumental needs and goals of the corporation. Those who do not fit this mold are either re-educated to conformity through various forms of coercion and persuasion, or weeded out.
The situation of humans within the corporate entity is in many ways analogous to the role of modern military personnel, who often operate within an environment defined by the needs and objectives of their weapons systems: “The aircrew of the Apache [attack helicopter] is expected to function reliably as an extension of such machines…or weapons systems generally; [as] adjunct for some limitation the machine has due to its incomplete development.” The “machine,” in the case of the modern business corporation, is the organizational structure, or bureaucracy, that defines its existence. It is this very close integration of machine and human elements of the system that suggests the term cyber-corporation to distinguish these modern innovations from their evolutionary predecessors. As with the military machine, a goal of the cyber-corporation is to replace humans with robots to the extent feasible.
University researchers have for some time been seriously examining modern business corporations as examples of a wider, cross-disciplinary research category called self-organizing systems. The studies draw on biologists’ study of “collective beings,” which are defined as complex, goal-oriented, self-organizing systems made up of large numbers of autonomous entities. In the study of collective beings, theories of self-organization and computer modelling are employed in an attempt to understand how large numbers of autonomous creatures—bees in hives, or ants in their colonies, or birds in flocks, or fish in schools—can exhibit highly coordinated behaviour without apparent overall management. A promising line of inquiry is found in emergence theory, a field of physics that studies the “emergent properties” of highly complex systems, as, for example, the spontaneous emergence of tornados from certain weather patterns, or the spontaneous emergence of consciousness from billions of neurons in our brains, or the similarly spontaneous emergence of life from complex chemical soups. A feature of emergent properties is that they often have the ability to influence and even control the very systems that spawned them.
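As a purely illustrative aside, not drawn from the research literature mentioned above, a few lines of Python can sketch the kind of agent-based model such studies employ: each simulated agent follows only local rules (stay near neighbours, match their heading, avoid crowding), yet coordinated, flock-like movement emerges with no central controller. All names and parameter values below are invented for the illustration.

# Minimal "boids"-style sketch of self-organization: local rules, no central control.
# Parameters are arbitrary; this is an illustration, not a model from the cited studies.
import numpy as np

rng = np.random.default_rng(0)
N, STEPS = 50, 200
pos = rng.uniform(0, 100, (N, 2))   # agent positions on a 100 x 100 wrap-around plane
vel = rng.uniform(-1, 1, (N, 2))    # agent velocities, initially random

def step(pos, vel, radius=15.0, max_speed=2.0):
    new_vel = vel.copy()
    for i in range(N):
        # each agent perceives only neighbours within a local radius
        dist = np.linalg.norm(pos - pos[i], axis=1)
        near = (dist < radius) & (dist > 0)
        if not near.any():
            continue
        cohesion   = pos[near].mean(axis=0) - pos[i]    # drift toward neighbours' centre
        alignment  = vel[near].mean(axis=0) - vel[i]    # match neighbours' heading
        separation = (pos[i] - pos[near]).sum(axis=0)   # avoid crowding
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.002 * separation
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:                           # cap the speed
            new_vel[i] *= max_speed / speed
    return (pos + new_vel) % 100, new_vel               # wrap around the edges

for _ in range(STEPS):
    pos, vel = step(pos, vel)

# crude order measure: how closely headings align (near 0 = random, near 1 = unison)
headings = vel / np.linalg.norm(vel, axis=1, keepdims=True)
print("alignment of headings:", np.linalg.norm(headings.mean(axis=0)))

Run repeatedly, the agents' initially random headings converge toward a common direction, which is the sense in which "highly coordinated behaviour without apparent overall management" can be said to emerge from purely local interactions.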
Applied to the cyber-corporation, the new disciplines suggest that these huge bureaucracies are more than the sum of their parts: they need to be understood as systems made up of autonomous individuals (i.e. workers and managers) whose behaviour on the job is governed both by initial design parameters and by an emergent property that could be called a culture. Within the system, human managers are made to conform to the cyber-corporation’s goals and values, rather than vice-versa.
The antisocial behaviour of so many of the world’s largest corporations in every field from communications and media to pharmaceuticals, agribusiness and forestry, mining, automobile manufacturing, and finance, is less baffling if seen in the context of the mature cyber-corporation as a machine-like, self-regulating organism. It is designed to maximize the value of the assets under its control, on behalf of its shareholders; external considerations, up to and including human life, are factored into its decision-making only to the extent that they may have an impact on that goal, positive or negative. Interventions by human actors within the machine are limited both by the fear of getting fired, and by a kind of cultivated moral myopia. According to David Luban, there are at least three ways in which corporate structures mute any sense of moral culpability:
Psychologically, role players in such organizations lack the emotional sense that they are morally responsible for the consequences of organizational behaviour…. Politically, responsibility cannot be located on the organizational chart and thus in some real way no one—no one—ever is responsible. Morally, role players have insufficient information to be confident that they are in a position to deliberate effectively because bureaucratic organizations parcel out information along functional lines. [David Luban, Lawyers and Justice (Princeton University Press, 1989).]
In the corporate-capitalist economy as it is configured today, worldwide, it is more difficult than ever to see how moral outcomes might emerge, spontaneously and autonomously, from the dynamics of the system. The cyber-corporation has usurped the classical economist’s idealized market of many small companies competing on more or less equal terms with one another for customers and raw materials, serving well-informed consumers who are making “rational” decisions. If that highly idealized market ever existed as the norm, it no longer does. The cyber-corporation has expanded the once tightly regulated corporate niche in the market to occupy the entire organism, like a virulent parasite.
Not only does the cyber-corporation determine what goods and services the market supplies (i.e., those that are most profitable), it shapes demand through the mass media it owns and controls. The market itself can no longer reasonably be said to operate in the interests of citizens (now called “consumers” and “human resources”). It has long since turned to serving the interests of the cyber-corporations that control it. To the degree that the market engineers the satisfaction of desires, it is the desires of the cyber-corporation that are chiefly sated, and these resolve themselves into a single consideration: profit.
Looked at another way, the cyber-corporation represents the realization of classical economists’ idealized understanding of human individuals as primarily self-interested, pleasure-seeking entities. Within the modern, liberal capitalist system, the interests of the individual shareholder (whether personal or institutional) are represented in the market by the cyber-corporation, which actually does fit the model of incorrigible self-interest and endless acquisitiveness. It is the ultimate “rational economic agent.”
If there was ever a time when humanity could look to the market as a reasonable proxy for the moral systems of earlier eras, that time is over. With the cyber-corporation in the driver’s seat, market economies, for all their ever-increasing speed and efficiency, are operating in a world of moral weightlessness, where in the absence of gravity, “Things fall apart; the centre cannot hold.”
As evidence of this we note the emergence and burgeoning growth of what Shoshana Zuboff calls “surveillance capitalism,” in which business corporations employ big data and machine intelligence to minimize risk and maximize profitability in the markets in which they operate by eliminating—or at least minimizing—behavioural unpredictability. It is the logical extension of the evolution of the modern business corporation as a means of accomplishing the same end: reifying the classical economists’ concept of the “rational economic agent.” Data collected on individual behaviours, online and through so-called “smart city” and other real-world surveillance technologies associated, for example, with the Internet of Things, is used to reinforce or alter that behaviour through economic messaging: through directed advertising, and by offering “free” services that require small changes in attitude and behaviour in exchange for convenience, information, security, and entertainment. As with the evolution and ultimate hegemony of the modern business corporation, the goal is profitability—which classical economic theory equates with good, bringing us full circle, back to basic moral issues.
The field of communications and its practitioners are complicit in the construction and promotion of this new, potentially deeply anti-human economic frontier. While the technology and its applications may be bewilderingly new and awesomely potent, the moral obligations of workers and professionals are ultimately to be located in basic moral thought, and in particular, I would urge, in moral realism. The debate necessary to arrive at guiding ethical principles is only just beginning, but it has a profound body of consensus on which to build.
Copyright Wade Rowland. All Rights Reserved. Do not copy or reproduce without express written authorization from Wade Rowland.