
Gresham’s Law in Commercial Media from Early Radio to the Web: The Mechanics of Mediocrity

© 2012 by Wade Rowland

Abstract: Market forces that are alleged to maximize quality and minimize price in consumer products systematically produce mediocrity in commercial mass media output. Consumers of commercial broadcast media do not get “what they want” from the broadcasters. Although the dynamic is widely recognized, its sources and mechanics are seldom analyzed. Identifying the product of commercial mass media as audiences rather than programming is the key to delineating the issues through an analysis of the market for this product. This entails an analysis of the determinants of quality in media. There are signs that web-based media are increasingly falling under quality constraints similar to those experienced in traditional advertising-supported media.

* * *

At the dawn of the era of electronic mass communication nearly a century ago, John Reith, founding director of the BBC, mounted a spirited defense of the public broadcaster’s monopoly in the face of spreading American-style commercial sponsorship in broadcast radio. Central to his argument was the idea that there exists in commercially-sponsored broadcasting a “cultural Gresham’s law” which ensures that “the bad drives out the good.” [1924] Eighty-five years later, we find ourselves in the midst of another media revolution, this one spawned by digital technologies. It has transformed not only the means of delivery of mass media, but the content as well. This in turn has reshaped the audience and its relationship to content. No longer is viewing or listening a social pastime, shared in groups gathered around a large receiving appliance. Today’s broadcast media are consumed to an increasing extent in private, on personal communication devices such as “smart” phones and notebook computers, which allow a high degree of interactivity. If there is a wish to share the content, it is typically done not in conventional social groups but through social networking software, and audiences are frequently invited to contribute their own responses in threaded discussion groups.

At first glance, it may seem that Reith’s now-hoary dictum can have little relevance in this new media ecology, which seems so utterly different from early radio broadcasting. But a closer look at how web-based media are being financed makes it clear that commercial sponsorship remains a prominent if not controlling aspect of their evolution. In light of this, it is worth revisiting Reith’s proposition as it has played itself out in North American-style commercial broadcasting. Is it in fact the case that, in advertiser-supported broadcasting, the bad drives out the good? If so, why? And what might this portend for the future of the new media on the web?

Commercial broadcasting’s paradox

Gresham’s Law in its original formulation was based on the observation that debased coinage drives good coinage out of circulation, or, more succinctly: bad money drives out the good. Good money in this context was coinage whose face value closely matched its commodity value. Bad money was typically coinage that had been clipped or shaved to recover small amounts of the metal from which it was made. Or it might be coinage in which the precious metal had been diluted by an alloy. It worked this way: a customer wishing to make a four-pence purchase at a shop, using a five-pence coin, would naturally give the shopkeeper his most debased piece, wishing to keep his other, more valuable five-pence coins for himself. The shopkeeper, in returning change, would do likewise with his pennies, for the same reason. The bad thus drives the good out of circulation.
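
To make the mechanism concrete, here is a minimal simulation sketch (my own illustration, with invented fineness values, not anything from the historical record): each party spends its most debased coin first, so the coins in circulation steadily worsen while the good coins stay hoarded.

```python
# A minimal sketch of the coin-selection logic described above:
# every party spends their most debased coin first, so coinage in
# circulation degrades while good coins are hoarded. Fineness values
# are invented for illustration.
import random

def make_purse(n):
    # "Fineness" = fraction of face value actually present as metal
    # (1.0 = full-weight coin, lower = clipped or alloyed).
    return [random.uniform(0.6, 1.0) for _ in range(n)]

def spend_worst(purse):
    # A rational holder parts with the most debased coin first.
    worst = min(purse)
    purse.remove(worst)
    return worst

customer, shopkeeper = make_purse(10), make_purse(10)
circulating = []
for _ in range(5):                                   # five transactions
    circulating.append(spend_worst(customer))        # the payment
    circulating.append(spend_worst(shopkeeper))      # the change

avg = lambda coins: sum(coins) / len(coins)
print(f"average fineness in circulation: {avg(circulating):.2f}")
print(f"average fineness still hoarded:  {avg(customer + shopkeeper):.2f}")
```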

Gresham’s Law may accurately have described 16th century European currency markets, but in what sense could it be applicable to modern broadcast media operating in a mature market economy? Evidently, in a paradoxical sense. Let us accept for the moment that Reith’s assertion is supported by what many would call empirical evidence—the simple (if contested) observation that programming produced by advertising-supported broadcasters tends to be of inferior quality to programming produced by the world’s non-commercial broadcasters. The definition of “quality” is of course fraught with controversy (more on that later), but it is nonetheless a widely accepted observation among media theorists, critics, and practitioners alike that, over time, programming in commercially sponsored media has tended to devolve to a lowest common denominator in taste and ambition, while programming supported by sources other than advertising (taxation, endowment, subscription fees, etc.) tends to be more varied, challenging, and intellectually and aesthetically ambitious. If we accept these conclusions (again, if only tentatively) then observation would seem to support the assertion that, in commercial broadcasting, the bad drives out the good.

But here the paradox presents itself, because market theory would lead us to expect something very different. Broadcast media operate in a competitive market, and it is an axiom of market theory—and a prime justification for the market’s sometimes harsh discipline—that in the dynamic processes of establishing competitive equilibrium, the good drives out the bad, which is, of course, an inversion of Gresham’s Law. Customers making rational and informed selections in the market ensure that, over time, products of high quality inexorably eclipse those of low quality. How can we explain this apparent contradiction between well-established theory, and what we observe in commercial broadcasting? The attempt to do so will shed useful light on the mechanisms behind the operation of Gresham’s law in advertising-supported media.

Much will depend on how quality is defined. What does “good” mean in the context of mass media and their messages? For Lord Reith and his generation of public servants schooled in intuitionist moral theory, it was safe to assume that, all things being equal, people just know quality when they see (or hear) it. That is, each of us is equipped with an ability to identify and appreciate good in its various guises: an innate moral/aesthetic grammar that is susceptible to training and refining through education and experience. In today’s context, intuitionism is related to (and perhaps subsumed by) coherentist theories of reality, as in the critical moral realism of Zygmunt Bauman, Terry Eagleton, Mary Midgley, David O. Brink, and critical theorists in general. The ethical corollary of both intuitionism and realism is that the most aesthetically sensitive and knowledgeable among us are best able to make the kinds of value judgments at issue here, and are furthermore obliged to assist in the instruction of the less adept.

A competing, scientistic position gaining currency in Reith’s era considered quality to be a contingent, or even purely subjective, attribute. This approach is based on Rationalist thought exemplified in Smith, Mill, and the Utilitarian theorists, and it denies the reality of trans-cultural, trans-temporal moral and aesthetic standards. In our own time, the relativist approach is exemplified in the writings of Nietzsche and Heidegger and of later deconstructionist and poststructuralist writers, most prominently Derrida. The relativist approach to good, and its expression in notions of contingent or relativistic quality norms, is an idea deeply embedded in capitalist market theory, which in turn is one of the most enduring products of Rationalist philosophy. As such, market theory, both classical and contemporary, incorporates important, though tacit, assumptions that are relevant to the argument being made here and merit a brief exploration.

The political economy of good

In the standard definition of neoclassical economist and mathematician Leon Walras (d. 1910), economic equilibrium is a state wherein the consumer has maximized subjective satisfaction (utility) through the optimal distribution of expenditure, and the producer is producing at the most efficient level of output, where marginal cost equals price. What is remarkable in the condition of classical market equilibrium is the presumption that the quality of the products and services being produced will, through this process, be optimized, thereby optimizing consumer utility, or satisfaction. In this sense, the dynamic of economic equilibrium produces not only goods, but good itself (utilitarian notions of satisfaction/pleasure being equivalent to good). Market forces operating autonomously in the classical environment of perfect competition and perfect knowledge will inevitably produce the highest quality products at the lowest feasible price, and this will maximize consumer utility produced within the system. The good will drive out the bad.
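
Stated formally, the two equilibrium conditions can be sketched as follows (my notation, not Walras’s own): the consumer equalizes marginal utility per dollar across all goods under a budget constraint m, while each producer expands output until price equals marginal cost.

```latex
% Consumer optimum: maximize utility U subject to budget m; at the
% optimum, marginal utility per dollar is equal across all goods.
\[
\max_{x_1,\dots,x_n} U(x_1,\dots,x_n)
\quad\text{s.t.}\quad \sum_i p_i x_i \le m
\;\Longrightarrow\;
\frac{\partial U/\partial x_i}{p_i} \;=\; \frac{\partial U/\partial x_j}{p_j}
\qquad \forall\, i,j
\]
% Producer optimum: output q* is expanded until price equals marginal cost.
\[
p \;=\; MC(q^{*})
\]
```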

Let’s look first at the consumer side of this equation. Precisely how consumers maximize their satisfaction was a question answered for neo-classical economists by Stanley Jevons (d. 1882), who created the theory of marginal utility. There is no need to go into detail here; what is relevant is that, in marginal utility theory then as now, it is assumed that what establishes the value of a commodity is the fact that it is desired—desire being expressed in a willingness to spend. If there is no money demand (i.e. no desire) for something, it has no value. If there is money demand for something, it is, by definition, valuable. In other words, whatever is desired, is desirable.

Marginal utility theory had the great virtue of allowing economists to place a dollar value on the entire range of products and services being traded in an economy, so that they could all be reduced to the same, standard, unit of measurement. By conflating the desirable with the desired, the theory greatly facilitated the process of mathematizing the budding science. Without this reductionist approach, economics would have been faced with the daunting task of trying to establish an intrinsic, normative value for individual commodities. In other words, economists would have had to try to determine whether what was desired was what ought to be desired. They would have had to try to distinguish between the bad and the good.

But much was sacrificed in the quest for reductionist clarity and calculability. In ignoring normative values, marginal utility theory was and is seriously out of step with the reality it purports to map. There undoubtedly is a connection between well-being or happiness—genuine utility—and getting what you want. It is equally clear, however, that happiness is dependent on more than that. It depends on both getting what you want, and wanting the right things—what you ought to want. One can easily imagine desiring something that will cause more harm than good, and if that is possible, then it is not safe to assume that market dynamics are reliable producers of good. Certainly, serious doubt is cast on the formula adopted by apologists within commercial media who insist that if their programs are watched (i.e. desired, demanded) by large audiences, they are by definition good.

However, even if we accept the economist’s reductionist unity between the desired and the desirable, we face other difficulties in relating program quality to audience desire. The problems lie in further, crucial distinctions on the other side of the demand equilibrium equation: (1) questions concerning the nature of the “product” of commercial broadcasters; and (2) how the word “quality” might apply to this product in either the positivist economic or normative senses.

Commercial broadcasting and its market

The answer to the question in category (1) seems obvious to most casual observers: broadcasters provide programs (the product) for audiences (the consumers of the product). But a moment’s thought will disclose that these programs cannot be a product in the economic sense, because they are provided free of charge. Unless a monetary value is placed on them, they are not economic commodities. A more accurate response to the question has been provided by a long list of authors beginning with Theodor Adorno and Max Horkheimer in Dialectic of Enlightenment [1944], through Dallas Smythe [1977, 1981], Ien Ang [1991], and Mark Andrejevic [2002]. It is, I think, no longer controversial: the product provided by commercial broadcasters is audiences (or, in some constructions, an audience’s time [Jhally, 1982]), a commodity desired by advertisers. To put it another way, advertisers are the consumers of the production of commercial broadcasters, which is audiences.

This truth has been more clearly understood and expressed within the industry as its players have themselves evolved through the long period of corporate “rationalization” of the mid-twentieth century, in which university-trained, professional managers progressively purged spurious “public interest” managerial motivation from broadcasting, in favour of pure market theory and the single-minded pursuit of profit. [Rowland, 2005]

As to the question of quality, when the commercial broadcast industry is challenged, as it frequently is, by criticism of the caliber of the programming it provides, its response is invariably that the industry provides its audiences with the programming they want. This is taken to be axiomatic, since broadcasters are alleged to operate in a competitive market environment, in which survival depends on fulfilling demand (i.e., satisfying desire). Critics are accused of elitism, for refusing to accept that the distinction between high quality and mediocrity is in the eye of the beholder. They are told, moreover, that it is undemocratic to suggest that “what audiences want” may not be what they ought to want; that what is desired may not be the same as what is desirable. This response is frequently augmented with the claim that audiences cast their votes for and against programming through their viewing and listening choices, which are systematically quantified by professional ratings organizations.

The tautological nature of this line of reasoning has been widely noted, viz.: the market, by axiom, delivers what customers want; therefore, “what customers want” must be what the market delivers. Less frequently noted is the fact that audiences cannot want what they do not know exists (or has the potential to exist); put another way, they can “vote” only for or against the programming that is offered to them. Furthermore, experience with gadgets that offer audiences the chance to avoid commercials (the remote control, VCRs, PVRs) has clearly demonstrated that advertising content, at least, is definitely not something the audience wants—as a rule, they avoid it when possible.

A corollary claim is often made by commercial broadcasters in lobbying against government intervention in the market via regulation, or government support for public broadcasting. The claim is that while the free market provides programs “democratically,” both “public interest” regulation and public broadcasters are by definition elitist in their intent. The position was reflected in a law journal essay by Mark Fowler, a former head of the U.S. Federal Communications Commission: “Instead of defining public demand and specifying categories of programming to service that demand… [regulators] should rely on broadcasters’ ability to determine the wants of their audiences through the mechanisms of the marketplace.” [Fowler 1982]

On examination, however, it is clear that in neither the public nor commercial broadcaster models does the audience get involved directly in program development, and that neither process can be called “democratic” in this sense. In both models, the programs are developed and produced under the direction of station or network management, and/or advertisers and the agencies that represent them. That is to say, in both cases, program production is a managed, top-down process. How, then, one might ask, is the commercial model less “elitist” than the public service case? In both instances, programming is created by an exclusively privileged and skilled production cadre, and in both cases their programs are ultimately shaped by corporate decision-makers who insist that they are acting in society’s best interests.

In the end, though, all of these tactical considerations take a back seat to an overriding, strategic fact: for purposes of economic and critical theoretical analysis, the product of commercial broadcasting is not programs but audiences. The market that drives the industry is one in which the audience-commodity is exchanged, and in which programming is a cost of production incurred by broadcasters in the process of manufacturing audiences for sale to potential advertisers and their agents. This market does operate democratically—consumer sovereignty applies—because advertisers and their agencies (the “consumers”) do get to decide what programs are made and which among them survives. It is within this small, private market of broadcaster/suppliers and advertiser/customers that issues of “quality” in commercial broadcast programming are decided in the classical context of fulfilling the customer’s desires. In this exclusive market, marginal utility theory is in fact an accurate metric for determining price (and therefore desirability and, by inference, quality) because the transaction involved between the buyer and seller of the audience-commodity is of a purely commercial nature—there are no messy normative considerations involved.

Putting the audience in its place

As one who has participated in senior management committee meetings for a commercial television network, I can confirm that discussion concerns audiences mainly as numbers provided by ratings services; the competitive focus in the industry is almost exclusively on advertisers. Programming is evaluated according to the numbers it provides, both gross and in various demographic and psychographic splits. Here, Ien Ang’s observations in Desperately Seeking the Audience are astute:

‘[T]elevision audience’ refers first of all to a structural position in a network of institutionalized communicative relationships: a position located at the receiving end of a chain of practices of production and transmission of audiovisual material through TV channels. It is within the constraints of this structural position that concrete people become actual audiences, whatever this means further in social, cultural and psychological terms. And it is never beyond the epistemological limits set by this structural position that the institutional point of view conceptualizes ‘television audience.’ … [W]hat is discursively equated with ‘what the audience wants’ through ratings discourse is nothing more than an indication of what actual audiences have come to accept in the various, everyday situations in which they watch television. It says nothing about the heterogeneous and contradictory interminglings of pleasures and frustrations that television audiencehood brings with it. [Emphasis in original.] (Ang 1991: 4, 169)

In the world of commercial broadcasting, then, audience numbers are of concern only for their relative size and consuming profile. (I will ignore, here, the complicating but ultimately irrelevant fact that television networks sometimes seek a reputation for quality as a means of distinguishing their product, in the eyes of advertisers, from their competitors’. Note that what is sought is a reputation for virtue, rather than virtue itself.) A “quality” audience is one that matches the advertisers’ needs in this respect. Programs are thus produced primarily with a view to satisfying that demand: the production values that attract the audience (or not) are of interest only in strictly instrumental terms of what works and what doesn’t; normative considerations are of marginal interest and are rarely mentioned.

Furthermore, the numbers, as Ang correctly observes, have real meaning as a description of “audience” only within the parameters of the formal institutional structure of commercial broadcasting, and even in this limited context they are open to serious question as to their authenticity [See also Meehan (1984, 1993a, 1993b)]. Since it is programming that attracts and assembles audience/product, individual audience members may be thought of as raw material from which a desirable, salable, bulk product is constructed. As a raw material, however, people are somewhat less than optimal. For one thing, they lack the homogeneity or substitutability that defines the ideal raw material in economic theory. The broadcasting industry has addressed this problem by the straightforwardly reductionist expedient of constructing a homogenous identity for the audience that ignores individual, normative idiosyncrasies of viewing and listening.
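
A toy sketch of this packaging move (my illustration, with invented viewers and fields) might look like this: whatever is idiosyncratic about individual audience members is discarded, and what remains is a uniform demographic bucket.

```python
# A toy version of the "packaging" move described above: heterogeneous
# individuals are reduced to a uniform, salable demographic bucket.
# All viewers and fields are invented for illustration.
viewers = [
    {"age": 27, "income": 48_000, "tastes": ["opera", "hockey"]},
    {"age": 31, "income": 62_000, "tastes": ["reality TV"]},
    {"age": 25, "income": 39_000, "tastes": ["documentaries", "jazz"]},
]

# Idiosyncratic tastes are simply dropped; what remains is the package
# offered for sale to the advertiser.
package = {
    "segment": "adults 25-34",
    "count": len(viewers),
    "avg_income": sum(v["income"] for v in viewers) / len(viewers),
}
print(package)
```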

It is a fact of life of institutions in general, and business corporations in particular, that they seek to control the mechanisms of their self-production and survival [Rowland, 2005]. The broadcast audience clearly falls within this category of mechanism and therefore is an object of institutional or corporate control. In the case of the commercial broadcast industry, “control of the audience” achieved through ratings is largely a fiction, operative only within the formal system of the industry as it constructs itself in terms of classical economic theory. In plain language, the industry needs to control the concept of “audience” in order to present its product in the best possible light to the advertiser/purchaser, and it does this by fabricating an audience that is both uniform and invariably satisfied. It is uniform because it is defined as such in its demographic or psychographic profile: in other words, it is packaged that way. And it is invariably satisfied, because within the industry construct it is axiomatic that the audience always gets what it wants. In this way, ratings provide the “empirical evidence” that broadcasters need if they are to demonstrate to advertisers that their money is well spent—which explains why the ratings industry is so lavishly, and slavishly, supported by broadcasters.

Because commercial broadcasters distribute their programs free of (overt) charge to the audience, there is no economic metric for gauging audience appreciation. Ratings services step in to fill this gap with their necessary fictions [Bourdon and Méadel, 2011; Miller 1994]. In fact the first broadcasting ratings service to emerge in the early days of radio was a co-op supported by advertisers, the Co-operative Analysis of Broadcasting (CAB). It operated from 1930 to 1946, when it bowed to competition from commercial ratings agencies, principally A.C. Nielsen. By that time, the logic of the market had shifted responsibility for demonstrating the existence of the audience to the broadcasters, who were after all the sellers of the product in question and therefore responsible for defining it for their customers. They would continue to provide the lion’s share of income for ratings services.

In my own experience in the broadcast industry, it has always been understood that ratings provide a largely spurious metric, and the main concern among broadcast industry managers is that the ratings services treat all competing broadcasters equally within this fictitious construct. (The treatment here of ratings as a necessary fiction, while, in my view, empirically accurate, has the theoretical effect of sidestepping criticisms of Smythe’s conception of the audience as essentially passive, or non-subjective [Caraway, 2011], and of his acceptance of the “scientific” validity of ratings systems [Gertner, 2005]. This merely reinforces his, in my view, paradigm-changing point concerning economic relations among producer, advertiser, and audience.)

It should be noted that in the list of variables that actually determine a program’s audience ratings, and thus provide a focus for network executives’ concerns, normative quality ranks relatively low, below such key determinants as the program’s place in the broadcast schedule, the popularity of its lead-in and lead-out programs, what’s on competing channels, marketing expenditure, and so on. There are those who would insist that this is as it should be—that any attempt to make normative, prescriptive judgments about program quality in broadcasting (or in any other creative endeavour) is bound to be authoritarian and oppressive, or at least ill-advised and counter-productive—an unwarranted interference with the transcendent moral authority of the free market.

It is argued that audiences should make judgments of quality for themselves, through their choices in the self-disciplining economic market. (For a recent, representative, example of industry resistance to program and advertising regulations see the National Association of Broadcasters brief to the U.S. Federal Trade Commission’s Interagency Working Group on Food Marketed to Children, downloadable at www.nab.org. The notion of the self-governing market is of course central to liberal and neo-liberal economic thought as expressed, for example, in the writings of the Chicago School of economists.) This position finds favour among industry executives because it permits the avoidance of difficult moral/aesthetic issues, and of decisions which might militate in favor of program expenditures not directly related to audience size and advertising income. We have seen, though, that the unfettered commercial broadcasting market is not geared to producing quality in any context other than commercial exchange value, and that audiences are not, in fact, free to choose the programming they want. They can only make post hoc selections from programming that is made available to them by variously-motivated elites.

What is a “good” program?

As an essentially moral question, the issue of how to define quality in programming [question category (2), above] is not one that is amenable to satisfactory resolution in a few paragraphs. However, audiences surveyed on the issue have no difficulty in making such judgments. They are able to tell interviewers that “sometimes I like to watch programs that are of poor quality,” and most agree with the statement that “a program can be of high quality even if few viewers like it.” Viewers report they are happiest when high quality programming delivers high appreciation as well. [Leggatt, in Ishikawa, 1996: 75] In other words, viewers seem to accept that there are standards of quality applicable to television programming that are not strictly subjective. The same conclusion would presumably extend to other media as well.

How are we to identify those standards? For practical purposes it is helpful to distinguish among four areas of quality discourse: sender use quality; receiver use quality; descriptive (or truth) quality; and professional (or craft) quality. [Rosengren, Carlsson and Toged, in Ishikawa, 1996: 14-15; Mulgan in Mulgan (ed.) 1990: 4-32] Sender use quality, as I have suggested above, may be defined in market terms as “what works commercially.” Descriptive and professional quality are much more complex, supporting sizable literatures reaching into deep wells of scholarship in moral philosophy, epistemology, aesthetics, psychology, and media technology. I propose to focus here on receiver-use quality, which I will consider in mainly political-economic terms. In the interests of brevity, I will further limit the discussion to a consideration of the mass media’s role in the dissemination of information, though much the same arguments might be applied to other areas of content such as entertainment.

In the context of liberal-pluralist democracies, the generally accepted role of mass media is to provide, in the words of former U.S. Supreme Court Justice Oliver Wendell Holmes Jr., a “marketplace of ideas,” a competitive, freely-accessible public space in which (it is assumed) truth will prevail in the contest of competing points of view. Receiver-use quality may thus be defined in terms of those programs that make a positive contribution to that communicative process, versus those that hinder or compromise it in some way. Questions of descriptive (truth) quality of course present themselves here, and professional standards play a key supporting role.

It is a truism that for genuine communication to take place in any environment, there has to be some measure of reciprocity. In mass media this exchange is typically constructed by professionals. That is, professional journalists and producers present “both sides” of stories they cover, and are mandated to do this according to codes of conduct that stipulate such criteria as objectivity, fairness, and balance. Debate takes place within the production parameters of the medium concerned. More generally, in commercial mass media, decisions concerning content of all kinds are taken with a view to providing an environment of optimal appeal to advertisers, which is the overriding responsibility of station and network managers, and newspaper publishers. Broadcast industry experience has established that this means content that is neither unsettling nor overly challenging. This boils down, in radio, to a predominance of popular music tightly formatted to defined audience tastes, interspersed with chat themed to popular culture and celebrity; and on television to innocuous drama, light entertainment, sound-bite political coverage, and talk-show discourse that is designed to entertain with emotional venting rather than to clarify issues. Content, including news and information programming, needs to conform to, rather than lead or challenge, mainstream public taste and opinion.

That this type of programming satisfies the quality criteria that prevail in the market in which audiences are sold to advertisers is axiomatic: if it did not, competitors would develop “better” formats. However, it is in direct conflict with the views of contemporary media researchers, who have tended to identify receiver use quality with diversity, on the assumption that a variety of perspectives advances the cause of constructive social discourse. [Ishikawa and Murimatso; Litman and Kasegawa; Hilve and Rosengren, Ishikawa, Leggatt, Litman, Raboy, Rosengren, and Kambara in Ishikawa, 1996: 197-265] “Television is good,” says Geoff Mulgan, “when it creates the conditions for people to participate actively in a community; when it provides them with the truest possible information; when it encourages membership and activity rather than passivity and alienation; and when it serves to act as an invigorator of the democratic process rather than as a medium for what Walter Lippmann described as the manufacture of consent.” [Mulgan, 1990: 23]

It is of particular interest in terms of public discourse that the free market dynamic leads to a relentless downward pressure on costs associated with content such as news and documentaries, programming that is expensive to produce well. In other words, it militates against both the allocation of air time devoted to maintaining public space in media, and the quality of content appearing in that space. Free market dynamics, far from serving the public interest in a lively public space, conspire against it. The evidence accumulated over three decades of deregulation in media markets strongly supports this conclusion, viz.:
• eviscerated newsrooms;
• the industry-wide move toward “infotainment;”
• continuing concentration of ownership; and
• a continuing decline in public trust, which is directly related to the decline in content (receiver-use) quality. [McChesney, 2008]

For the commercial broadcaster, rational decision-making dictates that costs of production must be balanced against those qualities that make a program attractive to the desired audience. A low-cost production will always be favored over a higher-cost production that attracts the same audience. The customer/advertiser, similarly, will want the lowest-cost option for accumulating the desired audience, as dictated by the corporate fiduciary responsibility to its shareholders. Given that audiences have little control over the array of programming available, it is possible in this market, and therefore rationally mandatory, for the broadcaster/producer to confine its output to lowest-cost options. As CBS programming executive Arnold Becker asserted, “I’m not interested in culture. I’m not interested in pro-social values. I have only one interest. That’s whether people watch the program. That’s my definition of good, that’s my definition of bad.” (Gitlin 1983: 31)
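
The decision rule Becker describes can be sketched in a few lines (the programs, costs, and audience figures below are invented for illustration): among candidates delivering the audience the advertiser is paying for, the rational broadcaster always selects the cheapest.

```python
# A sketch of the broadcaster's decision rule described above: among
# candidate programs that deliver the audience the advertiser is paying
# for, always pick the cheapest. Titles, costs, and audience figures
# are invented for illustration.
candidates = [
    {"title": "investigative documentary", "cost": 900_000, "audience": 1_200_000},
    {"title": "celebrity panel show",      "cost": 150_000, "audience": 1_250_000},
    {"title": "reality format",            "cost": 200_000, "audience": 1_400_000},
]

target_audience = 1_000_000  # the audience size being sold

viable = [c for c in candidates if c["audience"] >= target_audience]
choice = min(viable, key=lambda c: c["cost"])
print(choice["title"])  # "celebrity panel show": same salable audience, lowest cost
```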

Of course, there are exceptions to these general observations. Programs that meet high standards of descriptive, production, and receiver-use quality and at the same time provide an audience of desirable size and demographic profile for advertisers, do get made from time to time, but they are seldom long-lived in a market that has a strong preference for the predictable and bankable over novelty and challenge. Their exceptional nature is emphasized by the notice they receive from critics who tend to express astonishment at the phenomenon. The fact that many of the most critically acclaimed television programs in North America are produced by subscription-based services like HBO only serves to reinforce the point.

When he publicly speculated about the existence of a Gresham’s Law in advertising-supported broadcasting, Lord Reith was necessarily speaking in theoretical terms. Experience in ensuing decades has demonstrated the prescience of his intuitions. As I hope I have demonstrated, the phenomenon of the bad driving out the good (normatively defined) is in fact highly determined by the dynamics of the commercial broadcast media market. Firstly, bracketing normative values and substituting instead the economist’s sleight-of-hand known as marginal utility reduces the issue of quality in any commodity to one of observed desire or demand for it. Quality, that is, is measured by observed demand, on the assumption that whatever individuals desire/demand is by definition desirable. But the media market goes far beyond this in distorting commonsense criteria of value in the content it produces. Optimal functioning of the enclosed media market in which audiences are exchanged for advertising dollars demands that there be inexorable downward pressure on production budgets, and thus on program quality as judged by widely-accepted, “objective” normative criteria.

Gresham in cyberspace

Today we are in the midst of a wholesale, some would say revolutionary, transition of standard mass media formats of the twentieth century to the new production and distribution modes of the digital era. While the World Wide Web and its related technologies have made it possible for virtually anyone with the means to own a computer to become a publisher simply by paying a modest fee for internet access, large corporate interests, including most traditional media outlets, remain a dominant factor in the on-line information and entertainment economy. Many observers see in the web a radical democratization of mass culture, due to the immediate access to information provided by search engines and vast databases, and the outlets for individual expression provided by web sites, blogs and other innovations. But it needs to be asked whether, in light of the sponsorship arrangements that have developed over the past decade to support the publishing of web content, that content will ultimately be susceptible to Gresham’s Law. John Battelle, co-founder of Wired magazine, makes this observation in his book The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed our Culture: “On the internet, it can be argued, all intent is commercial in one way or another, for your very attention is valuable to someone, even if you’re simply researching your grandmother’s genealogy, or reading up on a rare species of dolphin. Chances are you’ll see plenty of advertisements along the way, and those links are the gold from which search companies spin their fabled profits.” [quoted in Siegel, 2008: 134] Well over half the advertising seen on web sites is currently generated dynamically by Google ad servers [www.marketingvox.com/googles-ad-server-market-share-at-57-042692/]. At this writing, Google’s annual revenues are U.S. $21 billion, about 95 percent of which is derived from its advertising offerings, which it shares with partnering websites. This is comparable to the combined advertising revenues of the major American television networks; the stakes are high.

Will web content be influenced by commercial considerations to the same degree as has been demonstrated in traditional broadcast media?

To begin, we can accept as a given that conventional corporate media outlets that migrate to the web will bring with them the same business principles and objectives that characterized their more traditional, brick-and-mortar business activities. That is, they will see their first responsibility as being to maximize their shareholders’ return on investment, and that will mean that web publishing efforts will ultimately be aimed at satisfying the needs of advertisers, the main source of revenue. Just as in the traditional broadcasting economy, these on-line services are seen as vehicles for selling audiences to advertisers. There is no reason to expect, for example, that CNN’s or ABC’s or CTV’s web content will be significantly different in its fundamental norms from their broadcast content. So we can expect that web content published by on-line arms of conventional, corporate mass media outlets will indeed be subject to Gresham’s Law, in that the goal will be to attract maximum advertising revenue at minimum outlay for the content that provides the bait for audiences.

While it is too early in the evolution of the web to draw definitive conclusions regarding smaller, web-specific enterprises such as blogs, trends can be identified which suggest that advertising may be having a corrosive effect on content quality parallel to that experienced in traditional broadcast media. Although the tens of millions of web publishers worldwide dwarf the relatively tiny population of pioneer broadcasters, to the degree that they hope to profit from their enterprises (not all do) they face the same twin challenges as those early broadcast radio experimenters: how to pay for the content needed to attract an audience; and how to profit from that audience once assembled. Commercial radio’s solution lay in cobbling together ever larger networks of stations over long-distance telephone lines, thus distributing programming costs over a broader base, while at the same time extending audience reach for those programs and their sponsors. It worked: national advertisers took notice immediately. [Smulyan, 1996; Rowland, 2009: 173ff]

As the number of web users worldwide climbed its exponential curve in the years following the internet’s privatization, the revenue issue was resolved for web entrepreneurs using the same principles that had served early radio so well: attract a large audience and charge advertisers for access to it. [Evans, 2008] But there were differences: the banner ad format quickly became popular with web advertisers, even among early adopters who could not count on anything like today’s heavy traffic, because the advertiser was typically charged only when a visitor clicked on the banner in order to hyperlink to the advertiser’s own website (hence, the “clickthrough rate” charged for such ads). This was a much more efficient way of reaching potential clients and customers than the scattershot approach necessarily employed in conventional over-the-air broadcasting. Banner ads were soon to be supplemented by the even more efficient and ultimately more popular contextual advertisement. These were unique to the new medium, dynamically generated by search engines on the basis of a user’s search request keywords, so that the advertising messages appearing on search result pages matched the searcher’s interests. These ads could reasonably be seen as added value, rather than an annoyance. [Sears, 2005]
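
A bit of illustrative arithmetic (the figures are invented) shows why clickthrough pricing appealed to advertisers: under broadcast-style CPM pricing the advertiser pays for every exposure, while under CPC pricing it pays only for demonstrated interest.

```python
# Illustrative arithmetic (figures invented) for the pricing shift
# described above: CPM bills for every exposure, broadcast-style;
# CPC bills only when a visitor clicks through to the advertiser.
impressions = 100_000   # page views carrying the banner
cpm = 10.00             # dollars per thousand impressions
ctr = 0.01              # clickthrough rate: 1% of visitors click
cpc = 0.50              # dollars per click

cost_cpm = impressions / 1_000 * cpm   # pay for everyone reached
clicks = impressions * ctr
cost_cpc = clicks * cpc                # pay only for expressed interest

print(f"CPM: ${cost_cpm:,.2f} for {impressions:,} exposures")
print(f"CPC: ${cost_cpc:,.2f} for {clicks:,.0f} interested visitors")
```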

Contextual ads now constitute the major source of income for most web sites other than the large, corporate enterprises able to attract sustaining sponsors. Any web site can be easily programmed to accept contextual ads: at blogger.com, which is owned by Google, a single click of a button will equip your new blog with Google ad capability. Apple, not to be outdone, uses sophisticated software to match clients of its many mobile computing apps to advertisers. Both companies encourage placement of their ads in as many websites as possible, by paying publishers for space and/or clickthroughs. Publishers, for their part, often see contextual advertisements as adding value to their sites, as well as providing income.

Page-view and page-ranking statistics are the currency of the web: given their detailed accuracy and instantaneous availability, along with the wealth of information they can provide about users (the audience), they can justly be characterized as ratings on steroids. [Lee, 2011] In the new world of web publishing, any website being operated according to rational business principles with profit as its goal will adapt its content to boost its ranking. A typical travel site, for example, will base its editorial decisions on search engine queries (data available from Google) rather than on any competing editorial interests (such as novelty, public service, etc.). It would be irresponsibly forgoing revenue if it were to do otherwise. The same financial logic applies, with more serious import, to web-based news providers. Taking note of the impact of this statistical flood on traditional print media transitioning to the web, Jim Brady, executive editor of washingtonpost.com, said: “The best thing about the web—you have so much information about how people use it—is also the worst thing. You can drive yourself crazy with that stuff. News judgment has to rule the day, and the home page cannot become a popularity contest.” [Carr, 2007] This is a position with which his publisher may not agree.
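
The editorial logic described above reduces to a simple ranking, sketched here with hypothetical query-volume figures standing in for the kind of search data Google makes available: whatever is searched most gets written first.

```python
# A sketch of the revenue-rational editorial calendar described above.
# The query strings and monthly volumes are hypothetical stand-ins for
# the kind of search data Google makes available, not a real API.
query_volume = {
    "cheap flights to cancun": 450_000,
    "all-inclusive resort deals": 310_000,
    "history of venetian architecture": 8_000,
    "responsible eco-tourism": 12_000,
}

# Commission whatever is searched most, regardless of novelty or
# public-service value; anything else forgoes revenue.
editorial_calendar = sorted(query_volume, key=query_volume.get, reverse=True)
print(editorial_calendar[:2])  # the pieces that get written first
```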

Why bloggers blog

Some more specific indication of the impact of contextual advertising on small publishers can be gleaned from the experience of bloggers. Bloggers do what they do for a variety of reasons, but they share a common interest in getting noticed. The web obligingly abounds in advice on how to draw traffic to your blog, tips posted by sites such as technorati.com as a strategy for drawing traffic to their own pages. Traffic counts matter for a number of reasons, foremost among them the traditional equation of audience size with ad rates, but also because of the “linkback” principle that underlies Google and other, similar search engines. Google search results are ranked according to the number of links a given site receives from other web sites, the idea being that the more links there are, the more popular the site is. A “popular” site is presumed by Google’s algorithms to be of more value (i.e., relevance) to the searcher. Thus, for example, visitors to blogs who leave comments in discussion forums frequently provide a link to their own websites as part of their signature. This increases the potential value of their own site, because the more of these linkbacks a blogsite can accumulate, the higher its ranking will be in search results, and that, in turn, will draw more traffic, and so on. Posting comments, along with such linkbacks, to web forums of all kinds has become a burgeoning branch of the search engine optimization industry, and much of this tedious, repetitious work is outsourced to armies of SEO workers in low-wage areas such as South Asia.
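
The “linkback” principle is, at bottom, the idea behind Google’s PageRank. The following is a simplified sketch in its spirit, not Google’s actual implementation (which weighs many additional signals): rank flows to a page in proportion to the number and rank of the pages linking to it.

```python
# A simplified sketch of the link-counting idea behind Google's
# ranking (in the spirit of PageRank; the real system weighs many
# more signals). Rank flows to a page in proportion to the number
# and rank of the pages linking to it. The graph is a toy example.
links = {   # page -> pages it links out to
    "blogA": ["blogB", "forum"],
    "blogB": ["blogA"],
    "blogC": ["forum"],
    "forum": ["blogA", "blogB", "blogC"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
damping = 0.85

for _ in range(50):  # power iteration until the ranks settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

# Every comment-signature linkback adds an inbound edge, nudging a
# blog's rank, and hence its placement in search results, upward.
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```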

Other traffic-generating strategies focus on blog content, and are therefore more directly germane to the issue of advertising’s impact on quality. A list of tried-and-true traffic-generating techniques used by bloggers and posted at problogger.net is typical of the many available. It includes such advice as publishing lists of resources pertinent to a particular field, and publishing “best-of” lists. Bloggers are also advised that “Arguing a popular point of view” is helpful:

People like to have their world-view affirmed. If you can articulate something a lot of people agree with, those who agree with you will champion your post. Those who disagree will probably still link to you, because their response won’t make sense otherwise. This method works best when the topic isn’t too divisive. A reader won’t abandon your blog simply because you like Facebook and they like MySpace. They might abandon ship if you argue that capital punishment is necessary and that view is something they strongly disagree with. Make sure you’re not going to lose as many readers as you gain.

The use of attention-getting headlines for blog postings is also recommended. “When others link to you, it’s usually done in the space of a paragraph or even a single sentence. … Sometimes a really outstanding headline is all it takes to get traffic and links. Of course, you’ll receive much greater rewards if the headline is matched by a great post.” Another recommendation: “Q&As with high profile people. … Interviews with well-known bloggers always seem to get links, comments and traffic. The nice thing about this method is that the only work involved is writing questions and approaching bloggers. The success rates for getting interviews are pretty high as most bloggers love talking about themselves!” In commenting on this list, readers made further suggestions, such as writing about celebrities, and focusing on “rumor and scandal.” Celebrity gossip sites such as gawker.com simplify the issues of merit in content by overtly reducing quality to popularity: they pay freelance contributors for their postings on a sliding scale which is based directly on the traffic they draw to the site. This tilts content in the direction of the sensational and outrageous.
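
The incentive structure of such traffic-based pay scales is easy to sketch; the base fee and per-view rate below are hypothetical illustrations, not Gawker’s actual figures.

```python
# A hypothetical sliding-scale payout of the kind described above:
# contributor pay keyed directly to the traffic a post draws.
# The base fee and per-view rate are invented, not Gawker's figures.
def payout(page_views, base=5.00, rate_per_thousand=0.75):
    return base + (page_views / 1_000) * rate_per_thousand

print(f"${payout(2_000):.2f}")    # a quiet, careful post
print(f"${payout(250_000):.2f}")  # a scandal post: the incentive gradient
```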

We begin to see, in practices like these, the directions in which the quest for advertising revenue can lead web publishers.

Culture critic Lee Siegel sees a bleak future for information on the web: “Make no mistake about it. Once … ‘consumers’ of news also become the producers of news, ‘choice’ will exist only for the sake of choice. In the name of ‘full participation,’ unbiased, rational, intelligent, and comprehensive news—news as a profession, like the practice of law or medicine—will become less and less available. … Like demagogic politicians, who appeal to appetite and emotion rather than reason, this will be the age of demagogic journalism.” [Siegel, 2008: 165]

Other market dynamics familiar to traditional mass media are beginning to play themselves out on the web. While it is frequently argued that the web is not subject to the same deterministic rules as those governing traditional media markets, due to its “democratic” nature (that is, the ease with which an ordinary citizen can become a publisher), and because of the vast numbers of individual producers of web content, already we can see the outlines of a future for the web that may well reflect the earlier trajectory of commercial radio. For example: popular blogs are being consolidated under umbrellas (the web equivalent of networks) such as the Huffington Post, and the process is proving profitable; Google-owned YouTube pays millions each month to license professionally-produced television content to serve advertisers who want their products to be identified with something more apposite than amateur candid camera clips that make up so much online video content; search engines increasingly compete by providing algorithm-contrived “convenience,” which amounts to narrowing search results according to demonstrated user interests; search engine page-ranking is increasingly and arbitrarily manipulable by highly-paid search engine optimization specialists. As well, the rationalization process common to emerging markets can be expected to result in the kinds of ownership consolidations in fewer and fewer hands that marked the maturing of radio and TV, and before that, newspapers. It has always been safe to predict that in any commercial media market, consolidation of ownership will be a continuing feature, in the absence of regulatory constraint. [McChesney, 2008] This, in turn, has unavoidable implications for media content. At present, there is little reason to believe that quality in web content will escape this dynamic, at least as the medium will be experienced by the vast majority of users who are disinclined or unable to search out diversity.

WORKS CITED

Ien Ang, Desperately Seeking the Audience (New York, Routledge, 1991)

Mark Andrejevic (2002) “The work of being watched: interactive media and the exploitation of self-disclosure.” Critical Studies in Mass Communication 19(2): 230-248

Frank P. Arnold, Broadcast Advertising: The Fourth Dimension (New York, John Wiley and Sons, 1931)

Ben Bagdikian, The Media Monopoly 5th ed. (Boston, Beacon Press, 1997)

L. Bogart, Commercial Culture: The media system and the public interest (New York, Oxford University Press, 1995) pp. 90, 108

Jérôme Bourdon, Cécile Méadel (2011) “Inside television audience measurement: Deconstructing the ratings machine,” Media, Culture, and Society, July, 33 (5) 791-800

A. Briggs, The BBC: The First Fifty Years (Oxford, Oxford University Press, 1985)

David O. Brink, Moral Realism and the Foundations of Ethics (New York, Cambridge University Press, 1989)

Brett Caraway (2011) “Audience labor in the new media environment: A Marxian revisiting of the audience commodity,” Media, Culture, and Society Jul 11, 33(5), 693-708

R.K.L. Collins and D.M. Skover, The Death of Discourse (Boulder, CO, Westview Press, 1996)

David Carr (2007) “24-Hour Newspaper People,” New York Times, Jan 15.

David S. Evans (2008) “The Economics of the Online Advertising Industry,” Review of Network Economics, 7(3), 359–391

Stuart Ewen, Captains of Consciousness: Advertising and the Social Roots of Consumer Culture (New York, McGraw-Hill, 1976)

J. Gertner (2005) “Our ratings, ourselves,” New York Times Magazine, Apr 10, 34-41

Todd Gitlin, Inside Prime Time (New York, Pantheon, 1983)

Max Horkheimer and Theodor Adorno, Dialectic of Enlightenment, trans. Edmund Jephcott (Stanford: Stanford University Press, 2002)

Sakae Ishikawa (ed.) Quality Assessment of Television (Luton, Luton University Press, 1996)

Sut Jhally (1982) “Probing the Blindspot: the Audience as Commodity,” Canadian Journal of Political and Social Theory 6(1/2): 204-210.

Jean-Noel Jeanneney, tr. Teresa Lavender Fagan, Google and the Myth of Universal Knowledge (2008, Chicago, University of Chicago Press)

Micky Lee, “Google Ads and the Blindspot Debate” Media, Culture & Society, 33 (3) 433-447, 2011

Timothy Legatt, “Identifying the undefinable: an essay on approaches to assessing quality in television in the U.K.” in Ishikawa 1996 op. cit.

GangHeong Lee and Joseph N. Cappella, “The Effects of Political Talk Radio On Political Attitude Formation: Exposure Versus Knowledge,” Political Communication, Volume 18, Issue 4 October 2001, 369-394.

Robert McChesney, The Political Economy of Media: Enduring Issues, Emerging Dilemmas (New York: Monthly Review Press. 2008).

E.R. Meehan (1984) “Ratings and the Institutional Approach: a Third Answer to the Commodity Question,” Critical Studies in Mass Communication 1(2): 216-225.
• (1993a) “Heads of Households and Ladies of the House: Gender, Genre, and Broadcast Ratings, 1929-1990.” In W.S. Solomon and R.W. McChesney (eds.), Ruthless Criticism: New Perspectives in U.S. Communication History (Minneapolis, University of Minnesota Press) 204-221.
• (1993b) “Commodity Audience, Actual Audience: the Blindspot Debate.” In J. Wasko et al. (eds.) Illuminating Blindspots: Essays Honoring Dallas E. Smythe (Norwood, N.J.: Ablex) 378-397.

Peter V. Miller (1994) “Made-to-order and standardized audiences: forms of reality in audience measurement.” In: James Ettema and D. Charles Whitney (eds) Audience Making: How the Media Create the Audience. Thousand Oaks, CA: Sage, 57–74.

Geoff Mulgan, The Question of Quality (London, The British Film Institute, 1990)

John O’Neill, Defending Objectivity (London, Routledge, 2004)

John Reith, Broadcast Over Britain (London: Hodder and Stoughton, 1924)

Wade Rowland, Spirit of the Web: The Age of Information from Telegraph to Internet (Toronto, Thomas Allen Publishers, 3rd edition, 2006)
Wade Rowland, Greed, Inc.: Why Corporations Rule Our World (Toronto, Thomas Allen Publishers, 2006; New York, Arcade Publishing, 2007, 2012)

Alan M. Rubin and Mary M. Step, “Impact of Motivation, Attraction, and Parasocial Interaction on Talk Radio Listening,” Journal of Broadcasting and Electronic Media, Vol. 44 Issue 4 Dec. 2000, 635-654

Sears, Jay. 2005. “The Evolution of Contextual Advertising,” Upgrade Magazine. (April/May), at .

Lee Siegel, Against the Machine: How the Web Is Reshaping Culture and Commerce—and Why It Matters (New York, Spiegel and Grau, 2009)

Susan Smulyan, The Commercialization of Radio, 1920-1939 (Washington, Smithsonian Institution Press, 1996)

D.W. Smythe (1977), “Communications: Blindspot of Western Marxism,” Canadian Journal of Political and Social Theory 1 (3): 1–28

James Winter, “Canada’s Media Monopoly: One perspective is enough, says CanWest,” www.fair.org/index.php?page=1106, June 2002.
