The current AI explosion is (even) scarier than you think

In the ChatGPT-induced rush to cobble together regulatory guardrails for AI technologies, an important piece of the puzzle is getting far less attention than it deserves.

The focus of current concern is what’s being called “the alignment problem”: how to ensure that AI agents that will replace or work alongside humans will respect human values; in other words, do no harm. What’s being overlooked is the fact that AI is (at least for the moment) a tool, and as astonishing as its abilities may be, whether it is used for good or for ill is in its users’ hands. It is human users who decide how a tool like AI is deployed, what it’s used for, and whether those uses match up with human aspirations and values.

So we ought to be able to take some comfort in the thought that human users will consider the ethics of what they’re asking their AI tools to do. Humans, unlike the technologies they create, share a moral sensibility that allows them to differentiate right from wrong, and the free will to act on those judgements.

But what if the AI is being developed and deployed by an inhuman “person,” one that might itself be called an artificial intelligence: in this case, the modern, bureaucratic, publicly traded business corporation?

If we consider for a moment that AI is developing into a technology every bit as potent in its transformative potential as nuclear energy or genetic engineering, it should disturb us that its development and deployment are almost entirely in the hands of corporate business entities.

We now have a century or more of experience in trying to align corporate business values and ethics with human values and goals, and there is much to be learned from that harrowing experience. The most important lesson is that while the modern business corporation continues to serve its original purpose as a bureaucratic instrument for pursuing the shared goals of its owners and managers, it has evolved into an automaton, a cybernetic intelligence governed by algorithmic rules that control its responses to its environment, the market economy.

The large, publicly traded business corporation as currently constituted has been constructed in law and theory and carefully modified over many generations to smooth the workings of the world’s capitalist markets. It does this by representing large aggregates of individual shareholders, ensuring that their market participation will conform to economic theory’s definition of the “rational economic agent” as a self-interested, utilitarian pleasure-seeker.

The market’s “invisible hand” (if left free from state or other outside interference) then converts this individual self-interest into collective public welfare. The “natural” dynamics of supply and demand ensure that economic output matches up with consumer demand.

That’s the theory, and it all hinges on the canonical assumption of innate human cupidity. In real life, though, human individuals often display other-directed, altruistic, stoic, or in other ways “irrational” behaviour in both their business and personal financial dealings. They flout the accepted definition of human nature that dates back to the founding of economics as an academic discipline, and the result is sub-optimal market performance.

But modern economics has worked out a solution for that, too. It involves putting control of most market transactions in the hands of corporate, non-human automatons.

Big, shareholder-owned business corporations reliably do behave the way rational economic agents are supposed to. That’s exactly what they were designed to do and what commercial law as it evolved over the late 19th and 20th centuries requires them to do: to be unfailingly self-interested on behalf of their shareholder-owners. To maximize return on investment.

Corporate managers, employees who comprise a sort of human black-box intelligence within the corporate entity, are strictly confined to narrowly prescribed roles and activities that are defined by the rules, or algorithms, governing the corporate machine’s legal and financial existence. The pioneering sociologist Max Weber famously called this the “iron cage” of bureaucracy. Personal moral and ethical values are left at the office threshold as a contractual condition of employment. In this way the corporate automaton is able to harness and direct the innate human intelligence it employs to do its bidding.

Over the past half-century, sophisticated AI-enhanced management techniques and surveillance technologies have combined to embed rigorous internal controls regulating employee behaviour at every level, from the shop floor to the C-suite. So effective are these tools that today it seems impossible to avoid the conclusion that the corporate entity itself, the corporate automaton, manages its managers, confining them to the consistently “rational” behaviour that produces maximum profit.

It has been amply demonstrated over the past century that the mechanistic, single-minded modern business corporation has no threshold beyond which risk to human welfare becomes unacceptable, since the only risk it is designed to respond to is risk to profit. Its response to operating in a field where risks to human well-being are high is predictable: it will continue to take risks so long as profitability appears to be secure. It will seek to minimize regulatory oversight wherever and by whatever means possible, because regulation adds to expenses and forecloses potential avenues of profitability.
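
The logic of that decision rule is simple enough to write down. The toy sketch below, in Python, is purely illustrative: the function, its parameters, and the numbers are all invented for the purpose. It renders the claim literally, as a rule that accepts information about human risk as an input yet gives it no weight in the output.

# Toy model of the "corporate automaton" described above. Purely
# illustrative: the function, its inputs, and the numbers below are
# invented for this sketch, not drawn from any real firm or law.

def corporate_automaton(expected_profit, compliance_cost, risk_to_humans):
    # risk_to_humans is accepted as an input, but note that it never
    # appears in the decision: the rule weighs only the bottom line.
    net_return = expected_profit - compliance_cost
    if net_return > 0:
        return "proceed"  # profitability appears secure
    return "lobby to reduce compliance_cost, then re-evaluate"

# However high the human risk, the output tracks profit alone.
print(corporate_automaton(expected_profit=100.0,
                          compliance_cost=20.0,
                          risk_to_humans=0.99))  # prints "proceed"

The point is not that any real corporation runs such code; it is that the behaviour documented over the past century is indistinguishable from it.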

To modern corporations, prudence in the face of the unknown impacts of new technologies like AI is merely a call to scrutinize risk to the bottom line. A legal strategy for the avoidance of liability lawsuits and regulatory penalties may well be more acceptable to a corporation than a costly engineering exercise to exhaustively analyze the human and environmental impacts of new products and develop safer design criteria.

A detached observer might say that to market products which pose a positive risk of significant damage to human or environmental health is ethical under only one condition: that the technology in question will, with a high degree of certainty, ameliorate or prevent an even greater calamity. The COVID-19 vaccines fit this criterion, as do other medical breakthroughs. But a great many high-risk products and technologies currently in use and under development, including AI, do not.

The potential social impact of generative AI software products like ChatGPT is literally incalculable, and yet new and more sophisticated versions are being released almost daily, with little or no regulatory scrutiny.

The fact that this transformative technology is under the control of obsessively profit-focused modern business corporations like Microsoft, Google, Meta and OpenAI should be a real cause for concern.

If we are to manage the AI revolution in the public interest, we are going to have to find ways to align the goals of the corporate giants behind it with human values. It won’t happen automatically, or by the ministrations of some invisible hand. We need to be wary of industry “social responsibility” pacts and other voluntary compliance assurances, and of industry-led alarms that “overregulation” will cause our own industry to fall behind its international competitors.

AI is too powerful, too potentially destructive, to be left entirely or even primarily in the hands of that self-serving automaton we call the modern business corporation.


