[Note: This article is part 4 of a series on AI Ethics and Regulation. The previous installment, on the use of AI in the criminal justice system and facial recognition, can be found here.]
Let’s return to the opening question posed by the first article in this series: Despite the glaring disparity in impact, the subject of AI regulation is routinely lavished with attention and media coverage while software reliability hardly makes a ripple—why is that? An attempt to explain the difference might start by making the following points.
The reason there have never been sustained calls to regulate software in general is that such calls would be overly broad, vague, and unenforceable. Software is written for a tremendous variety of reasons, with different lifetimes and degrees of societal impact, in different languages, running on different hardware, and serving different purposes. It would be absurd to regulate a high school student’s program for playing tic-tac-toe. Nothing hinges on it. By contrast, mission-critical software, such as programs that control nuclear reactors (or airplanes, or CT scanners, or power plants, and so on), needs to be subjected to high levels of scrutiny—and it is. There are guidelines in place regulating the design, development, operation, and maintenance of such software.
AI systems have certain key features that distinguish them crucially from conventional software, such as:
(a) the scope of their capabilities, many of which are novel;
(b) their ability to learn;
(c) their reliance on historical data, which can raise a host of thorny questions involving bias, privacy, intellectual property rights, and others; and
(d) the rate of progress and change in the field of AI.
These unusual characteristics warrant a different approach to regulating AI systems. While AI has not caused any particularly severe damage yet, we should prepare for it. As the technology becomes more powerful and widely available, it is only a matter of time before it exacts a heavy toll, whether by design or by accident. It’s better to be safe than sorry.
These are reasonable—and frequently made—points, but it’s not clear that they explain the disparity or that they make a strong argument in favor of stringent AI regulation.
In particular, the first point applies to AI just as well: Regulating AI en masse makes as little sense as regulating software en masse. Many AI systems will be built, tweaked, and used for a tremendous variety of purposes, from throw-away experimentation work to strictly personal use cases to internal-company workflows to external-facing applications. Calls for horizontal regulation to guard against the “human rights and societal risks created and enabled by artificial intelligence (AI) technologies” are just as unreasonable as calls for horizontal regulation guarding against the risks that software poses to society. Yes, mission-critical software is indeed regulated, but the regulations are industry-dependent, focusing on the specific problems and requirements that arise in particular domains. Software controlling nuclear reactors, for example, is regulated by the U.S. Nuclear Regulatory Commission (NRC). In the US, software controlling airplanes is regulated by the FAA, though the underlying set of guidelines, known as DO-178C/ED-12C, is also used by the European Union Aviation Safety Agency (EASA) and Transport Canada. Software used in medical devices is regulated by the US Food and Drug Administration (FDA). Telecommunications software is regulated by the FCC. And so on. This is just as it should be.
Moreover, simply distinguishing different generic levels of risk, along the lines of the EU’s AI Act (which classifies all AI systems into four levels of risk: unacceptable, high, limited, and minimal), is not helpful as long as the levels themselves are horizontal. Having a single “FDA for algorithms” that applies to all “important” or “risky” code would be completely impractical, and likewise for an “FDA for AI systems.” The key point, as I have already stressed several times in previous installments, is this: Regulation should be problem-centric, and therefore vertical, not technology-centric and horizontal. We don’t regulate or outlaw the production of knives; we outlaw the use of knives to commit homicide or manslaughter, because that is the harmful outcome that we ultimately want to prevent—an outcome that can come about in myriad different ways. Likewise, a contemplated regulatory intervention in the technology space should be debated with a particular set of questions in mind: What problem, exactly, is being addressed? In particular, what right or protection are we trying to safeguard, or what harm are we trying to prevent, in what context, and what is the cost we are willing to incur in that effort? Last but not least: What might be some unintended consequences of the intervention?
The second bullet point above makes a number of claims that merit individual discussion, but taken together they can be understood as an exceptionalism argument for AI regulation, to the effect that AI’s capabilities and workflows are so radically different from those of more conventional software that they compel a greater degree of regulatory scrutiny. On its own, this is clearly not a sound argument. Even if we grant a unique status to AI’s capabilities, it simply does not follow that we ought to regulate the technology more heavily, nor does it shed any light on the extraordinary amount of attention that has been devoted to the subject. The bottom line remains that conventional software errors have wreaked havoc on a scale that dwarfs any damage caused by AI technology. The mere fact that AI technology is different is not particularly consequential. Let’s look then at some of the individual claims made in this connection, starting with the claim that AI systems are able to learn.
Insofar as the word “learning” naturally invites parallels with the human ability to learn, the term “machine learning” is a misnomer. The way humans learn is drastically different from—and much more efficient and versatile than—any machine learning algorithm on offer today. People can learn from a tiny amount of data, extrapolate general principles and rules that apply across different contexts, adjust to new environments and challenges with ease, and infer causal relationships. More importantly, unlike ML algorithms, they do not learn simply by ingesting positive and negative examples of concepts. Indeed, humans are not limited to learning concepts. We can acquire general declarative information with arbitrarily rich propositional structure, like information expressed by sentences with alternating quantifiers. When we learn Mendel’s segregation law, for instance, we learn that for any given gene, the pair of alleles that an individual possesses separates randomly during gamete formation, so that each gamete receives only one allele from each pair and, when gametes unite during fertilization, the offspring inherits one allele from each parent with equal probability. This is not a concept example, positive or negative. It does, however, make reference to a number of concepts, most of which are theoretical entities (such as genes and alleles), some of which denote processes (such as fertilization), and some of which are incredibly abstract (randomness and probability). The law conveys a complex body of information with very rich logical structure that imposes a number of constraints on the underlying concepts. For humans, learning such information requires integration with existing knowledge and conceptual structures, a process that demands reasoning over a number of domains, not mere adjustment of weights based on a simple loss function. Human learning, of course, also involves sensory experiences by virtue of having a body and being situated in a physical environment, emotions, social interactions, intuition, imagination, innate models of folk physics and common-sense psychology, and a host of cognitive processes that have little to do with predicting the next word of a text sequence.
Finally, and crucially, humans are inherently continuous learners. From the moment we are born until we die, we are constantly absorbing new information and learning new skills. By contrast, the learning that current AI models undergo is a fixed process with a clear beginning and a sharp end. The model stops learning the moment its training is over.1 Its weights are fixed forever after that point. There have been many attempts to implement “continual learning,” especially with neural networks, whereby a previously trained model is trained on a sequence of new tasks, but all such attempts have suffered from catastrophic forgetting, which causes the model to gradually forget information it had acquired earlier. It is also noteworthy that the continual learning datasets used in practice are simple extensions of preexisting single-task datasets (e.g., the MNIST dataset might be split into 5 sequential tasks, each having to learn two digits). There is simply no comparison to the human ability to learn over an entire lifetime and an endless array of tasks.
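To make the split-MNIST protocol mentioned above concrete, here is a minimal sketch, assuming PyTorch and torchvision are available; the two-layer network, the single training epoch per task, and the particular digit pairs are illustrative choices rather than a replication of any specific study. Training on the five two-digit tasks in sequence and re-measuring accuracy on the first task after each one typically shows that accuracy collapsing, which is catastrophic forgetting in its simplest form.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

# Standard MNIST, converted to tensors in [0, 1].
transform = transforms.ToTensor()
train_set = datasets.MNIST("data", train=True, download=True, transform=transform)
test_set = datasets.MNIST("data", train=False, download=True, transform=transform)

def task_subset(dataset, digits):
    # Keep only the examples whose labels belong to the current two-digit task.
    mask = (dataset.targets == digits[0]) | (dataset.targets == digits[1])
    return Subset(dataset, mask.nonzero(as_tuple=True)[0].tolist())

# A deliberately simple network with a single shared 10-way output head.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tasks = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]
first_task_test = DataLoader(task_subset(test_set, tasks[0]), batch_size=256)

def accuracy(loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

for digits in tasks:
    loader = DataLoader(task_subset(train_set, digits), batch_size=128, shuffle=True)
    model.train()
    for x, y in loader:  # one epoch per task keeps the sketch short
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # Accuracy on the first task usually drops sharply once later tasks are trained.
    print(f"after task {digits}: accuracy on digits (0, 1) = {accuracy(first_task_test):.3f}")
```

The same sequential setup underlies most continual-learning benchmarks; mitigations such as replay buffers or regularization can soften the effect, but, as noted above, they have not eliminated it.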
How about (a), the scope and novelty of AI’s capabilities? Again, neither novelty nor scope by itself is a compelling argument for stringent regulation. Regulation must be focused on specific threats. Blockchain is a new technology enabling a wide spectrum of novel applications, but that does not entail an a priori need to “regulate blockchain” tout court. A group of friends might start a book-borrowing club and use blockchain to maintain a transparent and immutable record of who is borrowing whose books. It is only specific uses of the technology in specific contexts that might or might not warrant new regulation. For instance, cryptocurrencies (which are only one of many blockchain applications) might be used to launder money or to finance terrorism. But that concern reflects broader and longstanding challenges with criminal activities that predate the advent of digital currencies.2 It is those activities that ought to be the focus of governmental vigilance, not whatever technology du jour is being used to enable them. Traditional financial systems, after all, have long been successfully exploited for similar purposes, with criminals using everything from cash transactions to convoluted international banking arrangements. These abuses were confronted not by banning cash and banking, or by having public debates about “regulating cash” in the sort of vague blanket sense applied to “regulating blockchain,” or by forming “cash working groups” akin to California’s “Blockchain Working Group,”3 but by laying down some fairly narrow policy rules.4
Granted, the emergence of cryptocurrencies might introduce new dimensions to these issues, such as disintermediation and anonymity, and these might merit careful scrutiny (which could warrant a more precisely named working group). Nevertheless, the core underlying problems are old and orthogonal to the technology. And the determination of what regulation, if any, is needed must be grounded in the actual impact of a new technology on these old problems, not in speculative fears about how the technology might be perverted in the future—the sort of fears that often turn out to be wildly exaggerated, as history shows time and again. (See the next paragraph for a number of examples.) It also requires appropriate comparison baselines to put that impact into perspective. In the case of money laundering, for instance, it turns out that use of cryptocurrencies accounts for only a small fraction of the overall set of cases, the large majority of which rely on “traditional methods.” Finally, it requires thinking hard about the potential blowback and side effects of regulatory interventions, which may be likened to “ships which we may watch set out to sea, and not know when or with what cargo they will return to port.”5 See footnote 4 for the cargo that came back with the AML policy ship.
The historical record is so littered with overreactions to technology threats, real or imagined, that it is worth taking a minute to recall a few of them. Here’s what Calestous Juma writes about the introduction of street lighting in Europe:
Street lighting encountered early opposition in Europe. In 1819 a German newspaper published an article stating, “God had decreed that darkness should follow light, and mortals had no right to turn night into day.” The article claimed artificial light imposed an unnecessary tax on the people; caused health problems; led to people staying out late and therefore catching colds; removed the fear of darkness, leading to crime; and made robbers bold. Furthermore, the lights undermined patriotism as night public festivals undermined the value of public functions. Opponents claimed that artificial lighting made horses shy and ostensibly reduced their value in battle. (p. 147)
When trains were first introduced more than 200 years ago, it was commonly believed that they were unsafe because the human body was not designed to travel at speeds in excess of 30 miles per hour. Some believed that the body would simply melt at 50 mph, while others thought that such excessive speeds would tear off the limbs of those foolish enough to get on board. Particularly acute concerns were expressed for women and children, with some fearing that women’s uteruses would fly off at those speeds. There was a general “moral panic” about the technology, as anthropologist Genevieve Bell put it, but far from being a one-off confined to trains, it was the sort of “moral panic [that] is remarkably stable and is always played out in the bodies of children and women.” As she notes: “The first push-back is going to be about kids. Is it making our children vulnerable? To predators? To other forms of danger? We will immediately then regulate access. I don’t want to seem cynical because there is a reason why we worry about children, but I do think you can tell that’s where it’s going to start.” The same pattern of moral panic followed the introduction of electricity: “If you electrify homes you will make women and children vulnerable. Predators will be able to tell if they are home because the light will be on, and you will be able to see them.” (And indeed the same sort of panic about children is driving most of today’s efforts to regulate social media content and digital activity, as will be discussed in a subsequent installment.) Examples can be multiplied. In 1854, Henry David Thoreau famously dismissed the telegraph, writing that “We are in great haste to construct a magnetic telegraph from Maine to Texas, but Maine and Texas, it may be, have nothing important to communicate.” The telephone encountered even more resistance:
In 1877 the New York Times fulminated against the “atrocious nature” of Alexander Graham Bell’s improved version of the telegraph: the telephone. Invasion of privacy was the charge. Twenty years later, the indictment stood: “We shall soon be nothing but transparent heaps of jelly to each other,” one writer predicted. Another early complaint against the telephone was that it deprives us of the opportunity “to cut a man off by a look or a gesture”.
(from The Rise of the Image, the Fall of the Word, p. 31). Even decades later, telephones were thought to be dangerous:
They weren’t human, they popped or exploded … [People] were afraid that if they stood near one in a thunderstorm they might get hit by lightning. Even if there wasn’t any storm, the electric wiring might give them a shock. When they saw a telephone in some hotel or office, they stood away from it or picked it up gingerly.
When mechanization started to transform American agriculture in the early twentieth century, lobbyists representing traditional farming practices based on horses and mules launched a fierce fight against tractors.6 And so forth.
Nor is sheer speed of development by itself a cause for heavy-handed regulation. The Internet was developing at an unprecedented rate in the 1990s, yet there was a deliberate policy decision by the Clinton administration to avoid placing stringent restrictions on it (codified in the 1997 Framework for Global Electronic Commerce). That policy, in combination with Section 230 of the Communications Decency Act, laid the foundations that enabled the Internet to flourish into the transformative technology that it became, overcoming the many early fears that were being voiced (e.g., about a rapidly growing “digital divide”).7
Finally, reliance on historical data does raise interesting questions, both about bias and about other issues, but it also provides an opportunity for a much greater degree of evaluative scrutiny than what is possible in the case of human decision making, which can help to improve fairness rather than undermine it. Excessive regulation is liable to squander that opportunity.
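To give a sense of what that evaluative scrutiny can look like in practice, here is a minimal sketch in plain Python, using made-up decision records and a made-up group attribute, of the kind of audit that is trivial to run over a model’s logged decisions but effectively impossible to run, at scale and in full detail, over the undocumented judgments of human decision makers: comparing approval rates across groups.

```python
from collections import defaultdict

# Hypothetical audit log: each record holds a model's decision ("approve"/"deny")
# and the applicant's group. In a real audit these would come from the deployed
# system's logs; here they are made up purely for illustration.
decisions = [
    {"group": "A", "decision": "approve"},
    {"group": "A", "decision": "deny"},
    {"group": "A", "decision": "approve"},
    {"group": "B", "decision": "deny"},
    {"group": "B", "decision": "deny"},
    {"group": "B", "decision": "approve"},
]

# Count approvals and totals per group.
counts = defaultdict(lambda: {"approve": 0, "total": 0})
for record in decisions:
    counts[record["group"]]["total"] += 1
    if record["decision"] == "approve":
        counts[record["group"]]["approve"] += 1

# Approval rate per group and the gap between groups: the simplest
# "demographic parity" style check.
rates = {group: c["approve"] / c["total"] for group, c in counts.items()}
for group in sorted(rates):
    print(f"group {group}: approval rate = {rates[group]:.2f}")
print(f"demographic parity gap = {max(rates.values()) - min(rates.values()):.2f}")
```

More elaborate audits of the same kind (error-rate comparisons, calibration checks, counterfactual tests) are possible precisely because the model’s inputs and outputs are recorded and reproducible.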
The Terminator Is Coming: The Singularity and Other Existential Risks
Ultimately, however, the rhetorical tenor of the exceptionalism argument, and the subtext that our lizard brains are invited to read between its lines, is the fear of extinction: The notion that AI poses existential risks, because—or so the syllogism goes—these machines can learn (point (b)) and therefore improve themselves at a breakneck rate (point (d)), and therefore soon enough they will surpass us, and at that fateful point it might just be “lights out for all of us,” as Sam Altman has put it with characteristic melodramatic flair.8 That’s the gist of the singularity argument,9 a line of reasoning that many AI experts have historically viewed as so blatantly flawed (or, at best, so severely enthymematic) that it is not worth taking seriously: science fiction masquerading as an argument. Indeed, from a history-of-ideas perspective, science fiction and the notion of an AI singularity have been joined at the hip from the beginning. The singularity has been aptly called rapture for nerds, and for many, belief in it has had the marks of a religious cult.
I will engage with singularity arguments seriously in the article after the next one, by carrying out a careful analysis of their logical structure. But, historically speaking, singularity scenarios and their societal risks lacked wide currency until very recently, especially in policy debates about AI regulation. Quoting from p. 150 of the 2020 book AI Ethics by Mark Coeckelbergh:
Most proposals reject the science fiction scenario in which superintelligent machines take over. For example, under the presidency of Obama, the US government published the report “Preparing for the Future of Artificial Intelligence,” which explicitly claims that the long-term concerns about superintelligent general AI “should have little impact on current policy” (Executive Office of the President 2016, 8). Instead, the report discusses current and near future problems raised by machine learning, such as bias and the problem that even developers may not understand their system well enough to prevent such outcomes.
AI practitioners, and thinkers in related fields such as cognitive science, used to laugh off singularity predictions as object lessons in pathological catastrophizing. While acknowledging artificial superintelligence (ASI) as a theoretical possibility (to the extent that they found it coherent), they would dismiss it as a jumble of confused pie-in-the-sky scenarios whose probabilities were so infinitesimal that devoting attention to them would be absurd. Their attitude was perhaps best captured in a 2015 statement by AI luminary Andrew Ng, who declared that “there could be a race of killer robots in the far future, but I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.” In 2017, Pedro Domingos of the University of Washington wrote that “The Terminator scenario, where a super-AI becomes sentient and subdues mankind with a robot army, has no chance of coming to pass” with any of the machine learning algorithms that we know, adding that “just because computers can learn doesn’t mean they magically acquire a will of their own. Learners learn to achieve the goals we set them; they don’t get to change the goals” (p. 45). In 2018, Steven Pinker wrote:
For half a century, the four horsemen of the modern apocalypse have been overpopulation, resource shortages, pollution, and nuclear war. They have recently been joined by a cavalry of more-exotic knights: nanobots that will engulf us, robots that will enslave us, artificial intelligence that will turn us into raw materials, and Bulgarian teenagers who will brew a genocidal virus or take down the Internet from their bedrooms. ⋯ Techno-philanthropists have bankrolled research institutes dedicated to discovering new existential threats and figuring out how to save the world from them, including the Future of Humanity Institute, the Future of Life Institute, the Center for the Study of Existential Risk, and the Global Catastrophic Risk Institute.
He adds: “The Robopocalypse is based on a muzzy conception of intelligence that owes more to the Great Chain of Being and a Nietzschean will to power than to a modern scientific understanding. In this conception, intelligence is an all-powerful, wish-granting potion that agents possess in different amounts.”
Inveterate futurologists like Musk, who have a long history of painting AI as an existential threat more dangerous than nuclear weapons, and warning that with AI “we are summoning the demon” and that machines are about to overtake humans,10 tended to be viewed as cranks by people in the field, such as Max Versace, the CEO and co-founder of Neurala, who, in reference to Musk, said that “people who aren’t competent are discussing AI, which they have no clue about,” adding that “they are selling fear and it’s working.” The sentiment was echoed by Jerome Pesenti, the former VP of AI at Meta, who wrote:
I believe a lot of people in the AI community would be ok saying it publicly. @elonmusk has no idea what he is talking about when he talks about AI. There is no such thing as AGI and we are nowhere near matching human intelligence.
(Musk’s response was “Facebook sucks.”)
But a lot has happened over the last couple of years—most notably, of course, the wide release of LLMs like ChatGPT. A June 2023 article in the MIT Technology Review pointed out that “the experience of conversing with a chatbot can … be unnerving. Conversation is something that is typically understood as something people do with other people.” The article quotes Meredith Whittaker, a cofounder and former director of the AI Now Institute, as saying that the release of ChatGPT “added a kind of plausibility to the idea that AI was human-like or a sentient interlocutor … it gave some purchase to the idea that if AI can simulate human communication, it could also do XYZ.” Articles started popping up with titles like “Will ChatGPT Lead To Extinction Or Elevation Of Humanity? A Chilling Answer”. A more recent TIME article notes that the release of ChatGPT marked “the first time this [AI’s] pace of change became visible to society at large, leading many people to question whether future AIs might pose existential risks to humanity.”
Of course, terminator scenarios were already widespread in pop media depictions of AI, but since the release of ChatGPT they have been increasingly crossing over into the scientific and regulatory mainstream. Last year, AI heavyweights Bengio and Hinton signed a statement put out by the Center for AI Safety (CAIS), declaring that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories included California congressman Ted Lieu and Kersti Kaljulaid, a former president of Estonia. Last year also saw the publication of a taxonomy of “societal-scale risks from AI” by Stuart Russell and Andrew Critch, AI researchers and signatories of the CAIS declaration, who spoke of “existential risks” from AI and claimed that AI has a “regulatory problem” and that “algorithms and their interaction with humans will eventually need to be regulated in the same way that food and drugs are currently regulated.” (That last idea is not new; already in 2016 a lawyer in the Department of Justice proposed “An FDA for Algorithms”.)
At the same time, AI fear-mongering in a media environment increasingly reliant on emotional manipulation has reached new heights. Last year, TIME magazine put out a “special report” on AI with a cover that read “THE END OF HUMANITY” in huge, boldfaced, all-capital letters set against a blood-red background, taking media hysteria and misinformation to just one notch below a modern-day reenactment of the 1938 War of the Worlds broadcast. Over the last two years, the New York Times has been publishing a continuous stream of quasi-religious op-eds by professional eschatologists like Tristan Harris, who, in a joint essay with historian Yuval Noah Harari and programmer Aza Raskin, warned that the deployment of AI could “unleash godlike powers”, that “we have summoned an alien intelligence” (theurgic words like “summon” and “conjure” appear to be de rigueur ingredients in the genre of portentous AI premonitions), an intelligence which we must learn to master “before it masters us.” And an unhinged manifesto that TIME had no qualms publishing last year declared that “if we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong,” while advising that the only way to deal with AI is to “shut it all down” and that governments should be “willing to destroy a rogue datacenter by airstrike” and “to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs” (meaning that we might need to nuke China to force it to stop working on AI). The piece eerily echoes the themes and tropes one would expect to find in a Unabomber screed (extreme anti-technology stance, apocalyptic rhetoric, violence of contemplated interventions, and so on).
This is not to say that doomerism has become quite mainstream within the AI field—it remains a minority view. Even the assumption that superintelligence is possible in principle “is far from a consensus view in the AI community,” as pointed out in a July 2023 essay whose authors rightly note that “even defining “superintelligence” is a fraught exercise, since the idea that human intelligence can be fully quantified in terms of performance on a suite of tasks seems overly reductive” and that “extinction from a rogue AI is an extremely unlikely scenario that depends on dubious assumptions about the long-term evolution of life, intelligence, technology and society.” Aidan Gomez, CEO of the AI firm Cohere, has suggested that worrying about “a takeover by a superintelligent AGI is an absurd use of our time and the public’s mindspace.” Rodney Brooks, a roboticist and former MIT professor and director of the MIT Computer Science and Artificial Intelligence Laboratory, has repeatedly spoken out against the doom-mongering: “We are surrounded by hysteria about the future of Artificial Intelligence and Robotics. There is hysteria about how powerful they will become how quickly,” “hysteria about what they will do to jobs,” and “hysteria that Artificial Intelligence is an existential threat to humanity,” referring to some sample claims as “ludicrous [I try to maintain professional language, but sometimes …].”11
Last November, Julian Togelius, an associate professor of AI at NYU, said: “For every other existential risk we consider—nuclear weapons, pandemics, global warming, asteroid strikes—we have some evidence that can happen and an idea of the mechanism. For superintelligence we have none of that.” He added that “the idea of AGI is not even conceptually coherent.” And while pioneers like Bengio and Hinton may suddenly be concerned about AI’s “extinction risks,” other pioneers, like Yann LeCun, are referring to such risks as “preposterously ridiculous.” Kyunghyun Cho, “a prominent AI researcher and an associate professor at New York University,” noted that he “respects researchers like Hinton and his former supervisor Bengio” but “warned against glorifying “hero scientists” or taking any one person’s warnings as gospel, and offered his concerns about the Effective Altruism movement that funds many AGI efforts.” He added: “I’m disappointed by a lot of this discussion about existential risk; now they even call it literal “extinction.” It’s sucking the air out of the room.” Likewise, Andrew Ng, while also respectful of Bengio’s and Hinton’s recent stances, continues to believe that “current talk about AI being an existential threat to humankind is vastly exaggerated.” He said that AI doom myths are giving rise to a “massively, colossally dumb idea of policy proposals that try to require licensing of AI” and which “would crush innovation.” He continued: “There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction.”
Nevertheless, the sheer volume and intensity of the extinction rhetoric has shifted the Overton window considerably. In May of last year, for example, Rishi Sunak, the prime minister of the UK, met with executives of AI firms including Sam Altman and Demis Hassabis. Following the meeting, the UK government issued a statement declaring:
The PM and CEOs discussed the risks of the technology, ranging from disinformation and national security, to existential threats. They discussed safety measures, voluntary actions that labs are considering to manage the risks, and the possible avenues for international collaboration on AI safety and regulation. The lab leaders agreed to work with the UK Government to ensure our approach responds to the speed of innovations in this technology both in the UK and around the globe [my italics].
Altman, Hassabis, and Anthropic’s Amodei also met Biden and Harris last year and provided Senate testimony, where, once again, “Mr. Altman warned that the risks of advanced A.I. systems were serious enough to warrant government intervention and called for regulation of A.I. for its potential harms.” The New York Times article also noted that “Eventually, some believe, A.I. could become powerful enough that it could create societal-scale disruptions within a few years if nothing is done to slow it down, though researchers sometimes stop short of explaining how that would happen.” And earlier this year, in March 2024, Gladstone AI, a small firm of four people that prepares AI technical briefings for government staff, released a report, commissioned by the U.S. government over a year ago, in which they urge the U.S. government to move “quickly and decisively” to avert “catastrophic AI risks” that “are fundamentally unlike any that have previously been faced by the United States.” They define “catastrophic risks” as “risks of catastrophic events up to and including events that would lead to human extinction.”
So this is where AI finds itself at the moment, facing a double-barreled assault. On the singularity-doomsday side we have nonstop wailing and gnashing of teeth about the dangers of an evil ASI extinguishing human life, while cooler heads on the other side reassure us that AI is plenty evil as is, in the here and now—it is deeply biased, unfair, racist, reactionary, causing job losses, and so on.
Consider, finally, the last point made above, that AI might not have caused tremendous damage yet, but we should prepare for it because, in view of what's at stake, it's better to be safe than sorry. That point cannot be easily teased apart from AI's perceived existential risks, which, as we saw, have seeped deeply into the public consciousness courtesy of a sensationalist media hungry for clicks. The “better safe than sorry” approach to AI regulation derives most of its resonance from the dark sci-fi connotations that the term “AI” has acquired over the years, particularly over the last couple of years. It calls for regulating AI on the basis of the so-called Precautionary Principle, which I will discuss at length in the next installment.
The so-called “in-context learning” of LLMs does not change model weights and is therefore even more of a misnomer. It only allows a model to adapt its responses based on the context provided within individual prompts. It does not allow for cumulative learning, whereby the model can acquire new knowledge and skills over time and improve its performance on previously seen tasks or learn new tasks without forgetting old ones. Regardless of the size of the context, each new input is processed completely independently of all past interactions and cannot build upon previous experiences, which inherently entails temporal and causal discontinuities.
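As a concrete illustration, here is a minimal sketch, assuming the Hugging Face transformers library and a small causal language model such as GPT-2 (the model and prompts are arbitrary choices): however much “in-context learning” takes place within a prompt, not a single parameter changes, and each prompt is processed with no memory of the previous one.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small pretrained causal LM; any similar model would make the same point.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Snapshot every parameter before any "in-context learning" takes place.
before = {name: p.detach().clone() for name, p in model.named_parameters()}

with torch.no_grad():
    for prompt in ["The capital of France is", "As I was saying, the capital is"]:
        ids = tok(prompt, return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=5, do_sample=False)
        # Each prompt is processed from scratch; the second one has no access
        # to anything that happened while answering the first.
        print(tok.decode(out[0], skip_special_tokens=True))

# The parameters are bit-for-bit identical after generation: nothing was learned
# in the sense of a persistent change to the model.
unchanged = all(torch.equal(before[name], p) for name, p in model.named_parameters())
print("weights unchanged:", unchanged)
```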
In 1888, a police inspector by the name of John Bonfield told the Chicago Herald the following: “It is a well-known fact that no other section of the population avail themselves more readily and speedily of the latest triumphs of science than the criminal class” (as cited in the seventh chapter of Tom Standage’s The Victorian Internet, which vividly describes the many and sundry unscrupulous uses of the electric telegraph that promptly emerged after the technology was introduced to the public). No doubt Bonfield’s observation had already been made many times by others before him, as his own statement concedes.
A 2021 paper drawing on the author’s experience as a participant in the California Blockchain Working Group starts out by posing the question “How should legislators write a law regulating a brand-new technology that they may not yet fully understand?”. It neglects to ask the logically prior question “Why should legislators be writing laws regulating a brand-new technology that they don’t yet fully understand?” and indeed “Why should they be regulating a technology (the how) as opposed to specific problematic scenarios that may be enabled by a technology (the what)?”.
Though neither narrow enough nor effective enough, apparently. There have been many regulations in place for a number of decades now (since the 1980s), collectively known as AML (“Anti-Money Laundering”), that require financial institutions to be on the alert for “dirty money”: They must report suspicious transactions, implement policies like KYC (“know your customer”) and CDD (“customer due diligence”), and so on. It is widely recognized that these have been unsuccessful. They fail to detect the vast majority of criminal activity but regularly scoop up innocent citizens, all while incurring huge enforcement costs.
As explained in the above Economist article, AML compliance has “become a huge part of what banks do and created large new bureaucracies. It is not unusual for firms such as HSBC or JPMorgan Chase to have 3,000–5,000 specialists focused on fighting financial crime, and more than 20,000 overall in risk and compliance.” A widely cited 2020 article in Policy Design and Practice referred to AML as “the world’s least effective policy experiment,” finding that AML “policy intervention has less than 0.1 percent impact on criminal finances, compliance costs exceed recovered criminal funds more than a hundred times over, and banks, taxpayers and ordinary citizens are penalized more than criminal enterprises.” Research published in the Journal of Financial Crime indicated that the current Money Laundering/Terrorist Financing (ML/TF) regime “appears almost completely ineffective in disrupting illicit finances and serious crime.” In the EU alone, banks spend $20 billion each year on AML compliance, while “professional money launderers … are running billions of illegal drug and other criminal profits through the banking system with a 99 percent success rate,” as pointed out by Rob Wainwright, former Director of Europol.
The Horse Association of America (HAA) was established in 1919 as one of America’s early prominent lobbying organizations. Its stated mission was to “aid and encourage the breeding, raising, and use of horses and mules.” More broadly, it was created to “champion the cause of livestock dealers, saddle manufacturers, farriers, wagon and carriage makers, hay and grain dealers, teamsters, farmers, breeders, and other business interests that had a financial or emotional interest in horses and mules” (Innovation and Its Enemies: Why People Resist New Technologies, p. 128). They printed leaflets that presented farm animals as superior to tractors; one claimed, for example, that “A mule is the only fool proof tractor ever built” (p. 127).
See this informative 1998 interview of Ira Magaziner for additional context. Magaziner said:
We accepted in the beginning that nobody, including ourselves, was going to fully understand where this medium was headed because it was too new, too complex and too fast changing.
Granted, of course, that Internet commerce is very different from other areas in which AI can be applied, and much more amenable to efficient resource allocation via market forces. The point here is simply that a technology’s rate of growth does not necessarily, by itself, justify precautionary regulation.
Mr. Altman is somewhat of a doomsday-survivalism connoisseur. Already in 2016 he revealed that he was amassing “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.” Because if the singularity doesn’t happen, some other sort of apocalypse might, and surely a tech bro can never be too prepared. (He mused, for example, that “after a Dutch lab modified the H5N1 bird-flu virus, five years ago, making it super contagious, the chance of a lethal synthetic virus being released in the next twenty years became, well, nonzero.”)
Most versions in circulation are only slightly more fleshed out, e.g., they might add an appeal to Moore’s law in support of the rate-of-progress premise.
He has made this prediction several times. The one discussed in the Independent article was in 2020. As the article points out:
In 2016, Mr. Musk said that humans risk being treated like house pets by artificial intelligence unless technology is developed that can connect brains to computers. Shortly after making the remarks, Mr. Musk announced a new brain-computer interface startup that is attempting to implant a brain chip using a sewing machine-like device.
Musk has a history of proclaiming singularity-like events to be just around the corner. In 2014 he predicted that “the risk of [seriously dangerous AI] happening is in the five year timeframe. 10 years at most.” He has made a long series of failed predictions that fully autonomous driving is imminent (and that, naturally, Tesla will be the first company to achieve it). My own prediction, on which I am willing to lay down some serious money, is that Musk will continue to make failed predictions with clockwork regularity.
These statements are from 2017, but Brooks has not changed his views post-ChatGPT.