-
Seeing the Forest
“Preoccupied with a single leaf... you won’t see the tree. Preoccupied with a single tree... you’ll miss the entire forest. Don’t be preoccupied with a single spot. See everything in its entirety... effortlessly.”
— Takehiko Inoue
Building moral machines will rely upon a combination of practical and philosophical considerations. Bostrom's Paperclip Maximizer is an absurd, albeit foundational, thought experiment in AI ethics: an AGI single-mindedly determined to produce paperclips could end civilization. [1]
The most daunting, underdiscussed, and bewildering aspect of future AGIs is their potential plurality. Accessibility is one consistent trend in the history of technology; there is no reason to believe AGI will be forever confined to a dozen proprietary systems managed by a handful of benevolent, or malevolent, organizations.
Popular concern and scholarly attention currently revolve around individual AIs operating alone. Researchers tend to conceive of superintelligent AGI as a monolith with a single “mind.” This is misleading, as AGIs, including very clever ones, may be neither singular nor insular. [2]
Outcomes on the macro level frequently cannot be extrapolated from individual behaviors. Simple rules can give birth to tremendous complexity. [3]
Consider a model of neighborhood segregation where each resident asks only that one third of their neighbors be ethnically or economically similar to themselves. Even though the population is composed of fairly tolerant individuals, extreme segregation still emerges at the macro level. [4]
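For the curious, the dynamic is simple enough to sketch in a few dozen lines of Python. Everything here except the one-third threshold (the grid size, the vacancy rate, the random move rule) is an illustrative assumption rather than part of Schelling's original specification:

```python
import random

random.seed(0)
SIZE, VACANCY, THRESHOLD = 20, 0.1, 1 / 3  # 20x20 torus, 10% empty homes

# Fill the grid with two types of resident plus some vacancies.
cells = ["A", "B"] * int(SIZE * SIZE * (1 - VACANCY) / 2)
cells += [None] * (SIZE * SIZE - len(cells))
random.shuffle(cells)
grid = {(r, c): cells[r * SIZE + c] for r in range(SIZE) for c in range(SIZE)}

def neighbors(r, c):
    return [grid[(r + dr) % SIZE, (c + dc) % SIZE]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def unhappy(r, c):
    # A resident is unhappy when under a third of its occupied
    # neighbors are of the same type.
    occupied = [n for n in neighbors(r, c) if n is not None]
    return bool(occupied) and (
        sum(n == grid[r, c] for n in occupied) / len(occupied) < THRESHOLD)

def mean_similarity():
    scores = []
    for (r, c), kind in grid.items():
        occupied = [n for n in neighbors(r, c) if n is not None]
        if kind is not None and occupied:
            scores.append(sum(n == kind for n in occupied) / len(occupied))
    return sum(scores) / len(scores)

print(f"before: {mean_similarity():.2f} mean neighborhood similarity")
vacancies = [p for p, v in grid.items() if v is None]
for _ in range(50_000):  # each step, one unhappy resident moves to a vacancy
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    if grid[r, c] is not None and unhappy(r, c):
        i = random.randrange(len(vacancies))
        grid[vacancies[i]], grid[r, c] = grid[r, c], None
        vacancies[i] = (r, c)
print(f"after:  {mean_similarity():.2f} mean neighborhood similarity")
```

Even with this mild preference, the mean similarity of neighborhoods climbs well above what a random mixture would produce: segregation nobody individually asked for.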
LLMs are paving the way for improved simulacra of behavior in media such as video games; it is now possible to begin seriously studying what sorts of societies can emerge from the interactions of generative agents. [5]
There are very real ramifications from failing to step back and examine the forest.
Ants coordinate with each other in a decentralized fashion. Though each ant boasts only a simple nervous system, the colony cannot be understood solely by studying its parts. Naturally, there is no guarantee of such cooperation between the synthetic beings of the future, although perhaps this should not be cause for concern.
It seems unrealistic to assume all AGIs, even if they are neuromorphic or otherwise endowed with human-like empathy, will follow a strictly ethical path: if they are intelligent beings, they are dynamic, capable of rapidly overhauling themselves. In isolation, such drift could well go unchecked.
Yet in groups and ecosystems, balances are usually found. Cheaters of different kinds abound in animal populations, but the ratio of suckers, cheaters, and grudge-holders generally finds an equilibrium. [6]
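This equilibrium-finding is easy to reproduce in miniature. The sketch below is a judgment call on the details: it pairs three strategies (suckers always cooperate, cheats always defect, grudgers cooperate until betrayed) in an iterated prisoner's dilemma with conventional payoffs, then applies textbook replicator dynamics rather than anything drawn from Dawkins directly:

```python
# Payoffs to (me, them) for cooperate/defect: T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def move(strategy, opponent_history):
    if strategy == "sucker":
        return "C"                                   # always cooperates
    if strategy == "cheat":
        return "D"                                   # always defects
    return "D" if "D" in opponent_history else "C"   # grudger never forgives

def play(a, b, rounds=10):
    """Total payoff to strategy a over an iterated game against b."""
    history_a, history_b, total = [], [], 0
    for _ in range(rounds):
        ma, mb = move(a, history_b), move(b, history_a)
        total += PAYOFF[ma, mb][0]
        history_a.append(ma)
        history_b.append(mb)
    return total

strategies = ["sucker", "cheat", "grudger"]
payoff = {(a, b): play(a, b) for a in strategies for b in strategies}

# Replicator dynamics: each strategy's population share grows in
# proportion to its average payoff against the current mix.
shares = {s: 1 / 3 for s in strategies}
for _ in range(200):
    fitness = {s: sum(payoff[s, t] * shares[t] for t in strategies)
               for s in strategies}
    mean = sum(shares[s] * fitness[s] for s in strategies)
    shares = {s: shares[s] * fitness[s] / mean for s in strategies}

print({s: round(x, 3) for s, x in shares.items()})
# Cheats surge at first while suckers collapse, but grudgers settle
# into a stable majority: the population finds its equilibrium.
```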
Keeping AGI friendly does not begin or end with inculcating individual intelligences with rigid maxims, but with satisfactorily explicating the emergent phenomena that will arise from the interactions of similarly or differently educated machines, long before AGI becomes a pressing concern.
For this reason, the vocabulary, methods, and intuitions supplied by systems theory, economics, and evolutionary biology are of immense value in finding starting points for further lines of inquiry.
-
Tutelary Spirits and Personal Daemons
A tutelary spirit guards a person or place. A daemon, one’s personal guide, is one example. Yet such spirits could be the defenders of a corporation, ideology, or government. Surely, if this is the case, they will come into conflict with one another.
Machines with near-human intelligence will be interacting with us (and each other) as personal assistants, teachers, nannies, counselors, caregivers, and, for lack of a better word, friends. A machine filling any of these positions would need a grasp of the emotional states of those it serves.
While it may not matter whether a smart toaster prefers Kant or Mill, AI is poised to augment, and in some instances supplant, human workers. Business, national defense, law enforcement, jurisprudence, and governance are not immune to the encroachment of synthetic minds. [7]
Truthfulness and privacy can clash, especially when an agent cannot decide exactly when withholding the truth is a trivial matter. Total transparency would spell the death of personal liberty.
Self-interest and altruism are often at odds; a hypermoral machine would find any form of self-indulgence unnecessary or loathsome. Why should someone sit on the couch when there is litter in the street? Or for that matter, people without homes?
Handling externalities, finding fair outcomes, determining when force is necessary, and arbitrating disputes are a few of the responsibilities these machines may have to shoulder. Dealing with these situations requires sets of criteria the agents can refer to when making ethical decisions.
Homogeneity in form or training could be a substantial stumbling block to AGI safety, perhaps the worst of all. Diverse systems are robust ones.
A population with limited genetic diversity does not stand a chance against an especially virulent pathogen. The same, in all likelihood, applies to digital pathogens and disinformation. [8]
Diversity not only ensures some degree of herd immunity, but assists in problem solving by leveraging the insights of different perspectives and ways of being. [9]
-
The Next Steps
It is not feasible, necessary, or worthwhile to analyze a biofilm one bacterium at a time. Similarly, only so much can be learned by studying the solitary behaviors of social animals. Anticipating behaviors from a bird's-eye perspective could make the task of forecasting AGI's evolution considerably easier.
Park et al., cited at the beginning of this essay, showed remarkable results, reporting their generative agents “wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day.”
While these worlds are greatly simplified versions of our own, the essential parts are still there. They allow us to preemptively explore and prepare for the unexpected. The interdependence of a complex adaptive system’s components makes simulation the most viable avenue for exploring cause and effect relationships within it. [10]
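To make this less abstract, here is a toy version of the loop such an agent might run. It is loosely inspired by the memory-stream-and-retrieval architecture Park et al. describe, not a reimplementation of their code; the llm() stub stands in for whatever language model call is available, and the recency weighting is an arbitrary choice:

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    # Placeholder: substitute any real language-model call here.
    return f"(model's reply to: {prompt[:60]}...)"

@dataclass
class Memory:
    text: str
    importance: float   # how much the agent cares about this event, 0-1
    timestamp: float

@dataclass
class Agent:
    name: str
    memories: list = field(default_factory=list)

    def observe(self, text, importance, now):
        self.memories.append(Memory(text, importance, now))

    def retrieve(self, now, k=5):
        # Rank memories by a blend of recency and importance; the paper
        # also scores relevance with embeddings, omitted here for brevity.
        def score(m):
            return 0.99 ** (now - m.timestamp) + m.importance
        return sorted(self.memories, key=score, reverse=True)[:k]

    def act(self, now):
        context = "\n".join(m.text for m in self.retrieve(now))
        prompt = (f"You are {self.name}. Relevant memories:\n{context}\n"
                  "What do you do next?")
        return llm(prompt)

agent = Agent("Ada")
agent.observe("The cafe on Main Street opened early today.", 0.3, now=0.0)
agent.observe("Klaus mentioned he is writing a paper.", 0.8, now=1.0)
print(agent.act(now=2.0))
```

Societies of such agents are cheap to instantiate and rerun, which is precisely what makes them attractive laboratories for cause and effect.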
Chaos theory emphasizes the sensitivity of complex systems to their initial conditions, which strongly suggests some forethought should be put into the architecture of future AGI societies before they begin to take shape. [11]
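The sensitivity itself takes only a few lines to witness. The logistic map below is a standard toy system from the chaos literature, not drawn from the cited paper; two trajectories that begin one part in a billion apart disagree completely within a few dozen steps:

```python
# Logistic map x' = r * x * (1 - x) in its chaotic regime (r = 4).
r = 4.0
x, y = 0.200000000, 0.200000001   # initial conditions differing by 1e-9
for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, gap = {abs(x - y):.6f}")
```

If a one-in-a-billion perturbation can rewrite a trajectory this thoroughly, the founding conditions of AGI societies deserve deliberate design rather than improvisation.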
References
1. Bostrom, Nick. “Ethical issues in advanced artificial intelligence.” Science Fiction and Philosophy: From Time Travel to Superintelligence (2003): 277-284.
2. Alonzi, Adam. “The Foundations of Robosociology: Values and the Aggregate Behaviors of Synthetic Intelligences.” AI and Robotics 2020, August 23, 2020.
3. Schelling, Thomas C. “Models of segregation.” The American Economic Review 59.2 (1969): 488-493.
4. Park, Joon Sung, et al. “Generative agents: Interactive simulacra of human behavior.” arXiv preprint arXiv:2304.03442 (2023).
5. Dawkins, Richard. The Selfish Gene. Oxford University Press, 2016.
6. Glenn, Jerome C., Theodore J. Gordon, and Elizabeth Florescu. “State of the Future.” Washington, DC: The Millennium Project (2022).
7. Page, Scott E. Diversity and Complexity. Princeton University Press, 2010.
8. McDonald, Bruce A., and Celeste Linde. “Pathogen population genetics, evolutionary potential, and durable resistance.” Annual Review of Phytopathology 40.1 (2002): 349-379.
9. Miller, John H., and Scott E. Page. Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Princeton University Press, 2009.
10. Boccaletti, Stefano, et al. “The control of chaos: theory and applications.” Physics Reports 329.3 (2000): 103-197.
