Artificial Intelligence in Strategic Context
AI PULSE, Spring 2021 (with Edward Parson et al.)
Study of AI’s societal impacts and risks, and its implications for risk assessment and governance, tends to fall into two clusters, separated by time horizon and the anticipated scale of impacts. Since the early development of AI, there has been intensive speculation about the potential emergence of general intelligence far more capable than humans, under terms such as artificial general intelligence (AGI), superintelligence, or the AI singularity. More recently, as actual applications with significant impacts have proliferated, attention has shifted to current and immediately anticipated risks, such as fairness and bias, privacy, autonomy and manipulation, due process, and other concerns. There is, however, a region of mid-term effects between these two clusters that receives much less attention: effects through which AI might transform people and societies by vastly reconfiguring capabilities, information, and behavior, while still remaining (mostly) under human control. These mid-term effects may be of the greatest importance, in terms of both the probability and magnitude of potential societal disruptions and the ability to influence them through anticipatory responses. But in contrast to immediate impacts, for which an observable record is available, and to endpoint or singularity issues, which are amenable to deductive reasoning from stipulated technological characteristics assumed to dominate any societal or political conditions in shaping impacts, this mid-range presents serious challenges to assessment and response planning, with few promising tools or methods available. We propose and begin to develop one approach to assessing these mid-term impacts, focused on the actors whose decisions shape the development and application of AI systems; their interests, capabilities, and information; and the strategic interactions that influence their decisions.