Anthony Downey
NEOCOLONIAL VISIONS: Algorithmic Violence and Unmanned Aerial Systems

“All prediction damages the future […] Developments, tendencies, curves can be projected from the present forward, and these projections can be manipulated.”
— Vilém Flusser1

In the wake of the 2003 invasion of Iraq, the risk posed by improvised explosive devices (IEDs) threatened to inflict significant casualties on the ground troops of the United States (US) and Allied Forces. Triggered remotely and designed to disrupt, incapacitate, maim and kill, these devices proved exceptionally difficult to counter: the odds of successfully anticipating and disrupting the insurgent networks responsible for planting IEDs were deemed low at best. Faced with this prospect, some in the US military suggested that if the war in Iraq was lost, it would be attributable to the tactical effectiveness of these devices.2 This realisation, which openly countenanced defeat, led to a pronounced upswing in financial support for unmanned aerial systems (UAS) and autonomous weapons systems (AWS), alongside other counter-IED technologies. Over a ten-year period, from 2003 onwards, the US Defense Advanced Research Projects Agency (DARPA) funnelled an estimated $75 billion into such projects, albeit with varying degrees of success.3 One of the more enduring initiatives funded by DARPA turned out to be the Autonomous Real-Time Ground Ubiquitous Surveillance Imaging System (ARGUS-IS), which, in 2006, became the first airborne apparatus to allow for the deployment of effective wide-area persistent surveillance systems (WAPSS).

The advent of WAPSS proved to be pivotal in countering IED attacks. Live video, transmitted in real-time directly from ARGUS-IS, enabled surveillance teams to scroll backward through footage to investigate a bomb site and backtrack – from the visual evidence of an explosion – to the prior locations of the suspected bomb makers. It also allowed surveillance teams to fast-forward through footage, post-explosion, to locate the whereabouts of potential insurgents and – theoretically at least – anticipate future attacks. In their panoptic ambitions, the success of apparatuses such as ARGUS-IS and other aerial systems was supported, if not driven, by developments in Artificial Intelligence (AI), machine learning (ML) and computer vision.

1 Flusser, V. (2011). Into the Universe of Technical Images. University of Minnesota Press, p. 159.
2 The timeline of IED usage and the technological response to it is adapted here from Arthur Holland Michel’s astute account of how aerial models of hyper-surveillance were developed in relation to the invasion of Afghanistan in 2001 and, in 2003, the invasion of Iraq. Holland Michel observes that “[s]even months into the war, General John Abizaid, head of the US Army Central Command, wrote a classified memo to Defense Secretary Donald Rumsfeld and the chairman of the Joint Chiefs of Staff, warning of the potentially catastrophic effect of widespread IED use. If the United States and its coalition partners were going to lose the war in Iraq, Abizaid predicted, the IED would be the reason.” See: Holland Michel, A. (2019). Eyes in the Sky: The Secret Rise of the Gorgon Stare and How It Will Watch Over Us All. Houghton Mifflin Harcourt, pp. 3–4.
3 Holland Michel, 2019, p. 13.
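The “reachback” workflow described above can be pictured, in purely schematic terms, as a time-indexed store of geolocated detections that an analyst (or an algorithm) walks backward from the moment of an event. The following sketch is illustrative only: the data structure, function names and coordinates are assumptions made for the example, not a description of how ARGUS-IS or Gorgon Stare actually store or process imagery.

```python
from bisect import bisect_right

# Schematic store of per-frame detections: (timestamp_s, track_id, lat, lon).
# In a real WAPSS archive these would be derived from gigapixel imagery;
# here they are invented values used purely for illustration.
detections = [
    (100.0, "track-7", 33.3120, 44.3610),
    (160.0, "track-7", 33.3142, 44.3655),
    (220.0, "track-7", 33.3169, 44.3701),
    (280.0, "track-7", 33.3191, 44.3748),  # location of a (hypothetical) explosion
]

def backtrack(track_id, event_time, store):
    """Return the prior positions of a tracked object, most recent first,
    for every detection recorded at or before the given event time."""
    times = [t for t, _, _, _ in store]
    cut = bisect_right(times, event_time)
    prior = [d for d in store[:cut] if d[1] == track_id]
    return list(reversed(prior))

# "Scroll backward" from the moment of the explosion to earlier sightings.
for t, tid, lat, lon in backtrack("track-7", 280.0, detections):
    print(f"{tid} seen at t={t:>5.1f}s near ({lat:.4f}, {lon:.4f})")
```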
These technological alliances incrementally synchronised the reasoning behind the pre-emptive military strike – the priority, that is, to strike first in a theatre of war – and, as we will see, the in-built predictive logic that underwrites AI. To suggest as much is to be reminded from the outset that the operational calculus of AI is preoccupied with an overarching goal: prognostication. The concatenation of pre-emption as a military goal and prediction as the end use of AI apparatuses is all the more conspicuous when we consider the degree to which machine learning and advanced computer vision determine the effectiveness of WAPSS and automated targeting apparatuses. Working from the statistical prevalence of past features, patterns and occurrences, machine learning strives to autonomously generalise from input – data in the form of, say, full-motion video images from zones of conflict – in order to predict the future and, thereafter, eradicate pending threats. In this scenario, prediction not only begets pre-emption, it also stimulates computational exemplars of paranoiac projection in the pursuit of extra-terrestrial dominion and terrestrial dominance. The concerns surrounding the logic of pre-emption and the prophetic impulses of AI are further compounded when we examine how the future of counter-terrorism and US military policy in the so-called Middle East is systematically invested in and simultaneously reified through models of algorithmic violence.4

To fully understand how the military rationale of pre-emption is encoded in the operative logic of algorithms (and how, in turn, AI endorses these martial imperatives), we need to consider the evolution of colonial technologies of vision. Although colonisation was first and foremost preoccupied with the exploitation of wealth and labour through occupation, advanced forms of machine learning and computer vision have given rise to neocolonial apparatuses that, while furthering such objectives, are powered by AI-enhanced prototypes of data extraction. The establishment of Iraq, to take but one country, as a testing ground for advanced imaging technologies registers the evolution of colonial paradigms of dominion into imperial methods of remote disciplinary control. The historical ascendancy of the “imaginative command” we once associated with colonialism, alongside the political and economic demands that defined colonisation more generally, has mutated, in sum, into a paradigm of neocolonial “algorithmic command”.5 We can, in this context, draw a direct line between the contemporary application, development and enhancement of western apparatuses of vision – powered by machine learning and advanced prototypes of AI-powered computer vision – and the historical ambition to subjugate and control populations. However, neocolonial projections, underwritten and endorsed by AI, are not merely about monitoring and containing the present; they are, crucially and irrevocably, implicated in the martial and political will to occupy the future.

4 For an insightful account of algorithmic violence as a “force of computation”, see: Bellanova, R. et al. (2021). Toward a Critique of Algorithmic Violence. International Political Sociology, 15(1), p. 123.
5 I borrow the phrase “imaginative command” from Elleke Boehmer’s discussion of Edward Said and others in Colonial and Postcolonial Literature (2005, Oxford University Press).
Artificial Intelligence and Unmanned Aerial Systems

By early 2007, it was reported that the high-resolution sensors employed by ARGUS-IS, when used on an Unmanned Aerial Vehicle (UAV), could reliably distinguish people and objects on the ground through the use of advanced computer-image processing methods. In 2008, following this report (which was widely touted as providing a “‘God’s eye view’ of insurgent networks”), DARPA signed a memorandum of agreement that effectively licensed ARGUS-IS cameras for use in the so-called Gorgon Stare programme, the latter being a cornerstone in the unprecedented expansion of aerial surveillance and remote targeting across the Middle East – primarily in Iraq and Afghanistan – and elsewhere.6

The advances made in WAPSS from 2003 onwards effectively ushered in a new epoch of semi- and, in some cases, fully autonomous surveillance systems.7 They also, however, introduced a problem of scale: who was going to scroll through the unprecedented volume of captured data? Given the sheer enormity of data extracted through aerial surveillance (a “single 10-hour Gorgon Stare mission generates 65 trillion pixels of information”), the deployment of AI-enhanced decision-making in zones of conflict was, in military terms, inevitable.8 The management of risk and threat prediction – through the large-scale analysis of extracted data – would be underpinned by the deployment of machine learning and computer vision, a fact that was arguably already apparent in 2002 when, in the lead-up to the invasion of Iraq (the latter being, lest we forget, the exemplar of a pre-emptive war), George W. Bush announced that “[i]f we wait for threats to fully materialise, we will have waited too long”.9 Implied in Bush’s statement, whether he intended it or not, was the unspoken assumption that counter-terrorism would be necessarily aided by autonomous weapons systems capable of maintaining and supporting the military strategy of anticipatory and preventative self-defence. In the process of extracting data from social, cultural, political and community-based activities and interactions, the forecasting of potential insurgency, from at least 2003 onwards, was focused on neutralising threats in the present and, crucially, on predicting future risks and threats that had yet to materialise.

6 Holland Michel, 2019, pp. 52, 46.
7 Although fully autonomous surveillance systems are common, there remains much by way of debate and ambiguity as to what constitutes a fully autonomous lethal weapon. A United Nations Security Council Report, published on 8 March 2021, observed that a Turkish-made Kargu-2 drone may have acted autonomously in selecting, targeting and possibly killing militia fighters in Libya’s civil war. If this is proven to be the case, it would be the first acknowledged use of a weapons system with AI capability operating autonomously to find, attack and kill humans. See: United Nations. (2021). Letter dated 8 March 2021 from the Panel of Experts on Libya Established pursuant to Resolution 1973 (2011) addressed to the President of the Security Council. Retrieved August 31, 2023, from https://digitallibrary.un.org/record/3905159?ln=en. For a fuller discussion of semi- and fully autonomous lethal weapons, see: Scharre, P. (2019). Army of None: Autonomous Weapons and the Future of War. W. W. Norton and Company.
8 This estimation of the number of pixels involved in a Gorgon Stare mission is quoted from: Holland Michel, 2019, p. 123.
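The “65 trillion pixels” figure quoted above is easier to grasp with a back-of-envelope calculation. The sensor resolution and frame rate below are assumptions chosen purely to illustrate the order of magnitude (a roughly 1.8-gigapixel wide-area sensor sampled about once per second over a ten-hour mission); the actual operating parameters of Gorgon Stare are not public in this detail.

```python
# Illustrative order-of-magnitude check, not official specifications.
pixels_per_frame = 1.8e9      # assumed ~1.8-gigapixel wide-area sensor
frames_per_second = 1         # assumed sampling rate, for illustration only
mission_hours = 10

total_pixels = pixels_per_frame * frames_per_second * mission_hours * 3600
print(f"{total_pixels:.2e} pixels")   # ~6.5e13, i.e. roughly 65 trillion
```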
By 2013, ten years after the invasion of Iraq, the company contracted by DARPA to develop ARGUS-IS announced that their 1.8 gigapixel colour camera and its full field-of-view (FOV) vehicle motion detection had the capacity to generate “[r]eal-time forensic reachback capability” alongside “thumbnails and metadata for ~40,000 targets”.10 Crucially, this “unprecedented situational awareness” was achieved using “onboard, embedded image processing algorithms”.11 The implementation of WAPSS and UAS apparatuses provided an all-seeing, algorithmically augmented surveillance template capable of scrutinising a given area. It also provided a technologically enhanced version of the panoptic technologies associated with colonialism.12 The “God’s eye view” guaranteed that such systems could capture not only the activities of insurgent networks but the matrices of community-based activities, social interactions and day-to-day communal relationships. To confirm aberrant or non-normative behaviours, data had to be scraped from an ever-widening ambit of activity considered to be “normal” or, for the purpose of comparison with insurgent activity, non-threatening. This schematic focus on the nominally normative and non-normative behaviour systems of entire communities is all the more evident when we consider the terminology used to describe how these procedures allowed for an “association matrix” to be formalised into a “social network analysis” and, as noted in the Commander’s Handbook for Attack the Network, the identification of individuals who could be targeted, captured, killed or otherwise terminated.13

9 Office of the Press Secretary. (2002, June 1). President Bush Delivers Graduation Speech at West Point. The White House. Retrieved December 12, 2021, from https://georgewbush-whitehouse.archives.gov/news/releases/2002/06/20020601-3.html
10 See: BAE Systems. (2012). Autonomous Real-Time Ground Ubiquitous Surveillance Imaging System – ARGUS-IS. Retrieved August 31, 2023, from https://www.baesystems.com/en/product/autonomous-realtime-ground-ubiquitous-surveillance-imaging-system-argusis
11 Ibid.
12 I have previously discussed the evolution of the panoptic colonial gaze into the neocolonial realm of algorithmic “perception” in: Downey, A. (2020). There’s Always Someone Looking at You: Performative Research and the Techno-Aesthetics of Drone Surveillance. Heba Y Amin: The General’s Stork (A. Downey, Ed.). Sternberg Press. I have also formulated, in part, my research on algorithmic violence as presented here in: Downey, A. (2022). The Algorithmic Apparatus of Neocolonialism: Counter-Operational Practices and the Future of Aerial Surveillance. Shona Illingworth: Topologies of Air (A. Downey, Ed.). Sternberg Press.

Neocolonial Projections

Throughout the era of colonisation, the apparent exactitude and technological facility involved in the techno-scientific fact of analysing and calculating everyday existence generated an authority associated with the symbolic and allegorical fixing of an imperial reality. The event of establishing reality through technologies of measuring was likewise viewed as evidence of western superiority over non-western subjects: “The geographical engineers believed in their ability to measure the value of the peoples and the cultures they were invading.
This was fundamentally related to a growing western sense that the essence of western superiority lay in the accuracy and measurement of which non-European cultures appeared incapable.”14 The technopolitics of measuring, invested in the positivist logic of scientific validation and mathematical proofs, prefigures the operative logic of algorithmically defined methods of quantification, the core of which is invariably derived from the statistical analysis of patterns in pre-existing data. Drawing on the work of Edward Said, amongst others, Anne Godlewska foregrounds how the colonial extraction of data in the eighteenth and nineteenth centuries was both fundamental to cartographic processes and, to all intents and purposes, a primary method of ensuring the numerical fixing of reality: “The emphasis on number and the instrumentality of knowledge has a strong association with cartography as mapping assigns a position to all places and objects. That position can be expressed numerically.”15 If a place or object can be expressed numerically, it implies a positionality that – situated in a given time and space – can be readily contained and extrapolated to “manage”, regulate, govern and occupy, metaphorically or otherwise, both the present and the future of that place or object. In reference to Napoleon’s expedition to Egypt (1798–1801) and its ambition to map entire regions, it has been further observed that the “cartographic apparatus […] for Napoleon and the generals was a means of visualising and managing the future”.16 Significantly, the management of the future through imperial means sought, as Edward Said cannily observed in Orientalism, to “divide, deploy, schematise, tabulate, index, and record everything in sight (and out of sight)”.17 That which cannot be seen, in the sense implied by Said, relates to how the implicit caesura of ocular-centric vision – the limits of human sight – can be compensated for through the use of cartography and its projection onto a given landscape. It is this method of interrogative projection that effectively underwrites the ambition to schematise and render visible that which cannot be seen. In our algorithmic age, the originary goal of cartography – to render visible that which, to the ocular-centric, anthropoid eye, remained largely invisible – is indelibly encoded into the objectives of artificial intelligence, focused as it is on revealing and cataloguing the present in order to predict the future.

13 Commander’s Handbook for Attack the Network, quoted in: Holland Michel, 2019, pp. 23–24.
14 Godlewska, A. (1994). Napoleon’s Geographers: Imperialists and Soldiers of Modernity. Geography and Empire: Critical Studies in the History of Geography (A. Godlewska & N. Smith, Eds.). Blackwell, p. 40.
15 Godlewska, A. (1995). Map, Text and Image. The Mentality of Enlightened Conquerors: A New Look at the Description de l’Egypte. Transactions of the Institute of British Geographers, 20(1), p. 6. Emphasis added.
16 Engberg-Pedersen, A. (2015). Empire of Chance: The Napoleonic Wars and the Disorder of Things. Harvard University Press, p. 157. Emphasis added. See also: Engberg-Pedersen, A. (2023). Martial Aesthetics: How War Became an Art Form. Stanford University Press. The future-oriented ambitions involved in the copious mapping of France and Europe are also highlighted in Antoine Bousquet’s account of the Carte de l’Empereur, a relief map of Europe on a 1:100,000 scale commissioned by Napoleon. See: Bousquet, A. (2018). The Eye of War: Military Perception from the Telescope to the Drone. University of Minnesota Press, pp. 122–126.
17 Said, E. W. (1991). Orientalism. Penguin Books, p. 86. (Original work published 1978). Emphasis added.
Throughout colonial technologies of vision and present-day neocolonial anxieties concerning the calculation of proximate threats yet to materialise, it is precisely that which remains “out of sight” that continues to fuel the anticipatory, preventative logic of a pre-emptive missile strike. While the historical impact of cartographic, cadastral and aerial photographic methods across the Middle East has been well documented, I want to highlight here how the perpetual and all-encompassing algorithmic gaze not only expands upon colonial antecedents but also substantively extrapolates the all-seeing gaze into the future. In proposing that the technologically devolved “eye” has evolved into an unaccountable algorithmic gaze, I am directly linking colonial technologies of vision with the evolution of WAPSS technologies in order to make a further distinction: the devolution of deliberative, ocular-centric principles of seeing and thinking to the recursive realm of algorithms reveals the calculated rendering of subjects in terms of their disposability or replaceability, the latter being a key feature of colonial discourse and practice. This process of devolving decision-making relating to questions of life and death discloses a causal, if not fatal, link between colonial technologies of representation and the opaque regime of unaccountable neocolonial apparatuses that include, but are not limited to, ventures such as Project Maven.

Project Maven and the Principle of Pre-emption

In a declassified memorandum from the US Deputy Secretary of Defense, dated 26 April 2017, it was stated that the Department of Defense (DoD) “must integrate artificial intelligence and machine learning more effectively across operations to maintain advantages over increasingly capable adversaries and competitors”.18 In late 2017, Project Maven was delivered to ten intelligence units working on missions in Syria, Iraq and undisclosed African countries.19 The launch of Project Maven, also known as the Algorithmic Warfare Cross-Functional Team (AWCFT), effectively heralded an “automated analysis system capable of recognising targets and discovering suspicious activities”.20 Given the sensitivities surrounding the autonomous, machinic identification of subjects (often viewed as potential threats that can be summarily eliminated), it is unsurprising that the US Air Force is unwilling to share exact details of how machine-learning algorithms – once deployed in advanced computer vision models – are trained to support targeting apparatuses and other categories of threat prognosis. However, echoing as it does the colonial compulsion to “record everything in sight (and out of sight)”, the process of “recognising targets and discovering suspicious activities” is inevitably contingent on the extraction of data (input) and the algorithmically enhanced prediction of future events in the name of not only mitigating risk but, more controversially, eliminating it before it materialises.

18 Deputy Secretary of Defense. (2017, April 26). Establishment of an Algorithmic Warfare Cross-Functional Team (Project Maven) [Memorandum]. Retrieved May 22, 2021, from https://dodcio.defense.gov/Portals/0/Documents/Project%20Maven%20DSD%20Memo%2020170425.pdf
One year after the 26 April 2017 DoD memorandum was published, it was announced that the program overseeing Project Maven employed an “AI-based” algorithm for the purpose of autonomous target recognition and identification.21 This is in keeping with the stated rationale of Project Maven, which, according to the United States’ DoD, includes “developing and integrating computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that DoD collects every day in support of counterinsurgency and counterterrorism operations”.22 In a subsequent report, published in 2019, successive advances in the project – which was by then supported by engineers based at Google – were understood to entail the application of software that had been “trained on thousands of hours of smaller low flying drone cam footage depicting 38 strategically relevant objects from various angles and in various lighting conditions” to full-motion video data collated from conflict zones.23 Although we do not know the exact constitution of these 38 objects, the author of the report details how – in reference to their use in zones of conflict – the objects depicted in such footage were “labelled as [to] what we know the objects to be, such as a traveling car, a weapon, or a person”.24 In addition, the algorithms involved in these calculative predictions of as-yet-unseen objects – that is, potential threats – would have been trained on data sets of digital images that had been previously procured from apparent instances of insurgency – the planting of IEDs, for example – and the day-to-day social networks of people and communities more broadly. In the composition of training sets, it is known that data (full-motion video images) is pre-labelled by human operators in supervised and semi-supervised structures of machine learning.25 Through collating intelligence based on preconceived notions of threat, data labelling generates categorical bias: certain classes of images are significantly overrepresented or underrepresented compared to others, ensuring that any bias in the data-labelling or input stage will be algorithmically amplified in the output stage of prediction.26 This process has given rise to a “data-driven killing apparatus” based on extracted material that is rendered quantifiable – and, thereafter, actionable – through human-defined categories that, in the case of war, are often predefined by the spectre of threat.27

19 Holland Michel, 2019, p. 135.
20 Ibid. Emphasis added.
21 Ibid., pp. 135–136. Covering as it does the grounds for the “application of lethal or non-lethal, kinetic or non-kinetic, force by autonomous or semi-autonomous weapon systems”, a recent Department of Defense directive effectively revises the use of AI in aerial weapons systems to authorise, pending the approval of a special military panel, the autonomous use of lethal force. See: Office of the Under Secretary of Defense for Policy. (2023). DoD Directive 3000.09: Autonomy in Weapon Systems. Retrieved April 14, 2023, from https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf
22 Pellerin, C. (2017, October 27). Project Maven Industry Day Pursues Artificial Intelligence for DoD Challenges. US Department of Defense. Retrieved July 17, 2021, from https://www.defense.gov/News/News-Stories/Article/Article/1356172/project-maven-industry-day-pursues-artificial-intelligence-for-dod-challenges/
23 See: Roth, M. (2019, January 9). Military Applications of Machine Vision – Current Innovations. Emerj. Retrieved February 12, 2020, from https://emerj.com/ai-sector-overviews/military-applications-of-machine-vision-current-innovations/
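To make the preceding point about categorical bias concrete, the toy example below shows how an imbalanced set of human-applied labels skews what a model learns before any sophisticated architecture is involved: a decision rule that simply tracks the class frequencies of such data will systematically favour, or exclusively predict, the overrepresented categories. The label names and counts are invented for illustration and bear no relation to Project Maven’s actual training data.

```python
from collections import Counter

# Invented label counts for a hypothetical set of labelled image "chips".
# The imbalance is the point: some categories dominate the labelled input.
training_labels = Counter({
    "vehicle": 9_000,
    "person": 800,
    "weapon": 150,
    "digging_activity": 50,
})

total = sum(training_labels.values())
priors = {label: count / total for label, count in training_labels.items()}
for label, p in sorted(priors.items(), key=lambda kv: -kv[1]):
    print(f"{label:>17}: {p:.1%} of the labelled input")

# The crudest possible "model" -- always predict the most frequent class --
# never outputs the rare categories at all: bias in the labelling (input)
# stage reappears, amplified, in the prediction (output) stage.
majority_class = training_labels.most_common(1)[0][0]
print("predicted class for every new image:", majority_class)
```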
Although routinely presented as an objective “view from nowhere”, a supposedly self-referential sphere of unbiased knowledge production that is empirically objective, AI-powered systems of unmanned aerial surveillance and autonomous weapons produce epistemic structures to justify the event of actual violence. The algorithmic augury of possible threat can, in short, summon forth quantifiable threat. For Louise Amoore, in her insightful analysis of how algorithms operate in relation to the “crowded data environment of drone images”, the “defining ethical problem of the algorithm concerns not primarily the power to see, to collect, or to survey a vast data landscape, but the power to perceive and distil something for action”.28 The prediction of apparently quantifiable threat, based on patterns of previous insurgency, gives momentum to actionable directives as to how risk should be eliminated. Amoore continues: “As an aperture instrument, the algorithm’s orientation to action has discarded much of the material to which it has been exposed. At the point of the aperture, the vast multiplicity of video data is narrowed to produce a single output on the object. Within this data material resides the capacity for the algorithm to recognise, or to fail to recognise, something or someone as a target of interest.”29 Through this algorithmic “aperture”, prediction leads inexorably to action, pre-emptive or otherwise. Prediction, however, is just that: a premonition of a potential event that is but one possible outcome amongst countless others. In this sense, prediction begets violence inasmuch as it terminates or usurps imminent potential. In the form of projections into the future, algorithmic extrapolations – to paraphrase the epigraph to this essay – can and do annul the future.

In the case of drone footage used to train the AI systems in use in Project Maven, video from conflict zones was uploaded to an artificial neural network in the form of training data (input) for the purpose of producing efficient patterns of object identification and prediction (output).30 This process took place on the ground, after the footage had been captured, so that the neural network in question – built using Google’s TensorFlow Application Programming Interface (API) – could be trained and subsequently deployed in WAPSS and other unmanned aerial systems.31

24 Ibid.
25 Supervised learning involves training a machine learning system using labelled data. In unsupervised learning, no labels or target outputs are predefined during training; the learning algorithm is instead encouraged to “learn” patterns, structures or relationships without explicit guidance.
26 For an overview of how “algorithmic amplification” operates, see: DiResta, R. (2018, October 1). Computational Propaganda: Public Relations in a High-Tech Age. The Yale Review. Retrieved January 22, 2019, from https://yalereview.org/article/computational-propaganda
27 Weber, J. (2016). Keep Adding. Kill Lists, Drone Warfare and the Politics of Databases. Environment and Planning D: Society and Space, 34(1), p. 108.
28 Amoore, L. (2020). Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Duke University Press, p. 16. See also: Amoore, L. (2009). Algorithmic War: Everyday Geographies of the War on Terror. Antipode: A Radical Journal of Geography, 41(1), pp. 49–69.
29 Amoore, 2020, p. 17.
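Without claiming any knowledge of Project Maven’s actual models, the sketch below indicates what a supervised training pipeline of the kind just described looks like when built with the open-source TensorFlow/Keras API named in the source: a small convolutional network fitted to labelled image chips. The data here is random placeholder data, the 38-class output layer simply echoes the “38 strategically relevant objects” mentioned in the report, and the architecture is an assumption for illustration only.

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 38  # echoes the reported "38 strategically relevant objects"; illustrative only

# Placeholder stand-ins for labelled video-frame "chips": random pixels and labels.
frames = np.random.rand(320, 64, 64, 3).astype("float32")
labels = np.random.randint(0, NUM_CLASSES, size=(320,))

# A small convolutional network: an assumed architecture, not Project Maven's.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training on labelled input (supervised learning); once trained, the model can
# only ever assign new frames to one of the categories defined at this stage.
model.fit(frames, labels, epochs=2, batch_size=32, verbose=0)
predicted_class = model.predict(frames[:1], verbose=0).argmax(axis=-1)
print("predicted category index for the first frame:", int(predicted_class[0]))
```

The point of the sketch is structural rather than technical: whatever the specific architecture, the trained model re-projects onto new footage only those categories, and their relative frequencies, that were present in its labelled input.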
This encompasses, as it did for ARGUS-IS since at least 2013, embedded image processing algorithms designed to foresee the prevalence of future objects of interest based on past instances and relative occurrences of such objects. Thereafter, the modus operandi of pre-emption, in keeping with the military logic of the pre-emptive strike, is concerned with extinguishing threats that are “not-yet-taking-place”.32 Although we enter here into a speculative domain, in which events are not-yet-taking-place, the virtual manifestation of perceived threat – through the algorithmic prediction of threat – can justify the summary sanctioning of a pre-emptive drone strike. Algorithms can, through their convolutions, actualise threat. The epistemologically sanctioned realm of algorithmic prediction – the regime of epistemic violence – engenders, in these environments, actual violence. For all the apparent validity of AI systems, as deployed in WAPSS, we need to consider here the degree to which “algorithms are political in the sense that they help to make the world appear in certain ways rather than others. Speaking of algorithmic politics in this sense, then, refers to the idea that realities are never given but brought into being and actualised in and through algorithmic systems.”33 Following Taina Bucher’s insights, alongside those of Amoore and others, we need to acknowledge the degree to which algorithms, such as those deployed in machine learning and computer vision, are explicitly seeking out and summoning forth patterns of behaviour to justify pre-emptive missile strikes. Through the identification of a given object, an algorithmic apparatus effectively renders visible that which remains, on the whole, invisible to ocular-centric standards of human sight. Throughout this operative calculus, the apparently oracle-like algorithm seeks to guarantee that the aporetic – that which is characterised by the irresolvable, undetermined and unidentified – is rendered not only knowable but, crucially, detectable and destroyable in the future.

Calculating Futures

Introducing as it did the neocolonial vision of unending and perpetual violence, the so-called “war on terror” further established a dualism of contending forces that, in its apparently all-encompassing urgency and implied dangers, foreshadowed an entire region in terms of both atavistic and pending threat.

30 For a fuller discussion of how Google managed the data from the Pentagon, see: Metz, C. (2021). Genius Makers: The Mavericks Who Brought AI to Google, Facebook and the World. Penguin Books, pp. 246–250.
31 TensorFlow is a popular open-source machine learning framework that provides tools and libraries for building and training various types of neural networks, including convolutional neural networks (CNNs). CNNs are particularly well suited to tasks involving image and video analysis.
32 Massumi, B. (2015). Ontopower: War, Powers, and the State of Perception. Duke University Press, p. 235. Emphasis added.
33 Bucher, T. (2018). If…Then: Algorithmic Power and Politics. Oxford University Press, p. 3. Emphasis added.
To counter such threats, the evolution of AI and autonomous systems of aerial surveillance and targeting was quantified through the spectres of this purportedly unending phantasm of violence. The direct link between autonomous AI-augmented systems of identification – calculus – and the eradication of threat – violence – was therefore in evidence from the very inception of Project Maven. Promoting a field of vision and action that triggers a response, pre-emptive or otherwise, based on the apparently perpetual and irreconcilable presence of terror and threat, the foundational logic of Project Maven was deterministic rather than tentative; pragmatic rather than exploratory. It is a logic that advocates, through predictive analysis, a heuristic regime in which the algorithmic “perception” of threat is enough to warrant pre-emptive action and eventual destruction. Observing the function of Project Maven in 2021, a spokesperson for the United States’ DoD noted that the technology in use effectively “enhances the performance of the human-machine team by fusing intelligence and operations through AI/ML [machine learning] and augmented reality technology. Project Maven seeks to reduce the time required for decision making to a fraction of the time needed without AI/ML.”34 When earlier defending their involvement with the US military, a spokesperson for Google noted that “[t]his specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data”, before adding that “[t]he technology flags images for human review, and is for non-offensive uses only”.35 In light of how algorithmic “apertures”, with an apparently inescapable logic, sanction action and have an all-too-real impact on people, communities and environments, this comment is, at best, disingenuous.

In 2018, following the resignation of several employees and widespread condemnation, Google announced that it would let its contract for Project Maven expire when it came to an end in March 2019. The furore surrounding Google’s involvement in Project Maven, and its subsequent withdrawal from it, has arguably overshadowed the sobering fact that the venture, steeped as it is in predictive analytics, did not end there. In 2019, it was reported that the privately owned company Palantir had taken it over and, in an allusion to the eponymous 1982 sci-fi film, changed its name to TRON.36 Although the futuristic terminology at work in the company’s choice of nomenclature – the name Palantir being an allusion to a palantír, a crystal ball of sorts featured in J. R. R. Tolkien’s epic tale The Lord of the Rings (1954–55), and TRON being a reference to the eponymous film known for its technological prescience – is somewhat circumstantial, it is precisely the ambition to produce more effective practices of predictive analysis that remains core to the company’s not inconsiderable investment in AI apparatuses for the purpose of waging kinetic and non-kinetic warfare.

34 See: Brewster, T. (2021, September 8). Project Maven: Startups Backed By Google, Peter Thiel, Eric Schmidt And James Murdoch Are Building AI And Facial Recognition Surveillance Tools for The Pentagon. Forbes. Retrieved September 9, 2021, from https://www.forbes.com/sites/thomasbrewster/2021/09/08/project-maven-startups-backed-by-google-peter-thiel-eric-schmidt-and-james-murdoch-build-ai-and-facial-recognition-surveillance-for-the-defense-department/
35 Conger, K. & Cameron, D. (2018, March 6). Google Is Helping the Pentagon Build AI for Drones. Gizmodo. Retrieved April 4, 2018, from https://gizmodo.com/google-is-helping-the-pentagon-build-ai-for-drones-1823464533
36 Peterson, B. (2019, December 10). Palantir Grabbed Project Maven Defense Contract after Google Left the Program: Sources. Business Insider. Retrieved September 2020, from https://www.businessinsider.com/palantir-took-over-from-google-on-project-maven-2019-12?r=US&IR=T
Although there are entries on the Palantir website outlining the company’s work with the US Army, there is, at the time of writing, no direct reference to Project Maven/TRON, although its enduring presence can be found in the stated aims that accompany the martial implications of deploying autonomous technologies: “Palantir offers solutions to harness the power of […] hardware solutions, reduce system complexity, and provide improved human-machine interfaces […] Palantir’s solutions can reduce cognitive burden, protect, and connect the warfighter.”37 Elsewhere, and in tune with the stated military deployment of UAS and WAPSS, we learn that “[n]ew aviation modernisation efforts extend the reach of Army intelligence, manpower, and equipment to dynamically deter the threat at extended range. At Palantir, we deploy AI/ML-enabled solutions onto airborne platforms so that users can see farther, generate insights faster and react at the speed of relevance.”38 As to what reacting “at the speed of relevance” means, we can only surmise that it has to do with the pre-emptive martial logic of autonomously anticipating and eradicating threat before it becomes manifest. Palantir’s stated objective to produce projective AI solutions that enable military planners to “see farther”, autonomously or otherwise, is further evidence of its reliance on the inferential, or inductive, qualities of artificial intelligence.39 In April 2023, the company released a video on YouTube that showcased an “Artificial Intelligence Platform for Defense” (AIP).40

37 Retrieved April 2, 2023, from https://www.palantir.com/offerings/defense/army/. Emphasis added.
38 Retrieved April 2, 2023, from https://www.palantir.com/offerings/defense/army/#airborne. Emphasis added.
39 I am drawing here on a popular conceptualisation of induction algorithms, which, needless to say, are highly complex and contingent on multiple operational features. For an accessible account of algorithmic induction, see: Domingos, P. (2015). The Master Algorithm: How the Quest for the Ultimate Learning Machine will Remake our World. Penguin Books, pp. 57–91.
In a post-ChatGPT era – ChatGPT being a technology reliant on Large Language Models (LLMs) and therefore inherently grounded in the predictive functioning of algorithms – the video outlines how AIP “unleashes the power of large language models and cutting-edge AI for defence and military organisations”.41 Putting to one side the degree to which the LLMs employed in such technologies are prone to so-called hallucinations (or, more correctly, outright examples of erroneous projection), the fact that such algorithms produce predictions founded upon statistical and probabilistic rationalisations of input data remains critical for their future deployment in warfare.42 The reality that algorithmic predictions will unavoidably conclude in death and injury recalls, in part, Martin Libicki’s comment that “visibility equals death”.43 However, we could extend this insight here to highlight how the algorithmic encoding of politically defined military goals – with all their pre-emptive bias, avowed ruthlessness and unmitigated opportunism – also equals death, but with an important addendum: algorithmic rationalisations of probability routinely herald a precarious dimension where death is both yet-to-come and simultaneously ever-present. If the machinic perception of threat is neither beyond statistical estimation – the coercions, that is, of algorithmic calculation – nor, crucially, beyond the range of UAVs, then the predictive function of AI-enhanced weapons systems adumbrates a computationally defined radius of death.44 This is not, finally, about the deferral of death as such; rather, it is about the deference of life-and-death decisions to a mechanical calculus of probability that is ultimately beholden to martial devices of pre-emption, political expediencies and the neocolonial logic of expendability.

40 Retrieved April 26, 2023, from https://www.youtube.com/watch?v=XEM5qz__HOU
41 Ibid.
42 For an extended discussion of the implications of AI-induced hallucinations in UAS technologies, see: Downey, A. (forthcoming, 2024). The Future of Death: Algorithmic Design, Predictive Analysis, and Drone Warfare. War and Aesthetics: Art, Technology, and the Futures of Warfare (J. Bjering, A. Engberg-Pedersen, S. Gade & C. Strandmose Toft, Eds.). MIT Press.
43 See Martin C. Libicki, quoted in: Bousquet, 2018, p. 3.
44 I am alluding here to the Latin root of the term “adumbrate” – namely, umbra, or shadow – and the manner in which it describes a series of activities that include giving an outline or a form to an object through foreshadowing or, more ominously, casting a shadow upon it.

This essay draws upon recently published research and a series of conference papers that include The Future of Death: Algorithmic Anxieties and Programmable Destruction (The War Seminars #3 – War and Aesthetics, University of Southern Denmark, September 24, 2021); Algorithmic Command: Digital Archives, Data Sets, and Neocolonial Futures (Resistant Archives, University of Münster, October 22, 2022); and Neocolonial Visions: Algorithmic Anxieties and Epistemic Violence (Shifting Scales, Aksioma (Ljubljana), March 3, 2023). I am grateful to Anders Engberg-Pedersen, Ursula Frohne and Janez Fakin Janša for their invitations to speak at these conferences and for their feedback on various papers. The research will be published in full in Decolonising Vision: Algorithmic Anxieties and the Future of Warfare (forthcoming, MIT Press, 2024).
Anthony Downey
NEOCOLONIAL VISIONS: ALGORITHMIC VIOLENCE AND UNMANNED AERIAL SYSTEMS
PostScriptUM #47
Series edited by Janez Fakin Janša
Electronic edition

Publisher: Aksioma – Institute for Contemporary Art, Ljubljana
www.aksioma.org | aksioma@aksioma.org
Represented by: Marcela Okretič
Proofreading: Miha Šuštar
Design: Luka Umek
Layout: Sonja Grdina
Cover image: Courtesy of ATPD 2023

(c) Aksioma | All text and image rights reserved by the author | Ljubljana 2023

Supported by the Ministry of Culture of the Republic of Slovenia and the Municipality of Ljubljana

Published in the framework of the programme Tactics & Practice #14: Scale
aksioma.org/scale

ISBN 978-961-7173-35-2
0€

PostScriptUM #47, Ljubljana 2023