Abstract
Immersed in the networks of artificial intelligences that are constantly learning from each other, the subject today is being configured by the automated architecture of a computational sovereignty (Bratton 2015). All levels of decision-making are harnessed in given sets of probabilities where the individuality of the subject is broken into endlessly divisible digits. These are specifically re-assembled at checkpoints (Deleuze in Negotiations: 1972–1990, Columbia University Press, New York, 1995), in ever growing actions of predictive data (Cheney-Lippold in We are data and the making of our digital selves, NYU Press, New York, 2017), where consciousness is replaced by mindless computations (Daston in “The rule of rules”, lecture Wissenschaftskolleg Berlin, November 21st, 2010). As a result of the automation of cognition, the subject has thus become ultimately deprived of the transcendental tool of reason. This article discusses the consequences of this crisis of conscious cognition at the hands of machines by asking whether the servo-mechanic model of technology can be overturned to expose the alien subject of artificial intelligence as a mode of thinking originating at, but also beyond, the transcendental schema of the self-determining subject. As much as the socio-affective qualities of the user have become the primary sources of capital abstraction, value quantification and governmental control, so has technology, as the means of abstraction, itself changed nature. This article will suggest that the cybernetic network of communication has not only absorbed physical and cognitive labour into its circuits of reproduction, but is, more importantly, learning from human culture, through the data analysis of behaviours, the contextual use of content and the sourcing of knowledge. The theorisation of machine learning as involving a process of thinking will be taken here as a fundamental inspiration to argue for the expansion of an alien space of reasoning, envisioning the possibility of machine thinking against the servo-mechanic model of cybernetics.
While Gilles Deleuze argued that the informational language of control had transformed the individuality of the human subject into endlessly divisible digits that are constantly re-assembled at checkpoints (Deleuze 1995), the current reconfiguration of the subject in terms of data (Cheney-Lippold 2017) rather exposes human consciousness to computational forms of mindless decision-making (Daston 2010).
Immersed in the networks of artificial intelligences that are constantly learning from each other, the human subject is said to be now determined by computational sovereignty (Bratton 2015): an automated techno-infrastructure that harnesses all levels of decision to given sets of probabilities. The image of the automated subject is thus grounded in the data determinations of decision today, where mindless computation exposes the condition of the human as being deprived of the transcendental tool of reason. However, if the automation of decision-making has debunked the transcendental model of the self-determination of the subject, one should also ask what the consequences of this relinquishing of conscious cognition are in light of what Denise Ferreira da Silva (2007) calls the analytic and historical production of the global idea of race, namely colonialism, and the scientific and historical analytics of a world already known to machines as the formation of a subject that is automated. According to Ferreira da Silva, however, the analytics of raciality (2007, p. 70), as a scientific project that pursues the “truth of man” (p. 95), takes the transcendental tool of reason as the conceptual poiesis that reproduces the transparent I, whose compression of the world is however at once re-constituted by the world outside man’s schema of representation. In other words, and in the context of the global idea of race operating through the transcendental tool of reason, to what extent is it possible to reverse the argument about the crisis of conscious cognition of the human (and its transcendental tool of reason) at the hands of machines and rather ask: if there is no subject outside of technology, is it also true to say that there is no technology without a subject? If so, what would it mean for technology to have a subject? In other words, can the servo-mechanic model of technology be overturned to expose the alien subject of artificial intelligence as a mode of thinking originating at, but also beyond, the transcendental schema of the self-determining subject?
One can suggest that the subject of technology cannot be fully explained according to previous forms of technological orders that have been said largely to correspond to the regimes of “machinic enslavement” and “social subjection” (Deleuze and Guattari 1987, pp. 456–458). In their debunking of the modalities of power relying on the representational model of the self (i.e. the arborescent model of self-reproduction), Deleuze and Guattari discuss the complex stratification of the subject in terms of dynamic processes of individuation, co-constituted by relations between internal, external and associated milieus, involving how socio-technical assemblages rather generate a subject in terms of captures across and between apparatuses that overlap, mix and destratify given unity altogether. For instance, the regime of “machinic enslavement” includes industrial mechanisms of reproduction that engulf the subject into automated sensori-motor behaviours, dismembering the unity of the individual into part-objects belonging to the mega-machine of industrial capital. The neurotic mono-rhythms of the assembly line imparted a split between doing and thinking, establishing a standard expenditure of energy, measured through the equivalence of labour, time and money (Deleuze 1989).
Within the invisible order of cybernetic communication and systems of information exchange, the subject is the result of information compression activated by the binary logic of zeroes and ones. More than executing instructions, the cybernetic machine demands that functions are supplied with cognitive and creative solutions in order to acquire a knowing-how, a practical mode of thinking driven by learning. It is only at the end of the recursive cycle that the subject returns as an experiment in steering knowledge beyond what is already known. As the subject becomes looped within relational and non-relational databases, hiding in the background of everyday haptic interfaces, the illusion of a self-determining consciousness remains nonetheless granted by the emerging role of the user. No longer a simple operator of the machine, the user rather assumes the role of orchestrator or choreographer of the social world with which she decides to be linked. The regime of social subjection replaces the cold rhythms of mechanical reproduction with the supple polyrhythms of relational communication, absorbing the cognitive and creative terminal subject immersed in decentralised networks of instantaneous, parallel and participatory actions. However, as much as the socio-affective qualities of the user have become the primary sources of capital abstraction, value quantification and governmental control, so has technology, as the means of abstraction, itself changed nature (Parisi 2014). The cybernetic network of communication has not only absorbed physical and cognitive labour into its circuits of reproduction, but is, more importantly, learning from human culture, through the data analysis of behaviours, the contextual use of content and the sourcing of knowledge. In this article, I will attempt to argue that the subject is neither solely an enslaved component of machines nor its deluded interactive user. Instead, the subject is being reconfigured from the standpoint of a learning machine, which has replaced the primary phases of information accumulation with the generative layers of machine-to-machine communication, unfolding a new form of technocultural production based on artificial intelligence. If the machine acted to grant knowledge accumulation by means of data retrieval and transmission, today’s automated systems of knowledge rather rely on modes of learning to learn, and on particular forms of predictive patterning. Within this new context, some modes of machine learning merely act as amplifiers of social knowledge (i.e. echo chambers that reproduce class, gender and race biases), while others activate modes of predictive learning that engage with hypothetical reasoning, aiming to determine complex decision-making in those cases when information is missing (for instance, predictive systems concerning security, health diagnoses and financial risks) (Inoue et al. 2013). It is within these forms of predictive patterning that the subject is reconfigured in terms of the means that define modes of thought that are neither given nor constructed, neither internally self-posed nor derived from external use. The subject of technology relies on a dynamic form of automation, involving not the automation of the given intuition of time (achieving faster results), but the re-patterning of the space of thinking (i.e. the expansion of a search space that retro-ductively constructs the time of thinking as determined by the unknown).
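The reference to predictive patterning above can be made concrete with a deliberately minimal sketch: a toy nearest-neighbour classifier, written in Python, that infers a missing label from previously observed cases. The data, the labels and the distance rule are hypothetical stand-ins and not a model of the security, diagnostic or financial systems cited above.

# A toy illustration of predictive patterning: inferring a missing label
# from previously observed cases. Data and labels are hypothetical.

def nearest_neighbour(known, query):
    """Return the label of the known case closest to the query case."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(known, key=lambda item: distance(item[0], query))
    return closest[1]

# Cases the system has already seen: (features, label) pairs.
observed = [((0.1, 0.2), "low risk"), ((0.9, 0.8), "high risk"),
            ((0.2, 0.1), "low risk"), ((0.8, 0.9), "high risk")]

# A new case arrives with no label attached: the prediction fills in what is missing.
print(nearest_neighbour(observed, (0.85, 0.75)))  # prints "high risk"

The point of the sketch is only that the output is not retrieved from a store of accumulated facts but generated from a learned pattern, which is the minimal sense in which decision is determined where information is missing.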
Computation involves a performative compression of random quantities (or incomputables) entering the general infrastructure of algorithmic patterning. It is here that machine learning becomes the subjective aim of artificial reasoning insofar as the computational manner of generating learning comes to constitute the machine’s structure of being as denaturalising the very image of automated thought.
It may be useful at this point to state that this article is concerned not with the crisis of the self-positing subject but with the concretisation of the subject of technology: namely with the mode of existence of a techno-logic stemming from the becoming subject of the medium of thought, that is the realisation that under its computational configuration, the instrument of thinking has itself become a thought. From this standpoint, one may also gain a renewed insight about technology away from the servo-mechanic model of cybernetics that determines the default image of technology as either an instrument in the hands of man or as entrapping man’s freedom into its mindless procedures. Within the current image of computational technology, the servo-mechanic model returns to conceive of human subjects as “informational persons” caught within the web of constant updates, check-ins and all forms of invisible verifications that animate our phones and computers. Our human behaviours are therefore objects to be constantly analysed in the form of transcripts, scores, records, deeds and all sorts of meta-data. This article aims to counter the servo-mechanic model of technology by suspending the assumption that machines are always already made in the image of man, and as such determine what the human subject is and does. It is true that our everyday use of algorithmic machines gives us the illusion of being monitored, our existence being tracked, the steps we take, the friends we have, the places we have been to being recorded alongside what we search for, buy and sell, what we like and what we want to do. And yet, what remains invisible in this ordinary usability is precisely the realisation that the instrument of the self-determining human subject has profoundly challenged the servo-mechanic model through which it was conceived. It is from this perspective that this article takes the computational problem of learning in machines as being central to the re-articulation of machine thinking as an opportunity to defy the colonial reproduction of the servo-mechanic model of technology imparting the view of the crisis of human reasoning and conscious decision-making at the hands of mindless computation. Since its inception, the computational modelling of AIs has focused on how a machine may learn to think, that is how elements of surprise in its programmed—or mindless—behaviour could attest to a nuanced space of reasoning. This preoccupation with thinking as a mode of machine learning will be taken here as a fundamental inspiration to argue that the expansion of an alien space of reasoning with the complex logic of AI points to the possibility of re-articulating the political dimension of a technological subject away from the servo-mechanic image of cybernetics.
With the automation of vision, for instance, the machine-to-machine seeing apparatus is not simply an extension of the disciplinary regime of transparency and clarity. Instead, the subject of automated control feeds an array of increasingly powerful artificially intelligent systems, some trading information that identifies people, recognises places and objects, habits and preferences, assigns race, class, and gender behaviours, and tracks economic statuses as well as one’s desires, moods, and life choices. It is as if the disciplinary version of the human subject is continuously and variably dramatised through an infinite plethora of scripts poised towards what we have already seen, experienced, felt: habituations, familiarities, recurrent compulsive pleasures. But underneath the eternal returns of these dramatisations that re-impart the servo-mechanic image of technology made in the image of man there is another space for an alien subject, lurking beneath the smooth surface of performativity.
If machines talk to other machines, if machines see with other machines, if they are correlated within a space of communication that remains opaque to human vision, it is because the question of the digital subject involves the ingression of computation into the realm of thinking, breaking away from the self-determination of human consciousness. The critical view of the digital subject cannot overlook the challenge that the question of machine thinking poses to the self-positing model of the subject today. This is not simply a question of technology, but perhaps, and more importantly, a question of techno-politics that recuperates the ontological dimension of the subject from the standpoint of the medium with the perspective of defying the servo-mechanic model of the medium. If the human subject has been absorbed in the digital matrix of algorithmic communication, does it mean that its political qualities—namely the critical capacities to envision aims outside of the mere use of instruments—have also been neutralised by the networked image of the “informatics of domination” (Haraway 1991)? As AI systems become increasingly imbued with the question of thinking, they also reveal the alien—that is the denaturalised—condition of the subject. From this standpoint, this article asks: is there a political dimension to this new condition? What are the political possibilities of this alien subject?
To answer these questions I will discuss current debates that shift broadly between two main positions. On the one hand, a negative critique of automation declares the condition of the subject as enslaved to network capital, or what the collective Tiqqun called “the cybernetic hypothesis” (2001). On the other, an affirmative trust in full automation insists on the re-purposing or re-aiming of the means of enslavement so as to liberate the self-determination of the subject and re-use technology for human emancipation. The cybernetic hypothesis also sides with debates about the crisis of knowledge, the condition in which truths and laws have been replaced by the automated induction of data for which subjects are just correlating aggregates. The crisis of truth directly implies the failure of deductive reason, namely Western metaphysics, and its colonising project of emancipation based on slavery—a prototype of automation that comes to include the servo-mechanisms of the industrial assembly line and of cognitive automation. The crisis of Western metaphysics indeed is also a consequence of the automation of logic, exposing the limits of deductive reasoning, that is, of totalising or complete systems of reason. With the instrumentalisation of logic that actuates with computational reasoning, ends no longer match the means of programming. Instead, it can be argued that since the invention of the Turing Machine, and early experiments in the automation of deductive logic, the self-determining ends of the conscious subject became absorbed by the instrumental mechanisation of thought. However, this article will suggest that instead of extending the horizon of Western metaphysics, the automation of reason marked the origination of the alien logic of machines, an alien mode of thought stemming from within the instrument in order to re-loop the servo-mechanic model of machines against the self-determining consciousness of man. I will conclude by suggesting that this de-naturalised machine of reasoning can contribute to the theorisation of an alien hypothesis: the re-articulation of a political dimension of a subject with and through technology.
This re-articulation, however, is not intended to argue for a technological extension to the conceptual self-determination of the unknown. The alien subject of artificial intelligence offers no prosthetic solution to the dis-integration of the rational subject and, this article suggests, it can be taken as a starting point to question the impasse in the current critique of information technologies, cybernetics and intelligent computational systems. On the one hand, I will discuss the so-called Cybernetic Hypothesis, arguing that the automation of the subject mainly coincides with an extension of the ideological instrumentalisation of decision-making that has bound the entire sphere of the social to the search engine of capital. On the other hand, this discussion will be set in contrast with what can be called the Accelerationist Hypothesis, according to which techno-sciences and informational means can be re-purposed towards new political ends enabling the liberation of the social from the ideological self-determination of the subject. In what follows, I am going to discuss these two propositions in turn. This discussion addresses the limits of this impasse and argues for a renewed engagement with techno-logics beyond the assumption that intelligent machines are either an extension of a transcendental subject or means for the emancipatory formation of social collectivities. It will be suggested that a closer reading of the automation of logical reasoning in machines profoundly challenges the transcendental dimension of the self-determining subject, which can be discussed in terms of a dynamic relation between truth and proof. The internal critique of the information sciences will be taken as a starting point to argue for a pragmatist view of techno-logic, which allows for a theorisation of the medium of thought in terms of an instrumental transformation of reason. This argument will be gathered in the last section, the Alien Hypothesis, where an abductive, constructionist, experimental envisioning of the working of logic will be discussed as part of a speculative image of the alien subject of AI. It is only through a renewed engagement with the complexity of techno-logic that it may be possible to move beyond the colonial implications of current discussions about the crisis of the transcendental subject and of human reason perpetuated in the extension of the servo-mechanic image of technology. The alien hypothesis is not meant to offer a messianic view of automated reason as that which will re-constitute the transcendental subject in machines, but rather to extend post-Kantian efforts to theorise the future of thought as a speculative image of the human, which was already originating from the crisis of the deductive reason of Man by means of machines.
The cybernetic hypothesis
It has been argued that the digital subject has seen the consistency of its organic unity shattered into bytes and bits: infinitely aggregating parts that give us the illusion of decisional integrity by the click. This remoteness in unity is a consequence of the disappearance of causality, a diffused programming of purposeless purposiveness: the paradoxical condition in which a subject’s aims have no cause. As Rouvroy writes: “indifferent to the causes of phenomena, automation functions on a purely statistical observation of correlations between data captured in an absolutely non-selective manner in a variety of heterogeneous contexts” (Rouvroy 2011, p. 126).
While Gilles Deleuze (1995) already diagnosed the infinitesimal “dividuation” of the subject and the reduction of epistemology to data-driven systems of unsupervised decision-making, today the political dimension of the digital subject is haunted by networks, correlations, feedbacks and participation. The collective Tiqqun’s manifesto, “The Cybernetic Hypothesis”, addresses this paradox directly: How can a political proposition for a digital subject use the same means (instruments or technology) as those used by capital for information governance?
According to Tiqqun, the cybernetic hypothesis is a political programme originating from the liberal view of the individualised human subject. As cybernetics enfolds biological, physical and social behaviours into information systems, it turns this liberal view into sets of recombinant “dividuals” (Deleuze 1995): individuals steered towards certain actions so as to benefit the system upon which they depend. As a science of prediction, cybernetics transforms the individual subject into data sets maintaining order as if moved by an active desire for totality. Cybernetics, however, both incorporates and transcends liberalism by transforming the social into a laboratory of trial and error, testing all possible governances and following an “experimentation protocol”. This is possible because cybernetics replaces the model of governance based on law with a mechanism of information retrieval and transmission subtending the formation of a new empire of data, which actively steers conduct and checks and predicts behaviour. For Tiqqun, the forma mentis of cybernetics is an extended policing of life: this intelligent war of communicability is a war against all that is living. Such programmatic destruction produces heuristic subjects through a system able to reassess the effectiveness of its laws according to people’s responsive conduct. By sequencing the problem of uncertainties (of living and life) into a series of probabilistic scenarios, cybernetics breaks down and recomposes the subject through statistical analytics. The predictive steering of uncertainties enables cybernetics to impart an informational image of totality as re-configuring networks that mirror the nerves of governance and capital. As the subject is dismembered into data sets, it remains locked within the networked frame of communication, numbed by the illusion of a united social body and by a profound faith “in the genius of humanity”. As the cybernetic hypothesis achieves heuristic programming, where the subject is the social tester of the machine, the medium of governance withdraws to the background, out of sight. For Tiqqun, cybernetic automation is but an external imposition of unity onto the social, turning political collectivity into thin air.
Recently, Alex Galloway has similarly argued that cybernetics and computation have transformed forms of governance from serialised processing—so central to cinema, and its time image—into a spatialised apparatus that is binary, atomistic and iterative (2014). If the cinematic apparatus, according to Gilles Deleuze, transformed mechanical, action-based forms of automation into a virtual image of thought or time image, it is because this time-based mode of automation de-naturalised representation and imparted an impersonal superposition of images into the everyday (Deleuze 1995). This virtual assembly of the industrial machine introduced a serial infinity into the sequential arrangement of the assembly line. As the time image broke into the mechanics of chronological time, it also anticipated the advent of cybernetic automata of thought, equipped with control and feedback and replacing automata of movement—clockwork automata, motor automata, etc.—with predictive temporalities. The repetitive automaton of industrial capitalism enmeshes the unity of the subject into its sequential tasks that eventually become the data bits of the networked matrix of interactive agents. The cinematic machine had already sublimated thinking into a parallel dimension, away from natural perception and phenomenal cognition. With information control and feedback, cellular automata are not simply programmed to accomplish tasks, but to search for a space of probability that binds the subject to a growing network.
According to Galloway, the image of the network is rooted in absolute differences, parallel intersecting or aggregating space-times (2014). In particular, digital processing envelops this new spatial ordering according to an informational ground that exceeds the image of time both in the form of linear temporalities and constant heterogeneity. The order of the network is total and open, horizontal and distributive, inclusive and universal. Here the subject is no longer constituted by its internal temporalities (its circular drive from death to life). Instead, the order of this subject is emergence, modelled onto the behaviour of a swarm: the iterations of multiple parallel strings define how interactive agents steer order to run out of equilibrium, giving way to a self-organising subject without identity. This subject emerges from the evolutionary aggregation of disconnected parts.
What the collective Tiqqun proposes and Galloway’s analysis endorses is that the dominance of cybernetic automation and its spatialised regime of smoothness gives us the illusion of a united subject imprisoned in interactive communication. This image of the network, then, cannot be embraced as a form of politics but instead it must be counter-actualised, resisted and challenged in order to break the apparent heterogeneity of a monolithic network. From this standpoint, only through the diffused devising of tactics of opaqueness, experimenting with a fog-like micropolitics, is it possible to counter-act the networked regime of visibility with the impersonal, the neutral and the invisible. As opposed to the comforting self-mirroring of the subject within social media, these practices of non-existence, we are told, shall claim to have nothing in common, refusing to feed back into automated data networks. Against the one-directional speed of computational networks, these are practices to slow down and dilute the overwhelming invasion of overlapping times and spaces. Dominated by the constant longing for something and somewhere other than here and now, the digital subject is under strenuous psycho-social pressures as the ceaseless monitoring of information traffic and updates leaves no room for bearing the presence of interior truths. Enveloped in precarious, casual and fast decisions and enclosed in a bubble of self-satisfaction, this subject has no consciousness. The only option left here is to exit the cybernetic condition, to step away from the vortex of data proliferation and reject the automatisms of the self. This demands a full distrust in the system and a break from the cybernetic spell of social media so that a veritable social rhythm can enter collectivities. In other words, only a messianic promise, a call forth for a faceless, unconnected, impersonal and indifferent subject can oppose the predictive analytics of computable, classifiable and forever interchangeable data.
To further understand the implications of this proposition, one could turn to the use of tactics of invisibility in contemporary post-internet aesthetics. For instance, Adam Harvey’s project CV Dazzle Look 5 offers an instance of these micropolitical tactics of excess, focused on how to thwart facial recognition algorithms. We know that machine vision is based on patterns of recognition, as algorithms are trained by being fed thousands of images of faces from the net to learn how to put together the oval shape of the face or figure out the identity of a face by mining the distances between ears and eyes, etc. Harvey’s project shows how to develop tactics that can conceal such features—for instance by covering the eyes or the nose bridge with hair, or by creating other features that algorithms are not able to see, such as using make-up to draw additional lines in the cheek area (which remain visible to humans).
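To make the mechanism that Harvey’s tactics exploit concrete, the following is a minimal sketch, assuming the stock Haar-cascade frontal-face detector shipped with the OpenCV Python package and two hypothetical image files; it is not Harvey’s own pipeline, only an off-the-shelf illustration of pattern-based face detection and of how occlusion can break it.

# A minimal sketch of pattern-based face detection (not Harvey's method).
# The image file names are hypothetical placeholders.
import cv2

# Load the pre-trained frontal-face cascade distributed with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(image_path):
    """Return the bounding boxes of faces the detector finds in an image."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # The detector scans for learned contrasts around eyes, nose bridge and
    # cheeks; covering or repainting those regions breaks the expected pattern.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(detect_faces("face_without_makeup.jpg"))   # typically one bounding box
print(detect_faces("face_with_dazzle_makeup.jpg"))  # often none at all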
Harvey’s video CV Dazzle Look 5 illustrates how automated image analysis works by comparing one face with make-up and one without, experimenting with what remains invisible and imperceptible to machines. These efforts to break from the cybernetic hypothesis of a unified subject in the image of networked data by calling forth an impersonal and invisible politics, however, coincide more profoundly with a reactive response to the crisis of political possibilities to think, act and live beyond the cybernetic script. More importantly, this view seems to foreclose the possibility of critique from within the becoming intelligent of the instrument of reason, whereby the dystopic view of control society is counter-acted by a messianic call for a world outside instrumentality, relegating machine thinking to the sheer efficiency of tasks.
From this standpoint, the cybernetic hypothesis seems to overlook what Paul Virilio already anticipated in The Vision Machine (1994), arguing that the profound transformation of optical media into the sightless thought of the computer exposed a speed in time processing that decoupled machine thinking from the transcendental model of optical perception. This implies that machines will learn to recognise what is invisible now, as much as branches of computer vision are already creating adversarial images designed to thwart automated recognition systems. It can then be suggested that the claim that the subject needs to exit its networked condition remains an unconvincing sceptical proposition, one that reifies the image of the self-determining modern subject (thinking, acting and living autonomously from the instruments he uses) and forecloses the possibility of reinventing what an instrumental subject can be beyond the dominance of the servo-mechanic model of machines. In what follows, this article will attempt to argue that an instrumental alienness of machine thinking must be accounted for to re-invent a critical theory of automation. But how is it possible to break away from techno-governance and techno-politics and yet offer an alternative critique of automation? In the next section, I will turn to the hypothesis of accelerationism. Although there are many versions of this view, I will refer to it as generally proposing a rather nuanced materialist approach and not an idealist rejection of instrumentality.
Accelerationist hypothesis
The Accelerationist hypothesis in particular can be understood as addressing the self-limiting tendency of capital, manifested in its surplus value investment in the technical machine (and the form of fixed capital), as activating an internal critique of capital in the form of a machine desire—or instrumentality—running away from its own transcendental determination. This hypothesis already starts from the possibility that techno-sciences do not coincide with a self-determining subject and with its automation in sheer efficient functions. Instead, within the accelerating machine of capital, whose deterritorialising tendencies, according to Deleuze and Guattari, could break open capital monopoly, this hypothesis addresses possibilities for constructing perspectives on what the task of critical thinking is, and how it proceeds, in the age of automated decision. However, how to describe an apparatus of capture that runs away from itself, how to understand the dominance of algorithmic forms of subsumption that challenge both the law of the subject and its crisis today?
From this standpoint, it can be suggested that here the critique of technology is above all a critique of power and of the unleashing of the colonial war in and against populations by the war machine of capital. Here the condition of real subsumption is not simply a question of techno-scientific knowledge, of mathematics or computation, but is mainly a question of how power operates within the automated matrix. The computerised social world is not equivalent to power, but is in itself re-organised and automatised according to non-hierarchical criteria that enable the management of society and of the labour market in terms of information networks.
Antonio Negri’s argument that the technical machine is not simply a constitutive instrument of capital (2014) echoes some of the content of the accelerationist manifesto (Williams and Srnicek 2013), in particular the claim for an alternative modernity as the techno-social futurity of the subject, above all as a socio-technical collectivity. Negri argues that informatisation—and not only physical assets such as buildings, vehicles, plants, equipment—is the most valuable form of fixed capital, because it is socially generalised through cognitive work and social knowledge. Automation incorporates information technology within itself, because it is able to integrate informatics and society within capitalist organisations. Here, automation coincides with a higher level of real subsumption, namely the networked command of algorithmic capital. This rule-based machinery both centralises and commands an increasingly fragmented and complex system of knowledge, corresponding to an alien configuration of the General Intellect now animated by the automated cognition of machine learning.
Attuned to the post-operaist spirit, Negri urges us to invent new modes of re-appropriation of this fixed capital in both practical and theoretical dimensions. To embrace the political potential of information technology for Negri means to positively address computable capacities that could augment productivity while directly reducing labour time (disciplined and controlled by machines), increasing salaries and granting a social income to everyone. Instead of rejecting inductive data retrieval and transmission, Negri calls for the re-claiming of quantification, economic modelling, big data analysis, and abstract cognitive models through educational and scientific practices. For Negri, mathematical and computational models can be politically repurposed beyond the limits imparted by capital on the function of automated cognition. By overcoming the negative critique of instrumentality, Negri rehabilitates the political dimension of technical means. Here there is already a suggestion for a computational form of politics that distinguishes between the technological configurations of information processing and the capitalist imperative of the informatics of domination.
But how can we embrace the political possibilities of instrumentality? How can one distinguish between pre-processed uses of machines and their re-programming for alternative uses? Is this in fact a focus on the usability of the instrument that is free from the historical knowledge (the norms and the socio-cultural biases) through which it was designed and manifested? Can a materialist account of the technical dimensions of the subject break from social reproduction and its accumulation of physical and technical assets?
It seems difficult to envision such a politicisation of machines and a new form of instrumentality away from a sheer logic of exchange, whereby machines can still retain their servo-mechanic qualities that can be constantly re-traded. For instance, how can thousands of algorithmic species be used benevolently for experimenting with forms of sociality; for whom, for what kind of humanity? Hasn’t automation since its very inception exposed the indeterminacy of the servo-mechanic view of the medium of thought, or of what lies outside the transcendental tool of reason? As Da Silva puts it, the transcendental tool or medium of reason defined the ontological and epistemological production of the global idea of race insofar as the scientific and historical analytics of race was part of the self-apprehension of the subject (2007). As a transcendental tool, reasoning has to prove the alienness of its outside as that which could not be fully contained in the analytic procedure without allowing for infinities to enter the space of the transcendental. In other words, as Da Silva points out, if blackness as quantum infinity is outside of the internal and external circuit of self-apprehension, it is because it has always been pushing the transcendental beyond the schema of deductive decision (Ferreira Da Silva 2017).
This is not simply a question of denouncing the opacity of machine intelligence and embracing the critical mission of opening up Pandora’s box to reveal the normative ground of knowledge, reproduced through intelligent machines. Instead, one cannot deny that techno-logic is embedded in particular histories and technical knowledges. It is true therefore that the re-purposing of machines cannot occur without working out their intrinsic material and conceptual complexity. To expropriate the potentialities of dynamic automata means accepting that machine knowledge is not neutral and that technical objects cannot simply be enlivened by political force as if from the outside, in an act of deliberate appropriation. The challenge is to determine the kind of appropriation that can open political possibilities with technology from within its mode of existence.
As Yuk Hui points out, technical objects shall be conceived as modes of existence situated within a digital milieu that singularly brings into relation ontologies (technical ontologies) and ontology (philosophical ontology) (2015). These are two distinct orders of magnitude whose divergence and convergence define digital objects away from mere use. Instead, an enquiry into the technical and philosophical lineages of the digital object can show the syntactic operations of a machine or the grammatical structure that machines interpret as an entry point into their modes of being. By divorcing essence from existence, the validity of the digital object comes not to depend on the external action of thinking it (Hui 2015, pp. 76, 98). In particular, by drawing on Gilbert Simondon’s theorisation of technical objects, Hui argues that digital objects cannot be primarily understood in terms of aesthetic interaction whereby the use of machines is said to produce effects that can break away from the rigid protocols of information technologies. More radically instead, Hui calls for a philosophical inquiry—as a secondary level of abstraction—into the condition of being of digital objects so as to unpack alternative forms of practical and theoretical knowledge, which can in turn modify the normative system of relations between ontology and ontologies (pp. 96–99).
From this standpoint, it may not be sufficient to claim that information technology can be re-purposed away from capital reproduction without considering the meta-communication that machines have with other machines, namely the algorithmic elaboration of data and the computational structuring of randomness. This question concerns not simply the contextual use or re-usability of data, but needs to be addressed through a philosophical rehabilitation of instrumentality, involving an immanent non-relation or an alien dimension of relationality between means and ends (as a process of unknown elaboration and not simply a merger between doing and thinking). It could be argued that the mode of existence of digital objects entails a transformative relation between the syntactical functions and the ontological dimensions of information technology, or between the networked image of the cybernetic order and the being of machines.
By taking this position it is no longer possible to merge network functions with the image of the subject as if they belonged to the same chain of efficient causalities, whereby means beget other means, and where the subject becomes a means of capital. If the digital subject is more than the sum of its networked parts, it is because as a political medium it is not simply equivalent to the interactive functions of making (a productive dimension of doing or knowing-how), but also of thinking or knowing (a transcendental dimension from doing). If information technologies entail instrumentality, it is because this medium does not simply record and re-assemble data, re-connecting their social ontologies. Instead, instrumentality means that performing activities become a filter, or a mode of articulation, and thus of knowing or aiming to know information, stemming from the medium and its history, culture, visions and ontology. What shall be central to a critical theory of automation therefore is not simply an account of how the information medium is a maker of the world or simply a means to make the world. Instead, critique shall be concerned with how instruments filter the world—how they know this world instead of that, and how computational abstraction gives us another or alien horizon to the image of the world, derived from but not reducible to its syntactical functions.
It is therefore not only a question of what machines can do and how they can exit the order of subjection of the social imposed by the techno-capital regime. Critical theory shall be concerned with the kind of knowledge originating from the techno-logic of machines, namely with how the medium filters the real and brings forward its alien vision of the world. A critical inquiry into the techno-logic of machines can address the material constraints of computation—namely the tension between information and energy, algorithmic patterns and randomness—and account for its transcendental space of reasoning, with its forms of knowledge of the relation between truths and proofs. Thus, a critical enquiry may need to articulate the relation between informational processes and logical filtering to argue that automata as instruments or media do not simply coincide with the function of the network, the given ranking obtained from the machine’s syntactical or correlational functions of data. By bringing together algorithmic information theories and logical constructivism, one could theorise machine thinking in terms of a logical transformation of the relation between truth and proof, whereby the image of the digital subject becomes bound to transcendental computation. Instead of extending the tool of reason to machines and thus liberating the human subject from the responsibility of decision-making and self-determining judgment, the alien subject of AI rather exposes the capacity of the medium to elaborate a transcendental dimension beyond a given set of programmed instructions.
To point to a transcendental dimension of the medium is important in order to challenge the servo-mechanic vision of information technology and to argue that the machines’ function of using and being used implies more than a mere application or execution of instructions, or in other words, their sheer usability. Here, it can rather be argued that using is correlated to functioning and is determined by the material constraints of the machine, which come to constitute a particular modality or knowing-how, namely a practical knowledge that sets up the conditions by which a mechanism is expected to work. However, it is precisely at this level of material processing, where function seems to determine use, insofar as machines are enmeshed in the global analytics of techno-scientific knowledge, that it is possible to enquire into the transcendental possibilities of the instrument. In what follows, this article will discuss this transcendental dimension from the standpoint of non-deductive forms of computational logic and information compression. The alien hypothesis requires this re-visiting of logic in computation not simply in order to propose a technical explanation of the human condition that defies the dominant image of big data. The scope is to expose the ontological implications of the techno-scientific figurations of the world, which demand a re-conceptualisation of the postulates of being beyond the already given and the always already constructed. Transcendental computation can thus be taken to suggest precisely this effort to re-conceptualise the medium of thought beyond the modern image of servo-mechanic functions without thought.
The alien hypothesis
If, for François Laruelle, the “transcendental computer” is a fully decisional space, whereby automation cannot integrate physical and conceptual dimensions because it lacks “lived immanence” (Laruelle 2013), the transcendental possibilities of the medium entail a computational axiomatics that re-programmes the decisional moment of transcendence (or in Laruelle’s terms, the philosophical decision) of sentience from sapience, and of syntactical function from conceptual knowledge. According to Gregory Chaitin, for instance, computational axiomatics can no longer be understood in terms of self-positing truths or postulates and shall be theorised in terms of experimental algorithms, which involve the contingent processing of randomness, where decision occurs in the last instance (according to the capacities of compression) and is not dependent solely on the binary logic of 0s and 1s (Chaitin 2005). The binary rules-functions of the Turing Machine set the expectation that results must conform to necessary truths. Instead, experimental axiomatics includes the ingression of contingency and not necessity in causality, because randomness or incomputables, infinite varieties of infinities, become the causal condition of computational processing. Here, the transcendental transformation of the medium involves the way in which the computational medium filters information, and thus a synthesis of logic and calculation (Dowek 2015), acting on the world of data through doing and thinking. For experimental axiomatics, incomputables define a real condition for computational indeterminacy that exposes the bifurcation between randomness and algorithmic discretisation. However, as much as indeterminacy is central to this disjunction between information, randomness and algorithmic patterning, it also imparts a new level of continuity between the material and the ideal. Here the ingression of contingency in causality in the form of incomputables can help us to explain that the relation between truth and proof is not self-determined by a transcendental subject, but rather remains open to the instrumental processing through which it is activated. Thus, one could suggest that between truth and proof there is a gap—or the formation of a new space of reasoning—that can be the starting point from which to envision the transcendental dimension of computation.
On this basis, one can argue that rule-based thinking does not simply correspond to sequences of mindless algorithms, programmed to execute instructions. In the article “Computing Machinery and Intelligence” (1950), Alan Turing suggests that it is the behavioural effects of the environment on the system, and not the inner workings of consciousness, that determine automated thinking as a mode of learning and not simply as a programmed black box. However, learning does not only involve the organisation of information according to pre-existing symbols and notions. Learning, above all, entails the inclusion of fallibility or of indeterminacy in the logical step of formal reasoning (Turing 1936–1937). Turing’s attempts to automate deductive reasoning into one algorithmic language already showed the task of reducing contingency to necessity to be impossible. Instead of grounding mathematical truths in automation, Turing discovered that certain postulates were not computable, because (a) they could not be simplified into smaller steps and (b) they could not be known in advance by the programme. The logician Kurt Gödel had already demonstrated in 1931 that deductive reasoning was incomplete: truths could be found outside the given premises of a postulate, revealing incompleteness in universal language. Similarly, Turing’s attempt to automate logical reasoning led to results or proofs that were not consistent with postulates. Instead of relying on self-evident truths, reasoning had to account for its limits and learn from unknowns, setting up an over-turning relation between truths and proofs.
One crucial implication of the discovery of the incomputable was that logical reasoning could no longer rely on the inferential deduction of proof from truths, but rather had to confront the real infinite varieties of infinities, unknown unknowns. This demarcated the ingression of fallibility in logical thinking, activating a heuristic mode of learning from unknowns in formal procedure. Far from simply imparting a representation of the world onto the world, the Turing Machine had stumbled upon the limits of logical deduction. In this context, it became apparent that the material conditions of thinking by far exceeded any consistent model of truths. Logical mechanisms were not simply demonstrative of already made truths and facts, but were instead in search of solutions, learning from unplanned outcomes, and transforming universals into experimental axiomatics.
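The limit Turing stumbled upon can be restated as a short self-referential sketch in Python. The halts function below is a hypothetical oracle, not an implementable routine: the construction shows why no general decision procedure of this kind can exist.

# Sketch of the diagonal argument behind the incomputable. halts() is a
# hypothetical oracle that would decide whether program(data) ever halts;
# the paradox construction shows that no such total procedure can exist.

def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("cannot be implemented for all programs")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about a program
    # applied to its own source.
    if halts(program, program):
        while True:   # loop forever if the oracle says "halts"
            pass
    else:
        return        # halt at once if the oracle says "loops forever"

# Asking halts(paradox, paradox) can return neither True nor False without
# contradicting paradox's own behaviour: the question is incomputable.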
By instrumentalising reasoning, the Turing Machine exposed the inevitability of contingency or the possibility of fallibility in the logical explanation of truths. Incomputables here are not the limit, but the condition against which information technologies can no longer be defined as tools that retrieve, transmit and aggregate information. This condition of computational indeterminacy, however, has also come to support the Kittlerian view of the material substrate of media (1997), whereby even software is said to be reducible to the hardware components of the machine. This view, however, cannot help us to see beyond the cybernetic hypothesis of the subject bonded to its functional modalities. In other words, the account of fallibility in computational logic risks mainly depriving the informational medium of any possibility of transcending its condition from within its own frame of communication.
While the Kittlerian critique of the formalistic reading of computational media importantly relies on the historical and practical knowledge of media, it also delimits the understanding of instrumentality to functional conduct, to how machines remain grounded in a material substrate that explains their mode of existence in terms of what they do. This view cannot account for machine filtering, learning and knowing and thus risks falling back into normative reiterations of knowledge where machines reproduce what is already known according to the self-determining logic of the subject. It is here that an enquiry into automated reasoning shall turn to non-deductive models of computational logic, where a dynamic relation between truth and proof can set up the basis to argue for the possibility that machines think beyond what they do.
One way to account for this possibility can be found in logical constructivism, which can help explain how computational logic is conditioned by a temporal indeterminacy in the non-deductive relation between truth and proof. In particular, already in 1927, Brouwer pointed out that mathematics is inexhaustible and cannot be completely formalised. As a general system of symbolic logic, constructivism relies not on the traditional concept of truth, but on the concept of constructive provability (Brouwer 1967). In deductive logic, propositional formulae are always assigned a truth-value (“true” or “false”) regardless of whether there is evidence or proof for either case. For constructivism, there is no assigned pre-established truth-value. Instead, propositions are only considered “true” when there is direct evidence or proof. This is how logic becomes dynamic. Brouwer distinguishes two acts of intuitionism and calls them “twoity”, explaining that a finite state or limit shall be followed as continuing an ideal trajectory towards infinity and can have only retro-active ramifications. In short, twoity implies that infinite sequences cannot be fixed in advance by what is known and that the limits of knowledge are instead ideal trajectories of knowing. These tendencies of logical thinking towards the unknown do not only break from the deductive schema of truths (to be proved) but also imply that fallibility is central to reasoning.
While for the Platonic model of representation of the real, mathematical statements (true and false) and philosophical reasoning are tenseless, for constructivism, truth and falsity have a temporal aspect. Similarly, in contrast with the empiricism of the inductive method of retrieving data, where facts cannot be changed, for constructivism, a statement that becomes proven at a certain point in time is said to lack a truth-value prior to that point. Brouwer’s twoity addresses the actualisation of infinity through a structured and serialised process of thought. Twoity indeed means that number one already implies a movement towards number two. Numbers entail a relation to the continuum as an on-going affair, each time delimited by a finite set or a discrete unit that imposes discontinuity on the smooth surface of the continuum. It is possible to argue that Turing’s discovery of incomputables has a particular affinity with Brouwer’s constructivism and the possibility of understanding computation in terms of the processing of truths, and the recurrent compression of logical possibilities over time. This entails that proofs are not pre-determined in premises: i.e. they are not simply probabilities, but stand out as potentialities of premises (eventually confirmed to be true or false) that can only be defined retro-ductively (thus enabling their logical revision).
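Constructive provability can be glossed with a small Python sketch: a disjunction is asserted only once one of its disjuncts has actually been constructed, together with a witness. The example (the parity of a number) is trivial and purely illustrative of the temporal point above.

# A toy gloss on constructive provability: the "proof" that a number is even
# or odd is an explicit decision plus a witness, produced at a point in time,
# not a tenseless truth-value assigned in advance.

def even_or_odd(n):
    """Return which disjunct holds, together with the witness proving it."""
    half, remainder = divmod(n, 2)
    if remainder == 0:
        return ("even", half)   # witness: n == 2 * half
    return ("odd", half)        # witness: n == 2 * half + 1

print(even_or_odd(10))  # ('even', 5)
print(even_or_odd(7))   # ('odd', 3)

# A classically "true" disjunction for which no such decision procedure is
# known has, for the constructivist, no truth-value prior to the moment a
# proof is actually produced.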
To understand better how the problem of the incomputable can lead to a new theorisation of the instrumentalisation of reasoning in machines, it is crucial to attend to another instance of non-deductive logic in computation. Information theorist Gregory Chaitin turns to experimental axiomatics to account for information complexity (i.e. information that cannot be compressed into simpler algorithmic strings) (2005). Chaitin explains that computation is defined by the tendencies of information to increase in size. Like the infinite counting of numbers, there is no end to information. Deductive reasoning therefore cannot sufficiently describe what happens in the logical thinking of machines. From this standpoint, one can suggest that incomputables—as infinite varieties of infinities (or randomness)—delineate the trajectory of computation from and towards infinity. This already implies that computation as the current mode of instrumentalisation of reasoning does not directly correspond to the material-historical constitution of media or the practical knowledge (the knowing-how) in them. Instead, a critical effort must be embraced to account for what happens in this tendency from and towards infinity, the becoming transcendental of computational media.
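Chaitin’s notion of complexity as incompressibility can be roughly illustrated with an off-the-shelf compressor, used here as a crude proxy for algorithmic information (it is not Chaitin’s own construction): a patterned string compresses far below its length, while a random string barely compresses at all.

# A rough illustration of complexity as incompressibility, using zlib as a
# crude proxy for algorithmic information (not Chaitin's construction).
import os
import zlib

patterned = b"01" * 5000          # 10,000 bytes of obvious regularity
random_like = os.urandom(10000)   # 10,000 bytes of (pseudo)randomness

for label, data in (("patterned", patterned), ("random", random_like)):
    compressed = zlib.compress(data, 9)
    print(label, len(data), "->", len(compressed), "bytes")

# The patterned string shrinks to a short description that regenerates it;
# the random string resists any such shortening, which is the informal sense
# in which it is incompressible.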
To envisage the possibility of a transcendental computation also means to shift the focus on to the origination of thinking from within the instrumental, to argue for a techno-logic that does not simply imply an efficient know-how, but also the elaboration of machine knowledge. Here, instrumentality is not a means to an end, but an experimental method or a knowing-how tending towards the determination of this or that result (and thus a transformation from knowing-how into a knowledge of this or that). Algorithmic patterning relies on learning as a possibility of revising both truths and facts. From this standpoint, it is possible to suggest that transcendental computation exposes the manner in which proofs can become generative of rules by showing that postulates or truths are the results of the compression of incomputables into discrete patterns (Chaitin 2006). As in Brouwer‘s double moment or twoity, here the finite and infinities exist on an immanent plane that does not simply merge them to immediately turn incomputables into knowledge. Instead, the transcendental function of computation entails scales of mediation, which include the instrumental patterning of complexity, involving, as it were, an alien ideation from what machines can do.
But how to explain transcendental computation without simply sublimating the indeterminacy of the proof? How to address the instrumental becoming without simply relinquishing logic in the name of absolute contingency? The sublimation of indeterminacy, the incomputable and randomness, often reflected in critical theories' claims against the deductive and inductive regimes of automation, misses the crucial epistemological transformation of instrumental reasoning—the epistemological formation of machine knowledge. In other words, the rise of computational automata cannot be disentangled from the realisation that automated patterning has always been a mode of learning transcending its own known premises and challenging the exceptionalism of sapience in the self-determining formation of “Man1”, which defines the physical explanation of the subject as autonomous from nature, and of “Man2”, which corresponds to the rise of the analytics of the subject with the biological sciences (Wynter 2003, p. 264).
As interactive, evolutive and learning algorithms in distributive, parallel and concurrent systems continuously perform information patterning, the algorithmic intelligibility of randomness has also acquired epistemological validity. While it seems still premature to conclude that machines are subjects able to know and change their own rules, it would be myopic to deny that computational automation has exposed the transcendental schema of reason to the experimental becoming of thought. This also means that the algorithmic function of labelling, selecting, valuating and commanding information increases the space of experimentation of automated systems in order not simply to extract data from social, economic and cultural practices, but also to challenge what is given in data by elaborating patterns that establish new webs of meaning.
Instead of the network image of the subject stemming from the material doing of machines externally bonding data in a seamless sea of information, one can at this point ask, what are the political possibilities of and for a digital subject from the perspective of a transcendental dimension of computation? In other words, how to conceive of this experimental subject in terms of this instrumental de-manization of logical reasoning? What possibilities can the alien subject of AI have beyond the cybernetic hypothesis of networked data and the accelerationist re-purposing of techno-social collectivities?
Multilogics
To address these questions, I will turn to Charles Sanders Peirce's triadic system of logical reasoning based on what he calls the “abductive-inductive-deductive circuit” (1955, 2005). If the cybernetic figuration of the subject is determined by the spatial thinking of the network, Peirce's triadic method accounts for the analysis, conceptual elaboration and speculative reasoning of non-inferential practices. Starting from the speculative function of reason (from reasoning not grounded in facts or truths but working through hypotheses), this method envisions an experimentation of both proof and truth (Magnani 2009, pp. 65–70). The determination of truths follows a series of hypothetical assertions (abduction) in order to explain real phenomena and involves the collection of measurable data (induction), followed by a consequent elaboration of rules (deduction). Rules are not fixed and are not symbolic representations of material practices. Instead, rules are the result of experimental reasoning, starting from the hypothetical account of unknowns and proceeding with the search for low-level patterning that informs both truths and proofs.
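Read computationally, the circuit can be caricatured as a loop: a hypothesis is guessed in order to explain surprising observations (abduction), confronted with measurable data (induction), and then used to derive provisional rules and predictions (deduction) that remain open to revision. The toy sketch below uses invented observations and a hand-picked hypothesis space; it illustrates the shape of the circuit, not Peirce's own formalism.

```python
# Toy abduction-induction-deduction circuit (hypothetical data and hypotheses).
observations = [(1, 2), (2, 4), (3, 6), (4, 8)]   # surprising facts to be explained

# Abduction: guess candidate rules that would, if true, explain the observations.
hypotheses = {
    "double":  lambda x: 2 * x,
    "square":  lambda x: x ** 2,
    "add_one": lambda x: x + 1,
}

# Induction: retain the hypotheses that survive confrontation with the data.
surviving = {
    name: rule for name, rule in hypotheses.items()
    if all(rule(x) == y for x, y in observations)
}

# Deduction: derive predictions from the surviving rule. These remain revisable:
# a future anomalous observation sends the circuit back to abduction.
for name, rule in surviving.items():
    print(name, "predicts f(10) =", rule(10))
```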
This multilogical approach points out that the infrastructure of meaning is not simply given but is both revealed and constructed as a minimal mode of patterning. Here, reasoning disentangles thinking from doing and eventually establishes a modification of sense-data and concepts, aiming to articulate a logic of continuum between relations. As Whitehead reminds us, intelligible functions, involving both the physical prehension and the conceptual elaboration of sense-data, are not simply representations of the non-inferential dimensions of material reality (1929). Instead, they are part of a multilogical processing that entails distinct dimensions of mediation across the material, the physical and the conceptual elaboration of meaning in existing relations between particulars. Here, the function of reason is not to determine truths, but to establish causal relations between truths and facts through the dynamic formation of rules (Whitehead 1925, p. 25).
Rules are not locked within fixed categories, but are crucially dependent on practices, which are in turn open to revision by and through the hypothetical reasoning about non-inferential phenomena. This multilogics is not the result of syntactical connections, but the end point of a synthetic processing requiring a complex elaboration of patterns, including a propositional evaluation (deconstruction and reconstruction) of data. Pragmatics thus comes before logic; logical inference is an explicit formalisation of meaningful patterning imbued in data, defining the point at which data are to be questioned. From a speculative to a critical articulation of data, pragmatism here contributes to a vision of the political dimension of automated systems. This dimension is to be discussed not in terms of how these means can be used for the emancipation of the subject; it mainly entails an invitation to re-theorise the subject from the standpoint of the medium of thought as it grasps infinities and incomputabilities that force the given schema of conceptual self-determination to re-visit premises, axioms and truths.
To approach automation in terms of pragmatism is to argue that transcendental computation can be distinguished from representational models of cognition or the inductive presumption that the causal efficacy between things is the same as the conceptual elaboration of things (where doing and thinking are said to merge into one set of actions). The techno-capitalist subsumption of cognition, therefore, corresponds not to the end of the subject in the networked image of data, but needs to be re-addressed in relation to the historical development of non-deductive logic in machines and the introduction of time in the elaboration of truths in automated systems. In particular, the computational infrastructure of machine learning already points to a radical transformation of computation as it involves the expansion of the abductive generation of inferences in learning automata. It is from a philo-fictional investigation of how computational logic can challenge the critique of technology that laments the crisis of reason, of knowledge and of man that it becomes possible to re-invent the modern question of technology beyond the “sociogenic principle” (Wynter 2003, p. 328), by which the biological explanation of the self-making of man is made to coincide with a prosthetic vision of techne as simply an extension of the biological ground of evolutionary Man (p. 267).
A recuperation of Peirce’s triadic system of abduction-induction-deduction in the context of computation may help to argue for a multilogical subject that includes the alien logics of machines as a form of machine knowledge that starts from a minimal algorithmic patterning within practices of compression. The pragmatist triadic schema of logical thinking indeed may help to propose not a binary distinction, but a synthetic elaboration of axiomatic, empirical and constructivist modes of inferential reasoning. This implies not simply the representation of concrete practices in fixed symbolic meanings, but, more importantly, an experimental abstraction of relations starting from algorithmic patterning as something that already implies a mode of thinking about thinking. Instead of a cybernetic second-order mode of reflection that claims that thinking is grounded in the biocentric invention of Man (Wynter 2003, p. 317), namely an autopoietic reproduction of a given principle of organisation, multilogics involves a third level of abstraction, where rules are outcomes of relations between relations across scales and dimensions insofar as they are artificial synthesisers of knowing-how, discovering and creating meaning from the complexity of practices.
The attempt to re-habilitate automation outside of the cybernetic image of the servo-mechanism of network society is also an effort to re-address the particular configuration of computational mediation in terms of a multilogics that challenges the presumption that the functional behaviours of machines directly correspond to what machines can do and think. However, while the multilogical proposition of pragmatism admits asymmetrical scales of doing and thinking, Peirce's triadic onto-logic can importantly be defined in terms of a logic of the continuum (Zalamea 2012). This proposition involves a hypothetical mode of explaining unknowns (incomputables), originating a low-level patterning or articulation of learning about this or that in order to retro-activate the validity of assumed truths and facts. Abduction involves a certain degree of “ignorance preserving activity” that implies not simply errors in learning, but the incomputable dimensions of reality for which logic appears to be a response to an ignorance problem (i.e. to what is not known) (Magnani 2009). Ignorance here is not to be resolved, but preserved, because it guarantees a progressive becoming of truth, working through the space of reason as involving unknown unknowns. This is also how constructivism in logic redefines computation. Indeterminacies are an intrinsic part of the process of proof validation because unknown results enlarge the possibilities for hypotheses to become progressively determined (and not self-determined)—i.e. determined through the process-procedure of proof validation.
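One way to picture ignorance preservation and progressive determination is a three-valued bookkeeping of claims, where a claim stays marked as unknown until a bounded, resumable validation procedure decides it. The sketch below is schematic and assumes an invented claim and budget; it is not an implementation of Magnani's account.

```python
# Schematic sketch: claims are progressively determined rather than presumed decided.
from enum import Enum

class Status(Enum):
    TRUE = "true"
    FALSE = "false"
    UNKNOWN = "unknown"   # preserved ignorance: an open question, not an error

def validate(claim, budget):
    """Search for a verdict within a finite budget. Exhausting the budget does
    not refute the claim; it leaves it UNKNOWN, so that a later, larger budget
    can still determine it (determination through the process of validation)."""
    for n in range(1, budget):
        verdict = claim(n)          # None means 'this step decides nothing'
        if verdict is not None:
            return Status.TRUE if verdict else Status.FALSE
    return Status.UNKNOWN

# Hypothetical claim that is only decided once the search reaches n == 10_000.
claim = lambda n: True if n == 10_000 else None
print(validate(claim, 1_000))     # Status.UNKNOWN at this budget
print(validate(claim, 100_000))   # Status.TRUE once the process determines it
```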
But how can this abductive, constructionist, experimental envisioning of the working of logic propose a new image of the digital subject or a techno-political dimension of the alien subject of AI? A philo-fictional approach to the alien logic of AI proposes a pragmatist view of instrumentality and argues that automated thinking does not preclude the possibility of a subject of technology. However, instead of exiting the cybernetic matrix of data correlation, I have argued for a becoming instrumental of the subject, and I have discussed the computational possibility for an alienation hypothesis of the subject that invites further enquiry into the technological dimension of a subject after the crisis of Man. Far from simply equating the instrument with the subject—the means or the mediatic syntax of communication—I have addressed the multilogical modalities of instrumentality (abduction-induction-deduction), which includes experimental axiomatics and logical constructionism, to argue that algorithmic processing and its syntactic ontology shall be addressed in terms of the transcendental re-programming of computational axioms.
Instrumentality is thus another way to argue for the condition of alienation of truths, including the current given form of the data subject …. However, while the messianic claim for a newly configuring techno-subject can only occur within or as a result of the mechanics of governance, it can be suggested that a multilogics for a speculative and critical reasoning of automation requires a multiscalar analysis of causalities, of the experimental relation between means and ends that precede and exceed the servo-mechanic image of the neoliberal production of the subject.
Instrumentality importantly defines the possibilities for and of finality in terms of an immanent envisioning of ends, which must be distinguished from the view of computation as the result of an endless chain of mindless effects. For instance, Haraway’s discussion of the politics of the cyborg has already argued for the necessary re-articulation of the relation between means and ends, parts and wholes without renouncing the view of a general subject that could assemble together non-identical scales beyond social subjection (1991). One can also notice that a new concern with universalism—as a new conception of the whole or unity of the subject—occupies theories of gender and sexuality, which appeal to Alain Badiou’s notion of indifference to critically address the implications of an increasingly relativistic politics of particularities (Menon 2015). For instance, Menon discusses queer universalism to argue that indifferent (to identity politics) qualities of desire are not a version of a particular subject, but define an anti-ontological state of being, because universalism (the unity of the subject) is not given, but must be achieved. In other words, the advocacy for queer universalism involves a structural learning, “a wholesale revision of cultural-political-social-sexual habitation” (Menon 2015, p. 23).
The alien subject of AI as an experimental and constructivist vision of both causality (in terms of incomputable conditions) and finality (in terms of transcendental tendency) is thus a proposition for a multilogical unity of the subject. In addition to a universalism defined by praxis—or the enaction of theory—however, the political dimension of human action (as resulting from transcendental reflection) also requires a step back into the politics of instrumentality, that is, into how the becoming medium of thought has led to a radical transformation of formalism, logic and reasoning. From this point of view, the alien subject of AI coincides with the argument that instrumentality is not a resignation to the network image of the subject. Instead, it is a way to propose that reasoning has become instrumental to the transformation of reasoning itself, calling for a re-origination of the transcendental subject from its infinite, incomputable outside and thus from within the alienating condition of thinking with and through machines. Far from being another messianic proposition, the instrumental origination of a digital subject is already happening and re-configuring the everyday activities of computational processing in the formation of multilogical modes of reason.
Notes
See Harvey’s project based on open source face detection at https://cvdazzle.com (last accessed November 1st 2018).
References
Bratton, B. 2015. The Stack: On Software and Sovereignty. Cambridge, MA: MIT Press.
Brouwer, L.E.J. 1967. Intuitionistic Reflections on Formalism. In From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931, ed. J. van Heijenoort, 490–492 (trans: Stefan Bauer-Mengelberg). Cambridge, MA: Harvard University Press.
Chaitin, G.J. 2005. Meta Math! The Quest for Omega. New York: Pantheon.
Chaitin, G.J. 2006. The Limits of Reason. Scientific American 294 (3): 74–81.
Cheney-Lippold, J. 2017. We are Data and the Making of our Digital Selves. New York: NYU Press.
Daston, L. 2010. The Rule of Rules. Lecture, Wissenschaftskolleg Berlin, November 21st.
Deleuze, G. 1989. Cinema 2: The Time-Image. London: The Athlone Press.
Deleuze, G. 1995. Postscript on Control Societies. In Negotiations: 1972–1990. New York: Columbia University Press.
Deleuze, G., and F. Guattari. 1987. A Thousand Plateaus: Capitalism and Schizophrenia. Minneapolis: University of Minnesota Press.
Dowek, G. 2015. Computation, Proof, Machine: Mathematics Enters a New Age. Cambridge: Cambridge University Press.
Ferreira Da Silva, D. 2007. Toward a Global Idea of Race. Minneapolis: University of Minnesota Press.
Ferreira Da Silva, D. 2017. 1 (life) ÷ 0 (blackness) = ∞ − ∞ or ∞/∞: On Matter Beyond the Equation of Value. e-flux Journal #79 - February 2017. https://www.e-flux.com/journal/79/94686/1-life-0-blackness-or-on-matter-beyond-the-equation-of-value/. Accessed 1 November 2018.
Galloway, A.R. 2014. The Cybernetic Hypothesis. Differences: A Journal of Feminist Cultural Studies 25 (1): 107–131.
Haraway, D. 1991. A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century. In Simians, Cyborgs and Women: The Reinvention of Nature, 149–181. New York: Routledge.
Hui, Y. 2015. On the Existence of Digital Objects. Minneapolis: University of Minnesota Press.
Inoue, K., A. Doncescu, and H. Nabeshima. 2013. Completing Causal Networks by Meta-level Abduction. Machine Learning 91 (2): 239–277.
Kittler, F. 1997. Literature, Media, Information Systems. London: Routledge.
Laruelle, F. 2013. The Transcendental Computer: A Non-Philosophical Utopia. Trans. T. Adkins and Chris E., Speculative Heresy, 26 August 2013. https://speculativeheresy.wordpress.com/2013/08/26/translation-of-f-laruelles-the-transcendental-computer-a-non-philosophical-utopia/. Accessed July 2017.
Magnani, L. 2009. Abductive Cognition: The Epistemological and Eco-Cognitive Dimensions of Hypothetical Reasoning. Berlin: Springer.
Menon, M. 2015. Indifference to Difference: On Queer Universalism. Minneapolis: University of Minnesota Press.
Negri, A. 2014. Reflections on the ‘Manifesto for an Accelerationist Politics’. e-flux Journal 53 (3): 01–10 (trans: Matteo Pasquinelli). http://www.e-flux.com/journal/reflections-on-the-“manifesto-for-an-accelerationist-politics. Accessed 28 Aug 2016.
Parisi, L. 2014. Digital Automation and Affect. In The Timing of Affect: Epistemologies of Affection, ed. M.-L. Angerer, B. Bosel, and M. Ott, 161–179. Chicago: University of Chicago Press.
Peirce, C.S. 1955. Abduction and Induction. In Philosophical Writings of Peirce, ed. J. Buchler, 150–156. New York: Dover.
Peirce, C.S. 2005. Reasoning and the Logic of Things: The Cambridge Conferences Lectures of 1898, ed. K.L. Ketner. Cambridge, MA: Harvard University Press.
Rouvroy, A. 2011. Technology, Virtuality and Utopia: Governmentality in an Age of Autonomic Computing. In Law, Human Agency and Autonomic Computing: The Philosophy of Law Meets the Philosophy of Technology, ed. M. Hildebrandt and A. Rouvroy, 119–140. London and New York: Routledge.
Tiqqun. 2001. The Cybernetic Hypothesis. https://theanarchistlibrary.org/library/tiqqun-the-cybernetic-hypothesis. Accessed 28 July 2017
Turing, A.M. 1936–1937. On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society (2) 42: 230–265. Reprinted in A.M. Turing. 2001. Collected Works: Mathematical Logic, ed. R.O. Gandy and C.E.M. Yates. Amsterdam: North-Holland.
Turing, A.M. 1950. Computing Machinery and Intelligence. Mind 59: 433–460.
Virilio, P. 1994. The Vision Machine. Bloomington: Indiana University Press.
Williams, A. and N. Srnicek. 2013. #ACCELERATE MANIFESTO for an Accelerationist Politics. Critical Legal Thinking, 14 May 2013. http://criticallegalthinking.com/2013/05/14/accelerate-manifesto-for-an-accelerationist-politics/. Accessed 28 August 2016
Whitehead, A.N. 1925. Science and the Modern World. New York: Free Press.
Whitehead, A.N. 1929. The Function of Reason. Boston: Beacon Press.
Wynter, S. 2003. Unsettling the Coloniality of Being/Power/Truth/Freedom: Towards the Human, After Man, Its Overrepresentation: An Argument. The New Centennial Review 3 (3): 257–337.
Zalamea, F. 2012. Peirce's Logic of Continuity. Boston, MA: Docent Press.