Digital Culture & Society, 2/2021
Edited by Olga Moskatova, Anna Polze and Ramón Reichert
Capturing personal data in exchange for free services is now ubiquitous in networked media and has recently led to diagnoses of surveillance and platform capitalism (Zuboff 2019; Srnicek 2017). In social media discourse, dataveillance and data mining have long been criticized as new forms of digital work and capitalist exploitation (cf. Allmer 2015; Andrejevic 2012; van Dijck 2014; Fuchs 2010, 2013; Scholz 2013; Trottier 2012). With the general transformation of the open web into an ecology dominated by commercial platforms (Hands 2013; Helmond 2015; Langlois and Elmer 2013; Gillespie 2018), platformization and economic surveillance also redefine digital visual culture, facilitating new forms of images, everyday practices and online visibility while expanding the logics of social media to the rest of the web. From social photos (Jurgenson 2019), selfies and image communities on the internet to connected viewing and streaming, and video conferencing during the COVID-19 pandemic – the digital image is not only predominantly networked (Rubinstein and Sluis 2008) but also accessed through platforms (van Dijck 2013; van Dijck et al. 2018) and structured by their economic imperatives, data acquisition techniques and algorithmic processing. Today, participation and commodification are closely linked in the production, circulation, consumption and operativity of images and visual communication, raising the question of what role networked images play for and within proliferating surveillance capitalism.
Linking images and surveillance automatically brings traditional concepts such as the panopticon and its numerous modifications into play, since they rely on optical and visual metaphors (Haggerty 2006; Buschauer 2016). In his famous analysis of the panopticon, Michel Foucault showed to what extent power can be exercised through visuality and thereby produce specific subjects. However, as frequently remarked (Haggerty and Ericson 2000; Kammerer and Waitz 2015), this form of power seems incapable of grasping the dynamics of networked digital media technologies. In the paradigm of the control society (Deleuze 1992), not only media but also the techniques of surveillance and control are increasingly networked and unobtrusive. Many of their contemporary forms do not rely on the visible demonstration and internalization of the gaze but on automated, data-based and algorithmic forms of control that are often motivated economically. They are not “salient” but “silent” (Introna and Wood 2004) and even “calm” technologies (Weiser and Brown 1997) that proliferate in everyday life and diffuse through environments. Although the relationship between visuality and surveillance is thus being transformed, images nevertheless remain an important part of post-panoptical media assemblages and their silent forms of power. Since many successful economic platforms and our everyday networked practices are image-based, an evaluation of surveillance capitalism that takes media differences seriously becomes decisive.
Aestheticization of Surveillance Capitalism
The special issue therefore aims to interrogate the manifold relationships between economic surveillance and networked images, and to identify their intersections. On the one hand, images may support the reproduction and maintenance of surveillance capitalism in several ways: Aesthetic strategies and media principles of user-generated, professional and popular images such as humour, compactness, nudity, spectacularity, cinematicity, seriality, interactivity, cuteness or emotionality can contribute to users turning to a platform, capture attention, prolong browsing times and generate the “network effects” (Srnicek 2017) necessary for the functioning of surveillance capitalism. Adding to the “attention economies” (Beller 2006; Crogan and Kinsley 2012; Franck 1998; Goldhaber 1997; Terranova 2012) and to experiential-aesthetic regulation online, they can reintroduce the logic of the “gaze”, i.e. the focused stare, into media environments of the “glance”, i.e. incidental and fleeting glimpses (Bryson 1983), or into intermediate forms of active cognitive engagement with media content such as “grazing” (Creeber 2013). As such, images can serve as incentives themselves or become part of nudging interface and website aesthetics (Mühlhoff 2018), and thereby contribute to the aestheticization of digital capitalism.
Anaesthetization of Images
On the other hand, networked images can become anaesthetized, “calm” and “silent” themselves – in a similar way to the techniques of control and surveillance: Against the background of surveillance capitalism, technological endeavours such as the internet of things (IoT), ubiquitous computing and ambient intelligence appear as attempts to expand the opportunities for data extraction and monetization. Everyday objects become sentient things that are capable of multimodal monitoring of environments and living beings, and of recording, storing and circulating the captured information. Visual data acquisition in the form of sensors, webcams or computer vision operates without drawing attention to itself. Often, not only the technologies are invisible but also the images, which are no longer destined for human viewing and remain data without ever being visually displayed (Paglen 2016; Rothöhler 2018). By being processed in machine-to-machine seeing and communication within the IoT or used as training data for computer vision applications (Crawford and Paglen 2019), networked and social media images are anaesthetized and rechanneled into an invisible “visual” culture as new economic assets (Mackenzie and Munster 2019). These “invisible image data” (Rothöhler 2021) or “invisible images” share their unobtrusiveness with algorithmic security systems such as facial recognition, which exploits the publicness of the face and produces “calm images” operating in the background without addressing the users’ conscious attention (Veel 2012).
Subjectivation in Surveillance Capitalism
Furthermore, silent and economically motivated forms of networked surveillance do not eliminate power relations and processes of subjectivation. Rather, silent and scopic forms of power are related in different ways, depending on the platforms and the images they provide: On social media platforms, forms of social control based on the visibility of the personal can hardly be separated from algorithmic sorting and recommending. These modulate visibility and invisibility as well as the associated social fears (Trottier and Lyon 2012) and thus algorithmically reconfigure scopic forms of power (Bucher 2016, 2018) and self-care (Nguyen-Trung 2020). It can be assumed that algorithmic control not only complicates or prevents the possibility of subjectivation (Cheney-Lippold 2011, 2016; Rouvroy 2013; Rouvroy and Berns 2013) but also enforces new and old ways of subjectivation. This means that categories such as gender, age, class and race, which are gaining increasing attention in surveillance studies (Dubrofsky and Magnet 2015; Browne 2015; Conrad 2009), take on special relevance for investigations of networked digital capitalism. For example, not all bodies are subjected to exposure, the economization of attention, automated censorship and content moderation in the same way on popular image-sharing platforms (Gillespie 2018; Müller-Helle 2020; Roberts 2019). Nudity, female nipples, scars, bodily fluids or pubic hair, for instance, are regularly banned from Instagram (Byström et al. 2017; Gerling et al. 2018), while TikTok has drawn negative press for shadow banning LGBTQ-related tags or suppressing black or disabled creators – raising questions about the relationship between moderation, discrimination, normalization and economics. Image sets that can be retrieved from social media platforms without compensation and are used to train algorithms are known to exhibit racial and gender bias or a lack of diversity (Buolamwini and Gebru 2018; Crawford and Paglen 2019; Gates 2014; Kember 2013; Monea 2019). On streaming platforms, the rhetoric of algorithmic personalization (Alexander 2010; Finn 2018) likewise obscures collaborative filtering and stereotypical clustering, which can reinforce gender and age biases, among others, for example by correlating gender and genre (cf. Lin et al. 2019), and so modulate specific viewer subjects (Kellogg et al. 2020).
The special issue invites the submission of papers examining such and comparable phenomena that are capable of shedding light on the role of networked images and the reconfiguration of visuality in surveillance capitalism. In particular, it focuses on the tension between a visual aestheticization of capitalism and the anaesthetization of images and/or surveillance techniques. It raises questions such as: To what extent and by means of which aesthetic strategies do images create incentives for, and stabilize, surveillance capitalism? How do they contribute to its aestheticization? How is pictoriality reconfigured in post-panoptical, ambient media environments and subjected to forms of anaesthetization? How is subjectivation produced in apparatuses of dataveillance and algorithmic control, and how are the regimes of the gaze transformed within them?
Topics can include, but are not limited to:
- The role of images in the generation of “behavioural surplus” (Zuboff 2019) and data extraction
- Images as decoys and nudges; medial and aesthetic incentive strategies
- Audience labour and modulation of viewing
- (In-)visibility as social control, and its relation to data monitoring and algorithmic sorting
- New forms of subjectivation, desubjectivation or the prevention of subjectivation in visual surveillance capitalism
- Economization of attention
- Platform politics and automated censorship of images
- AI training on user-generated images and platform capitalism
- Surveillance capitalism in popular visual media and media arts
- Gender, race, class and algorithmic control on platforms for (moving) images
- Calm images and invisible images
- Visual data acquisition in the internet of things and ubiquitous computing
- Tension between the aestheticization of surveillance capitalism and the anaesthetization of images
When submitting an abstract, authors should specify to which of the following categories they would like to submit their paper:
1. Field Research and Case Studies (full paper: 6000-8000 words). We invite articles that discuss empirical findings from studies examining surveillance and political economies in digital visual culture. These may include, for example, studies that analyze particular image platforms; address nudging and aesthetic incentive strategies; or scrutinize whether and how algorithmic personalization produces specific consumer subjects.
2. Methodological Reflection (full paper: 6000-8000 words). We invite contributions that reflect on the methodologies employed when researching data-driven and algorithmic surveillance and networked images. These may include, for example, critical evaluation of (resistance) discourses of transparency or obfuscation, algorithmic black boxing, and their implicit epistemologies of the visible; discussion of new or mixed methods, and reflections on experimental forms of research.
3. Conceptual/Theoretical Reflection (full paper: 6000-8000 words). We encourage contributions that reflect on the conceptual and/or theoretical dimension of surveillance, capitalism and images. This may include, for example, the relationship between scopic and silent forms of power and control; critical evaluation of different concepts such as surveillance capitalism, platform capitalism, algorithmic governmentality, etc.; the tensions between the aestheticization of capitalism and anaesthetization of images in data-driven media environments (e.g. due to filtering, platform censorship, calm technologies, etc.).
4. Entering the Field (2000-3000 words). This experimental section presents initial and ongoing empirical work. The editors have created this section to provide a platform for researchers who would like to initiate a discussion about their emerging (yet perhaps incomplete) research material and plans, as well as methodological insights.
Deadlines and contact information
- Initial abstracts (max. 300 words) and a short biographical note (max. 100 words) are due on: 31 March 2021.
- Authors will be notified by 19 April 2021 whether they have been invited to submit a full paper.
- Full papers are due on: 1 August 2021.
- Notifications to authors of referee decisions: 1 September 2021.
- Final versions due: 10 November 2021.
- Please send your abstract and short biographical note to: olga.moskatova@fau.de.