Research Article

‘Tele_Trust’ and ‘Touch My Touch’: co-creating social touch and trust experience through telematic platforms

Received 16 Sep 2023, Accepted 20 Feb 2024, Published online: 24 Apr 2024

ABSTRACT

‘Can telematic platforms be created to share embodied experience of social touch?’ This paper addresses the design of shared presence via telematic platforms, based on shared embodied experience of touching and feeling touched in empathic interplay.

Two Artistic Social Lab platforms, supported by telematic technologies that re-orchestrate and merge visual, haptic and facial input, are discussed and analysed. The hybrid platform ‘Tele_Trust’ (2009) makes use of smart textiles with touch sensors, a mobile phone app and urban screen technologies. The online platform ‘Touch My Touch’ (2021) is based on streaming, face-recognition and portrait-merging technologies.

The CITYO model ‘Can I Touch You Online?’ (Lancel 2023) is used to analyse and evaluate the effects of design choices on the experience of social touch. The results show that a sense of reciprocal influence (characteristic of social touch) can be established via telematic platforms, through sensory and social design for co-creation. Additional results provide insights for the future design of shared presence, through experience of social touch in reciprocal empathic interplay, via telematic platforms.

Introduction

Embodied experience of social touch, or interpersonal touch, shared in empathic processes is fundamental to our sense of presence and trust. Although telematic platforms have been designed to support touch, until recently design for social touch has been underexplored. Research into haptic technologies for this purpose has predominantly paid attention to the ‘potential performativity of technology’, and less to an ethical approach (Jewitt, Price, and Mackley Citation2020). Limited discussion has focused on ethical approaches that include designing for meaningful experience, based on empathic interplay and dialogue about the experience of social touch (Lancel Citation2023). This affects the proliferation of being reflexively ‘present’ to one another (Davis and Garcia Citation2021) in empathic processes.

‘Can telematic platforms be created to share embodied experience of touching and feeling touched?’Footnote1 is the question on which this paper focuses. Two ‘Artistic Social Labs’ (ASL), supported by telematic platforms that re-orchestrate and merge visual, haptic and facial input, are key to the approach taken in this paper. The ASL1 ‘Tele_Trust’ (2009–2017) is a hybrid, telematic platform; and the ASL2 ‘Touch My Touch’ (2020–2023) is an online ‘Streaming platform for touching each other’ (Lancel/Maat Citation2012). In both ASLs, participants act as co-researchers.

The interaction model ‘Can I Touch You Online?’ (CITYO) (Lancel Citation2023) – for technologically mediated shared social touch experience, for multiple participants – is the second key element of the approach. This prescriptive and descriptive model is used to analyse the two ASL platforms, focusing on the effects of design choices on the experience of social touch: sensory and social co-creation, in reciprocal empathic interplay.

This analysis leads to new insights on the design of reciprocal interaction and trust in intimate, empathic interplay via telematic platforms, in hybrid, mixed and merging realities.

Literature

Physical social touch has been characterized as ‘unmediated, interpersonal interaction’, that includes ‘all those instances in which people are in each other’s presence and have a reciprocal influence on each other’s actions’ and ‘in which people touch each other’ (Haans and IJsselsteijn Citation2006, 151). Others have argued that an intended message is crucial to social touch (Saarinen Citation2021).

Although social, interpersonal touch is one of the most direct forms of communication (van Erp and Toet Citation2015; Huisman Citation2017), research is still in its infancy. Until recently, interaction design for shared presence through social touch focused on what technology can do, and less on human experience (Jewitt et al. Citation2021). Often, telematic, streaming platforms facilitate seemingly seamless technological communication, as ‘perceptual illusions of non-mediation’ (Lombard and Ditton Citation1997).

In a different approach, current emerging design research places ethical aspects of control, human agency and trust centre stage. This approach introduces technologically mediated social touch interaction as a form of ‘mutual touch creation’ (Jewitt et al. Citation2021). It emphasizes the importance of a shared practice, or performativity, of mutual touch for creating awareness and social imaginaries in future ‘Technoscapes’ (Jewitt, Price, and Mackley Citation2020, 90 (citing Appadurai 1990)). Design for mutual creation is the focus of telematic platforms for shared presence (Lombard and Jones Citation2013), or ‘third spaces’ (Gould and Sermon Citation2015), that include processes of shared visual and verbal meaning-making (Lomanowska and Guitton Citation2016; Lombard and Jones Citation2013), but not in combination with performativity of social touch (Lancel Citation2020). Technologically mediated social touch interaction is essential to further understanding of this combination.

‘Technically mediated social touch’ has been characterized as ‘The ability of one actor to touch another actor over a distance, with influence on each other’s actions, by means of tactile or kinaesthetic feedback technology’ (Haans and IJsselsteijn Citation2006). Technological mediation not only disrupts direct shared physical social touch sensations, it also disrupts the ‘reciprocal’ influence on each other’s actions that is characteristic of physical social touch (Figure 1).

Figure 1. ‘Tele_Trust’ at Stedelijk Museum Amsterdam (2011). Image: Adrienne Norman © Lancel/Maat.

This absence of reciprocityFootnote2 affects the shared experience of touch interaction. Absence of reciprocal connections in time, space and context (Huisman Citation2017) can obscure the mutual identification of senders and receivers, of ‘who is touching and being touched’, and affects ethical aspects such as control, consent and trust (Jewitt, Price, and Mackley Citation2020, 215). It changes the ‘interpersonal interactions’ compared to non-mediated situations, e.g. in terms of ‘social facilitation’ and ‘inhibition’ (Haans and IJsselsteijn Citation2006, 153). It limits the real-time perception of facial expression, a key factor in empathic interaction (Iacoboni Citation2009). Overall, it challenges the empathic processes that can emerge from reciprocal interaction.

Empathic processes enable people to feel what others feel, to identify with other people’s emotions of pain and loneliness, fear and desire, and to share these emotions (Jamieson Citation2013; Turkle Citation2011; Verhaeghe Citation2018). Such empathic processes rely on entangled physiological and cognitive processes (Decety and Moriguchi Citation2007). The physiological processes emerge via mirror-neural perception (vicarious perception, e.g. through mirror-touch) and require a lowered, disrupted awareness of self-other distinction (Martin Citation2018; Ward Citation2018). However, empathic processes simultaneously also require intact awareness of self-other distinction, to enable cognitive reflection on the perceived experience (of the other’s physiological signals and behavior) (id.). Ambiguously, both simultaneous and non-simultaneous, disrupted and intact awareness of self-other distinction are crucial to empathic processes (Bollen Citation2023; Decety and Moriguchi Citation2007).

In a physiological shared embodied experience, ‘the distinction between oneself and other becomes more blurred’ (Stepanova et al. Citation2022). Synchronous social touch gestures to establish such experience have been explored in research into flexible brain and body ownership identification (Huisman, Merijn Bruijnes, and Kolkmeier Citation2013; Ma and Hommel Citation2013; Petkova and Henrik Ehrsson Citation2008; IJsselsteijn, de Kort, and Haans Citation2006). In these experiments, technically mediated, affective social touch gestures have been explored to stimulate physiological connections between actors and a virtual persona on a screen. Such affective touch gestures include, among others, caressing gestures or handshakes. Through acts of stroking the skin (performed at between 1 and 10 cm/s), these slow touch gestures have the ability to stimulate specific skin receptors that provide a sense of social connection (Björnsdotter, Morrison, and Olausson Citation2010).Footnote3
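
The 1-10 cm/s window is concrete enough to check in an interface. The sketch below, in JavaScript, is a minimal illustration (not drawn from the cited experiments or from the artworks discussed later) of how a touch surface might estimate the average velocity of a stroke from successive samples and test it against that range; the sample format and pixel-to-centimetre conversion are assumptions.

```js
// Minimal sketch (not from the cited experiments or artworks): estimating the
// average velocity of a stroking gesture from successive touch samples and
// checking it against the 1-10 cm/s range associated with affective touch.
// Each sample is assumed to hold a position in centimetres and a timestamp in
// milliseconds; conversion from device pixels to centimetres happens elsewhere.
function caressVelocityCmPerS(samples) {
  if (samples.length < 2) return 0;
  let distanceCm = 0;
  for (let i = 1; i < samples.length; i++) {
    const dx = samples[i].xCm - samples[i - 1].xCm;
    const dy = samples[i].yCm - samples[i - 1].yCm;
    distanceCm += Math.hypot(dx, dy);
  }
  const durationS = (samples[samples.length - 1].tMs - samples[0].tMs) / 1000;
  return durationS > 0 ? distanceCm / durationS : 0;
}

// A gesture counts as 'affective' stroking when its average velocity falls
// within the 1-10 cm/s window reported in the literature.
function isAffectiveStroke(samples) {
  const v = caressVelocityCmPerS(samples);
  return v >= 1 && v <= 10;
}

// Example: 3 cm covered in 0.6 s gives 5 cm/s, inside the affective range.
console.log(isAffectiveStroke([
  { xCm: 0, yCm: 0, tMs: 0 },
  { xCm: 1.5, yCm: 0, tMs: 300 },
  { xCm: 3.0, yCm: 0, tMs: 600 },
])); // true
```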

For example, in Enfacement illusion experiments (Tajadura-Jiménez et al. Citation2012) (Figure 2), actors have received caressing gestures over their faces (performed by someone else), in synchronized visuo-haptic motor data interactionFootnote4 with caressing gestures over the face of a virtual persona on a screen (the term ‘motor’ in this type of interaction refers to the physical movement of caressing, in interaction with a screen). In these experiments, actors have experienced an immersive physiological connection with the virtual persona on the screen. They have identified and even confused their own face with the virtual persona’s face.

Figure 2. Experimental set up ‘Enfacement illusion’ experiment. © Lancel/Maat.

Media performance art

Telematic platforms that support immersive physiological connections stimulated through affective touch gestures have been explored in Media Performance Art. Such artworks have interwoven physicality and technology into synthetic experience. They have explored a transformative shift of sensory interaction (Kozel Citation2007, in Sermon Citation2020) and fluid body-ownership and body boundaries (Petkova and Henrik Ehrsson Citation2008) in digital performative data spaces and information flows (Hansen Citation2012; Manuel Castells Citation2020; Salter Citation2010). These artworks have seduced the audience into different types of ambiguous interaction (Kwastek Citation2013; Lancel Citation2023). Familiar sensory connections have been disrupted, to evoke both physiological experience and cognitive reflection on this experience.Footnote5

Often, understanding of touch as one sense has been replaced by multi-sensory, synaesthetic perception of touch (Gsöllpointner, Schnell, and Schuler Citation2016) (Figure 3), as a result of neural and cognitive processes within a large, complex network (Stenslie Citation2010). For example, mirror-touch synaesthesia connects the perception of seeing someone else being touched to sensations of being touched on one’s own body (Ward Citation2018). These artworks have explored perception of touch and touch responses as perceived and processed by the brain, instead of only emerging from tactile perception ‘on the body’ (Stenslie Citation2010, 87).

Figure 3. Multi-sensory reciprocal interaction of touch including tactile, audible and visual synaesthetic connections. © Lancel/Maat.

In these artworks, a sense of embodied vulnerability has been orchestrated to provoke empathic behaviour and social bonds, calling for re-negotiation and dialogue about ethics of trust (Nevejan Citation2012; Roeser, Alfano, and Nevejan Citation2018). Artists have situated themselves in physically, psychologically and socially vulnerable relationships to potential participants, provoking them to approach, caress, hold or even abuse their bodies. These artworks provoke experience of vicarious touch interaction in the imagination, in environments that are hybrid (e.g. Cillari 2006–2009, 2016; van der Vlugt Citation2015) and/or networked (e.g. Cheang Citation1998; Stelarc Citation2015). These challenging, vulnerable interpersonal connections often evoke self-reflection and renegotiation concerning ethical values relating to issues of shared well-being, responsibility, consent, and trust (Kwastek Citation2013; Benford et al. Citation2012). Loke and Khut (Citation2014) have argued that meaningful experience of intimate and vulnerable interaction requires artists to facilitate a participant’s trajectory, through guidance and a form of reflection (‘debriefing’) on the experience.

In some cases, artworks that visually mirror and merge participants’ body representations (e.g. ‘The Mirror-Box’ (Daalder Citation2011)) have played with identification with another person on a screen through body-centred responses in visual interaction. These artworks, however, did not include the design of haptic interaction, and/or did not focus on shared experience by multiple participants and spectators as part of the performance script (Park Citation2018). Representations of haptic movements and gestures have also been shared in hybrid VR and online shared spaces, but without physically touching and being touched (e.g. Sermon Citation1992). Shared VR multiplayer spaces with visuo-tactile connections have been designed to evoke illusions extending a sense of body-ownership (e.g. Crew Citation2016; Beanother Lab Citation2012, ongoing) on the basis of live and/or pre-recorded feedback, although without participants’ autonomy to express touch gestures. Such artworks often lead to limited shared spatial embodied memory (Van der Ham and Keizer Citation2021).

In almost all artworks, the performer’s body has been instrumental to the participants’ and spectators’ physical touch interaction, as part of the interface (Fedorova Citation2020). In the case of participants performing with other participants, multiple hybrid orchestrations have explored sensations of disrupted physical proximity, in playful, kinaesthetic relations and motor interaction (e.g. Lozano-Hemmer Citation2001; Salter Citation2017; Roosegaarde Citation2022), but without physical self-touch or reciprocal touch. Lancel/Maat evoked reciprocal, intimate connections through physical (self-)caressing (Citation2012) and kissing (Citation2018; Lancel, Maat, and Brazier Citation2019) on the basis of kinaesthetic and visuo-haptic motor interaction for multiple participants, not including telematic platforms. To the authors’ knowledge there are few artistic orchestrations in which participants perform physical reciprocal touch interaction with multiple other participants, to connect via telematic platforms.

In conclusion, this paper argues that characteristics of technologically mediated shared, embodied experience of social touch via telematic platforms must include, firstly, Sensory Disruption of physical touching and being touched with Reciprocal influence (Haans and IJsselsteijn Citation2006) in empathic interplay (Decety and Moriguchi Citation2007; Gerdes and Segal Citation2009); this requires types of touch performativity that are vulnerable (Kwastek Citation2013; Benford et al. Citation2012) in relation to the Response of observing co-participants.

Secondly, Shared Reflection on the Experience is essential, which requires social and technical Hosting (Loke and Khut Citation2014) of mediated verbal and/or visual communication (Lombard and Jones Citation2013).

Research method

As Research in both Art and Design is key to this study, a Science and Technology Studies (STS) (Borgdorff, Peters, and Pinch Citation2019; Lysen Citation2019) perspective is taken. From this perspective, the objective of artistic experiments is not so much to qualify or validate something (as in an engineering or science laboratory) but to convey content, to tell something, and to provoke (critical) awareness and imagination (Lysen Citation2019).

This paper explores the design space for experience of social touch through telematic platforms for multiple participants. The interaction model ‘Can I Touch You Online?’ (CITYO) (Lancel Citation2023), depicted in Figure 4 and constructed on the basis of an extensive literature review, is used to analyse two art-science works. Sensory Disruption of interaction, and Shared Reflection on experience, are at the heart of this model.

Figure 4. Interaction Model ‘Can I touch you online?’ (CITYO) (2023) © Lancel/Maat.

The legend presents the elements of the interaction model. The model depicts an infinite loop that facilitates connections between Actors (active participants) and Co-Actors (less active participants, or potential participants, spectators in the general public).Footnote6 Reciprocal sensory, technological and social connections, through performativity and perception of social touch, are represented explicitly, including Affective Physical Touch Gestures by Actors. Responses to Sensory Disruption of the ambiguous multi-sensory, visuo-haptic motor data are perceived by the Actor, whose gestures are then shared with Co-Actors through Mirror Perception. In turn, the Co-Actors’ technologically mediated Responses are Perceived by the Actors, influencing their interaction. The interaction includes ambiguous (predictable and unpredictable; simultaneous or synchronous and non-simultaneous) connections. Shared Reflection on the experience by Actors and Co-Actors, through dialogue, is hosted.
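
Read purely as an illustration, the loop can be expressed as a simple cycle of steps. The JavaScript sketch below uses our own step names as a paraphrase of the model’s elements; it is not part of the CITYO publication.

```js
// Illustrative reading of the CITYO loop in JavaScript. The step names below
// are our paraphrase of the model's elements, not terminology taken verbatim
// from the CITYO publication.
const cityoLoop = [
  { step: 'affectiveTouchGesture', by: 'Actor' },
  { step: 'sensoryDisruption', into: 'ambiguous visuo-haptic motor data' },
  { step: 'mirrorPerception', by: 'Co-Actors' },
  { step: 'mediatedResponse', by: 'Co-Actors', timing: 'simultaneous or delayed' },
  { step: 'perceptionOfResponse', by: 'Actor' },
  { step: 'sharedReflection', by: ['Actor', 'Co-Actors'], hosted: true },
];

// The loop is infinite: after shared reflection, roles may shift and the
// cycle of gestures, disruption, mirroring and response starts again.
function* runCityoLoop() {
  while (true) yield* cityoLoop;
}
```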

The interaction model has been used to analyze the effects of interface design choices on the experience of shared embodied experience of social touch in different artworks (Lancel Citation2023). Three sources of information have been analysed: (1) observations (by the hosts) of the (Co-)Actors’ actions and reactions; (2) thick descriptions of open ended interviews with (Co-)Actors; (3) photo and short video documentation that support these observations.

The analysis in this paper focuses on new insights for shared embodied experience of social touch via telematic platforms. Firstly, it explores whether a sense of reciprocal influence can be facilitated through transfer of social touch gestures into shared, ambiguous visuo-haptic motor data interaction. Secondly, it explores whether cognitive reflection on the shared embodied experience can be shared by Actors and Co-Actors via telematic platforms, to support an empathic interplay.

Artistic social labs (ASL)

‘Tele_Trust’ (2009–2017) and ‘Touch My Touch’ (2021–2023) are two art-science works created by Lancel/Maat (Citation2000–ongoing). These works are spatial and telematic, participatory performance installations. They have been placed in public spaces to function as meeting places, ‘Artistic Social Labs’ (ASL), with the audience as ‘co-researchers’.

In poetic, performative ‘Meeting Rituals’ and ‘Trust Systems’, Lancel and Maat explore shared social touch experience: as shared, embodied, empathic and intimate touch experience in hybrid and telematic connections. Inspired by neuro-scientific insights, the ASLs propose novel ‘inter-corporeal’ systems, expanding a shared perceptual field and synaesthetic connections for multiple (Co-)Actors (Hansen Citation2012; Merleau-Ponty Citation2013) through affective touch gestures. The artists state that shared experience of social touch takes place on the skin, in the brain, and in the imagination.

Since 2000, both ASLs have been presented in cities of Europe, Asia and USA, in different cultural contexts. Venues include: World-Expo 2010 Shanghai, ‘Mobile Platform’; University of Applied Science Amsterdam (2023); Stedelijk Museum Amsterdam (2011); ISEA 2011 Istanbul, International Symposium Electronic Art; Transmediale Berlin (2016); Delft University of Technology Participatory Systems (2011); Iaspis Stockholm (2012); Theatre Festival a/d Werf Utrecht (2011); Sonic-Acts Xlll Amsterdam (2010); V2_Lab Rotterdam (2009); Kulturstiftung des Bundes at Kunstverein Frankfurt (2017); Gogbot Enschede (2012); Leonardo@ARS-ELECTRONICA Linz (2011); Waag Society Amsterdam (2009). They have been commissioned by: University Twente (2022); Festival Lumineus Amersfoort (2009); ElectroSmog Festival (2010) for De Balie Amsterdam, ADA-network New Zealand, Banff New Media Centre Canada; and by UP Projects London (2021–2022).

Connections between Actors and Co-Actors

In the ASLs, (Co-)Actors explore technically mediated social touch connections with each other, on the basis of performance scripts. These scripts are inspired by the neuroscientific experiments discussed in the literature (IJsselsteijn 2006; Tajadura-Jiménez et al. Citation2012) for individual visuo-haptic motor data interaction, based on perception of being caressed, with fluid integration of technology and disrupted self-other distinction. In these experiments, physiological body ownership identification is evoked with a virtual persona on a screen, but without the cognitive reflection needed for an empathic interplay to emerge (Decety and Moriguchi Citation2007).

The ASLs explore performance scripts for shared social touch experience, and shared identification with human and virtual others in a shared telematic space, for multiple participants. Shared performativity of self-touch and mirror-touch is explored for haptic connections with multiple tele-present others. This approach makes use of aesthetic principles of ambiguous interaction (Kwastek Citation2013), to investigate combined physiological perception and cognitive reflection (for immersive and conscious identification with others (Ward Citation2018)) in telematic interaction design.

Hosting Actors and Co-Actors

Participation in both ASLs is facilitated by the artists, in two different hosting designs. In ‘Tele_Trust’, human hosting is performed by the artists. ‘Touch My Touch’ explores non-human hosting, through an online visual interface design including a shared questionnaire. Through hosting, firstly, the ‘rules of play’ in the performances are clarified, and a sense of safety is secured (Benford et al. Citation2012). Participants can choose to be Actors (active participants) or Co-Actors (spectators, potential Actors). Actors have been carefully hosted while their private bodies are extended with face-recognition technologies, smart textile, and brain computer interfaces.

In the ASLs, Actors and Co-Actors meet through affective social touch gestures, simultaneously and over time, in visuo-haptic data interaction. They caress, embrace, hug and kiss themselves and each other, in unique multi-modal and multi-sensory syntheses for direct and virtual touch. Perception of touching the skin and of ‘being touched’, as a form of vicarious interaction (Kwastek Citation2013) with mirror-touch synaesthesia (Gsöllpointner, Schnell, and Schuler Citation2016; Ward Citation2018) form the core of the shared experience.

From their visuo-haptic interaction, sensor data are collected and presented in real time (in ‘Reflexive DataScapes’). In both ASLs, sensor data of caressing gestures are transferred and spatially staged, as digital, emergent visual portraits of the Actors and Co-Actors. In ‘Tele_Trust’, these portraits are combined with visual and audible statements, in words. The ‘Reflexive DataScapes’ are co-created in real time by Actors and Co-Actors.

In the last phase of each performance script, the host invites Actors and Co-Actors to share cognitive reflection, through dialogue. The dialogues facilitate a process of mutual attuning and ‘grounding’ (Huisman Citation2017; Clark and Brennan Citation1991). The focus of these dialogues is the shared embodied experience, in reference to both biometric data interpretation and subjective (personal) memories and imagination (Morton Citation2018).Footnote7 (Co-)Actors co-create narratives about their experience, addressing the future of touching and feeling touched (Loke and Khut Citation2014).

Artistic Social Lab 1: ‘Tele_Trust’ (2009–2017)

The Artistic Social Lab ‘Tele_Trust’ (2009–2017) (Figure 5)Footnote8 was created in the period after 9/11, in which fear of terrorism became widespread. It was launched during the global financial crisis of 2008. Both contexts triggered distrust in economic, political, technical and social systems. This changing social eco-system created a paradox: while increasingly demanding transparency, people’s physical bodies were simultaneously being covered with personal communication technology. ‘Tele_Trust’ raises questions about trust and tele-presence in public space: exploring emotional and social tension between visibility, embodied presence, privacy, surveillance and trust.

Figure 5. ASL1 ‘Tele_Trust’ (2009). Waag Society Amsterdam. Image Pieter Kers © Lancel/Maat.

Body interface for a participatory telematic system

The ASL Tele-Trust has explored social touch interaction for multiple participants, in multiple installations in public space. Its design disrupts familiar embodied connections; and replaces them with unfamiliar, immersive yet ambiguous technologically mediated connections, through a practice of touching and feeling touched.

The sensory design orchestrates wearable smart textiles (Castano and Flatau Citation2014; Phan and Thai Citation2022; Ramachandran et al. Citation2021) interwoven with touch sensors (DataVeils), mobile phone and urban screen technologies, into a DataVeil interface, through a network for ambiguous visual, haptic, and sonic interactions. The DataVeils function as a second skin, or membrane, for Actors and Co-Actors to connect.

During performances, multiple Actors, wearing the textile DataVeils, are encouraged to caress their own bodies to connect with Co-Actors. The hood of the DataVeils disrupts face-to-face connections: Actors can see Co-Actors while being visually unidentifiable. In response, Co-Actors connect with the Actors by caressing their phones, digitally revealing the Actors’ faces, and by sending them an audio message in response to the question: ‘Do you need to see my eyes to trust me? Do we need to touch each other?’. Actors caressing their bodies can hear the Co-Actors’ voices and messages, audible in their headsets.

Through caressing and responding, simultaneously and over time, (Co-)Actors co-create, semi real-time, an unpredictable, shared data composition of images (portraits) and audio statements, in a participatory, growing networked database.Footnote9 The audio messages and portraits are shared on the urban screen, visible to all Actors and Co-Actors. Stories from different cities and countries weave together into a shared narrative and distributed tapestry – with the artistic intention to create an engaging agoraFootnote10 founded on the notion of trust.

After wearing the DataVeil, Actors are invited to reflect on the experience with the Host, staged for the Co-Actors around them. The DataVeil design and interaction have provoked intense dialogues.

The textile full-body wearable DataVeil (Figure 6) is interwoven with touch sensors that connect Actors with Co-Actors. The sensors, woven into the veil’s fabric, connect via WiFi to the online ‘Tele_Trust’ platform, to which the smartphone app, screen projections and database are connected. The hand depicted on the phone’s screen (Figure 6, at the right side) caresses the screen to visually ‘unveil’ a digital portrait (the Actor’s veiled face), after which a message can be sent.
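
As an illustration of this architecture, the JavaScript sketch below shows how veil sensor readings and phone-app caressing might be streamed to a shared platform; the WebSocket endpoint, message format and unveiling rule are hypothetical and not the actual ‘Tele_Trust’ implementation.

```js
// Hypothetical sketch of the DataVeil architecture in JavaScript: the WebSocket
// endpoint, message format and unveiling rule are assumptions for illustration,
// not the actual 'Tele_Trust' implementation.
const socket = new WebSocket('wss://example.org/tele-trust'); // assumed endpoint

// Veil side: publish each sensor activation as a small event (assumes the
// socket is already open).
function onVeilSensorTouched(sensorId, pressure) {
  socket.send(JSON.stringify({ type: 'veil-touch', sensorId, pressure, t: Date.now() }));
}

// Phone-app side: caressing the screen accumulates 'unveil' progress, fading
// out the overlay that hides the Actor's portrait and reporting the progress
// back to the platform.
let unveilProgress = 0;
function onPhoneCaress(strokeLengthPx) {
  unveilProgress = Math.min(1, unveilProgress + strokeLengthPx / 5000);
  const overlay = document.getElementById('portrait-overlay'); // assumed element id
  if (overlay) overlay.style.opacity = String(1 - unveilProgress);
  socket.send(JSON.stringify({ type: 'unveil-progress', value: unveilProgress }));
}
```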

Figure 6. Schematic representation of the DataVeil with smart textile and the phone screen app with a caressing hand. Image © Lancel/Maat.

The spatial participatory design (Figure 7) includes (1) an Actor wearing a textile DataVeil; (2) an urban screen that shows digital portraits from the database (containing texts and portraits of current and previous Actors); and (3) Co-Actors using the phone app.

Figure 7. The Tele_Trust spatial interaction design. Image © Lancel/Maat.

In the ‘Tele_Trust’ haptic interaction design, identification of senders and receivers remains ambiguous. Through caressing their DataVeiled bodies, Actors ambiguously, randomly unveil the visual portraits of current and previous Actors on the electronic screens, randomly combined with transcriptions of the auditory messages.

In the first design, Actors could hear Co-Actors’ responses to the question ‘Do you need to see my eyes to trust me?’ through their headsets, and see the transcribed audio responses on the screen. The responses were based on previous (anonymous) merged recordings (stored in the database). This first orchestration did not achieve the shared experience of immersion and connection for which it was designed. Although Actors perceived immersion, less immersion was experienced by co-located Co-Actors. Interdependent performative connections between them were not established.

In the second design, the Co-Actors’ responses were socially and technically mediated by an interviewing Host during the performance. Co-located Co-Actors were facilitated to directly send audio responses to the database anonymously, audible semi real-time for Actors in their headsets. As a result, Co-Actors stayed with the Actors moving around, observing their caressing gestures and discussing the events with other co-located Co-Actors, while the mediated audio responses were visible on the screen.

In the final design (Figure 8), a smartphone app was used to transfer the Co-Actors’ audio responses. Co-Actors only expressed connections with Actors if a visual representation of the Actors’ faces accompanied the app’s text environment.

Figure 8. ASL1 ‘Tele_Trust’ (2010). Banff Center Canada. Image © Lancel/Maat.

The table below presents the ‘Tele_Trust’ performance script, which includes five phases.

Results

‘Tele_Trust’ has been performed in more than 21 exhibitions internationally in which approximately 450 Actors have worn the DataVeils and thousands of Co-Actors participated.

(Co-)Actors’ responses

On many occasions, the Host observed that at first (Co-)Actors expressed discomfort in caressing their bodies while being observed by others. However, when the DataVeil was covering their faces, they almost always started caressing their bodies. They walked around, and some DataVeil wearers even walked to different parts of cities or took a bus. They were always followed by a Host to ensure their bodily and social safety. Actors typically wore the DataVeil for between 20 minutes and an hour. After wearing the DataVeil, the Host took time to discuss their experience, as a part of the performance script. The dialogue was staged, and often Co-Actors stood close by to listen to the described experience. On a number of occasions, Actors’ responses have been recorded on video, after wearing the DataVeil. A selection of these responses is described below, with a focus on the shared embodied experience of touching and feeling touched, in line with this article’s research question. These responses are typical of many of the dialogues that took place (Figure 9).

Figure 9. ASL1 ‘Tele_Trust’ (2010). Telematic connections between Banff Canada – Amsterdam – Dunedin New Zealand, via DataVeils on each location. Commissioned by ‘ElectroSmog’, De Balie Amsterdam; and the Banff Center Canada. Image © Lancel/Maat.

Actors’ responses recorded on video include expressions of immersive, embodied experience, among others: ‘Back in the world … You are with someone, with those voices, but really you are in another world.’ (De Balie, Amsterdam 2011). Immersive experience triggering a sense of confusion about the self-other distinction has been expressed in statements such as: ‘It feels important to stay in it for a long time. The voices entering are almost like my own thoughts, but much clearer.’ (V2 Rotterdam 2009); and ‘I experienced a lot of voices (…) possibly even different levels of reality, transcending space, and distance’ (Banff New Media Centre Canada, 2010). Physical experience of being in a safe environment was expressed, among others, in: ‘It does feel safe with the hood over my head, like a blanket to cover my face.’ (V2 Rotterdam 2009). Shared embodied experience with others has been described as: ‘You touch yourself, to get in contact with someone else. So you are thrown back on your own body to make contact.’ (De Balie, Amsterdam 2011); and

‘I was aware that I was touching, that I was looking and searching on my body, to make this weird connection. Kind of unsettling in a way, that all of a sudden you will get a sensation at a nerve point that will deliver someone’s else’s voice.’ (Banff New Media Centre Canada, 2010).

Actors expressed to the Host (not on video): ‘When I touch myself, I am together with others, when I hold off, I am alone’ (De Balie, Amsterdam 2011); and ‘I could hear your voice in my skin. I remembered you remembering. My body is your body.’ (Banff Media Centre Canada, 2010).

In contrast, sometimes a radical individual point of view was expressed to the Host (not on video), for example in: ‘I liked this feeling, I can see others and they cannot see me. Like I am a walking surveillance monitor.’ (Lumineus Amersfoort 2009).

Co-Actors have expressed to the hosts that they feel connected with Actors if, firstly, they can make a statement that is added to the database, thus influencing the Actors’ experience while caressing their bodies; and secondly, if the Actors’ portraits are visible in urban space and on the smartphone screens.

Discussion ASL1 ‘Tele_Trust’

The analysis of ASL1 makes use of the CITYO Interaction Model (Figure 10), which is based on the elements of Sensory Disruption and Shared Reflection.

Figure 10. CITYO Interaction Model for ASL1 ‘Tele_Trust’ (2023) © Lancel/Maat.

Overall, visual connections between Actors and Co-Actors are direct. However, visibility of the Actors’ emotional facial expressions is disrupted (by a veil) for the Co-Actors. Actors visibly share self-body-caressing gestures with Co-Actors. The Actor’s individual perception of self-caressing is sensorily disrupted through a ‘DataVeil’ interface. The caressing gestures are transferred to an unpredictable data-sonification (individually perceived) and data-visualization (with shared visibility to all). Co-Actors can reciprocally and simultaneously respond with audio (through haptic gestures of direct screen caressing). Their audible responses are perceived by Actors semi real-time. Over time, together, (Co-)Actors realize unique, unpredictable, shared, composed data-visualizations (of Actors’ portraits) and transcriptions of auditory responses (by Co-Actors) on the screens.

Shared Reflection takes place through human hosted Dialogue with Actors, Staged to Co-Actors.

Insights

Insights from the ASL1 ‘Tele_Trust’ platform are that shared embodied social touch experience can be facilitated if the multi-sensory connections that influence each other are interdependent, or replace each other.

In ASL1, reciprocal caressing can be partly replaced by sensored self-caressing. Connection with direct, tactile perception of touch can be partly replaced by vicarious interaction, through mirror-perception and shared feedback data. Auditory connections can partly replace immediate technically mediated feedback of self-touch, and responses by Co-Actors. Disrupted direct visual connections (with the DataVeiled face) between all can be partly replaced by semi-unpredictable visual and transcribed audio connections created and shared by all. An audio-visual data repository is needed to share data representations and responses over time.

Co-Actors only feel part of the experience, and reflect on the experience, if they can influence the Actor’s experiences. The interaction relies on co-creation of a real-time emerging shared data composition shared by all.

Notes on the visual design process

The design for a full-body veil for physical public space was chosen to strengthen a sense of immersive, enclosed embodied experience. Different cultures have inspired the visual DataVeil design: a burqa, a monk’s habit, the film character Darth Vader.

Design parameters were first discussed with the Muslim Women Group ‘Jasmijn’ in Groningen. Five women were approached as experience specialists in wearing veils. Over the course of two months, four discussions took place addressing the question: ‘What should an ideal veil look like?’. The sometimes fierce discussions focused on visual, familial, social, spiritual, political, psychological and physical aspects of wearing a veil. Six design parameters were identified as being essential.

Firstly, the DataVeil must be Inclusive. This parameter specifically means resisting the repression of women, or any repression. It should promote the inclusive performance of autonomy, agency and power. It must fit anyone, which leads to the second and third parameters: the DataVeils must be Multi-Gender, and One Size Fits All.

Fourthly, the DataVeils must be Easy to Wear and not restrict spatial or bodily movements. It also must be easy to take off at any moment.

As a fifth parameter, the textile should be beautiful and comfortable, and inviting to touch.

The sixth and final parameter concerns the artistic context in which the DataVeils are presented. It dictates that the veil should Not Stand Out Visually in Public Space: instead of positioning the DataVeils as theatrical entertainment, they should manifest as proposals for ways of living together in the world of everyday communication.

These parameters were discussed with designer AZIZ in Amsterdam for the design of six DataVeils. The six veils are made of high-quality soft wool used for business suits (97% wool, 3% elastane). This textile could be related to non-transparent business behaviour and the economic crisis, raising questions around sharing trust. The DataVeils’ sober colours mingle visually with grey and black colours in city public spaces (Figures 11 and 12).

Figure 11. ASL1 ‘Tele_Trust’ (2010). DataVeils in the city public space of the Amsterdam Central Train Station. Image © Lancel/Maat.

Figure 12. ASL1 ‘Tele_Trust’ (2009). Amersfoort. Image © Lancel/Maat. Commissioned by Lumineus Amersfoort. In the historical city centre, the DataVeil was explored as a novel control device, replacing the medieval city wall by a hybrid surveillance ‘Dataveil Network’.

Figure 13. ‘Touch My Touch’, performance Installation (2022), University of Applied Science Amsterdam (HvA). Image © Lancel/Maat.

The DataVeils fit all sizes and all genders. They are easy to put on and wear, and also easy to take off: everyone can wear a DataVeil.

Artistic Social Lab 2: ‘Touch My Touch’

‘Touch My Touch’ (2020-2023)Footnote11 was created in the context of the COVID pandemic, emergent bodily isolation and ‘skin hunger’. Online streaming platforms became a natural part of our homes, limiting our communication to the audio-visual senses without touching each other. Touching and feeling touched, seeing each other touching and teaching to touch, seemed increasingly obsolete.

ASL2 ‘Touch My Touch’ is a streaming platform for ‘online touching each other’, for two (Co-)Actors. www.TouchMyTouch.net makes use of streaming technology (comparable to ZOOM or TEAMS), face-recognition and merging technologies.Footnote12 The platform supports streaming visual, sonic and haptic connections.

Via the streaming platform, firstly, (Co-)Actors make a photo-portrait that becomes visible on their individual screens. These portraits are then over-layered with facial recognition points, ‘like a veil’. Together, they can ‘unveil’ their mutual photo-portraits from the facial recognition points (Figures 14a,b and 15). While the Actor slowly caresses their own face in front of the screen, simultaneously, the Co-Actor caresses their own screen by synchronously moving over the Actor’s photo-portrait (thus removing the facial recognition points). They then shift roles. Finally, both unveiled photo-portraits on the screens ‘merge’ into a unique, unpredictably composed Virtual Persona, which becomes part of an online ‘community’ of Virtual Personas.

Figure 14. a,b. www.TouchMyTouch.net (2020); reference to the Performance Script © Lancel/Maat.

Figure 15. www.TouchMyTouch.net (2020); reference to the Performance Script © Lancel/Maat.

In reference to the literature discussed above in this paper (Tajadura-Jiménez et al. Citation2012), visuo-haptic motor data interaction with a digital persona on a screen is established through face-caressing. In contrast, here two (Co-)Actors meet each other in the roles of ‘caresser’ (Actor) and ‘co-caresser’ (Co-Actor). Actors self-caress their own faces.

The technical set-up of the platform has been built with the Microsoft Teams Azure network protocol, using open-source TensorFlow JavaScript technology for face recognition. Face tracking and face rendering have been built in JavaScript and C++.
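
A minimal sketch of such a pipeline is given below: it overlays facial recognition points on a video frame using the open-source TensorFlow.js face-landmarks-detection model. The package choice and drawing logic are illustrative assumptions, not the platform’s published implementation.

```js
// Minimal sketch of a face-landmark overlay with the open-source TensorFlow.js
// face-landmarks-detection model; package choice and drawing logic are
// illustrative assumptions, not the platform's published code.
import '@tensorflow/tfjs';
import * as faceLandmarksDetection from '@tensorflow-models/face-landmarks-detection';

async function overlayFacePoints(video, canvas) {
  const detector = await faceLandmarksDetection.createDetector(
    faceLandmarksDetection.SupportedModels.MediaPipeFaceMesh,
    { runtime: 'tfjs' }
  );
  const ctx = canvas.getContext('2d');

  async function drawFrame() {
    const faces = await detector.estimateFaces(video);
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    if (faces.length > 0) {
      // Each detected landmark becomes a small dot: the 'veil' of facial
      // recognition points layered over the photo-portrait.
      for (const point of faces[0].keypoints) {
        ctx.fillRect(point.x, point.y, 2, 2);
      }
    }
    requestAnimationFrame(drawFrame);
  }
  drawFrame();
}
```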

Ethical GDPR protocols for consent and privacy have been applied. No online sound or images are saved nor distributed. Only the last 20 undecodable, anonymous, merged digital personas are saved temporarily. Inclusiveness and merging of skin colours are subject to dedicated design.
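
The retention rule can be expressed as a small bounded buffer; the JavaScript sketch below is our illustration of that rule, not the platform’s storage code.

```js
// Illustration of the stated retention rule (not the platform's storage code):
// keep only the last 20 anonymous, merged Virtual Personas and discard older ones.
const MAX_PERSONAS = 20;
const recentPersonas = [];

function storeMergedPersona(personaImageBlob) {
  recentPersonas.push(personaImageBlob);
  if (recentPersonas.length > MAX_PERSONAS) {
    recentPersonas.shift(); // drop the oldest persona
  }
}
```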

ASL2 invites (Co-)Actors to co-create a shared touching experience, and to share reflection on the future of touching distant friends, family, lovers, and strangers.

The visual design of the streaming platform presents three circles (Figures 14a,b and 15). The left and right circles are dedicated to streaming interaction between both (Co-)Actors; and the middle circle to the shared data visualization (the portraits).

The table below presents the ‘Touch My Touch’ performance script that distinguishes five phases with a brief description of each.

A more detailed description of Phases 3, 4 and 5 is presented below.

In Phase 3 of the performance script, depicted in Figure 14a, the central circle shows the photo-portrait of the Actor (caresser), who is visible in the right circle. The Co-Actor (co-caresser) is visible in the left circle. The photo-portrait is over-layered with facial recognition points, ‘like a veil’.

During Phase 3, the Actor directly, softly and slowly caresses their own face. Simultaneously, the Co-Actor caresses the Actor’s photo-portrait on the screen, in movements that mirror and synchronize with the Actor’s slow gestures. This shared form of caressing results in ‘unveiling’ the facial recognition points from the photo-portrait in the middle circle, making the portrait clearly visible. Actors and Co-Actors then shift roles (Figure 14b): the person in the right circle becomes Co-Actor, and the person in the left circle becomes Actor.
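
One way to picture this unveiling step is to remove recognition points that lie close to the Co-Actor’s tracing position; the JavaScript sketch below is our illustration, with an assumed radius and event wiring rather than the platform’s actual logic.

```js
// Illustration of the unveiling step (our sketch, not the platform's code):
// recognition points near the Co-Actor's tracing position are removed, so
// synchronized caressing gradually clears the 'veil' of points.
const UNVEIL_RADIUS_PX = 25; // assumed radius around the traced position

// `points` is the current veil: an array of {x, y} facial recognition points.
function unveilAt(points, pointerX, pointerY) {
  return points.filter(
    (p) => Math.hypot(p.x - pointerX, p.y - pointerY) > UNVEIL_RADIUS_PX
  );
}

// Wiring to the Co-Actor's screen caressing (mouse or touch movement):
let veilPoints = []; // filled from the face-landmark detection step
document.addEventListener('pointermove', (e) => {
  veilPoints = unveilAt(veilPoints, e.offsetX, e.offsetY);
  // When few points remain, the photo-portrait counts as 'unveiled'.
});
```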

In Phase 4 of the performance script (Figure 15), the central circle shows a portrait that has been created by merging the caressed facial parts of the two (Co-)Actors in the left and right circles into a shared Virtual Persona. The (Co-)Actors can choose to agree to a final merging process, in which both portraits merge in averaged (50-50%) balance. A black button at the left gives access to download the image of the Virtual Persona to the (Co-)Actors’ personal computers.
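
The agreed 50-50% merge can be pictured as a per-pixel average of the two portraits; the JavaScript sketch below is a simplification of the platform’s merging of caressed facial parts.

```js
// Illustration of the final merge as a 50-50% per-pixel average of the two
// portraits on a canvas; the platform's merging of caressed facial parts is
// more selective than this simplification.
function mergePortraits(canvas, imageA, imageB) {
  const ctx = canvas.getContext('2d');
  const { width, height } = canvas;

  ctx.drawImage(imageA, 0, 0, width, height);
  const a = ctx.getImageData(0, 0, width, height);
  ctx.drawImage(imageB, 0, 0, width, height);
  const b = ctx.getImageData(0, 0, width, height);

  // Average each RGBA channel in equal (50-50%) balance.
  for (let i = 0; i < a.data.length; i++) {
    a.data[i] = Math.round((a.data[i] + b.data[i]) / 2);
  }
  ctx.putImageData(a, 0, 0);
}
```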

In Phase 5, a list of questions in the middle circle encourages (Co-)Actors to start a dialogue, to share reflection on the embodied experience of the performance. The list of questions is explored to replace human-hosted dialogue; three of its questions focused on the experience of mutually (self-)caressing the skin and tracing the caressing gestures:

Q1. ‘How did you find the experience of caressing your skin?’

Q2. ‘How was your experience when tracing your partner’s caressing movements?’

Q3. ‘Describe the experience of synchronizing the caressing of your face with your partner’s tracing on the mousepad.’

Results

The ASL2 has been tested during two online workshops (‘Touch Labs’, UP Projects London, in the ‘This is Public Space’ seriesFootnote13) and two workshops with touch screens integrated in a physical installation (University Twente (UT) and University of Applied Science Amsterdam (HvA)). The results presented below originate from (1) the second Touch Lab, (2) the workshop at the UT and (3) the workshop at the HvA.

1. (Co-)Actors’ responses: Online Touch Lab, UP Projects London

In the online Touch Lab, sixteen (Co-)Actors from the UK, Netherlands, and USA, who had been personally invited to participate with a partner, entered the online interaction flow (note that at this stage in the development, interaction was only possible with a mousepad, not yet on a touch screen). Eleven of these (Co-)Actors responded to a questionnaire. Responses were given in English; no translation took place. Their responses are based on their subjective experience of being a (Co-)Actor. Analyses of these responses are described below,Footnote14 with a focus on shared embodied experience, of touching and feeling touched, supported by a telematic platform.

Analysis of the responses shows that nine of the eleven (Co-)Actors mention the word ‘intimate’Footnote15 or words related to intimacy,Footnote16 often in relation to Q1 about the experience of ‘skin-caressing’, for example in ‘it was intimate, maybe a bit awkward but also very open’; ‘It felt intimate and friendly’; and ‘It felt quite intimate, especially because I did not know my partner too well’. They expressed empathic engagement in relation to the other (Co-)Actor, through words that refer to a sense of ‘self-revealing’, ‘self-disclosure’, and ‘feeling vulnerable’. Ambiguous empathic connections have been expressed as ‘uncomfortable’, ‘awkward’, ‘strange’, ‘weird’.

Responses to tracing and synchronizing with a (Co-)Actor’s caressing movements on the screen (Q2) led to expressions of intimacy and empathy in seven of the eleven responses: ‘Nice to properly see her face’ and ‘interesting to see my partner touching her face, I was wondering how she was feeling’; and of pleasure, as in ‘strange but fun movement’, with shifting attention between the act of tracing movements on the screen and watching the other person. Five responders expressed enhanced shared awareness and emotional ambiguity in relation to the caressing interaction.

Three of the responses specifically describe confused perceived body ownership. (Co-)Actors describe confusion between perceiving their own body and the partner’s or someone else’s body. Co-Actors described the experience as ‘Quite intense, almost accompanied by a feeling of embodiment as my partner’s fingers as my own.’, while Actors expressed: ‘Like I was someone else touching my face.’ and ‘Almost like I was touching someone else’. This is in line with findings of body ownership identification experiments discussed in the literature (Petkova and Henrik Ehrsson Citation2008; Tajadura-Jiménez et al. Citation2012; IJsselsteijn, de Kort, and Haans Citation2006).

In conclusion, analysis of the first Touch Lab questionnaire data on the experience of shared embodied gestures shows that, firstly, 9 of the 11 responders mention words (related to) ‘intimate’, while 5 responders express enhanced shared awareness. Secondly, the analysis shows that ‘shared embodied experience’ was expressed in 5 of the 11 responses. In a total of 5 cases (3 questionnaire responses and 2 host observations), motor agency was confused between the participant’s own and the partner’s actions, in line with the above-mentioned body-ownership identification experiments.

2. (Co-)Actors’ responses: University Twente (UT)

During 2022–2023 the telematic platform was integrated in a physical installation, to explore the effect of interfering with caressing gestures in physical public space. The installation was designed to enable interaction both in physical proximity and online: between Actors, Co-Actors and spectating bystanders, as described below.

At University Twente (Figure 16), the ‘Touch My Touch’ installation consisted of two transparent plexiglass boxes of 1.5 metres (the regulated social distance during the COVID-19 pandemic), with integrated touchpads for online communication on both sides. Actors and Co-Actors could see each other both online and in the physical space. Interaction between approximately 26 participating couples was observed by a Host and recorded on video.

Figure 16. ASL 2 ‘Touch My Touch’ performance Installation at the University Twente (2021). Two (Co-)Actors meet via the online platform. Image © Lancel/Maat.

In contrast to the previous orchestration, which took place only online, (Co-)Actors were observed to seek visual connections with each other in the physical space, limiting their focus on the online interaction. Emotional experience, such as intimacy or confusion of self-other distinction, was not expressed in words nor observed in other ways. Although the spatial installation was described by (Co-)Actors as aesthetically attractive, they described the plexiglass boxes to the host as ‘cold’, ‘hard’ and not stimulating to socially caress. Nevertheless, the Host often observed the (Co-)Actors’ socially distant movements changing, right after the performance, into playfully touching each other’s and their own faces, hugging and embracing.

3. (Co-)Actors’ responses: University of Applied Science Amsterdam (HvA)

In response to the insights and observations at the University Twente, at the University of Applied Science Amsterdam (Figure 13) the installation was covered with felt. The felt disrupted physical face-to-face connections. (Co-)Actors could see each other’s faces only via the online platform.

The felt was soft and red, designed to evoke sensitivity to tactile connection. Big, white letters presenting the question ‘Can I touch you online?’ were visible from a great distance. In contrast to the first installation, the Host observed students talking and pointing at the installation, and walking to it from far away to ask what it was about. Or, in the absence of the host, they were observed from a distance touching the felt and starting to interact with the touch screens.

Twenty Actors participated in this second Touch Lab. In comparison to the first Touch Lab, some expressed to the Host their engagement in the role of Co-Actor (tracing the Actor’s self-caressing gestures on the screen), in: ‘It felt more intimate to watch caressing.’; ‘I found it a bit scary to see her caressing.’; and ‘A bit weird to go over his face, normally you don’t do that, I got to get used to it.’. A few times, Co-Actors described to the host confusion about their sense of body ownership, in ‘I lost understanding about whether my fingers were mine or caressing his face.’, in line with findings of body ownership identification experiments in the literature (Petkova and Henrik Ehrsson Citation2008).

Discussion ASL 2 ‘Touch My Touch’

The analysis of ASL2 makes use of the CITYO Interaction Model (Figure 17), based on the elements of Sensory Disruption and Shared Reflection.

Figure 17. CITYO Interaction Model for ASL2 ‘Touch My Touch’ (2023) © Lancel/Maat.

The online telematic platform enables visual and audible connections between Actors and Co-Actors, facilitating mutual visibility of facial and audible emotional expression. Actors visibly share face-caressing gestures with Co-Actors.

The Actor’s individual perception of direct self-touch, through affective self-caressing gestures, is sensorily disrupted through the Virtual Persona interface. Reciprocal Response by the Co-Actor takes place through simultaneously caressing the screen, in synchrony with the Actor’s caressing gestures, leading to a predictable shared data-visualization. They then shift roles. Finally, based on their separate data, Actors and Co-Actors realize a unique, unpredictable shared data-visualization, of portraits merging into Virtual Personas in unpredictable ways. Shared reflection, through Shared Dialogue, takes place between the (Co-)Actors.

Insights

Insights are that interdependent performativity of co-creation through shared gestures is crucial to the shared embodied experience. Both (Co-)Actors must perceive ambiguous, direct, and technically mediated connections, and shared visual perception of affective touch gestures, in congruent, simultaneous, and shared synchronizing movements, for shared (immersive) embodied connections to emerge.

Vital to the direct and vicarious affective touch perception is shared, simultaneous performance of synchronous touch gestures and biofeedback. In some cases, this leads to an ambiguous sense of shared performativity, and a sense of transfer of motor agency (discussed in the literature (Tajadura-Jiménez et al. Citation2012)).

Autonomous and intentional performance by two Actors of affective touch gestures (self-caressing the face), must lead to shared biofeedback data, in a shared space.

Having the potential of mutually gazing at each other’s faces (and thus observing each other’s facial, affective expressions), and verbal negotiation, is vital to empathic interaction, discussed in the literature (Iacoboni Citation2009).

The results show that shared dialogue can take place between the (Co-)Actors, supported by an online questionnaire (non-human host). The dialogues were, however, less focused than dialogues facilitated by a human host.

Figure 18. ASL2 images of research and development of face recognition points (2020). Image © Lancel/Maat.

Integration of the telematic platform in a physical space has been shown to be possible, but requires a design that stimulates tactility (e.g. soft textile, such as felt), and requires that (Co-)Actors are visible to each other only in the telematic space.

Conclusion and future research

‘Can telematic platforms be orchestrated to facilitate shared embodied experience of touching and feeling touched?’ Until recently, limited attention has been paid to the ethical design of telematic shared social (interpersonal) touch experience (Lancel Citation2023).

This paper addresses telematic platform design for shared embodied experience of social touch in empathic interplay. The interaction model ‘Can I Touch You Online?’ (Lancel Citation2023) was used to analyse two Artistic Social Labs (ASLs) supported by telematic platforms. The first ASL ‘Tele_Trust’ (2009–2017) connects smart textile wearables with touch sensors (DataVeils) to a mobile phone app, urban screen technologies and a database, for multiple participants. The second ASL ‘Touch My Touch’ (2021–2023) is an online ‘streaming platform for touching each other’ that re-orchestrates streaming, face-recognition and merging technologies, for two participants. The analysis shows the importance of platform design that supports shared embodied experience of social touch, in ambiguous (predictable and unpredictable; simultaneous (or synchronous) and non-simultaneous) connections. It shows that support for reciprocal influence through touching and feeling touched is essential, in a vulnerable, empathic interplay.

Insights are that a sense of reciprocal influence, characteristic of social touch, can be established via telematic platforms through support for shared embodied experience of co-creation. Crucial are performance scripts for co-creation through affective self-touching and co-touching gestures (ASL1, 2), which lead to real-time emerging Shared Data Compositions (ASL1, 2). The performance of co-creation through the platform must rely on disrupted haptic connections that are (partly) replaced by other interdependent sensory connections between participants (ASL1, 2).

Furthermore, in line with the interaction model, the performance scripts and supporting platforms must include shared reflection on the co-created experience, through hosted dialogue. An online questionnaire can support non-human hosting (ASL2), but leads to less focused dialogues than those facilitated by human hosts (ASL1).
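
As an illustration of such questionnaire-based, non-human hosting, the sketch below shows how a fixed set of reflective prompts could structure the shared dialogue after a session. The prompts are hypothetical placeholders, not the actual ASL2 questionnaire items.

```python
# Hypothetical sketch of a non-human host: a scripted questionnaire that asks
# reflective questions and collects the (Co-)Actors' shared answers.
REFLECTION_PROMPTS = [
    "Did you feel you were touching, being touched, or both?",
    "At which moment did you sense the other person influencing your gestures?",
    "Would you describe the merged experience as yours, the other's, or shared?",
]

def host_dialogue(answer_fn):
    """Ask each prompt in turn; answer_fn maps a prompt to the participants' answer."""
    return {prompt: answer_fn(prompt) for prompt in REFLECTION_PROMPTS}

if __name__ == "__main__":
    # Command-line stand-in; a web questionnaire or conversational agent could
    # supply answer_fn instead.
    responses = host_dialogue(input)
    for prompt, answer in responses.items():
        print(f"{prompt}\n  -> {answer}")
```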

This research into shared experience of social touch differs from research that focuses on individual immersive (physiological) experiences, or that does not facilitate shared reflection. Future research will focus, firstly, on the potential and quality of empathy when software agents in complex systems are part of the interaction; and secondly, on an AI application with non-human hosting (e.g. a conversational agent) to support the dialogue.

The results provide a solid foundation for discussions on ethics, empathy and body ownership, reciprocity and responsiveness, autonomy and interdependency, in (neural) multi-actor networks and telematic platforms.

Ethical approval

Ethical approval for this article and research has been granted by the Delft University of Technology’s Human Research Ethics Committee (2023). The TU Delft has granted this approval on the basis of the authors interviewing the artists about their observations. The authors are in possession of appropriate, written and video-recorded informed consent provided by participants, to cite their responses and use their images relevant to this research in this article.

Acknowledgements

The authors would like to thank Prof. dr. Jan van Erp (University of Twente); Frank Kresin and Prof. dr. Martijn de Waal (University of Applied Sciences Amsterdam); Erik Kluitenberg (KABK Art Academy The Hague); Prof. dr. Caroline Nevejan (UvA Amsterdam); Lucas Evers (De Waag Society Amsterdam); Moira Lascelles and Lili-Maxx Hager (UP Projects London); and the TU Delft (Participatory Systems Lab), for their contribution to joint research and development.

The technical research, production, development and presentation of the telematic platforms were made possible in collaboration with V2 Lab Rotterdam, Banff Canada (Research Lab ‘Liminal Screens’ by Nina Czegledy and Marcus Neustatter), AZIZ Amsterdam, Jasmijn Groep Groningen, Eagle Science Software Amsterdam and Mart van Bree.

The authors thank Mondriaan Fund, Arts Council England, AFK Amsterdam Fund for the Arts, and the Canada Arts Council for their generous grants. With many thanks to UP Projects London, Lumineus Amersfoort, De Balie Amsterdam, and Banff Center Canada for their commissions to create the art works; and to Eagle Science Software Amsterdam for sponsorship.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Karen A. Lancel

Dr. Karen Lancel is one half of the artist and science researcher duo Lancel/Maat, who work interdisciplinarily across art, computer technology and society. They are considered pioneers in exploring shared experience of embodiment, empathy and intimacy, identity, privacy and trust, in bio-technical entanglement with (non-)human others, in sustainable ecologies. Their prize-winning artworks are presented internationally (Venice Biennale, Ars Electronica Festival), and include visual art, media art, (large-scale) participatory performances and spatial installations, theater and internet art, along with (keynote) lectures, academic affiliations and publications.

Hermen J. Maat

Hermen J. Maat is one half of the artist and science researcher duo Lancel/Maat, who work interdisciplinarily across art, computer technology and society. They are considered pioneers in exploring shared experience of embodiment, empathy and intimacy, identity, privacy and trust, in bio-technical entanglement with (non-)human others, in sustainable ecologies. Their prize-winning artworks are presented internationally (Venice Biennale, Ars Electronica Festival), and include visual art, media art, (large-scale) participatory performances and spatial installations, theater and internet art, along with (keynote) lectures, academic affiliations and publications.

Frances M. Brazier

Frances M. Brazier is a Dutch computer scientist, known as one of the founders of NLnet, the first Internet service provider in the Netherlands and one of the first in Europe. She is a professor in Engineering Systems Foundations at the Delft University of Technology, where her research concerns multi-agent systems and participatory systems design.

Notes

1 Parts of this paper have been previously published in the PhD thesis written at the Delft University of Technology: Lancel, Karen (Citation2023), ‘Can I Touch You Online? Embodied, Empathic, Intimate Experience of Shared Social Touch in Hybrid Connections’ (https://pure.tudelft.nl/ws/portalfiles/portal/151902141/CanITouchYouOnline_LancelCompresed.pdf). This paper extends the thesis with insights into design for telematic platforms.

2 The term ‘reciprocity’ has been defined as “shared, felt, or shown by both sides, with mutual dependence, action, or influence” (https://www.merriam-webster.com/dictionary/reciprocity; https://www.dictionary.com/browse/reciprocity).

3 In line with the “Social Touch Hypothesis” (Björnsdotter, Morrison, and Olausson Citation2010; Saarinen Citation2021), C-tactile (CT) afferents in human hairy skin react to soft, stroking touches, which are perceived as a route to social communication and body awareness. The perceived pleasantness, comfort and safety evoked by affective touch gestures has been related to the licking and stroking gestures with which mother animals cherish and clean their new-born offspring, influencing touch perception later in life (van Erp and Toet Citation2015).

4 Often the term ‘visuo-haptic motor data interaction’ (IJsselsteijn, de Kort, and Haans Citation2006) has been used.

5 Many of these art works extend the changing relations between the self, the other and the world at the level of the interface (Fedorova Citation2020), for a sense of what has been described as “digitally embedded love” (Ascott Citation2003; Malinowska and Gratzke Citation2017; Shanken Citation2001) with (non-)human others (Haraway Citation2016; Latour Citation1990).

6 In this context ‘acting’ does not refer to the notion of performance as role playing, but instead to “performativity” (Butler 1990), which is considered to be a repetitive act designed for public spaces, to share reflection on social engagement (Lancel, Maat, and Brazier Citation2020).

7 Morton (Citation2018) argues that, in science, data represent something outside, while in the arts, data instigate reflection on the data themselves. Inspired by this perspective, the ASL designs combine shared artistic and scientific data interpretation, in ‘intra-action’ (Barad Citation2007).

8 ‘Tele_Trust’ (2009–2017). www.lancelmaat.nl/work/tele-trust//, last accessed 2023/11/10.

9 This database is instrumental to the artwork, combining anonymous audio-visual responses that are presented on the screen and in the headset. The data are not shared in any other way. A DIO is therefore not applicable.

11 ‘Touch My Touch’ (2021–2023). https://www.lancelmaat.nl/work/touchmytouch/, last accessed 2023/9/4.

12 In this article, ‘Merging technologies’ include AI algorithms that can merge digital portraits into (fake) digital personas.

14 A full description can be found in author’s dissertation (Lancel Citation2023).


References

  • Ascott, Roy. 2003. Telematic Embrace: Visionary Theories of Art, Technology, and Consciousness. Berkeley: University of California Press.
  • Barad, Karen. 2007. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham: Duke University Press.
  • Beanother Lab. 2012-ongoing. http://beanotherlab.org/, last accessed 2022/5/22.
  • Benford, Steve, Chris Greenhalgh, Gabriella Giannachi, Brendan Walker, Joe Marshall, and Tom Rodden. 2012. “Uncomfortable Interactions.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12), 2005–2014. Austin, TX.
  • Björnsdotter, Malin, India Morrison, and Håkan Olausson. 2010. “Feeling Good: On the Role of C Fiber Mediated Touch in Interoception.” Experimental Brain Research 207: 149–155. doi: 10.1007/s00221-010-2408-y
  • Bollen, Caroline. 2023. “A Reflective Guide on the Meaning of Empathy in Autism research.” Methods in Psychology 8: 100109. doi: 10.1016/j.metip.2022.100109
  • Borgdorff, Henk, Peter Peters, and Trevor Pinch. 2019. Dialogues Between Artistic Research and Science and Technology Studies. New York: Routledge.
  • Castano, Lina M., and Alison B. Flatau. 2014. “Smart Fabric Sensors and E-Textile Technologies: A Review.” Smart Materials and Structures 23: 053001.
  • Castells, Manuel. 2020. “‘Space of Flows, Space of Places: Materials for a Theory of Urbanism in the Information Age’.” In The City Reader, 240–251. New York: Routledge.
  • Cheang, Shu Lea. 1998. BRANDON. http://rhizome.org/editorial/2012/may/10/shu-lea-cheang-on-brandon/, last accessed 2019/4/30.
  • Clark, Herbert H., and Susan E. Brennan. 1991. “‘Grounding in Communication’.” In Perspectives on Socially Shared Cognition, edited by L. B. Resnick, J. M. Levine, and S. D. Teasley, 127–149. Washington DC: American Psychological Association.
  • Crew. 2016. C.a.p.e. Drop_Dog. https://crew.brussels/en/productions/c-a-p-e-drop-dog, last accessed 2018/10/19.
  • Daalder, Megan May. 2011. The mirror-Box. https://themirrorbox.info/, last accessed 2022/5/3.
  • Davis, Tom, and David Garcia. 2021, 8th July. ‘Telepresence to Teletrust Symposium’. Bournemouth University. https://blogs.bournemouth.ac.uk/research/2021/07/07/telepresence-to-teletrust-symposium-8th-july/.
  • Decety, Jean, and Yoshiya Moriguchi. 2007. “The Empathic Brain and Its Dysfunction in Psychiatric Populations: Implications for Intervention Across Different Clinical Conditions.” BioPsychoSocial Medicine 1 (1): 1–21. doi: 10.1186/1751-0759-1-22
  • Fedorova, Ksenia. 2020. Tactics of Interfacing: Encoding Affect in art and Technology. Cambridge, MA: MIT Press.
  • Gerdes, Karen E., and Elizabeth A. Segal. 2009. “A Social Work Model of Empathy.” Advances in Social Work 10 (2): 114–127. doi: 10.18060/235
  • Gould, Charlotte, and Paul Sermon. 2015, August. “Occupy the Screen: A Case Study of Open Artworks for Urban Screens.” In Proceedings of ISEA, Simon Fraser University Vancouver.
  • Gsöllpointner, Katharina, Ruth Schnell, and Romana Karla Schuler, eds. 2016. Digital Synesthesia: A Model for the Aesthetics of Digital Art. Berlin: Walter de Gruyter GmbH & Co KG.
  • Haans, Antal, and Wijnand IJsselsteijn. 2006. “Mediated Social Touch: A Review of Current Research and Future Directions.” Virtual Reality 9: 149–159. doi: 10.1007/s10055-005-0014-2
  • Hansen, Mark B. N. 2012. Bodies in Code: Interfaces with Digital Media. New York: Routledge.
  • Haraway, Donna. 2016. Staying with the Trouble: Making Kin in the Chthulucene. Durham: Duke University Press.
  • Huisman, Gijs. 2017. “Social Touch Technology: A Survey of Haptic Technology for Social Touch.” IEEE Transactions on Haptics 10 (3): 391–408. doi: 10.1109/TOH.2017.2650221
  • Huisman, Gijs, Merijn Bruijnes, Jan Kolkmeier, et al. 2013. “Touching Virtual Agents: Embodiment and Mind.” In Innovative and Creative Developments in Multimodal Interaction Systems: 9th IFIP WG 5.5 International Summer Workshop on Multimodal Interfaces, eNTERFACE 2013, Lisbon, Portugal, Proceedings 9, 114–138. Berlin, Heidelberg: Springer, 2014.
  • Iacoboni, Marco. 2009. Mirroring People: The New Science of How We Connect with Others. New York: Farrar, Straus and Giroux.
  • IJsselsteijn, Wijnand, Yvonne A. W. de Kort, and Antal Haans. 2006. “Is This My Hand I See Before Me? The Rubber Hand Illusion in Reality, Virtual Reality, and Mixed Reality.” Presence: Teleoperators AndVirtual Environments 15 (4): 455–464. doi: 10.1162/pres.15.4.455
  • Jamieson, Lynn. 2013. “Personal Relationships, Intimacy and the Self in a Mediated and Global Digital Age.” In Digital Sociology: Critical Perspectives, 13–33. London: Palgrave MacMillan.
  • Jewitt, Carey, Sara Price, Kerstin Leder Mackley, Nikoleta Yiannoutsou, and Douglas Atkinson. 2020. “Digital Touch Ethics and Values.” Interdisciplinary Insights for Digital Touch Communication 7: 107–122. doi: 10.1007/978-3-030-24564-1_7
  • Jewitt, Carey, Sara Price, Jürgen Steimle, Gijs Huisman, Lili Golmohammadi, Narges Pourjafarian, William Frier, et al. 2021. “Manifesto for Digital Social Touch in Crisis.” Frontiers in Computer Science 3.
  • Kozel, Susan. 2007. Closer: Performance, Technologies: Performance, Technologies, Phenomenology. Cambridge, MA: MIT Press.
  • Kwastek, Katja. 2013. Aesthetics of Interaction in Digital Art. Cambridge, MA: MIT Press.
  • Lancel, Karen. 2023. “Can I Touch You Online? Embodied, Empathic, Intimate Experience of Shared Social Touch in Hybrid Connections.” Dissertation TU Delft.
  • Lancel, Karen, Hermen Maat, and Frances M. Brazier. 2019. “EEG KISS: Shared Multi-Modal, Multi Brain Computer Interface Experience, in Public Space.” In Brain Art: Brain-Computer Interfaces for Artistic Expression, Human-Computer Interaction Series, edited by Anton Nijholt, 207–228. Cham: Springer Verlag.
  • Lancel, Karen, Hermen Maat, and Frances M. Brazier. 2020. “Hosting Social Touch in Public Space of Merging Realities.” In ArtsIT 2019 Proceedings: Interactivity, Game Creation, Design, Learning, and Innovation, edited by A. Brooks and E. Brooks, Vol. 328. Cham: Springer.
  • Lancel/Maat. 2000-ongoing. https://www.lancelmaat.nl/, last accessed 2024/1/3.
  • Lancel/Maat. 2012. Saving Face. http://lancelmaat.nl/work/saving-face/, last accessed 2024/1/3.
  • Lancel/Maat. 2018. Kissing Data Symphony. https://www.lancelmaat.nl/work/kissing-data-symphony/, last accessed 2024/1/3.
  • Latour, Bruno. 1990. “Technology is Society Made Durable.” The Sociological Review 38 (1_suppl): 103–131. doi: 10.1111/j.1467-954X.1990.tb03350.x
  • Loke, Lian, and George P. Khut. 2014. “Intimate Aesthetics and Facilitated Interaction.” In Interactive Experience in the Digital Age: Evaluating New Art Practice, 91–108. Heidelberg: Springer.
  • Lomanowska, Anna M., and Matthieu J. Guitton. 2016. “Online Intimacy and Well-Being in the Digital Age.” Internet Interventions 4: 138–144. doi: 10.1016/j.invent.2016.06.005
  • Lombard, Matthew, and Theresa Ditton. 1997. “At the Heart of It All: The Concept of Presence.” Journal of Computer-Mediated Communication 3 (2): JCMC321.
  • Lombard, Matthew, and Matthew T. Jones. 2013. “Telepresence and Sexuality: A Review and a Call to Scholars.” Human Technology: An Interdisciplinary Journal on Humans in ICT Environments 9 (1): 22–55. doi: 10.17011/ht/urn.201305211721
  • Lozano-Hemmer, Rafael. 2001. Body Movies, Relational Architecture 6. http://www.lozano-hemmer.com/body_movies.php, last accessed 2018/7/12.
  • Lysen, Flora. 2019. “Kissing and Staring in Times of Neuro-Mania: The Social Brain in Art-Science Experiments.” In Artful Ways of Knowing, Dialogues between Artistic Research and Science & Technology Studies, edited by Trevor Pinch, Henk Borgdorff, and Peter Peters, 167–183. New York: Routledge.
  • Ma, Ke, and Bernhard Hommel. 2013. “The Virtual-Hand Illusion: Effects of Impact and Threat on Perceived Ownership and Affective resonance.” Frontiers in Psychology 6 (4): 604.
  • Malinowska, Anna, and Michael Gratzke, eds. 2017. The Materiality of Love: Essays on Affection and Cultural Practice. New York: Routledge.
  • Martin, Daria., ed. 2018. Mirror-Touch Synaesthesia: Thresholds of Empathy with Art. Oxford: Oxford University Press.
  • Merleau-Ponty, Maurice. 2013. Phenomenology of Perception. New York: Routledge.
  • Morton, Timothy. 2018. Being Ecological. Cambridge, MA: MIT Press.
  • Nevejan, Caroline. 2012. Witnessing You, on Trust and Truth in a Networked World. Delft: Participatory Systems Initiative, Delft University of Technology.
  • Park, Lisa. 2018. Blooming. https://www.youtube.com/watch?v=wUQkuoxAQoU, last accessed 2020/7/29.
  • Petkova, Valeria I., and H. Henrik Ehrsson. 2008. “If I Were You: Perceptual Illusion of Body Swapping.” PLoS One 3 (12): e3832. doi: 10.1371/journal.pone.0003832
  • Phan, Phuok T., Mai T. Thai, et al. 2022. “Smart Textiles Using Fluid-Driven Artificial Muscle Fibers.” Scientific Reports 12: 11067. https://doi.org/10.1038/s41598-022-15369-2.
  • Ramachandran, Vivek, Fabian Schilling, R. Amy, and Dario Floreano. 2021. “Smart Textiles That Teach: Fabric-Based Haptic Device Improves the Rate of Motor Learning.” Advanced Intelligent Systems 3 (11): 2100043. doi: 10.1002/aisy.202100043
  • Roeser, Sabine, Veronica Alfano, and Caroline Nevejan. 2018. “The Role of Art in Emotional-Moral Reflection on Risky and Controversial Technologies: The Case of BNCI.” Ethical Theory and Moral Practice 21: 275–289. doi: 10.1007/s10677-018-9878-6
  • Roosegaarde, Daan. 2022. Touch. https://studioroosegaarde.net/stories/touch, last accessed 2022/5/26.
  • Saarinen, Aino, et al. 2021. “Social Touch Experience in Different Contexts: A Review.” Neuroscience & Biobehavioral Reviews 131: 360–372. doi: 10.1016/j.neubiorev.2021.09.027
  • Salter, Chris. 2010. Entangled: Technology and the Transformation of Performance. New York: MIT Press.
  • Salter, Chris, TeZ, and Valerie Lamontagne. 2017. llinx. http://phonomena.net/ilinx/, last accessed 2022/5/26.
  • Sermon, Paul. 1992. Telematic Dreaming. http://arts.brighton.ac.uk/staff/sermon/telematic-dreaming, last accessed 2022/5/26.
  • Sermon, Paul. 2020. “Shared Objective Empathy in Telematic Space.” In Shifting Interfaces: An Anthology of Presence, Empathy, and Agency in 21st-Century Media Arts, edited by Hava Aldouby, 75–90. Belgium: Leuven University Press.
  • Shanken, Edward. 2001. Telematic Embrace: A Love Story? Roy Ascott's Theories of Telematic Art. Durham: Department of Art History, Duke University Press.
  • Stelarc. 2015. Re-Wired / Re-Mixed: Event for Dismembered Body. http://stelarc.org/?catID=20353, last accessed 2019/4/30.
  • Stenslie, Stahl. 2010. Virtual Touch: A Study of the Use and Experience of Touch in Artistic, Multimodal and Computer-Based Environments. PhD dissertation. Oslo School of Architecture and Design.
  • Stepanova, E. R., J. Desnoyers-Stewart, K. Höök, and B. E. Riecke. 2022. “Strategies for Fostering a Genuine Feeling of Connection in Technologically Mediated Systems.” In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1–26. New Orleans: CHI 2022.
  • Tajadura-Jiménez, Ana, Matthew R. Longo, Rosie Coleman, and Manos Tsakiris. 2012. “The Person in the Mirror: Using the Enfacement Illusion to Investigate the Experiential Structure of Self-identification.” Consciousness and Cognition 21 (4): 1725–1738. doi: 10.1016/j.concog.2012.10.004
  • Turkle, Sherry. 2011. Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books.
  • Van der Ham, Ineke, and Anouk Keizer. 2021. “What it Means to Have a Body in VR.” Lecture VR Lab Cinedans Fest 21.
  • van der Vlugt, Marloeke. 2015. Performance as Interface | Interface as Performance. Amsterdam: IT&FB.
  • van Erp, Jan, and Alexander Toet. 2015. “Social Touch in Human–Computer interaction.” Frontiers in Digital Humanities 2: 2. doi: 10.3389/fdigh.2015.00002
  • Verhaeghe, Paul. 2018. Intimiteit. Amsterdam: Uitgeverij De Bezige Bij.
  • Ward, Jamie. 2018. “The Vicarious Perception of Touch and Pain: Embodied Empathy.” In Mirror-Touch Synaesthesia: Thresholds of Empathy with Art (Vol. 61), edited by D. Martin. Oxford: Oxford University Press.