
Feeling, thinking, and not seeing: how images engage and disengage in an information-saturated world – a neurophenomenological perspective

Pages 35-55 | Received 20 Mar 2023, Accepted 28 Jul 2023, Published online: 13 Oct 2023

ABSTRACT

We can no longer realise images solely through semiotic theories of interpretation and judgement fixed to earlier modes of communication, such as print. Instead, we engage with images through various networked digital devices and online social interactions. These engagements offer many possible experiences with images: some over which we have agency, others that are purely autonomic, and some resembling a healthy-bodied manifestation of visual agnosia, the loss of the ability to recognise and identify. This paper utilises research from the ‘Fast Image’ study, a study of graphic design students and practitioners and their views on photographic image use, comparing print and online media. The author uses an interpretive approach supported by mixed data-gathering methods, including photo-elicitation, interviews and semi-structured questions. This discussion encourages advancing visual literacy and visual culture discourses to incorporate a neurophenomenological approach toward understanding the effects of emerging technologies and viewing environments on photographic image use. These effects include sensory and cognitive responses to images and the influences of external stimuli on our phenomenological apprehension of images.

Introduction

With the increasing coalescence of human and digital interaction, our bodies and minds adapt to evolving digital environments, responding to the affordances of the particular technologies we use. We interact with images in many ways in these environments: via desktop computers in study or work settings, during travel, while multitasking on smartphones, in social situations, or when seeking entertainment. Our interactions with images can give us emotional affectivity, for example, through personal and intimate encounters with images viewed on smartphones. However, at other times our interactions prevent a singular, focused engagement with them, occasioning not seeing or only partially seeing images.

Using data gathered from the author’s ‘Fast Image’ study (Marotta Citation2022) on photographic image apprehension by graphic design students and practitioners, together with supporting literature, this paper describes how our interactions with images occur, the resulting experiences during the viewing process and the final effect on our phenomenological apprehension of images. These interactions are discussed primarily in relation to the digital environment, where the change in cognitive functioning is most significant (Marotta Citation2022). However, comparisons with images viewed in print are made where necessary to highlight differences in participant reactions, the nature of the change in engagement with images, and how apprehension occurs to achieve a phenomenological outcome.

Scope of the fast image study

The ‘Fast Image’ study aimed to ascertain and describe if and how our apprehension of images has changed from print to digital mediums and to what degree technology influences these changes. The researcher investigated the image-viewing experiences and practices of five established Graphic Design and Advertising industry practitioners and ten Graphic Design students from three Australian universities through a photo-elicitation exercise in which participants compared their image-viewing experiences in the printed Australian Geographic Magazine, Tasmania 2016 edition, with its online counterpart, the Australian Geographic Magazine website, using the same set of images. In addition, further information related to image-viewing experiences was collected using research methods such as questionnaires, focus groups and individual interviews. These methods were designed to gain insight into participants’ experiences in other situations using other technologies and platforms, such as smartphones and social media.

While this paper uses first-person approaches to provide insights into the image-viewing experience, the discussion encourages including third-person perspectives using a neurophenomenological approach to understanding image apprehension concerning our interactions with the viewing environment and the interdependence of the mind, brain, and body toward phenomenological experience.

The neurophenomenological perspective

Francisco Varela (Citation1996) originally proposed Neurophenomenology as a methodological framework for understanding consciousness through embodied and situated lived experience, combining the first-person experiential approaches of phenomenology with third-person neuroscientific methods (Gallagher Citation2009). Neurophenomenology is derived from what Varela calls the enactive approach in cognitive science (Varela, Thompson, and Rosch Citation1993). This approach includes the idea that cognitive processes emerge from the interdependent sensorimotor actions of the body, nervous system, and environment (Thompson, Lutz, and Cosmelli Citation2005).

To use these ideas to help explain the relationship between our cognitive and biological responses to images and our final apprehension of them, we must understand how the nervous system and the brain work together to provide a phenomenological outcome to our online visual experiences. The interplay between the nervous system and the brain shapes the resultant interpretation of what we see, indicating that mental phenomena arise from the brain’s ‘self-organised activity’ (Fazelpour and Thompson Citation2015, 223). Kahn (Citation2013) posits that self-organisation is our biological response to environmental change. In other words, the brain influences and adapts to what we see based on the functional interactions of its components. Furthermore, we respond to our encounters with objects and the technological environment through the connections between our experiences, emotions, feelings and thoughts. So how do we experience images through technology and our environment, and how do they affect our visualisation to apprehension process?

An investigation of the literature provides context for the question above from a neuropsychological perspective. It is presented through pertinent theories of the brain’s impact on the behavioural aspects of the nervous system, concerning the physical, mental and emotional features of the human experience when viewing images in a digital environment.

Links between phenomenological interpretation and brain functions indicate the importance of people’s kinesthetic experiences toward images. To explain this idea further, let us consider vision, primarily enabled through optical, physiological and neural processes, as the first part of a neurological to phenomenological sequence. This process is illustrated in Figure 1. Albertazzi (Citation2013) argues the standard scientific practice has been to consider the meanings and qualities of visual phenomena as extraneous to these processes. However, there is an increasing body of work from scientific (Khachouf, Poletti, and Pagnoni Citation2013; Luck and Vogel Citation2013; Zeki Citation2007) and philosophical (Hansen Citation2011; Hayles Citation2012; Malabou Citation2008; Massumi Citation2002) fields seeking to validate and articulate connections between the objective and subjective in visual phenomena. These connections allow for a phenomenological experiencing of our physiological awareness of the world (Albertazzi Citation2013; Bagdasaryan and Le Van Quyen Citation2013).

Figure 1. ACIVA Model of Visual Apprehension. Marotta Citation2022.


Hansen offers further insight by describing ‘a completely different kind of image – our perceptual image of the world outside’ (Hansen Citation2011, 84) arriving from the process of seeing. This action includes our conception of the mental image (the one formed in the brain), beginning with the image we see transmitted to and processed by the brain. Hansen is referring to the process whereby the mental image is converted to the phenomenological image in the biological to phenomenological sequence of seeing. However, what happens during the conversion process, and how does this process affect our phenomenological apprehension of the image? Hansen’s observation suggests a meaningful connection between the neurological and the phenomenological that could be further examined through combined scientific and social sciences studies (neurophenomenology), offering the possibility for valuable insights into the influence of technology on cognitive and somatosensory (perception through the senses) visual information processing and the subsequent phenomenological experience (Marotta Citation2022).

When the body and mind cooperate in the translation of stimuli [the image] through the act of visual perception, phenomenological affirmation is observed through pre-reflective human experience operating with other brain faculties, such as autobiographical memory activation, as indicated in the participant comment below. Malabou (Citation2008) posits that these biological to mental translations are more than scientifically predicated.

One of the images I remember was the school group … because of the happiness of my family before we had problems. I remember happiness, enjoying my childhood. (8s)

The comment above aligns with other participant responses, indicating the affective role memory plays in image recall, evoking the emotion of happiness from the participant and associating it with the image viewed. Emotionally experiencing images involves a dynamic interaction between neural processes and bodily sensations (Marotta Citation2022).

Therefore, a contemporary philosophical and neuroscientific approach to investigating the cooperation of body and mind toward images will offer new and richer possibilities for understanding how we engage with and understand them.

The following sections expand upon the neurophenomenological aspects of the image-viewing experience. These include understanding our perceptual experience of images through relevant neuroscientific theories and how external and temporal effects and sensorial experiences influence neurological and biological actions toward phenomenological apprehension.

External and temporal effects on image viewing

Visual psychophysics

Physical stimuli in the outside world impact our visual processing of images and actions toward them. Visual psychophysics, or the study of visual perception, examines this connection between external stimulus and human action (Lu and Dosher Citation2014). In the context of this discussion, it is a scientific field that helps us understand how external stimuli affect the way we view and use images, involving the temporal aspects of viewing and the brain’s responses during this process.

Considering how external factors influence our visual capabilities and neurological and phenomenological processes is essential in identifying the extent of their impact on our viewing experience. This will help us understand how visual apprehension operates with respect to the type of cognitive responses generated during the image-viewing process, how this occurs, and with what intensity we experience and understand images relative to the viewing environment.

Visual noise and visual masking

Participant responses during the original study indicated the impact of the external effects of visual noise on the cognition of information, as illustrated by the comment below concerning viewing images on a smartphone.

Smartphones are very distracting. There are too many things there for you to focus on the image. (1s)

This discussion defines ‘visual noise’ as random visual stimuli affecting our attention and hindering our ability to see. Associations are evident between online-related cognitive impacts and structural changes in the brain, including the effects of ‘rapid attention shifting, distractibility, reduced deliberations and impaired executive control abilities’ (Loh and Kanai Citation2016, 506).

For example, visual masking is an instance of visual noise. It is a phenomenon of visual perception that interferes with the viewing task and contributes to the experience of ‘looking but not seeing’ – visual access without phenomenal visual consciousness – preventing progression toward higher-order processing, such as assessing the semantic properties of the image. This impediment to visual apprehension can occur when competing stimuli mask the image being viewed, as demonstrated by the following participant’s response when asked whether exposure to multiple online information streams affects the communication value of the image.

There’s so much there. It’s like having a noticeboard. There’s stuff overlapping, sooner or later you just switch off. I think it affects the viewers ability to absorb information. (10s)

The communication value of images viewed in this situation is diminished due to the tendency to ‘switch off’ in an environment of competing stimuli, creating conditions where the image is not seen or the viewer chooses not to see it.

In addition, visual masking theory concerns how stimuli are registered in the mind, including examining ‘feature, object and scene representations, attentional control mechanisms, and phenomenal awareness’ (Breitmeyer Citation2008, 9). The registration of stimuli is a feature of attention, which plays a crucial role in determining whether the visual cortex can process the features of objects or scenes effectively enough to reach a conscious representation. Lamme (Citation2004) argues that attention does not determine whether stimuli reach a conscious state but determines whether ‘a (conscious) report about stimuli is possible’ (Lamme Citation2004, 863). To identify the conditions in which stimuli can reach a conscious state, we need to acknowledge the characteristic differences among various stimuli [images] and the diversity of the contexts in which they appear. The following participant describes the differences in the visual experience when comparing desktop computer and smartphone viewing contexts. Furthermore, the comment indicates that different digital devices and viewing environments have conditions that facilitate or prevent focused attention.

If the image is on a [desktop] computer, there are a lot of other things happening. There might be a menu bar, pop up ads or this and that happening. There are a lot of things fighting for attention. If it’s [image] on a Smartphone, there might not be as many things happening around it, so I can focus more on the image. (11p)

Temporal aspects of visual masking

The effects of visual masking on the image-viewing process can be further described through spatiotemporal qualities. To illustrate this process, let us call the primary image to be viewed the ‘target’ image. The presence of another image or stimulus, referred to as a ‘mask’, affects the viewing of the target image, occluding visibility and inhibiting critical information about the image, particularly if the mask image is of high contrast and luminance (Bacon-Macé et al. Citation2005).

Consequently, we experience the effects of visual masking as visual sensations relative to the presence of other stimuli appearing at temporal intervals. These stimuli can be of ‘different sensory types and origin moving through time and space’ (Hansen Citation2011, 84), processed by the brain and perceived at a conscious and unconscious level (Lamme Citation2004). Visual masking affects the way the image appears in consciousness because the stimuli we see before the target image affect the act of viewing and bias the way the image is apprehended, an effect Herzog and Koch call ‘feature inheritance’ (Herzog and Koch Citation2001, 4271): features of one object incorrectly associated with another. Images are among the objects to which Herzog and Koch refer. For example, on a smartphone, one can view a succession of images on an image carousel by swiping them in and out of view. Images viewed previously, those observed in proximity to the target image and those following it all impact our cognition of the target image, as indicated in the following two participant comments. The second comment indicates the temporal nature of Herzog and Koch’s ‘feature inheritance’ effects and their possible impression on memory (memorising a representation of the viewed image at a different time from the one onto which we transpose the memory).

On your phone, you’ve got different things happening at the same time. You’re not concentrating on one specific thing. It affects the way you interpret images because you focus less on them. (11p)

If you see something one day and see another thing another day, what you saw previously in another image could affect how you see a different image. It’s just that kind of business to everything. It affects people for sure. (5s)

In addition to the effects of competing images, the qualities of the viewing technology affect the visibility of the ‘target’ image by diminishing its visibility or altering our attentional intensity and how we interpret the same image under different viewing conditions (see participant comment 8s below). In a digital environment, the rate at which competing visual stimuli (masks) enter the visual field and potentially alter comprehension is more intense than in the printed environment (Marotta Citation2022). The following participant comments illustrate these effects, with participant 14p indicating viewing the image in print offers more potential for concentrated attention.

On a smartphone and tablet you’re not really putting a lot of attention into the images; you’re just skimming through content mostly. (7s)

You’re never really there when you have all these other things happening. You just have to turn off all your digital devices to really absorb all the information. (2s)

When you’re looking at print you don’t have as many distractions and therefore, you’re concentrating more on the image and what it conveys. (14p)

On digital devices, you're not getting the same output on every single device. The image each person sees can be interpreted differently depending on which device they're viewing it. There may be difference in contrast, in colours or shades. This can make significant differences in the way we interpret images. (8s)

A further influence on the visual masking process is the temporal operation of the regions in the brain's visual cortex. How these regions react to visual stimuli can reduce or increase the degree to which masking affects our cognition of images. Let us consider phenomenologically experienced visual apprehension as involving the mutual interaction of relevant regions of the brain's visual cortex. These regions are specialised to process different attributes of the visual scene at different micro-temporal moments. This processing occurs in temporal sequence, requiring all pertinent areas to operate in unity for apprehension to occur. Zeki describes this as visual consciousness made up of ‘different microconsciousnesses’ (Zeki Citation2005, 1178) resulting from visual input, cognitive processing and apprehension.

Zeki is referring to the asynchronous processing of visual information by the relevant regions within the brain’s visual cortex (Zeki Citation2005). There is a relationship between these areas and the temporal correlates of the visual cortex, which process different components of the optical system, such as colour, motion, location and orientation in temporal sequence (Ilias et al. Citation2014; Zhou et al. Citation2018).

Accordingly, if given adequate time to process visual input, each attribute of the viewed image is handled in ‘perfect temporal’ and ‘spatial registration’ (Zeki Citation2007, 583). However, in many digital viewing situations of fragmented attention and short bursts of viewing time, signals from one area of the visual cortex do not bind because they cannot reach those of another in time (Zeki Citation2007), resulting in only partial processing of the visual scene. Consequently, the phenomenological experience of the image and the quality of semantic decoding are adversely affected.

Therefore, identifying which situations and technologies intensify visual masking effects, and which are favourable to the temporal processing of the brain’s visual cortex, can help us minimise their impact and develop technologies and methods that enhance viewing and create arresting images in visually cluttered environments.

However, despite the hindering effects of visual masking and the reliance on adequate processing time for visual apprehension, an image may have qualities pertinent to the viewer, allowing it to become visible and override masking effects. For example, the brain’s unconscious processes (‘automatic senses’) may favour one image among many. This favourability may be due to the viewer’s subjective response to salient characteristics within the viewed image, as indicated by the following participant:

It’s like walking around the art gallery. I wait till the picture grabs me. I won’t go and find it; I wait till it finds me. It’s the same when you swipe through things really quickly on a smartphone. Something goes ‘ah stop’, and then that’s the one you spend more time on. (10s)

The automatic process described above validates visual inputs by the brain’s motor processes before the conscious representation of these inputs. These processes allow stimuli not consciously detected or recognised to affect a viewer’s behaviour (Jaśkowski, Skalska, and Verleger Citation2003) and indicate the relationship between human perception, cognition, and unconscious and conscious processing dynamics; we can process visual information without awareness. Breitmeyer (Citation2014) posits that much of the processing of visual information before conscious registration is, by definition, unconscious.

These cognitive operations mean that various types of visual processing precede perception, influencing human sensibilities toward the image beyond the reach of direct human intervention. Incorporating these processes into a more extensive ecological theory of visual perception will expand philosophical discussions with image apprehension at their centre (Hansen Citation2015).

I'm not seeing

The previous section discussed the effects of visual masking as a component of visual noise, which can affect our ability to apprehend images in visually cluttered digital environments. This section speculatively proposes that visual noise creates a viewing environment in which we demonstrate impairments similar to those of neurological disorders such as visual agnosia. This idea extends the preceding discussion to include the occlusive effects of this neuropsychological condition, which prevents a person from understanding the meaning of a visually presented stimulus [image] despite the affected individual having intact sensory and low-level vision (detecting image elements) and typical language and semantic function (Behrmann and Nishimura Citation2010).

For example, in many instances, digital environments expose us to an overabundance of potentially unnecessary information. This exposure increases the load on working memory, resulting in distraction from the main message or image and making it difficult to perceive and interpret visual information. This paper identifies the relationship between ‘digitally induced’ visual agnosia and our apprehension of images and encourages further research from a neurophenomenological perspective. The following participant comments indicate characteristics of visual agnosia during their image-viewing experiences.

I can watch a thousand of images and I will absorb just one part of the information from that image. (8s)

You are presented with so much information and you’re just scrolling through it. You’re not absorbing all the information you see. At the end of the day everything is just cluttered in your head and you don’t absorb any real information at all. (7s)

There are two forms of visual agnosia pertinent to this discussion: apperceptive and associative. Biran and Coslett (Citation2003) describe apperceptive visual agnosia as the inability to construct a good perceptual representation or working model from visual input. In associative visual agnosia, perception occurs but recognition does not: the viewer sees the image but is unable to group its elements despite allocating attention toward it (Abrams and Law Citation2002).

These agnosias prevent the image from being fully apprehended in cluttered digital environments because working memory, or the ‘small amount’ of information held in mind and used to execute cognitive tasks such as information processing and comprehension (Cowan Citation2014), becomes overloaded with extraneous information and has little chance of association with items stored in long-term memory that would assist phenomenological apprehension.

A detailed definition of the different forms of visual agnosia and how each of them affects our viewing experience is beyond the scope of this paper. However, the author recommends further investigation into identifying the qualities of the various sources of visual noise experienced within the digital media ecosystem and how each source affects our neurological processing of images.

Affect and arousal: experiencing images sensorially

The impact of external factors in the viewing environment on visual cognitive processes has been described. In addition, the previous section discussed the biological aspects of perception using visual psychophysics theories (visual masking and visual agnosia) and their influence on our phenomenal interpretation of images. The following discussion will focus on another critical part of visual perception: our emotional, affective response toward images. How are we affected by what we see, and how does this affect influence our cognisance of images? Exploring the answers to these questions is essential because affective experiences, positive and negative, including excitement, shock, happiness and sadness, influence our thoughts and behaviours toward images. Clore and Palmer (Citation2009) describe affect as an embodied response to pleasure or displeasure toward something, connecting affect and cognition. This connection allows affective reactions to provide information about the value and importance of what we experience (Clore and Palmer Citation2009). People experience their affect as being a reaction to ‘whatever happens to be in mind at the time’ (Clore and Palmer Citation2009, 25), with positive affects adding value to one's current thoughts and disposition, in turn promoting higher-level cognitive responses, including reasoning and comprehension (Clore et al. Citation2001; Clore and Palmer Citation2009).

Alternatively, negative affect leads to a perceptual, stimulus-bound focus invalidating and inhibiting higher-level cognitions (bottom-up processing). However, there are challenges in applying these aspects of affect theory too broadly to the image-viewing experience, particularly if memory processes are involved. For example, if extraordinary enough, a negative response to an image can allow information to progress to higher-order cognitive domains and facilitate a semantic analysis of the image, as noted in the example below.

To further elaborate, let us consider affective responses, or subjectively experienced feelings, as being delivered through the primary motivational system. This process allows us to express emotion toward stimuli [images] (Frank and Wilson Citation2020), with affect as the conscious subjective aspect of emotion (Cacioppo and Berntson Citation1999). Biological drives such as arousal and the desire for emotional experiences are part of the primary motivational system (Niven and Miles Citation2013) and make up the physiological bases for higher-level affects, such as curiosity and surprise (Buck Citation1999): the brain processes sensory information from the external world in the context of sensations from the body. This ‘body to brain’ sequence allows experientially felt interactions to progress to higher-order cognitive processing of the mind. Consequently, how we are affected when viewing images impacts how we experience and process them. For example, when asked to recall a memorable image, the following participant describes Malcolm Browne’s 1963 image of the self-immolating monk in Saigon, recalling the aversive affect of shock together with the semantic associations.

The burning monk was such a shocking image. For me it’s hugely memorable because of the loss of life and the sacrifice and all those kinds of things. That’s one that sticks in my head. (9s)

The example above indicates affect is instrumental in promoting cognitive, relational processing, responding indirectly to stimuli (from memory) using prior knowledge. In addition, the response illustrates the influence of affect on the higher-order cognitive memory processes of recall and retrieval. Clore and Palmer (Citation2009) suggest that this action occurs through aversive affect acting as a conduit for embodied information retrieved from long-term memory.

Storbeck and Clore (Citation2008) posit that affect moderates many of the phenomena in cognitive psychology. The arousal dimension of affect often promotes and intensifies engagement with stimuli, enhancing long-term memory of events. Affective responses such as happiness, sadness and excitement are part of a biological and phenomenological process whereby in ‘emotionally arousing situations’ (Storbeck and Clore Citation2008, 1834), androgenic hormones are released and enhance memory processes, through which personal experience draws attention to the arousing event (Storbeck and Clore Citation2008).

In the following example, emotional connections to a photo work with affective responses toward perception, giving significance to a photo’s salient features in a way that may not be possible without emotion derived from personal experience. Emotional influences shape visual perception (Zadra and Clore Citation2011) and influence attention, judgment, thought, and affective responses. These influences motivate us toward a deeper interaction with the image, beginning with viewing and progressing toward further semantic decoding in an immediate, sensory way. Massumi (Citation2002) argues that the affective experiences we derive from the varying intensities of physical and biological interactions we have with stimuli [images] are the autonomic starting point, which, together with emotion, leads toward higher-level cognitive activities of the mind. Emotional experiences are not fixed states but are shaped by our subjective interpretations and the contexts in which we encounter images. Therefore, emotions become part of a broader system of signification situated within narratives and action-reaction patterns embedded in embodied and affective processes.

Emotion is qualified intensity, the conventional, consensual point of insertion of intensity into semantically and semiotically formed progressions, into narrativizable action-reaction circuits, into function and meaning (Massumi Citation2002, 28).

When asked to recall a memorable image, the following participant recalls one that was emotionally salient, associating feelings with the representational content of family.

Seeing images of the 9/11 memorial that I took when my whole family was together. That feels good because we rarely get together. (1s)

The following participant expressed the quality of engagement with an image through an internally felt physical response, ‘spark a feeling inside of me’, which triggered the verbal, externally expressed affective response of ‘wow’ toward the engaging visual content. In addition, in the Instagram environment, a personal aspect of social connectedness is evident, with the image acting as a metaphor for a friend’s voice.

If I see an image on Instagram, for example, and if it is taken well at a beautiful location, it would spark a feeling inside of me seeing it this way. ‘Wow.’ So, in a way, this image is you speaking to me. (6s)

Affect and emotion are therefore essential to perceptual experience. The preceding participants’ comments indicate that emotions arise immediately upon the perception of emotionally evocative stimuli, expressed as feelings and behaviour, and involve cognitive processes that help us conceptualise what we see. Furthermore, Bergson posits that ‘there is no perception without affection’ (Bergson Citation1911, 60), meaning that every act of perceiving an object (or image) offers the potential for one’s body to act on that object as a sensory self-affection of the body. This challenges the view that affect is ‘a distinct psychological phenomenon separate from cognition and perception’ (Siegel et al. Citation2018, 496) and indicates a knowing in an immediate and sensory way (Tomkins Citation2008).

The following section discusses the corporeal aspects of our interaction with images, expressed through qualities afforded by the viewing technologies of print and screen. These interactions are presented as physical interactions of touch and sensation, externally felt and internally processed in combination with higher-level cognitive brain functions.

Physical and haptic experiences

As a constituent of the visual process, the senses intensify the viewing experience, indelibly transferring the experience of the visual scene to memory; the sensations mediated by the skin link us with the outside world (Zimmermann Citation1989). The combination of the senses offers an immersive experience, stimulating richer responses to what we see, activated through the tactility provided by the viewing medium. Massumi describes tactility as ‘the sensibility of the skin as the surface of contact between the perceiving subject and the perceived object’ (Massumi Citation1996, 58).

The tactile perception of our environment can be understood as responses to external and internal stimuli: tactile sensibility (exteroceptive) and visceral sensibility (interoceptive) (Massumi Citation2002). External environmental stimuli trigger haptic sensations that feed into internal pathways of thought (exteroception), often leading to cognition; other sensations are activated by internal stimuli related to the body’s physiological state (interoception).

A comprehensive body of literature (Barsalou Citation2010; Glenberg, Witt, and Metcalfe Citation2013; Shapiro Citation2011) supports the involvement of the human body’s sensorimotor characteristics in higher cognition. Bodily actions and reactions toward environmental stimuli also influence cognitive processes, actively registering information from the visual scene through motor behaviour, including movements of the eyes and hands, together with sensory receptors such as the eyes, nose and skin (Gibson Citation1966). We understand through an ecology of world and body; they are a ‘constituent of, and not merely a causal influence on, cognition’ (Shapiro Citation2011, 4).

The following participant response implies interoceptive visceral sensibilities emanating from an internally generated positive affect expressed toward the ‘visually appealing’ image as a feeling or sensation (‘vibes’).

It was a very visually appealing image. It gives you positive vibes. (7s)

The following participant responses indicate exteroceptive tactile sensibilities at work. These sensibilities promote further engagement with the viewed image, leading to higher-order cognitive processes of perception and allowing for the interpretation of information and the acquisition of meaning, as indicated in the second response.

If you’re viewing an image online, it’s not tactile like something in print. Sometimes the texture of the paper feels nice, or the smell of the ink. (11s)

It was a tangible book, meaning I could feel the true meaning of the image. (1s)

Gibson (Citation2014) posits that ‘visual control of the hands is inseparably connected with the visual perception of objects’ and that manipulation of a surface ‘subserves many other forms of behaviour’ (Gibson Citation2014, 224), facilitating and influencing the quality of perception, as the following participant comments indicate.

I like the experience of holding something physical in your hands. It gives you more. (3s)

It’s hands-on, interactive, more authentic. (5s)

The images stuck in my head a little better maybe because of the physical feel of paper while analysing. (2s)

Participants in the study expressed these sensory phenomenological experiences less toward their digital viewing than toward viewing images in print. The tactility afforded by screen technology attempts to simulate physical bodily actions. However, the rich kinesthesia experienced through touching a malleable surface such as paper, with its synergy of pressure exerted on the skin and blood vessels and the subsequent neuronal message to the brain, suggests a connection between a more intense neurological percept of reality and experiential apprehension when interacting with printed images.

Nevertheless, newer technologies such as virtual, augmented and mixed realities increasingly replicate these sensory experiences with some success (Gatter, Hüttl-Maack, and Rauschnabel Citation2021; Kautish and Khare Citation2022), though their consideration is beyond the scope of this paper. It must be noted, however, that through touch-screen affordances, devices such as tablets and smartphones can offer the richness of another type of sensory engagement, one combining vision and the tactility of image manipulation, stimulating the visual senses and resulting in immersive, affective responses toward the image-viewing experience.

Kroupi et al. (Citation2016) regard immersive qualities as enhancing depth perception, reality and emotion in immersive multimedia environments. Participants enjoyed the interactive possibilities offered by screen technologies, which allowed them to engage with visual elements. They described these viewing experiences in terms of zooming in and out and the ability to change the magnification of the image. These interactive affordances offered an immersive, three-dimensional quality to the viewing engagement, as expressed by the following participant.

You really experience the panoramic 360 view. If it’s a big landscape of a mountain range, you can zoom in over there. It’s not just a flat thing anymore. It’s got so much more depth, it’s nearly like a movie now. (5s)

Consequently, the resultant hedonic experiences acquired through various visual technologies play an essential part in the acquisition of knowledge and meaning (Bloch Citation2008; Dhanani Citation1994; Fulkerson Citation2020; Paterson Citation2016). Therefore, when we interact with images, we participate in an ecology of tactile functionality and multisensory embodied experience through haptic perception (Marks Citation2008).

A neuroscientific-to-phenomenological framework: the ACIVA model (Affective and Cognitive Interaction toward Visual Apprehension)

The previous sections indicated that the impetus for image apprehension comes mainly from the interaction between the brain, body, mind, and environment. Furthermore, many elements in the visual apprehension process activate specific cognitive pathways during this interaction. They contextualise our responses and shape the possibilities of how images are perceived. These elements include semiotic content, emotional valence, arousal, personal intent when viewing, the qualities of the viewing device, the context of use and the source of the image.

We encounter combinations of these elements simultaneously or sequentially as streams of stimuli that trigger either the appetitive or the aversive motivational system (Schupp et al. Citation2012). One system organises a person’s response to rewarding or appetitive stimuli; the other organises the response to unpleasant or aversive stimuli. Although applied mainly in psychology (Corr Citation2013; Buck Citation1999) and neuroscience (Bissonette et al. Citation2014), and as a framework for understanding adaptive and reactive behaviours toward threat and reward cues as part of an organism’s drive to survive (Andreatta and Pauli Citation2015; Lang and Bradley Citation2013), aspects of these systems can be applied toward understanding the image-viewing process. These include their role in enhancing or diminishing memory and perception (Tomkins Citation1982), attention allocation, information intake (Lang and Bradley Citation2013), and motivation (Clore and Huntsinger Citation2007) for engagement with or disengagement from an image (see Figure 1).

In addition to the influence of the appetitive and aversive motivational systems on our perceptive processes, the influences of bottom-up and top-down processing on our cognition (Theeuwes Citation2010) can further aid our understanding of how we apprehend images.

Bottom-up and top-down processing help us make sense of what we see. In bottom-up processing, our encounters with images, experienced sensorially, shape perception without the imposition of preconceived ideas; in top-down processing, we use pre-existing knowledge and expectations to interpret what we see. Firstly, our reaction to these encounters can be automatic, responding to motivationally relevant images and promoting ongoing cognitive processing (Lang Citation2006). In this reaction, the image is experienced sensorially (bottom-up processing) or through cognitive control, where sensory information guides and selects motor actions, including the decision to stop (aversive) or continue (appetitive) the viewing process (Scott Citation2016). Secondly, our reaction can be intentional and semantically motivated (top-down processing), for example, when we direct our intention toward an activity such as a Google image search. In top-down processing, perception is partly driven by prior learned experience projected onto antecedent cortical areas, with top-level processing creating a ‘predictive’ coding of the image (Gilbert and Wu Citation2013).

In the bottom-up processing example, the nature of the encounter affects how individual emotional cues are processed and can ‘facilitate attention allocation, information intake and sympathetic arousal’ (Lang and Bradley Citation2013, 230). These emotional cues determine whether an image is salient enough to motivate the viewer toward deeper engagement, making the image more memorable. In the Google image search example (top-down processing), the viewer is looking for a particular type of image, forming a semantic expectation in the ‘mind’s eye’ of one already known. The relevant neurons then carry this information (Kim, Park, and Suh Citation2020), facilitating the recognition of the desired image, which is then available for further semiotic analysis. This process represents the phenomenological act of intentionality toward the image, an a priori of lived experience. Each processual stage can therefore advance or diminish engagement with the image being viewed and affects the quality of the final interpretive outcome (see Figure 1).

Figure 1, the ACIVA model (Affective and Cognitive Interaction toward Visual Apprehension), illustrates the cognitive process from the initial reception of the image through to our phenomenological apprehension of it. Apprehension can progress from an intellectual type of top-down processing or from a bottom-up process driven by the senses. Beginning with the example of bottom-up cognitive processing, we can see that after the visual input (Figure 1, step 1), a corporeal interaction (Figure 1, step 2) occurs, affecting motivation and the quality of interaction (engagement or disengagement) one will have with an image. Finally, image processing occurs on a conscious or unconscious level, translating the affective experience into conscious perception (somatosensory process, Figure 1, step 3b) (Marotta Citation2022).

Observing the top-down cognitive process in the ACIVA model, apprehension can occur either consciously, if the image appears in the dominant visual sphere of focus, or unconsciously (Figure 1, step 3a), when the brain automatically processes all or part of an image that may be invisible or masked. Dehaene refers to this as subliminal priming, when an image is experienced below the threshold of perception (Dehaene Citation2009). If the image is perceived consciously, decoding occurs through phenomenologically interpreted semiotic modes of analysis (Figure 1, step 4). These include world view, group norms and symbolic interactionist perspectives of understanding meaning through people’s social interactions, a part of the social media image environment.

Top-down and bottom-up processing can also influence each other (Schipper, Ernst, and Fahle Citation2008) through a feedback loop involving the interaction between prior knowledge and sensory information (Dura-Bernal, Wennekers, and Denham Citation2012). For example, when the makeup of an image has enough salience to attract the viewer’s attention, the image is initially stored in short-term memory (Figure 1, step 3c). This occurs in the following way: sensory information from the somatosensory region of the brain (Figure 1, step 3b) is transferred (fed forward) to higher-order areas of the brain (Bullier Citation2001) for higher-level processing. Feedback connections from these areas then move information in the reverse direction (Khorsand, Moore, and Soltani Citation2015), modulating responses by activating cortical processes such as attention and figure-ground segregation or grouping, which allow us to perceive the [image] more fully (Roelfsema, Tolboom, and Khayat Citation2007). The image can then be further processed and semantically decoded using higher neural functions (Figure 1, steps 3–4).

Conclusion and recommendations

This paper aims to expand current photographic image theory research and practice. It highlights the need to consolidate mind, brain and body approaches within visual literacy pedagogy and practice when investigating how we apprehend photographic images. While not exhaustively presenting the findings, this paper has focused on particular insights from the ‘Fast Image’ study. On a practical level, the ACIVA model of image apprehension (Figure 1) can be adopted and incorporated into visual literacy curricula at the tertiary level. The purpose of the model is to help students understand better how images make meaning through a neurophenomenological process, how visuality and our engagement with images are changing, and how the different and evolving presentations of the image affect our emotional and critical engagement and experience at each stage of the visual apprehension process illustrated in Figure 1.

On a theoretical level, this paper encourages further discussion of the changing nature of image apprehension and of the need to evolve interpretive and semiotic image theories to incorporate neuroscientific ideas involving the coalescence of mind, brain and body as we attempt to create meaning in what we see. In addition, we need to consider the fluidity of images, observing them moving through the diverse and emerging temporal spaces provided by new visual and experiential technologies such as generative AI and virtual, augmented and mixed realities. These visual and technical spaces come with their own affordances, where images are accessed in environments that bring intensities of visual noise and intent, influencing the degree to which, and the quality with which, we experience images. How are images understood in these spaces?

Additionally, the synergy between neuroscientific and phenomenological aspects of image apprehension can be further explored using science-oriented research techniques such as eye tracking, EEG (electroencephalography) and fMRI (functional magnetic resonance imaging) to map visual activity and stimuli to brain responses during image viewing. These techniques were previously applied mainly to cognitive-functional aspects of viewing, such as the effects of viewing conditions on perceptual load (Lavie Citation2005), internet search tasks and reading text (Small et al. Citation2009), visual masking effects (Ansorge et al. Citation2008) and ‘real world’ scene perception (Wu, Wick, and Pomplun Citation2014). Studies closer to visual communication have focused on areas such as online advertisement cues (Yen and Chiang Citation2021), evaluating the appeal of body representations in artistic and photographic images through neurocognitive processing (Lutz et al. Citation2013) and investigating consumers’ sensorimotor, cognitive and affective responses to marketing stimuli (Suomala et al. Citation2012). Little research has examined photographic image apprehension through neurophenomenological approaches. Combining the advantages of quantitative and qualitative research methods in a single project offers the possibility of a broader and richer range of findings not confined to one approach (Creswell and Plano Clark Citation2007; Johnson and Onwuegbuzie Citation2004).

Limitations

In the original ‘Fast Image’ study, the small sample size of 10 students and five practitioners made comprehensive statistical analysis difficult (Marotta Citation2022). However, the study was conducted from a qualitative standpoint and designed as an exploratory gathering of information, and it successfully provided insights into the experiential and cognitive aspects of image viewing that are the focus of this paper.

Final note

The researcher acknowledges the experiential and cognitive overlap of effect and affect when we engage with images in different situations, including digital and print environments. Although each viewing situation has affordances that trigger the apprehension process and influence how it occurs, viewing in these situations is experienced through degrees of bottom-up and top-down processing (Figure 1), some fast and automatic, of which we are unaware (Varela, Thompson, and Rosch Citation1993), and others more intentional and contemplative.

The emerging neuroscientific and philosophical theories of Varela (Citation1996), Bechara, Damasio, and Damasio (Citation2000), Lombardi (Citation2019), Massumi (Citation2002), Dehaene (Citation2009) and Hansen (Citation2011; Citation2015) are essential to understanding how we apprehend images across different technologies. These theories treat human cognition as embodied and enactive (Lombardi Citation2019; Varela Citation1996), shifting the discussion about the way we understand and engage with images toward one that is inclusive of, but not solely driven by, technology or cognition, acknowledging instead the interaction of mind, brain and body combining in different intensities of experience and cognition and leading to a wide range of possible understandings or misunderstandings. From this perspective, through various human cognitive and biological actions influenced by environments and technologies, we feel, think and do not see the image.

Acknowledgements

Ethical approval for this project was given by Central Queensland University Human Ethics Committee: H15/07-175

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Thomas Marotta

Tom Marotta (Ph.D.) is a sessional lecturer at Central Queensland University with interests in the field of visual communication and digital media.

References

  • Abrams, R. A., and M. B. Law. 2002. “Random Visual Noise Impairs Object-Based Attention.” Experimental Brain Research 2 (3): 349–353. https://doi.org/10.1007/s00221-001-0899-2.
  • Albertazzi, L. 2013. Handbook of Experimental Phenomenology. Visual Perception of Shape, Space and Appearance. Chichester, WS: John Wiley & Sons.
  • Andreatta, M., and P. Pauli. 2015. “Appetitive vs. Aversive Conditioning in Humans.” Frontiers in Behavioral Neuroscience 9 (128): 1–8. https://doi.org/10.3389/fnbeh.2015.00128.
  • Ansorge, U., G. Francis, H. Michael, M. H. Herzog, and H. Öğmen. 2008. “Visual Masking and the Dynamics of Human Perception, Cognition, and Consciousness: A Century of Progress, a Contemporary Synthesis, and Future Directions.” Advances in Cognitive Psychology 3 (1–2): 1–8. https://doi.org/10.2478/v10053-008-0009-0.
  • Bacon-Macé, N., M. J. M. Macé, M. Fabre-Thorpe, and S. J. Thorpe. 2005. “The Time Course of Visual Processing: Backward Masking and Natural Scene Categorization.” Vision Research 45 (11): 1459–1469. https://doi.org/10.1016/j.visres.2005.01.004.
  • Bagdasaryan, J., and M. Le Van Quyen. 2013. “Experiencing Your Brain: Neurofeedback as a new Bridge Between Neuroscience and Phenomenology.” Frontiers in Human Neuroscience 7: 1–10. https://doi.org/10.3389/fnhum.2013.00680.
  • Barsalou, L. W. 2010. “Grounded Cognition: Past, Present, and Future.” Topics in Cognitive Science 2 (4): 716–724. http://doi.org/10.1111/tops.2010.2.issue-4.
  • Bechara, A., H. Damasio, and A. R. Damasio. 2000. “Emotion, Decision Making and the Orbitofrontal Cortex.” Cerebral Cortex (New York, N.Y. 1991) 10 (3): 295–307. https://doi.org/10.1093/cercor/10.3.295.
  • Behrmann, M., and M. Nishimura. 2010. “Agnosias.” Wiley Interdisciplinary Reviews: Cognitive Science 1 (2): 203–213. https://doi.org/10.1002/wcs.42.
  • Bergson, H. 1911. “Of the Selection of Images for Conscious Presentation.” In What Our Body Means and Does, edited by N. M. Paul and W. S. Palmer, 1–85. George Allen and Co (Original work published in 1896). https://doi.org/10.1037/13803-001.
  • Biran, I., and H. Coslett. 2003. “Visual Agnosia.” Current Neurology and Neuroscience Reports 3 (6): 508–512. https://doi.org/10.1007/s11910-003-0055-4.
  • Bissonette, G. B., R. N. Gentry, S. Padmala, L. Pessoa, and M. R. Roesch. 2014. “Impact of Appetitive and Aversive Outcomes on Brain Responses. Linking the Animal and Human Literatures.” Frontiers in Systems Neuroscience 8: 1–15. https://doi.org/10.3389/fnsys.2014.00024.
  • Bloch, M. 2008. “Truth and Sight: Generalizing Without Universalizing.” The Journal of the Royal Anthropological Institute 14 (s1): S22–S32. http://www.jstor.org/stable/20203795.
  • Breitmeyer, B. G. 2008. “Visual Masking: Past Accomplishments, Present Status, Future Developments.” Advances in Cognitive Psychology 3 (1–2): 9–20. https://doi.org/10.2478/v10053-008-0010-7.
  • Breitmeyer, B. G. 2014. The Visual (un)Conscious and its (dis)Contents: A Microtemporal Approach. Oxford: Oxford University Press.
  • Buck, R. 1999. “The Biological Affects: A Typology.” Psychological Review 106 (2): 301–336. https://doi.org/10.1037/0033-295x.106.2.301.
  • Bullier, J. 2001. “Integrated Model of Visual Processing.” Brain Research Reviews 36 (2–3): 96–107. https://doi.org/10.1016/S0165-0173(01)00085-6.
  • Cacioppo, J. T., and G. G. Berntson. 1999. “The Affect System: Architecture and Operating Characteristics.” Current Directions in Psychological Science 8 (5): 133–137. https://www.jstor.org/stable/20182585.
  • Clore, G. L., and J. R. Huntsinger. 2007. “How Emotions Inform Judgment and Regulate Thought.” Trends in Cognitive Sciences 11 (9): 393–399. https://doi.org/10.1016/j.tics.2007.08.005.
  • Clore, G. L., and J. Palmer. 2009. “Affective Guidance of Intelligent Agents: How Emotion Controls Cognition.” Cognitive Systems Research 10 (1): 21–30. https://doi.org/10.1016/j.cogsys.2008.03.002.
  • Clore, G. L., R. S. Wyer, B. Dienes, K. Gasper, C. Gohm, and L. Isbell. 2001. “Affective Feelings as Feedback: Some Cognitive Consequences.” In Theories of Mood and Cognition: A User’s Guidebook, edited by L. L. Martin, and G. L. Clore, 27–62. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
  • Corr, P. J. 2013. “Approach and Avoidance Behaviour: Multiple Systems and their Interactions.” Emotion Review 5 (3): 285–290. http://doi.org/10.1177/1754073913477507.
  • Cowan, N. 2014. “Working Memory Underpins Cognitive Development, Learning, and Education.” Educational Psychology Review 26 (2): 197–223. https://doi.org/10.1007/s10648-013-9246-y.
  • Creswell, J. W., and V. L. Plano Clark. 2007. Designing and Conducting Mixed Methods Research. Thousand Oaks: Sage.
  • Dehaene, S. 2009. Reading in the Brain: The Science and Evolution of a Human Invention. New York: Viking/Penguin Group.
  • Dhanani, A. 1994. The Physical Theory of Kalam: Atoms, Space and Void in Basrian Mu’tazili Cosmology. Leiden: E.J. Brill.
  • Dura-Bernal, S., T. Wennekers, and S. L. Denham. 2012. “Top-down Feedback in an HMAX-Like Cortical Model of Object Perception Based on Hierarchical Bayesian Networks and Belief Propagation.” PloS One 7 (11): 1–25. https://doi.org/10.1371/journal.pone.0048216.
  • Fazelpour, S., and E. Thompson. 2015. “The Kantian Brain: Brain Dynamics from a Neurophenomenological Perspective.” Current Opinion in Neurobiology 31: 223–229. https://doi.org/10.1016/j.conb.2014.12.006.
  • Frank, A. J., and E. A. Wilson. 2020. A Silvan Tomkins Handbook: Foundations for Affect Theory. Minneapolis: University of Minnesota Press.
  • Fulkerson, M. 2020. “Touch.” In The Stanford Encyclopedia of Philosophy, edited by E. N. Zalta. Stanford, CA: Stanford University.
  • Gallagher, S. 2009. “Neurophenomenology.” In The Oxford Companion to Consciousness, edited by T. Bayne, A. Cleeremans, and P. Wilken, 470–472. Oxford: Oxford University Press.
  • Gatter, S., V. Hüttl-Maack, and P. A. Rauschnabel. 2021. “Can Augmented Reality Satisfy Consumers’ Need for Touch?” Psychology and Marketing 39 (3): 508–523. https://doi.org/10.1002/mar.21618.
  • Gibson, J. J. 1966. The Senses Considered as Perceptual Systems. Boston: Houghton Mifflin.
  • Gibson, J. J. 2014. The Ecological Approach to Visual Perception: Classic Edition. New York: Psychology Press.
  • Gilbert, C. D., and L. Wu. 2013. “Top-down Influences on Visual Processing.” Nature Reviews Neuroscience 14 (5): 350–363. https://doi.org/10.1038/nrn3476.
  • Glenberg, A. M., J. K. Witt, and J. Metcalfe. 2013. “From the Revolution to Embodiment.” Perspectives on Psychological Science 8 (5): 573–585. http://doi.org/10.1177/1745691613498098.
  • Hansen, M. B. N. 2011. “From Fixed to Fluid: Material-Mental Images Between Neural Synchronization and Computational Mediation.” In Releasing the Image: From Literature to New Media, edited by J. Khalip and R. Mitchell, 83–111. Redwood, CA: Stanford University Press.
  • Hansen, M. B. N. 2015. “The Operational Present of Sensibility.” The Nordic Journal of Aesthetics 24 (47): 38–53. https://doi.org/10.7146/nja.v24i47.23054.
  • Hayles, K. N. 2012. How we Think. Chicago, IL: The University of Chicago Press.
  • Herzog, M. H., and C. Koch. 2001. “Seeing Properties of an Invisible Object: Feature Inheritance and Shine-Through.” PNAS 98 (7): 4271–4275. https://www.jstor.org/stable/3055406.
  • Ilias, E., A. R. Nikolaev, D. C. Kiper, and C. van Leeuwen. 2014. “Distributed Processing of Color and Form in the Visual Cortex.” Frontiers in Psychology 5: 1–14. https://doi.org/10.3389/fpsyg.2014.00932.
  • Jaśkowski, P., B. Skalska, and R. Verleger. 2003. “How the Self Controls its “Automatic Pilot” When Processing Subliminal Information.” Journal of Cognitive Neuroscience 15 (6): 911–920. https://doi.org/10.1162/089892903322370825.
  • Johnson, R. B., and A. J. Onwuegbuzie. 2004. “Mixed Methods Research: A Research Paradigm Whose Time has Come.” Educational Researcher 33 (7): 14–26. https://doi.org/10.3102/0013189X033007014.
  • Kahn, D. 2013. “Brain Basis of Self: Self-Organization and Lessons from Dreaming.” Frontiers in Psychology 4: 408. https://doi.org/10.3389/fpsyg.2013.00408.
  • Kautish, P., and A. Khare. 2022. “Investigating the Moderating Role of AI-Enabled Services on Flow and awe Experience.” International Journal of Information Management 66: 102519. https://doi.org/10.1016/j.ijinfomgt.2022.102519.
  • Khachouf, O. T., S. Poletti, and G. Pagnoni. 2013. “The Embodied Transcendental: A Kantian Perspective on Neurophenomenology.” Frontiers in Human Neuroscience 30 (7): 611. https://doi.org/10.3389/fnhum.2013.00611.
  • Khorsand, P., T. Moore, and A. Soltani. 2015. “Combined Contributions of Feedforward and Feedback Inputs to Bottom-up Attention.” Frontiers in Psychology 6 (155): 1–11. https://doi.org/10.3389/fpsyg.2015.00155.
  • Kim, B. W., Y. Park, and I. H. Suh. 2020. “Integration of top-Down and Bottom-up Visual Processing Using a Recurrent Convolutional–Deconvolutional Neural Network for Semantic Segmentation.” Intel Serv Robotics 13 (1): 87–97. https://doi.org/10.1007/s11370-019-00296-5.
  • Kroupi, E., P. Hanhart, J.-S. Lee, M. Rerabek, and T. Ebrahimi. 2016. “Modeling Immersive Media Experiences by Sensing Impact on Subjects.” Multimedia Tools and Applications 75 (20): 12409–12429. https://doi.org/10.1007/s11042-015-2980-z.
  • Lamme, V. A. 2004. “Separate Neural Definitions of Visual Consciousness and Visual Attention; a Case for Phenomenal Awareness.” Neural Networks 17 (5–6): 861–872. https://doi.org/10.1016/j.neunet.2004.02.005.
  • Lang, A. 2006. “Using the Limited Capacity Model of Motivated Mediated Message Processing to Design Effective Cancer Communication Messages.” Journal of Communication 56 (suppl_1): S57–S80. https://doi.org/10.1111/j.1460-2466.2006.00283.x.
  • Lang, P. J., and M. M. Bradley. 2013. “Appetitive and Defensive Motivation: Goal-Directed or Goal-Determined?” Emotion Review 5 (3): 230–234. https://doi.org/10.1177/1754073913477511.
  • Lavie, N. 2005. “Distracted and Confused?: Selective Attention Under Load.” Trends in Cognitive Sciences 9 (2): 75–82. https://doi.org/10.1016/j.tics.2004.12.004.
  • Loh, K. K., and R. Kanai. 2016. “How has the Internet Reshaped Human Cognition?” The Neuroscientist 22 (5): 506–520. https://doi.org/10.1177/1073858415595005
  • Lombardi, R. 2019. “Developing a Capacity for Bodily Concern: Antonio Damasio and the Psychoanalysis of Body–Mind Relationship.” Psychoanalytic Inquiry 39 (8): 534–544. https://doi.org/10.1080/07351690.2019.1671066.
  • Lu, Z., and B. Dosher. 2014. Visual Psychophysics: From Laboratory to Theory. Cambridge, MA: MIT Press.
  • Luck, S. J., and E. K. Vogel. 2013. “Visual Working Memory Capacity: From Psychophysics and Neurobiology to Individual Differences.” Trends in Cognitive Sciences 17 (8): 391–400. https://doi.org/10.1016/j.tics.2013.06.006.
  • Lutz, A., A. Nassehi, Y. Bao, E. Pöppel, A. Sztrókay, M. Reiser, F. Kai, and E. Gutyrchik. 2013. “Neurocognitive Processing of Body Representations in Artistic and Photographic Images.” NeuroImage 66 (1): 288–292. https://doi.org/10.1016/j.neuroimage.2012.10.067.
  • Malabou, C. 2008. What Should we do with our Brain? Bronx, NY: Fordham University Press.
  • Marks, L. U. 2008. “Thinking Multisensory Culture.” Paragraph 31 (2): 123–137. http://www.jstor.org/stable/43151879.
  • Marotta, T. 2022. Fast image: A study of photographic image usage and apprehension by graphic design students and practitioners comparing print and online media [Doctoral dissertation, Central Queensland University]. Thesis. https://doi.org/10.25946/22207732.v1.
  • Massumi, B. 1996. “The Bleed: Where Body Meets Image.” In Rethinking Borders, edited by J. C. Welchman, 18–40. Minneapolis, MN: University of Minnesota Press.
  • Massumi, B. 2002. Parables for the Virtual: Movement, Affect, Sensation. Durham, NC: Duke University Press.
  • Niven, K., and E. Miles. 2013. “Affect Arousal.” In Encyclopedia of Behavioral Medicine, edited by M. D. Gellman, and J. R. Turner, 50–52. New York: Springer.
  • Paterson, M. 2016. Seeing with the Hands: Blindness, Vision and Touch After Descartes. Edinburgh: Edinburgh University Press.
  • Roelfsema, P. R., M. Tolboom, and P. S. Khayat. 2007. “Different Processing Phases for Features, Figures, and Selective Attention in the Primary Visual Cortex.” Neuron 56 (5): 785–792. https://doi.org/10.1016/j.neuron.2007.10.006.
  • Schipper, M., U. Ernst, and M. Fahle. 2008. “Investigating the Interactions Between Top-Down and Bottom-Up Visual Processing: The Influence of Prior Expectations on Contour Integration.” Conference abstract, Bernstein Symposium, Frontiers in Computational Neuroscience. http://doi.org/10.3389/conf.neuro.10.2008.01.108.
  • Schupp, H. T., R. Schmälzle, T. Flaisch, A. I. Weike, and A. O. Hamm. 2012. “Affective Picture Processing as a Function of Preceding Picture Valence: An ERP Analysis.” Biological Psychology 91 (1): 81–87. http://doi.org/10.1016/j.biopsycho.2012.04.006.
  • Scott, S. H. 2016. “A Functional Taxonomy of Bottom-up Sensory Feedback Processing for Motor Actions.” Trends in Neurosciences 39 (8): 512–526. https://doi.org/10.1016/j.tins.2016.06.001.
  • Shapiro, L. A. 2011. Embodied Cognition. Oxfordshire: Routledge.
  • Siegel, E. H., J. B. Wormwood, K. S. Quigley, and L. F. Barrett. 2018. “Seeing What you Feel: Affect Drives Visual Perception of Structurally Neutral Faces.” Psychological Science 29 (4): 496–503. https://doi.org/10.1177/0956797617741718.
  • Small, G. W., T. D. Moody, P. Siddarth, and S. Y. Bookheimer. 2009. “Your Brain on Google: Patterns of Cerebral Activation During Internet Searching.” American Journal of Geriatric Psychiatry 17 (2): 116–126. https://doi.org/10.1097/JGP.0b013e3181953a02.
  • Storbeck, J., and G. L. Clore. 2008. “Affective Arousal as Information: How Affective Arousal Influences Judgments, Learning, and Memory.” Social and Personality Psychology Compass 2 (5): 1824–1843. https://doi.org/10.1111/j.1751-9004.2008.00138.x.
  • Suomala, J., L. Palokangas, S. Leminen, M. Westerlund, J. Heinonen, and J. Numminen. 2012. “Neuromarketing: Understanding Customers’ Subconscious Responses to Marketing.” Technology Innovation Management Review 2 (12): 12–21. https://doi.org/10.22215/timreview/634.
  • Theeuwes, J. 2010. “Top–Down and Bottom–up Control of Visual Selection.” Acta Psychologica 135 (2): 77–99. https://doi.org/10.1016/j.actpsy.2010.02.006.
  • Thompson, E., A. Lutz, and D. Cosmelli. 2005. “Neurophenomenology: An Introduction for Neurophilosophers.” In Cognition and the Brain: The Philosophy and Neuroscience Movement, edited by A. Brook, and K. Akins, 40–97. Cambridge: Cambridge University Press.
  • Tomkins, S. S. 1982. “Affect Theory.” In Emotion in the Human Face, 2nd ed., edited by P. Ekman, 353–395. Cambridge: Cambridge University Press.
  • Tomkins, S. S. 2008. Affect Imagery Consciousness: The Complete Edition. New York: Springer.
  • Varela, F. J. 1996. “Neurophenomenology: A Methodological Remedy to the Hard Problem.” Journal of Consciousness Studies 3: 330–349. https://philpapers.org/rec/VARNAM.
  • Varela, F. J., E. Thompson, and E. Rosch. 1993. The Embodied Mind: Cognitive Science and Human Experience, 9–49. Cambridge, MA: The MIT Press.
  • Wu, C.-C., F. A. Wick, and M. Pomplun. 2014. “Guidance of Visual Attention by Semantic Information in Real-World Scenes.” Frontiers in Psychology 5: 54. http://doi.org/10.3389/fpsyg.2014.00054.
  • Yen, C., and M.-C. Chiang. 2021. “Examining the Effect of Online Advertisement Cues on Human Responses using Eye-tracking, EEG, and MRI.” Behavioural Brain Research 402: 113128. http://doi.org/10.1016/j.bbr.2021.113128.
  • Zadra, J. R., and G. L. Clore. 2011. “Emotion and Perception: The Role of Affective Information.” Wiley Interdisciplinary Reviews: Cognitive Science 2 (6): 676–685. https://doi.org/10.1002/wcs.147.
  • Zeki, S. 2005. “The Ferrier Lecture 1995: Behind the Seen: The Functional Specialization of the Brain in Space and Time.” Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 360 (1458): 1145–1183. http://doi.org/10.1098/rstb.2005.1666.
  • Zeki, S. 2007. “A Theory of Micro-Consciousness.” In The Blackwell Companion to Consciousness, edited by M. Velmans, and S. Schneider, 580–588. Oxford: Blackwell Publishing. https://doi.org/10.1002/9780470751466.
  • Zhou, J., N. C. Benson, K. N. Kay, and J. Winawer. 2018. “Compressive Temporal Summation in Human Visual Cortex.” Journal of Neuroscience 38 (3): 691–709. https://doi.org/10.1523/JNEUROSCI.1724-17.2017.
  • Zimmermann, M. 1989. “The Somatovisceral Sensory System.” In Human Physiology, edited by R. F. Schmidt, and G. Thews, 196–222. New York: Springer.