The Effect of Secondary Tasks on Power-Space Interactions in Semantic Category Judgment Tasks
ZHU Lei, SAI Xueying, Mulati Jiadela
Department of Psychology, Fudan University, Shanghai 200433, China
Abstract
Numerous studies have demonstrated that power is represented in the brain through vertical space, specifically via verbal-spatial and visuospatial coding. Previous research has shown that these two coding mechanisms are context-dependent, with the current task determining which code is activated. General semantic category judgment tasks primarily rely on verbal coding. However, it remains uncertain whether such tasks can activate visuospatial coding when verbal-spatial coding is excluded. The present study addresses this question using a dual-task paradigm. Experimental results revealed power-space interactions under single-task conditions, and these interactions were only disrupted by visuospatial secondary tasks under dual-task conditions. This indicates that semantic category judgment tasks can also rely on visuospatial coding, further supporting the context-dependent nature of power-space representations.
Keywords: Power-space representations, Context-dependency, Visuospatial coding, Verbal-spatial coding
1. Introduction
1.1 Modal Representations
The connection between power and space can be explained through modal representations. Barsalou's Grounded Cognition Framework posits that conceptual processing activates relevant perceptual simulations (Barsalou, 1999, 2008; Glenberg, 1997). In other words, when processing a concept, individuals reactivate prior sensorimotor experiences (i.e., modal representations) to facilitate understanding of abstract concepts. This suggests that conceptual processing and perceptual processing share neural systems, a phenomenon Anderson (2010) terms "Neural Reuse." For example, processing color concepts partially relies on neural systems identical to those used for color visual perception (Martin, 2016; Wang et al., 2013). Similarly, processing power concepts may depend on reactivation of vertical spatial information, indicating that modal representations play a crucial role in power concept processing.
Increasing evidence supports the role of visuospatial coding (modal representations) in power-space associations (Chiao, 2010; Chiao et al., 2009; Dai & Zhu, 2018; Jiang et al., 2015; Jiang & Zhu, 2015; Lu et al., 2017; Zhang et al., 2019). For instance, Schubert (2005) presented participants with power words (e.g., "boss," "secretary") at different screen positions (top/bottom) or required key presses (up/down arrow keys) while judging power magnitude. Results showed faster responses when high-power words appeared at the top of the screen or required an up-arrow key press, and when low-power words appeared at the bottom or required a down-arrow key press. This demonstrates that power concepts are represented through vertical space, with participants activating vertical spatial information during power judgments. When activated vertical spatial information matches the actual presented height or key direction, responses are facilitated. Our research (Jiang et al., 2015) extended these findings using animal-related power words (e.g., "tiger," "rabbit") in a semantic category judgment task (human/animal) with up/down arrow key responses, where the power-space interaction persisted. Currently, explicit power judgment tasks (Schubert, 2005) and implicit semantic category judgment tasks (Jiang et al., 2015) represent the two primary paradigms in power processing research.
Additional evidence for visuospatial coding of power comes from distance effects. As a modal representation, visuospatial coding must be continuous, with different power values corresponding to different vertical heights. Concepts with continuous modal representations exhibit distance effects, where comparing two instances along a continuum takes longer when the instances are closer together. For example, comparing numerically closer numbers (e.g., 98 and 99) is slower than comparing more distant numbers (e.g., 11 and 99) (Dehaene et al., 2003). Similarly, we found that comparing two words farther apart on the power dimension was faster than comparing closer words (Jiang & Zhu, 2015). Furthermore, neuroimaging studies have shown that processing power-related concepts (status) activates brain regions used for spatial information processing (Chiao et al., 2009; Chiao, 2010; Mason et al., 2014).
1.2 Amodal Representations
However, this visuospatial coding (modal representation) contradicts the widely accepted semantic network model of conceptual representation and has thus been questioned by many scholars (Machery, 2016; Mahon, 2015; Leshinskaya & Caramazza, 2016). In semantic network models, concepts are represented by quasi-semantic nodes connected in a network. When one node is activated, activation spreads through connections to related nodes. This amodal representation can also be termed verbal-spatial coding. Schubert (2005) proposed that verbal-spatial coding could also explain power-space interactions. If a high-power word appears at the top of the screen, the "high" node is activated, and activation spreads to the "high power" node, facilitating processing of high-power words. This distinction between visual and verbal-spatial coding is similarly discussed in Dual Coding Theory (Paivio, 1986), which distinguishes between analog coding (representing concepts through concrete images or prototypes) and verbal symbols (representing concepts through linguistic or semantic symbols).
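To make the spreading-activation account concrete, the toy sketch below (our illustration; the node names and connection weights are hypothetical, not drawn from any cited model) shows how a word presented at the top of the screen could activate a "high" position node whose activation then spreads to the "high power" node and to individual high-power words, priming their processing.

```python
from collections import defaultdict

# Toy semantic network: node -> [(neighbor, connection weight)].
# Node names and weights are hypothetical illustrations only.
network = {
    "high (position)": [("high power", 0.8)],
    "high power": [("boss", 0.7), ("king", 0.7)],
    "low (position)": [("low power", 0.8)],
    "low power": [("secretary", 0.7), ("rabbit", 0.7)],
}

def spread(source, initial=1.0, decay=0.5, steps=2):
    """Propagate activation outward from `source`, attenuating each step."""
    activation = defaultdict(float)
    activation[source] = initial
    frontier = {source: initial}
    for _ in range(steps):
        nxt = defaultdict(float)
        for node, act in frontier.items():
            for neighbor, weight in network.get(node, []):
                nxt[neighbor] += act * weight * decay
        for node, act in nxt.items():
            activation[node] += act
        frontier = dict(nxt)
    return dict(activation)

# A word shown at the top of the screen pre-activates the position node,
# which in turn primes "high power" and high-power words.
print(spread("high (position)"))
```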
Beyond power concepts, research on Spatial-Numerical Association of Response Codes (SNARC) effects has also distinguished between visuospatial and verbal-spatial coding. The SNARC effect refers to faster left-hand responses to small numbers and faster right-hand responses to large numbers. Initially, researchers proposed that this occurred because numerical concepts are represented along a continuous mental number line from left to right—this is visuospatial coding (Dehaene et al., 1993; Dehaene et al., 1990; Hubbard et al., 2005). However, Gevers et al. (2010) found that when participants were presented with numbers and asked to respond verbally with "left" or "right" to indicate parity (e.g., say "left" for odd numbers, "right" for even numbers), the SNARC effect still emerged. This suggests that even without visual-spatial information, the interaction between space and number persists, indicating that numerical concepts can also be represented through verbal-spatial coding. Similar to Gevers et al. (2010), our research (Dai & Zhu, 2018) found that power-space interactions could be based entirely on verbal-spatial coding. Following Jiang et al. (2015), we required participants to perform semantic category judgments with power words presented at the screen center and two response labels ("high" and "low") appearing on either side, corresponding to the f and j keys. The left-right positions of these labels varied across trials. When vertical spatial information was excluded, the power-space interaction remained, demonstrating that power concepts can be represented through verbal-spatial coding. Additionally, Dai and Zhu (2018) found that when visuospatial and verbal coding conflicted, verbal coding dominated.
1.3 Context Dependency
Many studies have found that which code is activated during conceptual processing depends on context. Mahon and Caramazza (2008) proposed that verbal symbolic representations and perceptual representations are independent systems, with verbal coding forming the core of conceptual representation that activates under any circumstance, while perceptual representations activate only when relevant to the current task. Neuroimaging research supports this view. For example, studies have found that processing action words does not activate motor brain regions during certain tasks, such as grammatical category judgment or evaluative decision-making (Longe et al., 2007; Perani et al., 1999; Tomasino & Rumiati, 2013). Regarding power processing, although power-space interactions appear in both explicit power judgment and semantic category judgment tasks (Jiang et al., 2015; Schubert, 2005), these interactions disappear when font type (bold vs. Song typeface) is manipulated and participants judge the font rather than the word's meaning. Consistent with the primacy of verbal coding, Dai and Zhu (2018) demonstrated that verbal coding dominates when visuospatial and verbal coding conflict.
However, Barsalou (2016) argues that no conceptual core representation activates under all circumstances; rather, conceptual processing relies on flexible, context-dependent representations. As long as a representation is relevant to the current task and sufficient cognitive resources are available, it can be activated. Similarly, Glenberg (1997) notes that beyond task relevance, activating modal representations depends on cognitive resources: a perceptual experience contains many aspects of information, and more cognitive resources enable more detailed perceptual simulation. We investigated how cognitive resources affect activation of the two representations using a dual-task paradigm. Zhang et al. (2019) required participants to perform power judgments while simultaneously completing a verbal secondary task (remembering presented letters) or a visuospatial secondary task (remembering locations of gray squares). Results showed that power-space interactions disappeared regardless of which secondary task was used. Assuming that each secondary task occupies the resources needed to activate its corresponding code, this pattern suggests that explicit power judgment relies on both visuospatial and verbal codes simultaneously, and that the two codes are interdependent: when one is disrupted, the other cannot be activated either.
Furthermore, dual-task paradigm research on semantic category judgment tasks found that power-space interactions in this task are only affected by verbal secondary tasks, not by visuospatial secondary tasks (Wu et al., submitted). This suggests that semantic category judgment tasks may primarily activate verbal-spatial coding, consistent with Dai and Zhu (2018). Finally, another study from our team adapted Dai and Zhu's (2018) exclusion procedure to design two new explicit power judgment tasks (Wu et al., submitted). Results showed that when visuospatial information was excluded, only verbal secondary tasks affected power-space interactions, and when verbal-spatial information was excluded, only visuospatial secondary tasks affected power-space interactions.
In summary, dual-task paradigm research not only highlights the importance of cognitive resources for activating particular codes but also suggests that which code is activated depends on the current task. Conventional explicit power judgment tasks may rely on both codes, but when either visuospatial or verbal-spatial information is excluded, power judgment can only activate the alternative code. Additionally, semantic category judgment tasks primarily rely on verbal coding. The next question, then, is whether semantic category judgment tasks can depend on visuospatial coding.
1.4 The Current Study
General semantic category judgment tasks primarily rely on verbal coding (Wu et al., submitted). However, it remains uncertain whether such tasks can activate visuospatial coding when verbal-spatial coding is excluded. Accordingly, the present study adopts Dai and Zhu's (2018) exclusion procedure to investigate, after excluding verbal-spatial information, how visuospatial and verbal secondary tasks affect power-space interactions in semantic category judgment tasks (Jiang et al., 2015). During the experiment, a power word was presented at the screen center with two response labels ("human" and "animal") appearing above and below the word, corresponding to up and down arrow keys. The positions of these labels varied across trials, and participants performed semantic category judgments (human/animal). This manipulation effectively excluded verbal-spatial information. In previous studies (Jiang et al., 2015; Lu et al., 2017), the fixed association between category (human/animal) and response keys (up/down) encouraged participants to transform the two keys into "up" and "down" nodes and connect them with "human" and "animal," activating the semantic network including "high power" and "low power" nodes and facilitating verbal-spatial coding activation. By breaking this fixed association, we prevented such verbalization. Participants completed the semantic category judgment task under three conditions: single task, verbal dual task, and visuospatial dual task. Based on Dai and Zhu's (2018) finding that semantic category judgment can rely on either verbal or visuospatial coding, we hypothesized that under single-task conditions, power-space interactions would persist after excluding verbal-spatial codes. Under dual-task conditions, we predicted that power-space interactions would only be disrupted by visuospatial secondary tasks, not by verbal secondary tasks.
2. Method
2.1 Participants
The present study focused on the three-way interaction among task, power, and response key. The design was similar to Zhang et al. (2019), whose sample size of 20 served as the primary reference. Additionally, G*Power was used to estimate the required sample size. Across Zhang et al.'s (2019) three experiments, the smallest effect size for the task × power × screen position interaction was 0.2. With the effect size set to 0.2, α = 0.05, and power (1 − β) = 0.9, G*Power estimated a required sample of 23. Finally, to counterbalance the order of the three task conditions (six possible orders), we set the sample size to 24 participants (four per order). Twenty-four adults from the university community (11 male; mean age 22.54 ± 2.54 years) participated voluntarily. All were native Chinese speakers, right-handed, and had normal or corrected-to-normal vision. Informed consent was obtained from all participants.
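For readers without G*Power, the sketch below recomputes the estimate from the noncentral F distribution, following the general form of G*Power's "ANOVA: repeated measures, within factors" procedure. The assumed settings (12 repeated measures per subject, correlation 0.5 among them, nonsphericity ε = 1) are ours, since the paper does not report them, so the N this sketch returns need not match the 23 reported above.

```python
from scipy import stats

f, alpha, target_power = 0.2, 0.05, 0.9
m, rho, eps = 12, 0.5, 1.0   # measurements per subject, assumed correlation, epsilon
df1 = 2                      # task x power x key interaction: (3-1)*(2-1)*(2-1)

for n in range(4, 200):
    lam = f**2 * n * (m / (1 - rho)) * eps   # noncentrality parameter
    df2 = (n - 1) * df1 * eps
    f_crit = stats.f.ppf(1 - alpha, df1, df2)
    power = 1 - stats.ncf.cdf(f_crit, df1, df2, lam)
    if power >= target_power:
        print(f"first N reaching {target_power:.0%} power: {n} (power = {power:.2f})")
        break
```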
2.2 Materials
We adopted the experimental materials from Jiang et al. (2015), consisting of 64 power words: half representing humans and half representing animals, with half of each category being high-power and half low-power. Five additional words were used for practice. Twelve participants (6 male) who did not take part in the formal experiment rated the power of the materials (1 = extremely low power; 7 = extremely high power). Since valence is significantly correlated with vertical space (i.e., positive valence is associated with "up" and negative valence with "down"; Meier & Robinson, 2004; Schubert, 2005), valence was also rated (1 = very negative; 7 = very positive). ANOVA on the power ratings revealed a significant main effect of power, with high-power words rated significantly higher than low-power words, F(1, 60) = 351.18, p < 0.001, ηp² = 0.85. The main effect of word category (human/animal) was not significant, F(1, 60) = 1.52, p = 0.223, but the interaction was significant, F(1, 60) = 29.53, p < 0.001, ηp² = 0.33. Further analysis showed that high-power animal words were rated significantly higher than high-power human words, F(1, 30) = 6.31, p = 0.018, ηp² = 0.17, while low-power animal words were rated significantly lower than low-power human words, F(1, 30) = 37.00, p < 0.001, ηp² = 0.55. Word valence was balanced across conditions (Fs < 1).
All words were presented in bold, 48-point, black font on a white background at the screen center. The two response labels ("animal" and "human") were also presented in bold, 48-point, black font above and below the target word, vertically aligned with it. For the visuospatial secondary task, a 1.4 cm × 1.4 cm gray square served as the stimulus, appearing in one of four positions on the same horizontal line as the power word. To exclude verbal-spatial information, the verbal memory task used a female voice reading one of four letters (D, G, P, S) with durations of 334, 336, 344, and 345 ms, respectively.
2.3 Design and Procedure
The experiment employed a within-subjects design with three factors: task condition (single task/visuospatial dual task/verbal dual task) × power (high/low) × response key (up arrow/down arrow). The experiment consisted of three blocks, one for each task condition, with order counterbalanced across participants. Each block contained 128 experimental trials and 5 practice trials. Each of the 64 words appeared twice per block—once with the "human" label above the word and once with it below. Trials were presented in random order within each block. Participants wore headphones throughout the experiment and adjusted the volume to a comfortable level before beginning.
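As a concrete illustration of this design, one block's trial list could be generated as follows (a minimal sketch; the English stand-in words and field names are ours, not the actual Chinese stimuli):

```python
import random

# Hypothetical English stand-ins for the 64 Chinese stimulus words.
words = [
    {"word": "king", "category": "human", "power": "high"},
    {"word": "secretary", "category": "human", "power": "low"},
    {"word": "tiger", "category": "animal", "power": "high"},
    {"word": "rabbit", "category": "animal", "power": "low"},
]

def build_block(words):
    """Each word appears twice per block: once with the 'human' label on
    top and once with it on the bottom, in random order."""
    trials = [dict(w, human_label_on_top=top)
              for w in words for top in (True, False)]
    random.shuffle(trials)
    return trials

for trial in build_block(words)[:3]:
    print(trial)
```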
In the single-task condition (Figure 1), each trial began with a black fixation cross (96-point Courier New font) presented at the screen center for 0.5 s, followed by simultaneous presentation of the power word and response labels for 2 s. The label positions varied across trials, with "human" appearing above the word on half the trials and below on the other half. The up and down arrow keys corresponded to the top and bottom labels, respectively. Participants used their right index or middle finger to press the corresponding arrow key to make the semantic category judgment as quickly and accurately as possible. For example, if "king" appeared with the "human" label above, participants pressed the up arrow key. Regardless of response, the program automatically advanced to the next trial after 2 s. Participants were instructed to rest their right index and middle fingers between the two arrow keys and return to this position after each response, keeping their fingers within the arrow key area throughout the experiment.
In the visuospatial dual-task condition (Figure 1), a 1.4 cm × 1.4 cm gray square appeared simultaneously with the power word and labels for 0.5 s at one of four horizontal positions (5%, 20%, 80%, or 95% of screen width from the left border). Participants performed the semantic category judgment while remembering the square's location. After 0.5 s, the gray square disappeared while the word and labels remained. Following offset of the word and labels, participants had 1.5 s to indicate the square's position using their left hand on the number keys 1, 2, 3, and 4 (corresponding to the four screen positions) as quickly and accurately as possible. The program automatically advanced after 1.5 s regardless of response.
In the verbal dual-task condition (Figure 1), a female voice read one of four letters through the headphones simultaneously with word and label presentation. Participants performed the semantic category judgment while remembering the heard letter. After the word and labels disappeared, participants had 1.5 s to indicate which letter appeared using their left hand on number keys 1, 2, 3, and 4 (corresponding to D, G, P, and S) as quickly and accurately as possible.
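The single-task trial flow might look like the following in PsychoPy (our reconstruction from the Procedure, not the authors' script; timings follow the text, while the window settings and label positions are simplified assumptions):

```python
from psychopy import core, event, visual

win = visual.Window(color="white", units="pix")
fixation = visual.TextStim(win, text="+", font="Courier New", color="black", height=96)
word = visual.TextStim(win, text="king", color="black", height=48, bold=True)
top_label = visual.TextStim(win, text="human", color="black", height=48, bold=True, pos=(0, 120))
bottom_label = visual.TextStim(win, text="animal", color="black", height=48, bold=True, pos=(0, -120))

fixation.draw()
win.flip()
core.wait(0.5)                                   # 0.5 s fixation

for stim in (word, top_label, bottom_label):     # word plus both labels
    stim.draw()
clock = core.Clock()
win.flip()
clock.reset()
keys = event.waitKeys(maxWait=2.0, keyList=["up", "down"], timeStamped=clock)
core.wait(max(0.0, 2.0 - clock.getTime()))       # trial always lasts the full 2 s
print(keys)                                      # None on a no-response trial
win.close()
```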
2.4 Results
Incorrect responses (single task: 4.33%; visuospatial dual task: 3.45%; verbal dual task: 2.80%), no-response trials (single task: 0.10%; visuospatial dual task: 0.16%; verbal dual task: 0.36%), and trials with reaction times exceeding two standard deviations from the condition mean (single task: 4.26%; visuospatial dual task: 3.97%; verbal dual task: 4.07%) were excluded from analysis. Experimental order showed no significant main effects or interactions (Fs < 1.26, ps > 0.29) and was not included in subsequent analyses.
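In code, this exclusion pipeline might look like the sketch below, assuming a long-format trial DataFrame (the column names subj, task, power, key, rt, correct are ours, and grouping the ±2 SD trim by subject as well as condition cell is our assumption):

```python
import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Drop errors, no-response trials, and RTs beyond +/- 2 SD per cell."""
    df = df[df["correct"] == 1]            # incorrect responses
    df = df.dropna(subset=["rt"])          # no-response trials (rt recorded as NaN)
    cell = df.groupby(["subj", "task", "power", "key"])["rt"]
    mean, sd = cell.transform("mean"), cell.transform("std")
    return df[(df["rt"] - mean).abs() <= 2 * sd]
```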
Mean reaction times for each condition are presented in Table 1. ANOVA revealed a significant three-way interaction among task, power, and response key, F(2, 46) = 30.74, p < 0.001, ηp² = 0.57. Further analysis showed that, consistent with our hypothesis, the power × key interaction was significant in the single-task condition, F(1, 23) = 70.24, p < 0.001, ηp² = 0.75, and in the verbal dual-task condition, F(1, 23) = 46.14, p < 0.001, ηp² = 0.66, but not in the visuospatial dual-task condition, F(1, 23) = 0.01, p = 0.92. Additional analyses indicated that in both the single-task and verbal dual-task conditions, high-power words elicited faster responses with the up arrow key (single: F(1, 23) = 51.54, p < 0.001, ηp² = 0.69; verbal dual: F(1, 23) = 43.38, p < 0.001, ηp² = 0.65), while low-power words elicited faster responses with the down arrow key (single: F(1, 23) = 36.16, p < 0.001, ηp² = 0.61; verbal dual: F(1, 23) = 21.47, p < 0.001, ηp² = 0.48).
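The omnibus test can be reproduced generically with statsmodels (a sketch, not the authors' analysis code; it reuses the hypothetical `clean` function and `df` from the previous snippet):

```python
from statsmodels.stats.anova import AnovaRM

# One mean RT per subject x task x power x key cell, then a 3 x 2 x 2
# repeated-measures ANOVA; the task:power:key row in the output is the
# three-way interaction reported above.
cell_means = (clean(df)
              .groupby(["subj", "task", "power", "key"], as_index=False)["rt"]
              .mean())
print(AnovaRM(cell_means, depvar="rt", subject="subj",
              within=["task", "power", "key"]).fit())
```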
Table 1
Mean Reaction Times (ms) for Power Words Across Conditions
The main effect of task was significant, F(2, 46) = 25.45, p < 0.001, ηp² = 0.52, with faster responses in the single-task condition than in both dual-task conditions. The main effect of power was significant, F(1, 23) = 15.47, p = 0.001, ηp² = 0.40, with faster responses to high-power words. The task × key interaction was significant, F(2, 46) = 3.79, p = 0.030, ηp² = 0.14. Further analysis revealed that in the single-task condition, participants responded faster with the up arrow key, F(1, 23) = 6.86, p = 0.015, ηp² = 0.23, while no significant difference between keys emerged in either dual-task condition. The power × key interaction was significant, F(1, 23) = 41.82, p < 0.001, ηp² = 0.65, with high-power words showing faster up-arrow responses, F(1, 23) = 34.68, p < 0.001, ηp² = 0.33, and low-power words showing faster down-arrow responses, F(1, 23) = 33.19, p < 0.001, ηp² = 0.32.
Table 2
ANOVA Results
Note: *p < 0.05; **p < 0.01; ***p < 0.001.
Finally, we examined differences in accuracy between the two secondary tasks and found no significant difference (visuospatial: M = 0.98, SD = 0.03; verbal: M = 0.98, SD = 0.01; t(23) = 0.79, p = 0.440), indicating that differences in power processing across conditions were not due to differential difficulty of secondary tasks.
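This comparison is an ordinary paired t-test, sketched below with simulated stand-in accuracies (the real per-participant values are not reported, so the generated data are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
acc_vis = np.clip(rng.normal(0.98, 0.03, size=24), 0, 1)   # hypothetical per-subject accuracies
acc_verb = np.clip(rng.normal(0.98, 0.01, size=24), 0, 1)
t, p = stats.ttest_rel(acc_vis, acc_verb)
print(f"t(23) = {t:.2f}, p = {p:.3f}")
```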
3. Discussion
The present study investigated whether semantic category judgment tasks can activate visuospatial coding of power. Results showed that when verbal-spatial information was excluded, power-space interactions were only disrupted by visuospatial secondary tasks, not by verbal secondary tasks. Specifically, under single-task conditions, power-space interactions persisted even after excluding verbal-spatial information and inhibiting verbal-spatial coding activation, suggesting these interactions originated from a different representation—visuospatial coding. This demonstrates that which representation is activated in power processing is context-dependent, relying on the relevance of the representation to the current task. Moreover, under dual-task conditions, the interaction was only disrupted by visuospatial secondary tasks, further validating the modal nature of visuospatial representations and illustrating the second characteristic of representational context-dependency: sufficient cognitive resources are necessary. When cognitive resources are limited, relevant representations cannot be activated, preventing power-space interactions.
Our findings support the influence of cognitive resources on power-space representation activation. The study demonstrates that when visuospatial secondary tasks compete for resources with visuospatial coding activation, visuospatial coding cannot be activated and power-space interactions disappear. Barsalou (1999) proposed that conceptual processing triggers perceptual simulation—the reactivation of stored sensorimotor experiences in the brain. Perceptually simulated events contain many aspects of sensorimotor experience, and more available cognitive resources produce more detailed and vivid simulations (Glenberg, 1997; Barsalou, 1999). Thus, when cognitive resources are insufficient, perceptual simulation cannot occur, modal representations (visuospatial coding) cannot be activated, and power-space interactions vanish.
Furthermore, cognitive resources appear to be domain-specific, with greater resource conflict between tasks requiring the same type of resources. This aligns with Baddeley's working memory model. Baddeley (1996) proposed that working memory comprises the central executive, the phonological loop, and the visuospatial sketchpad. The phonological loop maintains verbal information through rehearsal, the visuospatial sketchpad maintains visual information, and the central executive coordinates the two subsystems. When two tasks share the same subsystem, they compete intensely for resources. Accordingly, we hypothesized that visuospatial secondary tasks would disrupt the activation of visuospatial coding, while verbal secondary tasks would disrupt the activation of verbal-spatial coding. Zhang et al. (2019) found that in conventional explicit power judgment tasks, power-space interactions were disrupted by both verbal and visuospatial secondary tasks. We speculate that in explicit judgment tasks, the two codes may be interdependent: when one is activated, the other must also be activated, so disrupting either code with a secondary task prevents activation of both and eliminates power-space interactions. In contrast, in an unpublished study from our lab, when activation of either code was restricted in a power judgment task, the other could be activated independently and was only affected by its corresponding secondary task.
For semantic category judgment tasks, Wu et al. (submitted) found that power-space interactions in conventional tasks were only affected by verbal secondary tasks, not visuospatial ones. They speculated that in semantic category judgment tasks, visuospatial coding activation may depend on prior verbal coding activation; that is, verbal coding is a prerequisite for activating visuospatial coding. Under single-task conditions, both codes are activated sequentially, producing power-space interactions. Under visuospatial dual-task conditions, verbal coding remains unaffected while visuospatial coding cannot be activated due to interference, yet power-space interactions persist because verbal coding remains intact. Under verbal dual-task conditions, verbal coding cannot be activated due to interference, and consequently visuospatial coding, which depends on it, also cannot be activated, eliminating power-space interactions. Dai and Zhu's (2018) finding that verbal coding dominates when the two codes conflict provides additional support for this view. However, the present study shows that when verbal coding activation is blocked, visuospatial coding can apparently be activated independently and is disrupted only by visuospatial secondary tasks. Meanwhile, Zhang et al. (2019) found that disrupting either code prevented activation of the other. Thus, the activation of power concept representations appears highly task-dependent, with the activated representation depending on various characteristics of the current task.
In conclusion, this study enriches our understanding of the context-dependent nature of the two power-space representations, yielding two key findings: (1) After excluding verbal-spatial coding, semantic category judgment tasks can rely independently on visuospatial coding (modal representations); and (2) This visuospatial coding is context-dependent—its activation is only disrupted by visuospatial secondary tasks, not by verbal secondary tasks.
References
Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33, 245–266.
Baddeley, A. D. (1996). The fractionation of working memory. Proceedings of the National Academy of Sciences, 93, 13468–13472.
Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22(4), 577–660.
Barsalou, L. W. (2008). Cognitive and neural contributions to understanding the conceptual system. Current Directions in Psychological Science, 17, 91–95.
Barsalou, L. W. (2016). On staying grounded and avoiding quixotic dead ends. Psychonomic Bulletin & Review, 23, 1122–1142.
Chiao, J. Y. (2010). Neural basis of social status hierarchy across species. Current Opinion in Neurobiology, 20(6), 803–809.
Chiao, J. Y., Harada, T., Zheng, L., Li, Z., Parrish, T., & Bridge, D. J. (2009). Neural representations of social status hierarchy in human inferior parietal cortex. Neuropsychologia, 47(2), 354–363.
Dai, Q., & Zhu, L. (2018). Verbal-spatial and visuospatial coding of power–space interactions. Consciousness and Cognition, 63, 151–160.
Dehaene, S., Bossini, S., & Giraux, P. (1993). The mental representation of parity and number magnitude. Journal of Experimental Psychology: General, 122, 371–396.
Dehaene, S., Dupoux, E., & Mehler, J. (1990). Is numerical comparison digital? Analogical and symbolic effects in two-digit number comparison. Journal of Experimental Psychology: Human Perception and Performance, 16, 626–641.
Dehaene, S., Piazza, M., Pinel, P., & Cohen, L. (2003). Three parietal circuits for number processing. Cognitive Neuropsychology, 20, 487–506.
Galinsky, A. D., Gruenfeld, D. H., & Magee, J. C. (2003). From power to action. Journal of Personality and Social Psychology, 85(3), 453–466.
Gevers, W., Santens, S., Dhooge, E., Chen, Q., Van den Bossche, L., Fias, W., & Verguts, T. (2010). Verbal-spatial and visuospatial coding of number-space interactions. Journal of Experimental Psychology: General, 139, 180–190.
Gibbs, R. W. (1994). The poetics of mind: Figurative thought, language, and understanding. New York: Cambridge University Press.
Glenberg, A. M. (1997). What memory is for. Behavioral and Brain Sciences, 20, 1–55.
Hubbard, E. M., Piazza, M., Pinel, P., & Dehaene, S. (2005). Interactions between number and space in parietal cortex. Nature Reviews Neuroscience, 6, 435–448.
Jiang, T., Sun, L., & Zhu, L. (2015). The influence of vertical motor responses on explicit and incidental processing of power words. Consciousness and Cognition, 34, 33–42.
Jiang, T., & Zhu, L. (2015). Is power-space a continuum? Distance effect during power judgments. Consciousness and Cognition, 37, 8–15.
Keltner, D., Gruenfeld, D. H., & Anderson, C. (2003). Power, approach, and inhibition. Psychological Review, 110(2), 265–284.
Lakoff, G. (1987). Women, fire, and dangerous things. Chicago: University of Chicago Press.
Leshinskaya, A., & Caramazza, A. (2016). For a cognitive neuroscience of concepts: Moving beyond the grounding issue. Psychonomic Bulletin & Review, 23, 991–1001.
Longe, O., Randall, B., Stamatakis, E. A., & Tyler, L. K. (2007). Grammatical categories in the brain: The role of morphological structure. Cerebral Cortex, 17, 1812–1820.
Lu, L., Schubert, T. W., & Zhu, L. (2017). The spatial representation of power in children. Cognitive Processing, 18(4), 375–385.
Machery, E. (2016). The amodal brain and the offloading hypothesis. Psychonomic Bulletin & Review, 23, 1090–1095.
Mahon, B. Z. (2015). What is embodied about cognition? Language, Cognition and Neuroscience, 30, 420–429.
Mahon, B. Z., & Caramazza, A. (2008). A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology-Paris, 102, 59–70.
Martin, A. (2016). GRAPES—Grounding representations in action, perception, and emotion systems: How object properties and categories are represented in the human brain. Psychonomic Bulletin & Review, 23, 979–990.
Mason, M., Magee, J. C., & Fiske, S. T. (2014). Neural substrates of social status inference: Roles of medial prefrontal cortex and superior temporal sulcus. Journal of Cognitive Neuroscience, 26, 1131–1140.
Meier, B. P., & Robinson, M. D. (2004). Why the sunny side is up: Associations between affect and vertical position. Psychological Science, 15, 243–247.
Paivio, A. (1986). Mental representations: A dual coding approach. Oxford University Press.
Perani, D., Cappa, S. F., Schnur, T., Tettamanti, M., Collina, S., Rosa, M. M., & Fazio, F. (1999). The neural correlates of verb and noun processing—A PET study. Brain, 122, 2337–2344.
Schubert, T. W. (2005). Your highness: Vertical positions as perceptual symbols of power. Journal of Personality and Social Psychology, 89, 1–21.
Tomasino, B., & Rumiati, R. I. (2013). At the mercy of strategies: The role of motor representations in language understanding. Frontiers in Psychology, 4(27), 1–13.
Wang, X., Han, Z., He, Y., Caramazza, A., Song, L., & Bi, Y. (2013). Where color rests: Spontaneous brain activity of bilateral fusiform and lingual regions predicts object color knowledge performance. NeuroImage, 76, 252–263.
Wu, X., Yu, H., Li, X., & Zhu, L. (submitted). Visuospatial or verbal-spatial codes? The different effect of two secondary tasks on the power-space associations during a semantic categorizing task. Journal of Psycholinguistic Research.
Zhang, P., Schubert, T. W., & Zhu, L. (2019). The effect of secondary task on the association between power and space. Social Cognition, 37(1), 1–17.
Acknowledgments
This research was supported by the Humanities and Social Sciences Planning Project of the Ministry of Education (18YJC190035) and the Research Fund of the School of Social Development and Public Policy at Fudan University.
Correspondence concerning this article should be addressed to:
ZHU Lei, Department of Psychology, Fudan University, Shanghai 200433, China.
E-mail: judy1981_81@hotmail.com