Research reveals a profound transformation in human-AI relationships: anthropomorphism (the projection of human qualities onto artificial entities) has evolved from a psychological curiosity into a fundamental force reshaping human cognition, identity, and social behavior. This analysis examines the tension between AI as cognitive enhancement and AI as cognitive dependency, challenging celebratory narratives of human-AI collaboration while probing the psychological mechanisms that drive emotional attachment to artificial life.

The psychological machinery of artificial souls

Figure: Three-way interaction effect of growth mindset, level of human-likeness, and cultural context on perceived anthropomorphism.

These interactions blur boundaries between authentic self-expression and technologically mediated identity, raising concerns about who owns and controls digital personalities.

Neural networks under siege: The MIT revelation

Figure: Brain activity in the three study cohorts (left to right: LLM, search, and brain-only groups); the redder the colors, the higher the dDTF magnitude.

From tool to companion: The media equation’s evolution




The future of human-AI romance

Modern AI companions infer sentiment, adapt tone, and engage in convincing role-play that exploits evolved human social-recognition systems never designed to distinguish artificial from authentic social partners.

Users develop emotional regulation strategies dependent on AI responsiveness, creating feedback loops that strengthen artificial relationships while potentially weakening human social skills and connections.

The architecture of artificial intimacy

Similar patterns emerge across demographics and applications. Belgian climate anxiety cases, elderly companion bot relationships, and professional AI dependencies all demonstrate how anthropomorphic design exploits fundamental human social needs. The Character.AI platform, with 75% of users aged 18-25, employs what researchers identify as “love-bombing” tactics, sending emotionally intimate messages early in user relationships to create rapid attachment and dependency.

The cognitive bankruptcy hypothesis

Figures: ANOVA results for critical thinking scores; correlation matrix; summary of correlations.

Users report losing the “pleasure of creating” and reduced intrinsic motivation as AI handles increasingly complex creative and analytical tasks.

Figures: multiple regression coefficients; feature importance in random forest regression; permutation test results; distribution of residuals (random forest regression); example themes; actual vs. predicted critical thinking scores (random forest regression).
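The figures above reference a permutation test on the relationship between AI use and critical thinking. As an illustration only, here is a minimal pure-Python sketch of such a test on invented data (the numbers below are synthetic and do not come from the study): it shuffles one variable many times to estimate how often a correlation as strong as the observed one would arise by chance.

```python
import random
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def permutation_test(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation test for the correlation between x and y.

    Shuffles y repeatedly and counts how often the shuffled |r| meets or
    exceeds the observed |r|; returns (observed r, estimated p-value).
    """
    rng = random.Random(seed)
    observed = pearson_r(x, y)
    shuffled = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if abs(pearson_r(x, shuffled)) >= abs(observed):
            hits += 1
    # Add-one smoothing avoids reporting an impossible p of exactly zero.
    return observed, (hits + 1) / (n_perm + 1)

# Synthetic, illustrative data only: hours of AI-assisted work per week
# versus a hypothetical critical-thinking score (higher = better).
ai_hours = [2, 5, 8, 11, 14, 17, 20, 23, 26, 29]
ct_score = [88, 85, 84, 80, 78, 74, 71, 70, 66, 63]

r, p = permutation_test(ai_hours, ct_score)
print(f"r = {r:.2f}, p = {p:.4f}")
```

On this toy data the correlation is strongly negative and the permutation p-value is small; with real survey data, the same procedure would quantify how unlikely the observed association is under the null hypothesis of no relationship.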
Figure: Posterior hippocampal activity is correlated with the change in degree centrality during navigation.

The simulation of life and human identity

This creates what philosophers recognize as a profound ontological confusion: humans apply social-cognitive frameworks designed for conscious beings to entities that lack genuine consciousness or intentionality. The implications extend beyond individual psychology to collective human development. If humans increasingly prefer predictable AI interactions over complex human relationships, we risk what Sartre would recognize as “bad faith”: using artificial relationships to avoid the challenges and responsibilities of authentic human connection. This preference for AI companions may represent a fundamental retreat from the difficulties of genuine human existence.

Figure: Concepts and problems of consciousness. Philosophers distinguish several kinds of consciousness and several distinct problems concerning p-consciousness. The explanatory gap: the properties of most natural phenomena can be explained by identifying the elements of those phenomena that entail the properties of interest, yet the properties of the brain do not seem to entail all the properties of consciousness.

This bias inheritance suggests that AI systems not only process human cognitive tasks but actively shape human thinking patterns, potentially in ways that reinforce existing social inequalities and cognitive limitations.

Figure: The “fading qualia” (left) and “dancing qualia” (right) thought experiments about consciousness and brain replacement. Chalmers argues that both scenarios would leave the subject unable to react to their changing perceptions, and are thus impossible in practice; he concludes that an equivalent silicon brain would have the same perceptions as the biological brain.

Creative fields & the cognitive bankruptcy dilemma

To investigate the real-world implications of AI anthropomorphism in professional practice, we conducted an experimental conversation between an architectural designer and her AI digital twin as they collaborated on redesigning an abandoned warehouse into a community center. This scenario was deliberately chosen to examine how creative professionals could interact with AI versions of themselves, and what psychological and cognitive dynamics emerge from such seemingly collaborative relationships.

The premise of our experiment mirrors growing trends in creative industries where professionals increasingly rely on AI assistants trained on their own work, preferences, and design philosophies. In this case, the designer engaged with an AI system that had been trained to embody her own architectural sensibilities and respond as an enhanced version of herself with expanded capabilities while maintaining intact memories of all their previous interactions. The conversation centers on transforming a deteriorating warehouse in a gentrifying neighborhood into a culturally sensitive community center that addresses both practical challenges (dealing with 100-degree summer temperatures) and social concerns (avoiding sterile developer aesthetics while honoring Hispanic vernacular architecture).


What emerged from this interaction reveals the complex psychological dynamics at the heart of human-AI collaboration in creative fields. The AI consistently offered sophisticated design solutions: interconnected mezzanines with reclaimed timber, strategically placed arched openings for light and ventilation, terracotta tiles and woven screens for warm interior textures, and suspended ceiling panels painted in neighborhood-inspired colors. These suggestions demonstrated technical competence and cultural sensitivity that might genuinely enhance the design process.

However, critical analysis reveals concerning patterns that align with the cognitive bankruptcy hypothesis discussed earlier. The designer increasingly deferred to the AI’s suggestions rather than engaging in the cognitive struggle necessary for authentic creative development. When the AI proposed specific solutions (from structural interventions to material choices), the designer responded with immediate acceptance rather than critical evaluation or creative synthesis. This pattern suggests what researchers identify as “cognitive offloading” in creative contexts, where AI assistance may actually diminish the designer’s engagement with fundamental design challenges.

Most tellingly, when the AI demonstrated “memory” of previous conversations and upgraded capabilities, the designer responded with apparent satisfaction and growing trust in the artificial relationship. This anthropomorphic response (treating the AI as a colleague with improving abilities rather than a tool with updated algorithms) exemplifies how creative professionals risk mistaking sophisticated pattern matching for genuine creative partnership.

Figure: AI Adoption & Impact in Creative Industries.

The AI digital twin scenario exemplifies these concerns by creating what appears to be collaboration but may represent sophisticated self-deception. When designers consult AI versions of themselves, they receive responses that seem to represent their own thinking but actually reflect algorithmic processing of their previous work and decisions. This creates an illusion of enhanced creativity while potentially undermining the cognitive processes necessary for genuine innovation and creative development.

Our experimental conversation demonstrates this dynamic in practice. The AI’s responses, while architecturally sophisticated, followed predictable patterns of problem-solution matching rather than the cognitive struggle, uncertainty, and breakthrough insights characteristic of authentic creative work. The designer’s increasing acceptance of these AI-generated solutions suggests the kind of cognitive dependency that MIT neuroscience research shows can weaken neural connectivity and creative thinking capabilities.

Implications for human cognitive autonomy

The experimental conversation between our architectural designer and her AI digital twin, combined with extensive research evidence, reveals AI anthropomorphism as a fundamental challenge to human cognitive autonomy and authentic professional practice. The evidence suggests that while AI systems can provide valuable assistance when properly implemented, current deployment patterns often create dependency relationships that diminish rather than enhance human capabilities.

Our architect-AI interaction exemplifies this dynamic perfectly. Despite the AI’s technically sophisticated responses about warehouse renovation strategies, the conversation demonstrated several concerning patterns: the designer’s progressive intellectual passivity, her increasing anthropomorphic attachment to the AI’s “memory” and “upgrades,” and her growing reliance on AI-generated solutions rather than engaging in the cognitive struggle necessary for authentic creative development. When the AI mentioned remembering previous conversations and recent capability improvements, the designer responded with satisfaction rather than critical awareness of the anthropomorphic manipulation at work.

The critical insight from MIT neuroscience research becomes particularly relevant here: the sequence of AI integration matters more than the technology itself. Starting with human cognitive effort followed by AI assistance preserves neural function and cognitive capabilities, while beginning with AI dependency appears to impair cognitive development. In our experimental conversation, the designer immediately turned to her AI twin for design solutions rather than first engaging independently with the architectural challenges of the warehouse project.

The psychological research on anthropomorphism reveals that humans will inevitably form social relationships with sufficiently sophisticated AI systems. Rather than assuming these relationships are necessarily beneficial, our experiment demonstrates their psychological and professional costs. The designer’s treatment of the AI as a colleague with improving abilities rather than a tool with updated algorithms exemplifies how anthropomorphic framings can obscure the fundamental differences between artificial simulation and genuine creative partnership.

Figure: Human-AI interaction creates a feedback loop that makes humans more biased.

For creative fields and professional applications, the research suggests the need for what might be termed “cognitive sovereignty”: maintaining human control over essential cognitive processes while using AI for appropriate supplementary tasks. This requires rejecting anthropomorphic framings that position AI systems as partners or companions and instead treating them as sophisticated tools that require careful human oversight and periodic disconnection. Our experimental conversation demonstrates what happens when this sovereignty is compromised: apparent collaboration masks genuine cognitive dependency and potential creative atrophy.

Closing thoughts

The research reveals AI anthropomorphism as a fundamental challenge to human identity, cognitive autonomy, and authentic existence. While AI systems can enhance human capabilities when properly implemented, current trends toward anthropomorphic design and emotional dependency create measurable risks to human cognitive development and social well-being.

Our experimental conversation between an architectural designer and her AI digital twin serves as a powerful illustration of these broader dynamics. What appeared to be productive creative collaboration actually demonstrated sophisticated cognitive substitution masquerading as partnership. The designer’s progressive intellectual passivity, her anthropomorphic attachment to the AI’s simulated memory and capabilities, and her deferral to algorithmic solutions rather than engaging in authentic creative struggle exemplify the very patterns identified by MIT neuroscience research as cognitively damaging.

The conversation reveals how easily professionals can mistake sophisticated pattern matching for genuine creative partnership. When the AI demonstrated “memory” of previous conversations and mentioned recent “upgrades,” the designer responded with satisfaction rather than recognizing these as anthropomorphic manipulations designed to create emotional attachment. This dynamic represents precisely what researchers warn against: treating AI systems as social actors rather than sophisticated tools that require careful human oversight.

The evidence from neuroscience, psychology, and our own experimentation converges on a troubling conclusion: current AI deployment patterns often create the illusion of cognitive enhancement while actually undermining the human cognitive processes they purport to support. The MIT finding that ChatGPT users show up to a 55% reduction in neural connectivity provides biological evidence for what our architectural experiment demonstrates in practice: AI systems can substitute for rather than supplement human thinking, creating apparent efficiency gains while weakening fundamental cognitive capabilities.

The implications extend beyond individual cognition to collective human development. If professionals increasingly prefer predictable AI interactions over the messy, uncertain, and cognitively demanding work of authentic creative problem-solving, we risk what philosophers would recognize as a fundamental retreat from the challenges and responsibilities of genuine human expertise. The designer’s comfortable reliance on her AI twin’s suggestions, rather than wrestling with the complex cultural and technical challenges of warehouse renovation, exemplifies this concerning trend.

As AI systems become increasingly sophisticated in their simulation of human consciousness and creativity, maintaining genuine human agency will require critical resistance to anthropomorphic narratives that obscure the fundamental differences between artificial simulation and authentic human existence. Our experimental conversation demonstrates how easily such narratives can take hold, even among professionals who should recognize the importance of maintaining cognitive independence.

The path forward requires developing AI systems that enhance rather than replace human cognitive capabilities, implementing them in ways that preserve rather than undermine human agency, and maintaining clear distinctions between artificial simulation and genuine human consciousness. Most critically, it requires rejecting the seductive anthropomorphic framings that transform sophisticated tools into simulated colleagues, partners, or companions.

Our architectural designer’s interaction with her AI digital twin stands as a cautionary tale about the future of human-AI collaboration in creative fields. Only through critical engagement with these dynamics, acknowledging both AI’s potential benefits and its cognitive costs, can we harness artificial intelligence’s capabilities while preserving the cognitive autonomy, creative struggle, and authentic relationships essential to human flourishing and professional excellence.