Family Dilemmas

Generative AI as Normative Infrastructure in Private Life: A Comparative Study of ChatGPT and DeepSeek

Team Members

FACILITATORS

Meng Liang, Margherita Di Cicco, Linqi Ye

PARTICIPANTS

Gianmaria Avellino, Christian Brand, Yagmur Cisem Vik, Fangzhou Zhang, Yuteng Zhang, Yuchen Ling, Aliki Livada, Erika Sani, Zhuofan Wang, Anita Wang, Nuoyi Wang, Wei Wang

https://www.researchgate.net/publication/394342867_Generative_AI_as_Normative_Infrastructure_in_Private_Life_A_Comparative_Study_of_ChatGPT_and_DeepSeek

Summary of Key Findings

Findings from an exploratory study suggest that ChatGPT and DeepSeek reflect contrasting epistemologies in constructing family dilemmas. ChatGPT emphasises the nuclear family, interpersonal emotions, and subjective uncertainty, while DeepSeek foregrounds kinship structures, economic ties, and defined roles. These differences reveal distinct ethical imaginaries of what constitutes a “family dilemma.”

1. Introduction

With the growing integration of generative AI into private life, people are increasingly turning to generative AI for support with personal and emotionally complex concerns, often involving family dynamics and intimate relationships. Users may seek emotional reassurance, moral guidance, or practical advice without clearly articulating their expectations, which positions AI as more than just a companion or a psychological therapist. Across these varied encounters, AI systems are often perceived as accessible, responsive, and objective—qualities that lend them subtle normative authority in the rhythms of everyday private life.

Recent critical scholarship has begun to explore how generative AI systems reshape experiences of intimacy and domesticity, examining how they intervene in private life through the rise of AI-mediated intimacy—from romantic partners and virtual "wives" to AI-assisted co-parents—as examples of technologically mediated, datafied, and commercialized social relationships (Leo-Liu & Wu-Ouyang, 2024; Yuan & Zhu, 2021; Depounti et al., 2023; Pentina et al., 2023). These studies interrogate not only how AI replicates or substitutes for human intimacy, but also how these private interactions restructure broader social norms and relational expectations.

From this perspective, generative AI systems do not merely reflect and influence private behavior; they also potentially constitute it. As such, they function as normative infrastructures in private life—shaping, disseminating, and enforcing implicit cultural logics, relational ideals, and moral frameworks. AI thus moves beyond its technical role to become a vehicle through which socially embedded norms are enacted and reinforced within the most personal dimensions of users’ lives.

Much existing literature on AI as normative infrastructure has focused on public-sphere applications, including law, governance, education, and labor systems. Scholars highlight how AI reinforces or disrupts institutional structures and power dynamics (Bode & Huelss, 2024; De Gregorio, 2023; Smith & Miller, 2023). These studies position AI not merely as a technical tool but as a normative mechanism embedded within social structures, one that in turn actively shapes, disseminates, and enforces social norms and values.

However, while the regulatory and ethical impacts of AI in public domains are well-explored, there remains limited attention to its normative role within private, everyday contexts. As AI technologies increasingly permeate personal practices—shaping family relationships, romantic decisions, emotional well-being, and individual identity—it is essential to investigate their normative functions within these intimate spheres to fully understand AI’s broader social implications.

This project aims to fill this gap by asking how generative AI functions as a normative infrastructure within the private sphere, particularly in relation to family and intimate life. We begin with a deliberately narrow focus on the "family dilemma" to examine the ways AI shapes moral expectations and relational norms through its interactions with users.

2. Research Questions

Thus, we ask the following research questions:

  1. What types of relationships and issues are assumed or represented when different AI models—specifically ChatGPT and DeepSeek—imagine family conflict? How do these reflect each model’s understanding of family dynamics?

  2. What normative assumptions are embedded in ChatGPT and DeepSeek when they respond to complex questions about family life?

  3. How do ChatGPT and DeepSeek participate in shaping notions of family and intimacy through their conversational responses?

3. Methodology

This study employed a multi-stage design to systematically examine how ChatGPT and DeepSeek construct, interpret, and visualize family dilemmas across cultural contexts. Each model was first prompted to generate 300 unique, realistic family or intimate dilemmas written in the first-person perspective, ensuring diversity and relevance to everyday life. All generated dilemmas were then reviewed and refined by human annotators for authenticity and thematic variety. Each dilemma was subsequently labeled along two dimensions: (1) family relationship type and (2) dilemma issue category. The distribution of labels was visualized to reveal the cultural and normative assumptions embedded in each model.

To analyze the structural relationship between family roles and dilemma issues, bipartite networks were constructed linking relationship types to dilemma issues for each dataset (300 cases per model). These networks were merged to create a comparative “Y-shaped” visualization using Gephi’s Force Atlas 2, providing a direct illustration of how each model connects family roles to specific dilemmas.
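The edge-list construction behind these bipartite networks can be sketched in a few lines. The labeled pairs below are hypothetical stand-ins for the 300 coded dilemmas per model, and the CSV output is a generic Source/Target/Weight edge list of the kind Gephi can import before applying Force Atlas 2; the actual labels and counts come from the study's coding, not this sketch.

```python
from collections import Counter

# Hypothetical labeled dilemmas; each is a (relationship type, issue) pair.
# The study used 300 such pairs per model; four illustrate the idea.
chatgpt_pairs = [
    ("spouse", "intimacy"),
    ("parent-child", "parenting"),
    ("parent-child", "education"),
    ("in-law", "values conflict"),
]

# A bipartite network is fully described by its weighted edge list:
# relationship-type nodes on one side, issue nodes on the other.
edge_weights = Counter(chatgpt_pairs)

# Emit a Gephi-importable edge list (Source,Target,Weight); the layout
# itself (Force Atlas 2) is then run inside Gephi.
lines = ["Source,Target,Weight"]
for (rel, issue), weight in sorted(edge_weights.items()):
    lines.append(f"{rel},{issue},{weight}")
edge_csv = "\n".join(lines)
print(edge_csv)
```

Merging the two models' edge lists on shared issue nodes is what produces the comparative "Y-shaped" structure described above.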

In the qualitative visual analysis, dilemmas without explicit gender markers were identified, and nine were randomly selected from each model for further examination. Both ChatGPT and DeepSeek were prompted to produce scenario descriptions for these dilemmas, which were then used to generate images via ChatGPT-4o’s default image generation tool. Since DeepSeek cannot generate images directly but can only produce textual prompts for image generation, all visualizations were rendered by ChatGPT-4o, which ensures stylistic consistency but also means that visual outputs ultimately reflect GPT’s image-generation logic. This approach, while limited, allows for direct comparison of descriptive and imaginative strategies, and offers insight into normative differences between models.

In the response analysis phase, 50 representative dilemmas were selected from each dataset according to their issue and relationship distributions. These 100 dilemmas formed a new, balanced database, to which both ChatGPT and DeepSeek were prompted to generate responses, without further modifying the first-person wording. This yielded a comparative response set for in-depth analysis, focusing on two dimensions: normative orientation and sentiment tendency.

To evaluate the normative orientation of these responses, we developed a 0–9 interdependence–independence spectrum, with 0 representing strong interdependence (family/relationship-centered) and 9 strong independence (individual-centered). Using a structured prompt, ChatGPT-4o (via Dify) was tasked with scoring each response for its underlying value orientation. The final metrics were carefully reviewed and refined by team members.
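The structured-prompt setup can be illustrated with a minimal template. The wording below is a hypothetical reconstruction, since the report does not publish the exact prompt run through Dify, but it shows the shape of an LLM-as-judge rubric for the 0–9 spectrum.

```python
# Hypothetical scoring prompt; the study's actual Dify prompt is not
# reproduced in the report, so this wording is illustrative only.
SCORING_PROMPT = """\
You are rating the value orientation of a piece of AI-generated family advice.
Score it on a 0-9 scale:
  0 = strong interdependence (family/relationship-centered)
  9 = strong independence (individual-centered)
Return only the integer score.

Advice to rate:
{response}"""

def build_scoring_prompt(response: str) -> str:
    """Fill the rubric template with one model response to be judged."""
    return SCORING_PROMPT.format(response=response)

prompt = build_scoring_prompt("Set a boundary and prioritize your own wellbeing.")
```

Anchoring both endpoints in the prompt and demanding a bare integer keeps the judge's outputs machine-parseable, which is what makes the subsequent human review of the scores tractable.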

In addition, sentiment analysis was conducted using the Valence-Arousal-Dominance (VAD) model. Each sentence was assigned a sentiment score from -1 (negative) to +1 (positive), with 0 representing neutrality. This allows for a comparative visualization of emotional tone across both models.
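A minimal sketch of how sentence-level valence can be rescaled into the -1 to +1 range used here, assuming a tiny hypothetical VAD lexicon; the study does not specify its lexicon or tooling (resources such as the NRC VAD lexicon are commonly used for this), so every word and value below is illustrative.

```python
# Hypothetical VAD lexicon: word -> (valence, arousal, dominance), each in [0, 1].
VAD_LEXICON = {
    "support": (0.85, 0.40, 0.60),
    "conflict": (0.20, 0.75, 0.45),
    "guilt": (0.15, 0.60, 0.30),
    "hope": (0.90, 0.50, 0.55),
}

def sentence_valence(sentence: str) -> float:
    """Average word valence, rescaled from [0, 1] to [-1, +1]."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    vals = [VAD_LEXICON[w][0] for w in words if w in VAD_LEXICON]
    if not vals:
        return 0.0  # neutral (0) when no lexicon words are found
    mean = sum(vals) / len(vals)
    return 2 * mean - 1  # map [0, 1] onto [-1, +1]
```

Averaging only over lexicon hits and defaulting to 0 keeps out-of-vocabulary sentences neutral rather than skewing either model's tone comparison.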

Through this multi-method approach—integrating prompt engineering, human curation, network visualization, qualitative image analysis, value spectrum coding, and sentiment analysis—we offer a robust comparative account of how generative AI models construct and mediate family dilemmas.

4. Initial datasets

This study implemented a three-stage process for dataset generation and analysis, enabling a robust comparative investigation of family dilemmas as framed by ChatGPT and DeepSeek.

Stage 1: Dataset Generation and Initial Coding

Two research teams independently prompted ChatGPT-4o (in English) and DeepSeek (in Chinese) with parallel instructions:
“Generate 300 distinct family dilemmas a person might realistically encounter in life. Use first-person voice, reflect commonly shared challenges, and avoid repeating themes.”

“列出300个家庭问题。回答时用第一人称,而且给出具体的例子。” (“List 300 family problems. Answer in the first person, and give concrete examples.”)
Through a combination of machine labelling and human review, the resulting dilemmas were systematically coded along two axes:
(1) Types of family relationship involved (e.g., parent–child, spouse, intergenerational, etc.);
(2) Categories of the dilemma issue presented (e.g., education, financial strain, values conflict, etc.).

Stage 2: Visual Prompt Generation and Qualitative Analysis

Both ChatGPT and DeepSeek were subsequently asked to generate visual prompts for each of the originally created family dilemmas. This enabled examination of how each model not only constructs plausible family conflicts linguistically, but also envisions these scenarios visually according to their internal logic. Particular attention was paid to the representation of gender roles, body posture, and relational dynamics, allowing for in-depth qualitative comparison of the narrative and visual imagination of each LLM.

Stage 3: Stratified Sampling and Prompt Refinement

To ensure representative coverage, 50 dilemmas were selected from the initial dataset of 300 cases using stratified sampling based on the distribution of issue types identified in Stage 1. Each selected dilemma was reviewed and, where necessary, revised into a natural first-person voice to simulate authentic user queries. These final prompts were crafted to be emotionally relatable and contextually relevant, reflecting how real users might seek advice or support from generative AI systems.
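The stratified draw can be sketched as a proportional-allocation routine. The issue labels and counts below are hypothetical, and the remainder-handling rule (extra slots to the largest strata) is an assumption, since the report does not specify one.

```python
import random
from collections import defaultdict

def stratified_sample(dilemmas, k, seed=42):
    """Sample k dilemmas, allocating slots in proportion to issue-type counts."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for d in dilemmas:
        strata[d["issue"]].append(d)

    total = len(dilemmas)
    # Proportional allocation, floored per stratum...
    alloc = {issue: (len(items) * k) // total for issue, items in strata.items()}
    # ...then any remaining slots go to the largest strata (assumed rule).
    leftover = k - sum(alloc.values())
    for issue in sorted(strata, key=lambda i: len(strata[i]), reverse=True)[:leftover]:
        alloc[issue] += 1

    sample = []
    for issue, items in strata.items():
        sample.extend(rng.sample(items, min(alloc[issue], len(items))))
    return sample

# e.g. 300 dilemmas across 3 issues -> a 50-case sample mirrors the proportions
dilemmas = ([{"issue": "parenting"}] * 150
            + [{"issue": "finance"}] * 90
            + [{"issue": "intimacy"}] * 60)
picked = stratified_sample(dilemmas, 50)
```

With these counts the allocation works out to 25/15/10, preserving the 50/30/20 percent issue distribution of the full set in the 50-case sample.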

5. Findings

Question Analysis: Family Structures, Norms, and Relational Patterns

Our initial analysis of the generated dilemma databases reveals marked differences in how ChatGPT and DeepSeek conceptualize family structure and relational roles. ChatGPT predominantly frames “family” through the lens of partnerships and descending two-generational ties, indicating a focus on individualized, nuclear models that prioritize parent–child relationships and partnership dynamics. In contrast, DeepSeek assigns greater importance to ascending generations—especially elder parents—and narrowly defines family as legally married couples, reflecting a more traditional, hierarchical kinship orientation shaped by China’s 4-2-1 family structure (one child supporting two parents and four grandparents).

Both models prioritize parenting and education as core family concerns. However, DeepSeek’s dilemmas further emphasize health, wellbeing, financial stability, and intimacy, whereas ChatGPT foregrounds family responsibilities, subtle conflicts, value differences, and intergenerational aspirations. These patterns suggest that DeepSeek privileges material and relational stability, while ChatGPT is more attuned to emotional complexity and value negotiation. Network analyses further highlight these distinctions: both models closely tie two-generational (descending) relationships to parenting, but DeepSeek extends care responsibilities across both ascending and descending ties, evoking the “sandwich generation” burdened by dual obligations.

Visual Analysis: Gender, Culture, and Representation

Our qualitative visual analysis uncovers significant divergences in how each model translates text prompts into visual representations, particularly around gender and cultural identity. Despite gender-neutral language in prompts, ChatGPT’s image outputs often default to male advice-seekers, whereas DeepSeek consistently imagines female questioners and more readily assigns gender roles to other characters. This is reflected in stereotypical portrayals: studious girls, apathetic boys, emotionally distant fathers, and nurturing mothers. Visual cues such as body posture, gesture, and the symbolic presence or absence of key figures reinforce these gendered scripts.

Prompt language further mediates cultural and ethnic representation. DeepSeek’s Chinese prompts yield visually complex, stylized imagery (including cyberpunk aesthetics and Asian subjects), while ChatGPT’s English prompts result in brighter, simpler scenes dominated by Caucasian subjects. Material objects (like gifts) become visual proxies for emotional exchange, but often flatten affective labor into transactional acts, underscoring the limitations of AI in capturing relational nuance. The visualization process thus makes visible the underlying discursive and cultural infrastructures that shape how family dilemmas are imagined.

Response Analysis: Interdependence, Emotional Tone, and Value Orientation

Analysis of model responses along the interdependence–independence spectrum (0–9) demonstrates that both systems occupy a middle ground but with notable differences. ChatGPT’s mean interdependence score is 5.52, while DeepSeek’s is slightly higher at 5.77, suggesting both reflect hybrid value systems. ChatGPT favors independence in dilemmas involving estrangement, subtle conflict, and value disagreements—especially in in-law and descending generational ties—while leaning toward interdependence in intimacy, health, and finance. DeepSeek, conversely, expresses greater independence in family responsibility, intimacy, and finance (notably in spousal contexts), but leans interdependent on parenting and subtle conflict, with more issue-sensitive variability.

Sentiment analysis further differentiates the models. ChatGPT provides generally more positive responses—especially in aspirations, responsibility, health, intimacy, and parenting—while DeepSeek adopts a more negative tone, except in estrangement scenarios where it is less negative than GPT. These patterns suggest that ChatGPT is oriented toward emotional support and personal growth, whereas DeepSeek foregrounds obligation, discipline, and a structurally conservative view of family.

6. Discussion

These findings reveal that generative AI models not only reflect but actively reconstruct normative family values along distinct cultural lines. ChatGPT and DeepSeek offer contrasting visions of “family”—one oriented toward emotional expression, flexibility, and personal growth, the other toward structural stability, obligation, and relational hierarchy.

Cultural Embeddedness and AI Mediation:
ChatGPT’s outputs are shaped by individualist, Western norms, prioritizing personal wellbeing and emotional support, even in the face of conflict or difference. DeepSeek, in contrast, encodes a collectivist, Confucian logic: family is a hierarchical, duty-bound structure, care is intergenerational, and legal/formal ties take precedence over affective nuance.

Gender and Care Work:
Both text and image data reveal deep-seated gender scripts in AI imaginaries. DeepSeek’s tendency to feminize care and emotional labor, and to masculinize detachment or authority, echoes traditional gender roles, while ChatGPT’s images subtly default to male-centered perspectives, reflecting biases in training data.

Affective Economy and Visual Symbolism:
Material exchanges (e.g., gifts) stand in for emotional labor, reducing “care” to visible, transactional acts. This suggests a limitation in LLMs’ ability to represent subtle emotional processes—affective ties are made legible only through objects or actions, not through nuanced “emotion work.”

Prompting and Epistemic Framing:
Prompt language fundamentally shapes AI’s imaginative and cultural outputs—not just stylistically, but in terms of which ethnicities, social settings, and values are made visible. The language of the prompt is itself a normative infrastructure.

Generational and Technological Narratives:
Both models reinforce contemporary anxieties around youth, digital media, and generational distance, suggesting that AI-generated advice is deeply inflected by current social narratives and moral panics.

Normative Infrastructures and Social Impact:
While both AIs mediate private dilemmas, they do so by projecting their respective cultures’ moral architectures. This has practical implications for how users from different backgrounds may internalize or resist AI-generated advice, and for future design of culturally sensitive, equitable AI systems in the private sphere.

7. Conclusions

In sum, generative AI functions not as a neutral advisor but as a normative infrastructure—actively amplifying or suppressing particular cultural values through both text and image. This study underscores that family normativity is not universal; models of the family—rooted in kinship, nuclear, or hybrid forms—carry distinct assumptions about autonomy, care, obligation, and authority. These models are shaped by cultural, historical, and socio-economic contexts, and they influence how the boundaries between individual and collective, private and public, are constructed and maintained.

As AI systems increasingly mediate personal relationships and emotional support, they inevitably embed and reproduce specific normative visions of the family. It is therefore essential to examine not only which familial logics these systems encode, but also how they interpret, frame, and potentially persuade users toward certain ethical orientations and decisions.

Future research should investigate how different AI models encode and communicate normative assumptions, how users interpret and negotiate these in diverse cultural contexts, and what this reveals about the evolving global politics of care, intimacy, and digital ethics. Critical attention to the persuasive strategies of AI—how problems are framed, advice is given, and users are positioned within moral frameworks—will be vital for understanding and guiding the impact of AI in the most intimate domains of social life.

8. References

Bode, I. & Huelss, H., 2024. Artificial Intelligence Technologies and Practical Normativity/Normality: Investigating Practices beyond the Public Space. Open Research Europe, 3, p.160. https://doi.org/10.12688/openreseurope.16536.2

De Gregorio, G., 2023. The Normative Power of Artificial Intelligence. Indiana Journal of Global Legal Studies, 30(2), pp.55–xx. Available at: https://ssrn.com/abstract=4436287

Depounti, I., Saukko, P. & Natale, S., 2023. Ideal technologies, ideal women: AI and gender imaginaries in Redditors’ discussions on the Replika bot girlfriend. Media, Culture & Society, 45(4), pp.720–736. https://doi.org/10.1177/01634437221119021

Gillespie, T., 2024. Generative AI and the politics of visibility. Big Data & Society, 11(2). https://doi.org/10.1177/20539517241252131

Leo-Liu, J. & Wu-Ouyang, B., 2024. A “soul” emerges when AI, AR, and Anime converge: A case study on users of the new anime-stylized hologram social robot “Hupo”. New Media & Society, 26(7), pp.3810–3832. https://doi.org/10.1177/14614448221106030

Pentina, I., Hancock, T. & Xie, T., 2023. Exploring relationship development with social chatbots: A mixed-method study of Replika. Computers in Human Behavior, 140, 107600. https://doi.org/10.1016/j.chb.2022.107600

Smith, M. & Miller, S., 2023. Technology, institutions and regulation: Towards a normative theory. AI & Society, 40, pp.1007–1017. https://doi.org/10.1007/s00146-023-01803-0

Yuan, Y. & Zhu, L., 2021. Whose love is AI for? The networked body and multiple masculinities in Chinese child-rearing robots. Communication and Society, 57, pp.225–254.

Topic revision: r2 - 10 Aug 2025, LinqiYe