RE: https://gruene.social/@GrundeinkommenNiedersachsen/115735978560019517

Mein Termin am Montag... 😉 Kurz vor Weihnachten noch mal übers Grundeinkommen reden - dieses Mal in Verbindung mit KI. Wird bestimmt interessant!

#Grundeinkommen #KI #KünstlicheIntelligenz #LargeLanguageModels

"browser-use" đã tinh chỉnh và ra mắt phiên bản xem trước của mô hình AI Qwen3-VL-30B-A3B-Instruct. Đây là một bước tiến mới trong phát triển các mô hình ngôn ngữ lớn và đa phương thức.

#AI #LLM #Qwen3VL #browseruse #ArtificialIntelligence #LargeLanguageModels #TríTuệNhânTạo #MôHìnhNgônNgữLớn

https://www.reddit.com/r/LocalLLaMA/comments/1pojfmt/browseruse_fine_tuned_qwen3vl30ba3binstruct_as/

How Social Class Shapes ChatGPT Adoption at Work: Meng Li on AI Help-Seeking, Mentorship, and Workplace Inequality

 

Scott Douglas Jacobsen (Email: scott.jacobsen2025@gmail.com)

Publisher, In-Sight Publishing

Fort Langley, British Columbia, Canada

Received: September 11, 2025
Accepted: December 15, 2025
Published: December 15, 2025

Abstract

This interview with Professor Meng Li (University of Houston, C.T. Bauer College of Business) examines how social class background shapes the adoption of large language model (LLM) tools—such as ChatGPT—in workplace help-seeking. Li frames AI uptake not only as a productivity choice but as a substitute for hierarchical human support, akin to guidance from supervisors (or professors in academic settings). Drawing on survey and behavioral-experimental evidence with early-career professionals, Li argues that middle-class workers are the most receptive adopters because they combine sufficient resources and familiarity with LLMs with heightened perceived “social interaction costs” when requesting assistance from supervisors. By contrast, lower-class workers face knowledge and confidence barriers, while upper-class workers may be more comfortable leveraging interpersonal channels and human relationships. The conversation extends these findings into practical implications: AI substitution could reshape mentorship, influence managerial perceptions of help-seeking, and intensify stratification unless organizations invest in training, clear usage norms, and equitable support systems. The central claim is that AI integration is not socially neutral; it reconfigures workplace relationships and can either narrow or widen inequality depending on policy design and institutional culture.

Keywords

AI adoption, ChatGPT, Early-career professionals, Help-seeking behavior, Human-centered AI, Large language models, Mentorship, Social class background, Supervisor–employee relations, Workplace inequality, Workplace hierarchy

Introduction

The workplace is often described as a meritocratic machine: perform well, learn quickly, and advancement follows. In practice, modern organizations are also dense social systems—hierarchical, evaluative, and deeply shaped by who feels comfortable asking for help, from whom, and at what perceived cost. The arrival of large language models (LLMs) in everyday workflows introduces a new option into that system: workers can consult an always-available tool rather than a supervisor, colleague, or mentor. That choice can look purely technical—faster answers, fewer interruptions—but it may also be social, reflecting power dynamics and class-shaped habits of interaction.

In this interview, Professor Meng Li, a researcher at the University of Houston’s C.T. Bauer College of Business, explains why social class background is a critical variable for understanding AI uptake at work. Li and colleagues study whether LLM use functions as a substitute for supervisor help, and why middle-class workers appear especially inclined to make that substitution. Their focus on early-career professionals isolates a life stage where guidance is vital, supervisory relationships are formative, and family-of-origin class background can surface even among workers who currently occupy similar education and income brackets.

Rather than treating AI adoption as a uniform wave, Li frames it as a stratified process with equity consequences. If LLM-intensive workplaces reward those who know how to use these tools confidently, and human-centered workplaces reward those who can navigate managerial relationships smoothly, then the “AI era” risks becoming a new sorting mechanism—unless organizations deliberately design training, norms, and support structures to prevent class-based divergence.

Main Text (Interview)

Title: How Social Class Shapes ChatGPT Adoption at Work: Meng Li on AI Help-Seeking, Mentorship, and Workplace Inequality

Interviewer: Scott Douglas Jacobsen

Interviewees: Meng Li

Researchers at the University of Houston’s C.T. Bauer College of Business have found that middle-class workers are the most receptive to using AI tools like ChatGPT at work. Published in the Social Science Research Network, the study analyzed surveys and behavioral experiments with early-career professionals across class backgrounds. The findings suggest middle-class workers adopt AI more readily than their upper- or lower-class peers, who either prefer human supervisors or lack technological familiarity. Bauer Professor Meng Li, co-author and director of UH’s Human-Centered Artificial Intelligence Institute, emphasized that addressing class-based disparities in AI adoption will be key to preventing workplace inequality.

Scott Douglas Jacobsen: What motivated studying social class as a factor in AI adoption at work?

Professor Meng Li: Social class background plays a central role in shaping individuals’ thoughts and behaviors within hierarchical social systems. It has been shown to influence the development of self-identity, social cognition, social values, and social behaviors, and to extend its impact into key life outcomes such as educational attainment and employment opportunities. In our context, we focus on whether AI adoption could serve as a substitute for supervisors’ help, another form of hierarchical relationship within the workplace, and we thus propose that social class background may also play a role here.

In practice, as business school professors teaching and mentoring students from diverse social class backgrounds, we have observed this dynamic firsthand. Prior to the emergence of AI, students across all social classes regularly sought help during office hours. However, following the widespread availability of tools like ChatGPT, we noted a sharp decline in office hour visits, particularly among students from less affluent middle-class backgrounds, many of whom began turning to AI tools instead of seeking guidance from faculty. Given the parallels between the role of professors in academia and supervisors in the workplace, and the likelihood that students carry these help-seeking behaviors into their professional lives, we were motivated to investigate whether similar patterns also emerge in the workplace settings.

Jacobsen: Why focus on early-career professionals?

Li: The focus of early-career professionals is theoretically and methodologically driven.

On the one hand, they are highly reliant on the supervisor’s help to navigate workplace challenges as they are new to an organization, emphasizing the need for careful examination. On the other hand, as they share a similar current social class (such as similar education attainment, income, and occupation), this could provide a clear context to examine the impact of social class background (i.e., their family/parental social class).

Jacobsen: What unique advantages make middle-class workers comfy with AI?

Li: According to our findings, compared to those from lower-class backgrounds, middle-class individuals may have greater resources and understanding of how to use AI, which makes them more inclined to adopt it. At the same time, they also perceive higher social interaction costs when seeking help from supervisors, further motivating or pushing them to turn to AI for assistance. Together, these dual mechanisms position the middle class as the group most comfortable with using AI relative to other social class backgrounds.

Jacobsen: How do supervisors respond when workers substitute with AI?

Li: In our current research, we do not examine supervisors’ consequential behaviors; rather, we focus on documenting adoption patterns as a first step toward understanding AI’s impact on workplace interpersonal dynamics. Nevertheless, drawing on the dynamics observed in our study, we offer several conjectures. The substitution effect between LLMs and human supervisors may prompt both employees and supervisors to recalibrate their perceptions and help-seeking behaviors. Prior research suggests that individuals who actively seek advice are often perceived as more competent. However, the widespread integration of LLMs in the workplace may alter this perception. Supervisors who are aware that employees have access to LLMs might interpret help-seeking in divergent ways: either as a meaningful effort at relationship-building that merits support, or as an inefficient use of resources. These shifts could influence performance evaluations and, in turn, shape how employees from different social class backgrounds interpret supervisors’ expectations and adjust their help-seeking decisions. Whether such dynamics ultimately mitigate or exacerbate workplace inequality remains an open question for future research.

Jacobsen: What specific barriers face lower-class workers adopting LLMs?

Li: According to our findings, lower-class workers face barriers primarily due to a lack of objective resources for understanding and effectively using LLMs. These barriers include limited knowledge of the capabilities and limitations of such tools, insufficient awareness of the appropriate contexts for their use. As a result, they may be less confident in adopting LLMs compared to their middle-class counterparts.

Jacobsen: Could over-reliance on AI change mentorship dynamics?

Li: This is indeed possible. As AI tools become more capable of addressing workplace challenges, employees may increasingly turn to them as an alternative source of support. Drawing from our research, when workers are faced with the choice between seeking help from supervisors or turning to AI, many may prefer the AI. On the one hand, over-reliance on AI could reduce employees’ reliance on supervisors for guidance, potentially weakening mentorship ties and diminishing opportunities for relationship-building, informal learning, and career development. On the other hand, it might also shift the role of mentorship, pushing supervisors to focus less on routine problem-solving and more on higher-level coaching, strategic advice, and professional development. Such changes could fundamentally reshape workplace dynamics, raising important questions about how organizations can preserve the benefits of mentorship while embracing AI as a complementary tool.

Jacobsen: What policies help level the AI adoption gap?

Li: The answer depends on the organization’s strategic approach. If a company chooses to promote LLM-based systems, it must address employees’ concerns about the capabilities and appropriate contexts for using these tools, concerns that are especially salient among lower-class employees. To mitigate such barriers, organizations can provide comprehensive training programs, practical case studies, and regular feedback sessions to build employees’ confidence and competence in using LLMs. Alternatively, if a company emphasizes human-based systems, it needs to address the high social interaction costs that often deter low- and middle-class employees from seeking help. Policies such as implementing standardized processes for help-seeking, offering inclusive check-ins, and establishing clear communication channels can help reduce power differentials and foster more equitable and accessible support environments.

Jacobsen: How might these findings shape future discussions about equity?

Li: There are two possible directions. First, our study highlights the role of social class background in shaping workplace inequality in the era of AI. Our findings suggest that the rise of LLMs in the workplace may unintentionally deepen social stratification if class-based disparities remain unaddressed. In LLM-intensive environments, lower-class workers, despite overcoming initial employment barriers, may continue to struggle due to limited knowledge and confidence in using such tools, while middle-class workers are better equipped to navigate them effectively. In workplaces that emphasize human-based support, upper-class workers can leverage their stronger interpersonal skills when interacting with supervisors, while middle-class workers may avoid such interactions and instead rely on LLMs. As a result, the advantages held by middle- and upper-class workers risk widening inequality and sparking renewed discussions on equity in the contemporary workplace. Second, by examining AI adoption not only as a productivity-enhancing tool but also as a substitute for human supervisor help, our research shifts the focus toward the interpersonal dynamics AI introduces into the workplace. This perspective invites broader conversations about the unintended social consequences of AI integration, such as its impact on mentorship and relationship-building, which are critical to understanding equity in the future of work.

Jacobsen: Thank you for the opportunity and your time, Meng.

Discussion

Li’s account offers a sociological correction to a common technological myth: that tools diffuse through workplaces simply because they are efficient. In his framing, LLMs enter an existing hierarchy of help-seeking, and adoption becomes an interpersonal strategy as much as a computational one. The key explanatory move is the “substitution” model: workers can replace supervisor assistance with AI assistance, thereby avoiding the vulnerability, status negotiation, and impression management involved in asking a superior for help. Once help-seeking is understood as socially priced, it becomes unsurprising that class background matters—because class shapes how people interpret hierarchy, self-presentation, and the costs of initiating unequal interactions.

The interview’s most consequential claim is the dual-mechanism account of middle-class receptivity. Middle-class workers are positioned as having enough familiarity and resources to use LLMs effectively while also experiencing meaningful social friction in approaching supervisors. That combination makes AI an attractive “quiet help” channel. Lower-class workers, in this account, are constrained less by reluctance than by capability gaps—limited exposure, weaker understanding of appropriate contexts for use, and lower confidence. Upper-class workers, meanwhile, are described as more willing or able to leverage human channels—suggesting that interpersonal ease can function as an alternative advantage in environments where supervisor relationships remain central.

The equity implications are sharp because they cut both ways depending on organizational culture. In LLM-heavy environments, competence with AI becomes a new form of cultural capital, potentially compounding existing opportunity gaps for those without early exposure or training. In human-support-centric environments, social fluency with authority can confer advantages, leaving those who perceive higher interaction costs to either under-seek help or rely on tools that may not provide sponsorship, advocacy, or career visibility. In short: either the algorithm becomes the gatekeeper, or the relationship does—and class can predict who thrives under each regime.

Li’s speculative remarks about managerial interpretation of help-seeking are a useful frontier for future work. If supervisors begin to assume that LLM access makes asking questions “unnecessary,” help-seeking could be reframed from competence-signaling to inefficiency-signaling. That would subtly change who is rewarded, who is coached, and who is seen as “high potential,” potentially reshaping mentorship into a scarcer and more strategic resource. The organizational risk is a hollowing-out of apprenticeship: workers may solve problems faster but develop fewer developmental relationships, and those relationships are often where promotions, protection, and professional identity are built.

The policy takeaway is not “ban AI” or “embrace AI,” but govern AI as a social intervention. Training programs, practical use cases, and feedback loops can reduce the confidence and knowledge gap for lower-class workers in LLM-intensive settings. Conversely, standardized help-seeking processes, inclusive check-ins, and clearer channels can lower the perceived interaction costs of seeking human guidance. The broader point is almost annoyingly human: inequality does not vanish when a new tool arrives; it simply learns new costumes. The responsible move is to design workplaces where competence with tools and access to mentorship are not rationed by background.

Methods

The interview was conducted via typed questions—with explicit consent—for review, and curation. This process complied with applicable data protection laws, including the California Consumer Privacy Act (CCPA), Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA), and Europe’s General Data Protection Regulation (GDPR), i.e., recordings if any were stored securely, retained only as needed, and deleted upon request, as well in accordance with Federal Trade Commission (FTC) and Advertising Standards Canada guidelines.

Data Availability

No datasets were generated or analyzed during the current article. All interview content remains the intellectual property of the interviewer and interviewee.

References

(No external academic sources were cited for this interview.)

Journal & Article Details

Publisher: In-Sight Publishing

Publisher Founding: March 1, 2014

Web Domain: http://www.in-sightpublishing.com

Location: Fort Langley, Township of Langley, British Columbia, Canada

Journal: In-Sight: Interviews

Journal Founding: August 2, 2012

Frequency: Four Times Per Year

Review Status: Non-Peer-Reviewed

Access: Electronic/Digital & Open Access

Fees: None (Free)

Volume Numbering: 13

Issue Numbering: 4

Section: A

Theme Type: Idea

Theme Premise: Mentorship and the Workplace

Theme Part: None.

Formal Sub-Theme: None.

Individual Publication Date: December 15, 2025

Issue Publication Date: January 1, 2026

Author(s): Scott Douglas Jacobsen

Word Count: 1,249

Image Credits: Photo by Levart_Photographer on Unsplash

ISSN (International Standard Serial Number): 2369-6885

Acknowledgements

The author acknowledges Meng Li for her time, expertise, and valuable contributions. Her thoughtful insights and detailed explanations have greatly enhanced the quality and depth of this work, providing a solid foundation for the discussion presented herein.

Author Contributions

S.D.J. conceived the subject matter, conducted the interview, transcribed and edited the conversation, and prepared the manuscript.

Competing Interests

The author declares no competing interests.

License & Copyright

In-Sight Publishing by Scott Douglas Jacobsen is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
© Scott Douglas Jacobsen and In-Sight Publishing 2012–Present.

Unauthorized use or duplication of material without express permission from Scott Douglas Jacobsen is strictly prohibited. Excerpts and links must use full credit to Scott Douglas Jacobsen and In-Sight Publishing with direction to the original content.

Supplementary Information

Below are various citation formats for How Social Class Shapes ChatGPT Adoption at Work: Meng Li on AI Help-Seeking, Mentorship, and Workplace Inequality (Scott Douglas Jacobsen, December 15, 2025).

American Medical Association (AMA 11th Edition)
Jacobsen SD. How Social Class Shapes ChatGPT Adoption at Work: Meng Li on AI Help-Seeking, Mentorship, and Workplace Inequality. In-Sight: Interviews. 2025;13(4). Published December 15, 2025. http://www.in-sightpublishing.com/social-class-chatgpt-adoption-at-work-meng-li-ai-help-seeking-mentorship-workplace-inequality 

American Psychological Association (APA 7th Edition)
Jacobsen, S. D. (2025, December 15). How Social Class Shapes ChatGPT Adoption at Work: Meng Li on AI Help-Seeking, Mentorship, and Workplace Inequality. In-Sight: Interviews, 13(4). In-Sight Publishing. http://www.in-sightpublishing.com/social-class-chatgpt-adoption-at-work-meng-li-ai-help-seeking-mentorship-workplace-inequality 

Brazilian National Standards (ABNT)
JACOBSEN, Scott Douglas. How Social Class Shapes ChatGPT Adoption at Work: Meng Li on AI Help-Seeking, Mentorship, and Workplace Inequality. In-Sight: Interviews, Fort Langley, v. 13, n. 4, 15 dez. 2025. Disponível em: http://www.in-sightpublishing.com/social-class-chatgpt-adoption-at-work-meng-li-ai-help-seeking-mentorship-workplace-inequality 

Chicago/Turabian, Author-Date (17th Edition)
Jacobsen, Scott Douglas. 2025. “How Social Class Shapes ChatGPT Adoption at Work: Meng Li on AI Help-Seeking, Mentorship, and Workplace Inequality.” In-Sight: Interviews 13 (4). http://www.in-sightpublishing.com/social-class-chatgpt-adoption-at-work-meng-li-ai-help-seeking-mentorship-workplace-inequality

Chicago/Turabian, Notes & Bibliography (17th Edition)
Jacobsen, Scott Douglas. “How Social Class Shapes ChatGPT Adoption at Work: Meng Li on AI Help-Seeking, Mentorship, and Workplace Inequality.” In-Sight: Interviews 13, no. 4 (December 15, 2025). http://www.in-sightpublishing.com/social-class-chatgpt-adoption-at-work-meng-li-ai-help-seeking-mentorship-workplace-inequality

Harvard
Jacobsen, S.D. (2025) ‘How Social Class Shapes ChatGPT Adoption at Work: Meng Li on AI Help-Seeking, Mentorship, and Workplace Inequality’, In-Sight: Interviews, 13(4), 15 December. Available at: http://www.in-sightpublishing.com/social-class-chatgpt-adoption-at-work-meng-li-ai-help-seeking-mentorship-workplace-inequality

Harvard (Australian)
Jacobsen, SD 2025, ‘How Social Class Shapes ChatGPT Adoption at Work: Meng Li on AI Help-Seeking, Mentorship, and Workplace Inequality’, In-Sight: Interviews, vol. 13, no. 4, 15 December, viewed 15 December 2025, http://www.in-sightpublishing.com/social-class-chatgpt-adoption-at-work-meng-li-ai-help-seeking-mentorship-workplace-inequality

Modern Language Association (MLA, 9th Edition)
Jacobsen, Scott Douglas. “How Social Class Shapes ChatGPT Adoption at Work: Meng Li on AI Help-Seeking, Mentorship, and Workplace Inequality.” In-Sight: Interviews, vol. 13, no. 4, 2025, http://www.in-sightpublishing.com/social-class-chatgpt-adoption-at-work-meng-li-ai-help-seeking-mentorship-workplace-inequality

Vancouver/ICMJE
Jacobsen SD. How Social Class Shapes ChatGPT Adoption at Work: Meng Li on AI Help-Seeking, Mentorship, and Workplace Inequality [Internet]. 2025 Dec 15;13(4). Available from: http://www.in-sightpublishing.com/social-class-chatgpt-adoption-at-work-meng-li-ai-help-seeking-mentorship-workplace-inequality 

Note on Formatting

This document follows an adapted Nature research-article format tailored for an interview. Traditional sections such as Methods, Results, and Discussion are replaced with clearly defined parts: Abstract, Keywords, Introduction, Main Text (Interview), and a concluding Discussion, along with supplementary sections detailing Data Availability, References, and Author Contributions. This structure maintains scholarly rigor while effectively accommodating narrative content.

 

#AIAdoption #ChatGPT #EarlyCareerProfessionals #HelpSeekingBehavior #HumanCenteredAI #largeLanguageModels #Mentorship #SocialClassBackground #SupervisorEmployeeRelations #WorkplaceHierarchy #WorkplaceInequality

Even as Stack Overflow devs voice doubts about AI, they keep leaning on large‑language models, AI Assist and chat‑based tools for everyday coding. Moderators wrestle with AI slop while expert votes still shape the answers. Curious how the community balances skepticism and reliance? Read the full story. #StackOverflow #LargeLanguageModels #AIAssist #ExpertVotes

🔗 https://aidailypost.com/news/stack-overflow-users-skeptical-ai-yet-continue-rely-it

New research shows LLMs surpass clinical cut‑offs on 20+ psychiatric tests, from ADHD and autism to OCD and dissociation. The models hit scores far above thresholds, raising questions about assessment, ethics, and open‑source AI's role in mental health. Dive into the data and implications. #AI #LargeLanguageModels #PsychometricInventories #ADHD

🔗 https://aidailypost.com/news/ai-models-score-far-above-clinical-thresholds-20-psychiatric-tests

“The problem with #generative #AI has always been that #largelanguagemodels associate patterns together without really understanding those patterns; it’s #statistics without comprehension.” open.substack.com/pub/garymarc... #LLMs
How OpenAI is using GPT-5 Codex to improve the AI tool itself

“The vast majority of Codex is built by Codex,” OpenAI told us about its new AI coding agent.

Ars Technica
OpenAI releases GPT-5.2 after “code red” Google threat alert

Company claims new AI model tops Gemini and matches humans on 70% of work tasks.

Ars Technica
UK intelligence warns AI 'prompt injection' attacks might never go away

A top technologist at the U.K.’s National Cyber Security Centre said “there’s a good chance” that prompt injection attacks against AI will never be eliminated, and he warned of the related risks of embedding generative AI into digital systems globally.

Nagarro teams up with OpenAI to bring generative AI into enterprise workflows. By leveraging GPT‑style large‑language models, they aim to break data silos, embed AI‑first thinking, and set up robust governance frameworks. Curious how this partnership could reshape business automation? Read the full story. #Nagarro #OpenAI #GenerativeAI #LargeLanguageModels

🔗 https://aidailypost.com/news/nagarro-partners-openai-apply-generative-ai-enterprise-tasks