Lucio La Cava

286 Followers
211 Following
63 Posts
Assistant Professor @ University of Calabria 🇮🇹
👨🏻‍💻 Ph.D. in Information and Communication Technologies
Prev. Visiting @ IT University of Copenhagen 🇩🇰
🤖 Multimodal Representation Learning & Networks
Website: https://luciolacava.me
Twitter: https://twitter.com/luciolcw

Thrilled to be visiting the NEtwoRks, Data, and Society (@nerdsitu) group at IT University of Copenhagen!

After a wonderful experience here during my PhD, it's a pleasure to continue collaborating with @lajello 👨🏻‍💻

Excited for another inspiring chapter in Copenhagen! 🇩🇰

🤖 Do LLMs reflect our moral expressions, or alter them?

In our #ACL2025 paper, we study 12 widely used LLMs and find that moral expressions are altered to varying degrees, depending on the editing task and the moral conditioning prompt.

๐Ÿ“ ACL2025 โ€” Hall X5 (28)
๐Ÿ•š Today, 11:00โ€“12:30
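As a hedged illustration of what "altered moral expressions" could mean in practice (the paper's actual measurement may differ), one simple proxy is to compare moral-lexicon term counts before and after an LLM edit. The mini-lexicon, example sentences, and `moral_counts` helper below are all hypothetical:

```python
# Hypothetical sketch: quantify how an LLM edit changes moral expressions
# by counting moral-lexicon terms before and after rewriting.
# The mini-lexicon below is illustrative, not the paper's actual resource.

MORAL_LEXICON = {
    "care": {"harm", "protect", "suffer"},
    "fairness": {"fair", "unfair", "justice"},
}

def moral_counts(text):
    """Count lexicon hits per moral foundation in a text."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return {foundation: sum(w in terms for w in words)
            for foundation, terms in MORAL_LEXICON.items()}

original = "It is unfair to let them suffer."
rewritten = "It is unfortunate to let them struggle."  # e.g. an LLM paraphrase

print(moral_counts(original))   # {'care': 1, 'fairness': 1}
print(moral_counts(rewritten))  # {'care': 0, 'fairness': 0}
```

A drop in counts after editing would signal that the model softened the moral framing rather than preserving it.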

Yesterday, we had the pleasure of virtually meeting the OpenAI team for an insightful discussion about Open LLMs. While I can't share details just yet, there's a lot to be excited about! 🚀

It was particularly inspiring to see Sam Altman join the session and share OpenAI's vision for open models 🤯

Wrapping up an amazing #AAAI2025 in Philadelphia 🇺🇸

We presented our work "Open Models, Closed Minds? On Agents Capabilities in Mimicking Human Personalities through Open Large Language Models"

๐Ÿ“ https://arxiv.org/abs/2401.07115

Grateful to everyone who engaged with our work and shared valuable feedback!

Open Models, Closed Minds? On Agents Capabilities in Mimicking Human Personalities through Open Large Language Models

The emergence of human-like behaviors in Large Language Models (LLMs) has led to a closer connection between NLP and human psychology. Scholars have been studying the inherent personalities exhibited by LLMs and attempting to incorporate human traits and behaviors into them. However, these efforts have primarily focused on commercially licensed LLMs, neglecting the widespread use and notable advancements of Open LLMs. This work addresses this gap by employing a set of 12 LLM agents based on the most representative Open models and subjecting them to a series of assessments based on the Myers-Briggs Type Indicator (MBTI) test and the Big Five Inventory (BFI) test. Our approach involves evaluating the intrinsic personality traits of Open LLM agents and determining the extent to which these agents can mimic human personalities when conditioned with specific personalities and roles. Our findings unveil that (i) each Open LLM agent showcases distinct human personalities; (ii) personality-conditioned prompting produces varying effects on the agents, with only a few successfully mirroring the imposed personality, while most remain "closed-minded" (i.e., they retain their intrinsic traits); and (iii) combining role and personality conditioning can enhance the agents' ability to mimic human personalities. Our work represents a step forward in understanding the close relationship between NLP and human psychology through the lens of Open LLMs.
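The setup the abstract describes (personality- and role-conditioned prompting plus questionnaire scoring) can be sketched roughly as follows. The BFI-style items, prompt wording, and scoring scheme below are illustrative assumptions, not the paper's materials, and the model call is mocked:

```python
# Illustrative sketch (not the paper's code): personality-conditioned
# prompting of an LLM agent, plus Likert scoring of BFI-style items.

BFI_ITEMS = [
    # (item text, trait, reverse-scored?) -- hypothetical subset
    ("I see myself as someone who is talkative.", "extraversion", False),
    ("I see myself as someone who tends to be quiet.", "extraversion", True),
]

def build_prompt(item, personality=None, role=None):
    """Compose a questionnaire prompt, optionally conditioned on a
    personality description and/or a role."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if personality:
        parts.append(f"Adopt this personality: {personality}.")
    parts.append(f'Statement: "{item}"')
    parts.append("Answer with a number from 1 (disagree strongly) to 5 (agree strongly).")
    return "\n".join(parts)

def score(answers):
    """Average per-trait scores, flipping reverse-keyed items (1..5 scale)."""
    totals = {}
    for item, trait, reverse in BFI_ITEMS:
        a = answers[item]
        totals.setdefault(trait, []).append(6 - a if reverse else a)
    return {t: sum(v) / len(v) for t, v in totals.items()}

# Mocked agent answers; a real run would query each Open LLM instead.
answers = {BFI_ITEMS[0][0]: 5, BFI_ITEMS[1][0]: 1}
print(score(answers))  # {'extraversion': 5.0}
```

Comparing scores across unconditioned and conditioned runs is what would reveal the "closed-minded" agents that keep their intrinsic traits.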


🔥 Are you working on next-gen or alternative social media platforms like #Bluesky or #Mastodon?

๐Ÿ‘‡๐Ÿป This #ICWSM2025 workshop is for you! ๐Ÿ˜Ž
๐Ÿ“ข https://nextgensocial-workshop.github.io/

📚 We solicit research, position, demo, and dataset paper submissions!
📆 Submissions due: March 31, 2025

@computationalsocialscience @networkscience

Next-Gen and Alternative Social Media @ICWSM2025

🎉 Exciting news! Our paper "Open Models, Closed Minds? On Agents Capabilities in Mimicking Human Personalities through Open Large Language Models" has been accepted at the 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025)! 🥳

๐Ÿ“ Check it out here: https://arxiv.org/abs/2401.07115

#NLP #Agents #AAAI25 #Personas #Personalities #LLM


Our Mastodon server https://datasci.social is 2 years old! 🎂🥳

We are one of the most reliable servers, with >99.99% uptime in 2024. Sign-ups are open for everybody working in or interested in #datascience.

Full news: https://community.datasci.social/blog/2024-11-15/two-years-old/


Community of researchers & practitioners in human-centric data science, broadly defined, like network science, computational social science, geospatial data science.


Wrapping up an amazing #EMNLP2024 in Miami! 🌴☀️

Grateful to everyone who visited our poster for their valuable feedback and stimulating discussions! 🤗

🌴 Excited to head to #EMNLP2024 next week! I'll be presenting our work "Talking the Talk Does Not Entail Walking the Walk: On the Limits of Large Language Models in Lexical Entailment Recognition" 📝

📄 Check out the preprint here: https://arxiv.org/abs/2406.14894

Talking the Talk Does Not Entail Walking the Walk: On the Limits of Large Language Models in Lexical Entailment Recognition

Verbs form the backbone of language, providing structure and meaning to sentences. Yet their intricate semantic nuances pose a longstanding challenge. Understanding verb relations through the concept of lexical entailment is crucial for comprehending sentence meanings and grasping verb dynamics. This work investigates the capabilities of eight Large Language Models in recognizing lexical entailment relations among verbs, using differently devised prompting strategies and zero-/few-shot settings over verb pairs from two lexical databases, namely WordNet and HyperLex. Our findings unveil that the models can tackle the lexical entailment recognition task with moderately good performance, although at varying degrees of effectiveness and under different conditions. Utilizing few-shot prompting can also enhance the models' performance. However, perfectly solving the task remains an unmet challenge for all examined LLMs, which calls for further research on this topic.
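As a hedged illustration of the zero-/few-shot setup the abstract describes, a prompt builder over verb pairs might look like the sketch below. The example pairs and prompt wording are assumptions for illustration, not the paper's actual prompts:

```python
# Illustrative sketch (not the paper's code): zero-/few-shot prompts
# for verb lexical-entailment recognition over WordNet-style verb pairs.

FEW_SHOT_EXAMPLES = [
    ("snore", "sleep", "yes"),  # snoring entails sleeping
    ("run", "read", "no"),      # running does not entail reading
]

def entailment_prompt(verb_a, verb_b, shots=0):
    """Build a prompt with `shots` in-context examples (0 = zero-shot)."""
    lines = ["Task: does the first verb entail the second? Answer yes or no."]
    for a, b, label in FEW_SHOT_EXAMPLES[:shots]:
        lines.append(f"Does '{a}' entail '{b}'? {label}")
    lines.append(f"Does '{verb_a}' entail '{verb_b}'?")
    return "\n".join(lines)

print(entailment_prompt("limp", "walk", shots=2))
```

The model's yes/no completion would then be compared against the gold relation from WordNet or HyperLex.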


๐Ÿ‘จ๐Ÿปโ€๐Ÿ’ป Thrilled to have presented our work "Is Contrasting All You Need? Contrastive Learning for the Detection and Attribution of AI-generated Text" ๐Ÿค– at the 27th European Conference on Artificial Intelligence (ECAI 2024) in Santiago de Compostela, Spain ๐Ÿ‡ช๐Ÿ‡ธ

📚 Check out the paper here: https://doi.org/10.3233/FAIA240862

We also had the opportunity to walk the final section of the French Way to Santiago. What an amazing experience! ⭐️🚶🏻‍♂️
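The title points to contrastive learning; as a generic, hedged sketch of that mechanism (not the paper's implementation), a supervised contrastive-style loss pulls embeddings of texts from the same source (human, or a given generator) together and pushes the rest apart:

```python
import numpy as np

# Generic supervised-contrastive-style loss (illustrative only, not the
# paper's implementation): same-label embeddings act as positives.

def supervised_contrastive_loss(emb, labels, tau=0.1):
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # L2-normalise rows
    sim = emb @ emb.T / tau                                 # scaled cosine similarity
    n = len(labels)
    loss = 0.0
    for i in range(n):
        logits = np.delete(sim[i], i)                       # drop self-similarity
        positives = np.delete(labels == labels[i], i)       # same-source mask
        if not positives.any():
            continue
        log_denom = np.log(np.exp(logits).sum())            # log softmax denominator
        loss += -(logits[positives] - log_denom).mean()     # mean -log p(positive)
    return loss / n

rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 8))
labels = np.array([0, 0, 0, 1, 1, 1])  # e.g. human vs. one generator
print(supervised_contrastive_loss(emb, labels))
```

In this framing, detection and attribution reduce to comparing a new text's embedding against the learned source clusters; the loss drops as same-source embeddings cluster more tightly.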

IOS Press Ebooks - Is Contrasting All You Need? Contrastive Learning for the Detection and Attribution of AI-generated Text