Ph.D. in Information and Communication Technologies
Prev. Visiting @ IT University of Copenhagen
Multimodal Representation Learning & Networks
| Website | https://luciolacava.me |
| Twitter | https://twitter.com/luciolcw |
Do LLMs reflect our moral expressions, or alter them?
In our #ACL2025 paper, we study 12 widely used LLMs and find that moral expressions are altered to varying degrees depending on the editing task and the moral conditioning prompt.
ACL2025, Hall X5 (28)
Today, 11:00-12:30
Yesterday, we had the pleasure of virtually meeting the OpenAI team for an insightful discussion about Open LLMs. While I can't share details just yet, there's a lot to be excited about!
It was particularly inspiring to see Sam Altman join the session and share OpenAI's vision for open models.
Wrapping up an amazing #AAAI2025 in Philadelphia!
We presented our work "Open Models, Closed Minds? On Agents Capabilities in Mimicking Human Personalities through Open Large Language Models"
https://arxiv.org/abs/2401.07115
Grateful to everyone who engaged with our work for their valuable feedback!
The emergence of human-like behaviors in Large Language Models (LLMs) has led to a closer connection between NLP and human psychology. Scholars have been studying the inherent personalities exhibited by LLMs and attempting to incorporate human traits and behaviors into them. However, these efforts have primarily focused on commercially licensed LLMs, neglecting the widespread use and notable advancements of Open LLMs. This work addresses this gap by employing a set of 12 LLM agents based on the most representative Open models and subjecting them to a series of assessments based on the Myers-Briggs Type Indicator (MBTI) test and the Big Five Inventory (BFI) test. Our approach involves evaluating the intrinsic personality traits of Open LLM agents and determining the extent to which these agents can mimic human personalities when conditioned by specific personalities and roles. Our findings unveil that (i) each Open LLM agent showcases distinct human personalities; (ii) personality-conditioned prompting produces varying effects on the agents, with only a few successfully mirroring the imposed personality, while most remain "closed-minded" (i.e., they retain their intrinsic traits); and (iii) combining role and personality conditioning can enhance the agents' ability to mimic human personalities. Our work represents a step forward in understanding the close relationship between NLP and human psychology through the lens of Open LLMs.
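The conditioning setup described above can be sketched as a small prompt-builder. This is a minimal illustration under stated assumptions: `generate` is a hypothetical stand-in for any Open LLM text-generation callable, and the two BFI-style items and the prompt wording are illustrative, not the paper's actual questionnaire or prompts.

```python
# Minimal sketch of personality- and role-conditioned questionnaire prompting.
# `generate` is a placeholder for any Open LLM text-generation function
# (hypothetical; the paper's models, items, and prompts differ).

BFI_ITEMS = [
    "I see myself as someone who is talkative.",
    "I see myself as someone who tends to find fault with others.",
]

def build_prompt(item, personality=None, role=None):
    """Compose one questionnaire prompt, optionally conditioning the agent
    on a target personality (e.g. 'extraverted') and/or a role persona."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if personality:
        parts.append(f"Answer as a person with a {personality} personality.")
    parts.append(f'Statement: "{item}"')
    parts.append("Rate your agreement from 1 (disagree strongly) to "
                 "5 (agree strongly). Reply with a single digit.")
    return "\n".join(parts)

def administer(generate, items, **conditioning):
    """Ask every item and parse the first digit 1-5 from each reply."""
    ratings = []
    for item in items:
        reply = generate(build_prompt(item, **conditioning))
        digit = next((c for c in reply if c in "12345"), None)
        ratings.append(int(digit) if digit else None)
    return ratings
```

Comparing ratings collected with and without the `personality`/`role` arguments gives a simple measure of how far an agent drifts from its intrinsic traits under conditioning.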
Are you working on next-gen or alternative social media platforms like #Bluesky or #Mastodon?
This #ICWSM2025 workshop is for you!
https://nextgensocial-workshop.github.io/
We solicit research, position, demo, and dataset paper submissions!
Submission deadline: March 31, 2025
Exciting news! Our paper "Open Models, Closed Minds? On Agents Capabilities in Mimicking Human Personalities through Open Large Language Models" has been accepted at the 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025)!
Check it out here: https://arxiv.org/abs/2401.07115
Our Mastodon server https://datasci.social is 2 years old!
We are one of the most reliable servers, with >99.99% uptime in 2024. Sign-ups are open for everybody working in or interested in #datascience.
Full news: https://community.datasci.social/blog/2024-11-15/two-years-old/
Wrapping up an amazing #EMNLP2024 in Miami!
Grateful to everyone who visited our poster for their valuable feedback and stimulating discussions!
Excited to head to #EMNLP2024 next week! I'll be presenting our work "Talking the Talk Does Not Entail Walking the Walk: On the Limits of Large Language Models in Lexical Entailment Recognition"
Check out the preprint here: https://arxiv.org/abs/2406.14894
Verbs form the backbone of language, providing structure and meaning to sentences. Yet their intricate semantic nuances pose a longstanding challenge. Understanding verb relations through the concept of lexical entailment is crucial for comprehending sentence meanings and grasping verb dynamics. This work investigates the capabilities of eight Large Language Models in recognizing lexical entailment relations among verbs through differently devised prompting strategies and zero-/few-shot settings over verb pairs from two lexical databases, namely WordNet and HyperLex. Our findings unveil that the models can tackle lexical entailment recognition with moderately good performance, although at varying degrees of effectiveness and under different conditions. Utilizing few-shot prompting can also enhance the models' performance. However, perfectly solving the task remains an unmet challenge for all examined LLMs, calling for further research on this topic.
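The zero-/few-shot setup can be sketched as a prompt template plus an answer parser. The template wording, labels, and demonstration pairs below are illustrative assumptions, not the paper's exact prompts (though "snore" entailing "sleep" is a classic WordNet verb-entailment example):

```python
# Sketch of zero-/few-shot prompting for lexical entailment between verbs.
# The template and demonstrations are illustrative; the paper evaluates
# eight LLMs with several strategies over WordNet and HyperLex verb pairs.

def entailment_prompt(verb_a, verb_b, few_shot=()):
    """Build a prompt asking whether doing `verb_a` entails doing `verb_b`.
    `few_shot` is a sequence of (verb_a, verb_b, 'Yes'/'No') demonstrations;
    leave it empty for the zero-shot setting."""
    lines = ["Decide whether the first verb entails the second. "
             "Answer Yes or No."]
    for a, b, label in few_shot:
        lines.append(f"Does '{a}' entail '{b}'? {label}")
    lines.append(f"Does '{verb_a}' entail '{verb_b}'?")
    return "\n".join(lines)

def parse_label(reply):
    """Map a free-form model reply to a boolean entailment decision."""
    return reply.strip().lower().startswith("yes")

# Illustrative demonstrations: snoring entails sleeping (a WordNet
# entailment pair); walking does not entail singing.
DEMOS = [("snore", "sleep", "Yes"), ("walk", "sing", "No")]
```

For example, `entailment_prompt("limp", "walk", few_shot=DEMOS)` yields a two-shot query, and accuracy is then computed by comparing `parse_label` outputs against the database's gold entailment labels.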
Thrilled to have presented our work "Is Contrasting All You Need? Contrastive Learning for the Detection and Attribution of AI-generated Text" at the 27th European Conference on Artificial Intelligence (ECAI 2024) in Santiago de Compostela, Spain!
Check out the paper here: https://doi.org/10.3233/FAIA240862
We also had the opportunity to walk the final section of Santiago's French Way. What an amazing experience!