Ixchel Ruiz

@ixchelruiz
590 Followers
64 Following
49 Posts

Talking about AI …

The final question encapsulates the most intriguing aspect of this era.

“How do you feel about AI?”

#JavaOne

"In a past tech life, discussing emotions or feelings was unthinkable. Today, technology prompts us to consider our emotions"

“If something can be tested deterministically, it should be tested deterministically” — Anna Zhdan
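A small Python sketch of that idea (my own toy example, not from the talk): instead of asserting against an unseeded random process, accept an explicit seed so the behaviour is repeatable and can be tested deterministically.

```python
import random

def shuffle_deck(seed=None):
    """Return a shuffled list of 52 card indices.

    Accepting an explicit seed makes the function deterministically
    testable: the same seed always produces the same order.
    """
    rng = random.Random(seed)  # isolated RNG, not the global one
    deck = list(range(52))
    rng.shuffle(deck)
    return deck

# Deterministic test: a fixed seed pins the output, so the
# assertions check exact, repeatable properties.
assert shuffle_deck(seed=42) == shuffle_deck(seed=42)
assert sorted(shuffle_deck(seed=42)) == list(range(52))
```

Passing the seed in (rather than calling `random.seed()` globally) keeps the randomness an explicit, controllable input.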

@jugch Basel

We’re thrilled that so many people are attending our February 2026 Basel JUG meeting @jugch
Your AI Agent in Any Editor @jetbrains with Anna Zhdan

How does Modular hire new grads? They look for folks who have intellectual curiosity and are fearless about getting things done, and it’s a positive sign if they’ve contributed to open source projects before. — Chris Lattner

🤷‍♀️🤷‍♀️

If you have never contributed to open source, maybe join Hackergarten.net

#LLMs often produce inefficient code — correct yet wasteful, hard to read or maintain.
A recent study analyzed these issues, proposing 5 categories of inefficiency: Logic, Performance, Readability, Maintainability, and Errors.
Experts validated the taxonomy’s relevance.
🔗 https://arxiv.org/abs/2503.06327
A Taxonomy of Inefficiencies in LLM-Generated Python Code

Large Language Models (LLMs) are widely adopted for automated code generation with promising results. Although prior research has assessed LLM-generated code and identified various quality issues -- such as redundancy, poor maintainability, and sub-optimal performance -- a systematic understanding and categorization of these inefficiencies remain unexplored. Without such knowledge, practitioners struggle to optimize LLM-generated code for real-world applications, limiting its adoption. This study can also guide improving code LLMs, enhancing the quality and efficiency of code generation. Therefore, in this study, we empirically investigate inefficiencies in LLM-generated code by state-of-the-art models, i.e., CodeLlama, DeepSeek-Coder, and CodeGemma. To do so, we analyze 492 generated code snippets in the HumanEval++ dataset. We then construct a taxonomy of inefficiencies in LLM-generated code that includes 5 categories (General Logic, Performance, Readability, Maintainability, and Errors) and 19 subcategories of inefficiencies. We then validate the proposed taxonomy through an online survey with 58 LLM practitioners and researchers. Our study indicates that logic and performance-related inefficiencies are the most popular, relevant, and frequently co-occurring, and that they impact overall code quality. Our taxonomy provides a structured basis for evaluating the quality of LLM-generated code and guiding future research to improve code generation efficiency.
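As a toy illustration of the paper’s “Performance” category (my own hypothetical example, not taken from the study): code that is correct yet wasteful, next to an equivalent efficient version.

```python
def has_duplicates_slow(items):
    """Correct but O(n^2): re-scans the rest of the list for
    every element. A typical 'Performance' inefficiency --
    right answer, wasted work."""
    for i, x in enumerate(items):
        if x in items[i + 1:]:  # builds a slice and scans it each iteration
            return True
    return False

def has_duplicates(items):
    """Same result in O(n) by tracking seen elements in a set."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

# Both agree on the answer; only the cost differs.
assert has_duplicates_slow([3, 1, 4, 1, 5]) is True
assert has_duplicates([3, 1, 4, 1, 5]) is True
assert has_duplicates_slow([1, 2, 3]) is False
assert has_duplicates([1, 2, 3]) is False
```

The point of the taxonomy is exactly this distinction: both versions pass a functional test, but only one would pass a review for efficiency and maintainability.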

Exciting news! I’m a candidate for the 2025 JCP Executive Committee ☕✨
The Java Community Process (JCP) is where ideas become the standards that move Java forward.
🗓️ Voting: Nov 4–17, 2025
🌍 Join the JCP: https://jcp.org/
👋 More about my work: https://sessionize.com/ixchelruiz/
Join the community, cast your vote, and help shape what’s next for Java! 💛
#JCP #Java #OpenSource #Community

“In git we trust; in blame we verify.”
😂🤣🤣🤣

I am incredibly grateful to the Sessionize community for recognising me as the Speaker of the Day. This honour is deeply meaningful because it acknowledges my active participation and contributions as a speaker. It truly demonstrates the power of sharing knowledge and engaging in meaningful dialogue.

I am thrilled and motivated by this opportunity to connect with and inspire others. Thank you for valuing active voices and fostering a space where speakers can thrive and share their expertise.

Finally have a chance to watch this session live #javazone2025

We hate Code - The !joy of maintaining dead code

@hansolo_

The filters for the programme have been improved, a welcome production upgrade 👌👍✅

We could also filter by time slot, for example, to simplify the scrolling after 3 pm. 😂😂😂😂

#javazone2025