Talking about AI …
The final question encapsulates the most intriguing aspect of this era.
“How do you feel about AI?”
“In a past tech life, discussing emotions or feelings was unthinkable. Today, technology prompts us to consider our emotions.”
“If something can be tested deterministically, it should be tested deterministically.” — Anna Zhdan
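A minimal sketch of what that quote means in practice: code whose output depends on randomness can still be tested deterministically if the source of randomness is injectable. The function and seed below are illustrative, not from Anna Zhdan's talk.

```python
import random

def shuffle_deck(seed=None):
    """Shuffle a 52-card deck; passing a seed makes the result reproducible."""
    rng = random.Random(seed)  # a private RNG, so global state is untouched
    deck = list(range(52))
    rng.shuffle(deck)
    return deck

# Deterministic test: the same seed must always yield the same order.
assert shuffle_deck(seed=42) == shuffle_deck(seed=42)
# And shuffling never loses or duplicates cards.
assert sorted(shuffle_deck(seed=7)) == list(range(52))
```

The same idea applies to clocks, UUIDs, and network calls: inject the nondeterministic dependency so the test can pin it down.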
@jugch Basel
How does Modular hire new grads? They look for folks who have intellectual curiosity and are fearless in getting things done, and it’s a positive sign if they’ve contributed to open source projects before. — Chris Lattner
🤷‍♀️🤷‍♀️
If you have never contributed to open source, maybe join Hackergarten.net.
Large Language Models (LLMs) are widely adopted for automated code generation, with promising results. Although prior research has assessed LLM-generated code and identified various quality issues -- such as redundancy, poor maintainability, and sub-optimal performance -- a systematic understanding and categorization of these inefficiencies remain unexplored. Without such knowledge, practitioners struggle to optimize LLM-generated code for real-world applications, limiting its adoption; this knowledge can also guide the improvement of code LLMs, enhancing the quality and efficiency of code generation. Therefore, in this study, we empirically investigate inefficiencies in code generated by state-of-the-art models, i.e., CodeLlama, DeepSeek-Coder, and CodeGemma. To do so, we analyze 492 generated code snippets in the HumanEval++ dataset. We then construct a taxonomy of inefficiencies in LLM-generated code that includes 5 categories (General Logic, Performance, Readability, Maintainability, and Errors) and 19 subcategories of inefficiencies. We validate the proposed taxonomy through an online survey with 58 LLM practitioners and researchers. Our study indicates that logic- and performance-related inefficiencies are the most prevalent and relevant, frequently co-occur, and impact overall code quality. Our taxonomy provides a structured basis for evaluating the quality of LLM-generated code and guiding future research to improve code generation efficiency.
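To make the Performance category concrete, here is a hypothetical illustration of the kind of inefficiency such a taxonomy might flag: a quadratic pair-search of the sort generated code often produces, next to the linear version. Both functions are invented for illustration and are not taken from the study's dataset.

```python
def has_pair_sum_redundant(nums, target):
    """Inefficient variant: nested loops, O(n^2) comparisons."""
    for i in range(len(nums)):
        for j in range(len(nums)):
            if i != j and nums[i] + nums[j] == target:
                return True
    return False

def has_pair_sum(nums, target):
    """Optimized variant: single pass with a set of seen values, O(n)."""
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False

# Both agree on behavior; only the cost differs.
assert has_pair_sum_redundant([1, 2, 3], 5) == has_pair_sum([1, 2, 3], 5) == True
assert has_pair_sum_redundant([1, 2], 4) == has_pair_sum([1, 2], 4) == False
```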
I am incredibly grateful to the Sessionize community for recognising me as the Speaker of the Day. This honour is deeply meaningful because it acknowledges my active participation and contributions as a speaker. It truly demonstrates the power of sharing knowledge and engaging in meaningful dialogue.
I am thrilled and motivated by this opportunity to connect with and inspire others. Thank you for valuing active voices and fostering a space where speakers can thrive and share their expertise.
Finally have a chance to watch this session live #javazone2025
We hate Code - The !joy of maintaining dead code
The filters for the programme have been improving, which is a nice production upgrade 👌👍✅
We could also have slots for time, for example, simplifying the scrolling after 3 pm. 😂😂😂😂