even #HumanEval, the standard benchmark for evaluating coding LLMs, checks correctness with real tests, i.e. actual `assert ...` statements 🤯

a good explainer: https://www.youtube.com/watch?v=dKkNsCm9pLQ
the code: https://github.com/openai/human-eval
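For anyone who hasn't looked inside the repo, here is a minimal sketch of how a HumanEval-style check works: the prompt, a model completion, and the assert-based test program are executed together, and the sample passes only if no assert fires. The `add` task below is a made-up example in the same shape, not one of the 164 real HumanEval problems.

```python
# A single HumanEval-style task: the model completes `prompt`, and correctness
# is judged by actually executing assert-based unit tests.

prompt = '''
def add(a, b):
    """Return the sum of a and b."""
'''

# A candidate completion, as an LLM might produce it.
completion = "    return a + b\n"

# The test program: plain Python asserts, like HumanEval's `test` field.
test = '''
def check(candidate):
    assert candidate(1, 2) == 3
    assert candidate(-1, 1) == 0
    assert candidate(0, 0) == 0

check(add)
'''

# Execute prompt + completion + tests in one namespace; any failing assert
# raises AssertionError, marking this sample as "not passed".
namespace = {}
try:
    exec(prompt + completion + test, namespace)
    passed = True
except AssertionError:
    passed = False

print(passed)  # True for this completion
```

The real harness additionally sandboxes execution and enforces timeouts, since generated code is untrusted.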

Learn about the HumanEval LLM benchmark with Empirical


🛠️ Achieves top performance in Fill-in-the-Middle (#FIM) tasks: 85.9% average accuracy across languages, 95.3% pass@1 rate

💻 Excels in multiple languages: 86.6% #Python, 78.9% #Cpp, 82.6% #JavaScript accuracy on #HumanEval benchmarks
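A Fill-in-the-Middle query, as scored above, can be sketched in a few lines: the file is split into a prefix and a suffix, and the model generates the missing middle. The sentinel token names below follow the common StarCoder/Qwen convention and are an assumption on my part, not something stated in the post.

```python
# Sketch of how a FIM prompt is typically assembled for code models:
# prefix and suffix are wrapped in sentinel tokens, and the model's
# continuation after <|fim_middle|> is the infilled code.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    return f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

code_before = "def square(x):\n    return "
code_after = "\n\nprint(square(4))\n"

fim_prompt = build_fim_prompt(code_before, code_after)
print(fim_prompt)
```

A FIM benchmark then executes or string-matches the infilled middle against a reference, much like pass@1 on ordinary completions.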

[Translation] Comparing LLM Benchmarks for Software Development

In this article, we compare various benchmarks that help rank large language models for software development tasks.

https://habr.com/ru/articles/857754/

#LLM #benchmarks #benchmarking #HumanEval #DevQualityEval #CodeXGLUE #Aider #SWEbench #ClassEval #BigCodeBench


🚀 #Claude35Sonnet is now rolling out on #GitHubCopilot, bringing advanced coding capabilities directly to #VisualStudioCode and https://GitHub.com

• 🏆 Performance highlights:
- Highest score among public models on #SWEbench Verified
- 93.7% accuracy on #HumanEval for #Python function writing

• 💻 Key features:
- Production-ready code generation
- Inline debugging assistance
- Automated test suite creation
- Contextual code explanations

• ⚙️ Technical details:
- Runs via #AmazonBedrock
- Cross-region inference for enhanced reliability
- Available to all #GitHub Copilot Chat users and organizations

Source: https://www.anthropic.com/news/github-copilot


🚀 #Qwen2.5: New #AI model family released by Qwen Team

#LLM variants: 0.5B to 72B parameters, support 29+ languages including English, Chinese, French, Spanish
Specialized models: #Qwen2.5Coder for coding, #Qwen2.5Math for mathematics
128K token context length, can generate up to 8K tokens
#OpenSource under Apache 2.0 license (except 3B and 72B variants)

💡 Key improvements:

Enhanced knowledge (85+ on #MMLU)
Better coding skills (85+ on #HumanEval)
Improved math capabilities (80+ on #MATH)
Stronger instruction following and long text generation
Better handling of structured data and outputs (e.g., #JSON)

🔬 Performance highlights:

#Qwen2572B competitive with leading models like #Llama3 and #MistralAI
Smaller models (e.g., 3B) show impressive efficiency
#QwenPlus API model competes with #GPT4 and #Claude on some benchmarks

🛠️ Available via #HuggingFace, #vLLM, and other deployment options
📊 Comprehensive benchmarks and comparisons provided in the blog post

https://qwenlm.github.io/blog/qwen2.5/

Qwen2.5: A Party of Foundation Models!

Introduction In the past three months since Qwen2's release, numerous developers have built new models on the Qwen2 language models, providing us with valuable feedback. During this period, we have focused on creating smarter and more knowledgeable language models. Today, we are excited to introduce the latest addition to the Qwen family: Qwen2.5. We are announcing what might be the largest open-source release in history!


Open-source language model beats GPT-4 Turbo on coding problems for the first time

https://www.tabnews.com.br/NewsletterOficial/modelo-de-linguagem-de-codigo-aberto-supera-gpt-4-turbo-em-problemas-de-codificacao-pela-primeira-vez

Coder V2, developed by China's DeepSeek, was trained on more than 300 programming languages, scoring 90.2 and 76.2 on the HumanEval and MBPP+ benchmarks

hashtags: #ArtificialIntelligence #OpenSource #CoderV2 #DeepSeek #HumanEval #MBPP_plus


A good #TestDataset takes you to heaven
That said, there really are a lot of Chinese-language articles on this topic...

How #HumanEval evaluates code: from dataset composition and evaluation logic to computing the pass@k metric - CSDN Blog https://blog.csdn.net/qq_27590277/article/details/135163862
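The pass@k metric that article walks through is normally computed with the unbiased estimator from the HumanEval paper: draw n samples per problem, count the c that pass, and estimate the probability that at least one of k draws would succeed. A minimal sketch:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples per problem, c of them
    passing the unit tests, k the evaluation budget.
    pass@k = 1 - C(n-c, k) / C(n, k), computed stably as a product."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws without a success
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# With 10 samples per problem, 3 of which pass:
print(round(pass_at_k(10, 3, 1), 4))  # 0.3 (pass@1 reduces to c/n)
```

Per-problem estimates are then averaged over the benchmark to give the reported pass@k score.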

Grok-1.5, the latest version of Musk's AI model

X.ai announces Grok-1.5, the latest version of its AI model, with a 128K-token context window and improved scores on the MATH and HumanEval tests

Source: Gomoot (technology and lifestyle news)

Proud to announce that our paper "Automatic Unit Test Data Generation and Actor-Critic Reinforcement Learning for Code Synthesis" has been accepted to Findings of #EMNLP2023.
This is joint work with Matthieu Zimmer, Gerasimos Lampouras, Derrick Goh Xin Deik, and Ignacio Iacobacci.

Code Synthesis, the generation of programming-language code from a natural language description, is a challenging problem for #LLMs.
Various Reinforcement Learning methods have been proposed to improve the performance of pretrained models.
One #RL approach to this problem is to use functional tests (Unit Tests) as the reward signal; however, this requires data consisting of (i) NL problem prompts and (ii) varied unit tests for each problem to assess functional correctness, which is often unavailable. Some datasets such as #HumanEval and #MBPP exist; however, they are limited in size and contain (relatively) simple problems.
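The idea of functional tests as a reward signal can be sketched in a few lines: a candidate program earns reward 1.0 only if every unit test passes. This is illustrative only (a real pipeline would sandbox execution and enforce timeouts), and all names here are hypothetical:

```python
# Binary reward from unit tests: run the candidate, then run each test
# (a bare `assert ...` line) in the same namespace.

def unit_test_reward(candidate_code: str, tests: list) -> float:
    namespace = {}
    try:
        exec(candidate_code, namespace)
        for t in tests:
            exec(t, namespace)
        return 1.0
    except Exception:
        return 0.0  # syntax error, runtime error, or failing assert

tests = ["assert reverse_words('a b c') == 'c b a'"]
good = "def reverse_words(s):\n    return ' '.join(reversed(s.split()))"
bad = "def reverse_words(s):\n    return s"

print(unit_test_reward(good, tests), unit_test_reward(bad, tests))  # 1.0 0.0
```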

We show how to programmatically derive new training data for functional-test-based Code Synthesis RL, automatically generating tests in a statically typed language (Java) and converting them to a dynamically typed language (Python). This allows us to generate arbitrary amounts of test-annotated data.

We then introduce a straightforward yet effective REINFORCE-based Actor-Critic RL approach that uses the Unit-Test-annotated data to tune a function-level Code Synthesis LM.
Crucially, we find that keeping the Critic in sync with the Policy yields better results than pretraining and freezing the Critic.
Use of our augmentation data further improves model performance.
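To make the update rule concrete, here is a toy numeric sketch of REINFORCE with a critic baseline: a 2-action stand-in for sampling completions, with a scalar critic updated alongside the policy rather than frozen. Everything here (the 2-action setup, learning rates, the scalar critic) is an illustrative assumption, not the paper's actual training scheme.

```python
import math
import random

random.seed(0)
logits = [0.0, 0.0]   # policy parameters over actions {0: failing code, 1: passing code}
critic = 0.0          # scalar value baseline
lr_pi, lr_v = 0.5, 0.1

def softmax(z):
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

for _ in range(200):
    p = softmax(logits)
    a = 0 if random.random() < p[0] else 1
    reward = 1.0 if a == 1 else 0.0        # "unit tests pass" only for action 1
    advantage = reward - critic            # critic serves as the baseline
    # REINFORCE: grad of log pi(a) under softmax is one_hot(a) - p
    for i in range(2):
        logits[i] += lr_pi * advantage * ((1.0 if i == a else 0.0) - p[i])
    critic += lr_v * (reward - critic)     # keep the critic tracking current returns

print(round(softmax(logits)[1], 3))  # probability of the test-passing action
```

After training, the policy concentrates on the rewarded action; keeping the critic updated alongside the policy keeps the advantage estimates centered as the reward distribution shifts.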

Preprint available at https://arxiv.org/abs/2310.13669 ; code and model will be made available.

#MachineLearning #AI #ML #ReinforcementLearning #LLM #PLM #CodeSynthesis #Huawei

Automatic Unit Test Data Generation and Actor-Critic Reinforcement Learning for Code Synthesis

The advent of large pre-trained language models in the domain of Code Synthesis has shown remarkable performance on various benchmarks, treating the problem of Code Generation in a fashion similar to Natural Language Generation, trained with a Language Modelling (LM) objective. In addition, the property of programming language code being precisely evaluable with respect to its semantics -- through the use of Unit Tests to check its functional correctness -- lends itself to using Reinforcement Learning (RL) as a further training paradigm. Previous work has shown that RL can be applied as such to improve models' coding capabilities; however, such RL-based methods rely on a reward signal based on defined Unit Tests, which are much harder to obtain compared to the huge crawled code datasets used in LM objectives. In this work, we present a novel approach to automatically obtain data consisting of function signatures and associated Unit Tests, suitable for RL training of Code Synthesis models. We also introduce a straightforward yet effective Actor-Critic RL training scheme and show that, in conjunction with automatically generated training data, it improves a pre-trained code language model's performance by up to 9.9% over the original underlying code synthesis LM, and up to 4.3% over RL-based models trained with standard PPO or CodeRL.
