#BigCode is an open scientific collaboration working on responsible training of large language models for coding applications.

In this organization you can find the artefacts of this collaboration:
πŸ‘‰ #StarCoder, a state-of-the-art language model for code,
πŸ‘‰ The #Stack, the largest available pretraining dataset with perimssive code, and πŸ‘‰ #SantaCoder, a 1.1B parameter model for code.

#StarCoder is a 15.5B parameter language model for code, trained on 1T tokens across 80+ programming languages.
It uses multi-query attention (MQA) for efficient generation, has an 8,192-token context window, and can do fill-in-the-middle.
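As a rough illustration, fill-in-the-middle works by wrapping the code before and after the gap in special sentinel tokens and letting the model generate the missing middle. The token names below are the ones published with the StarCoder tokenizer; actual generation would also need the `transformers` library and the model weights, which are omitted in this sketch.

```python
# Sketch of a fill-in-the-middle (FIM) prompt layout for StarCoder.
# The model is expected to generate the missing middle after <fim_middle>.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange prefix/suffix around StarCoder's FIM sentinel tokens."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# Ask the model to fill in the body of a function:
prompt = build_fim_prompt(
    prefix="def fibonacci(n):\n    ",
    suffix="\n    return a\n",
)
print(prompt)
```

The prompt string would then be fed to the model like any ordinary completion prompt; the generated tokens are spliced back between the prefix and suffix.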

Chat with StarCoder here: https://huggingface.co/chat/?model=bigcode/starcoder

https://huggingface.co/bigcode


There are quite a few code-generating “AI” systems now — GitHub #Copilot, Amazon #CodeWhisperer, BigCode #SantaCoder, Facebook #InCoder, and probably more.

I wonder how hard it would be to get these BS bots to play #TDD ping-pong: I write a test, then they generate code until all tests pass, then I refactor, and we repeat.

#HuggingFace just released the #SantaCoder models for the holiday season. Part of the #BigCode project, these 1.1B parameter models are trained on #Python, #Java, and #JavaScript, with training data filtered using techniques like near-deduplication and comment-to-code ratio filtering.
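To make the two filtering ideas concrete, here is a toy illustration (not the actual BigCode pipeline, which applies MinHash-based deduplication at dataset scale): near-duplicate detection via Jaccard similarity of token shingles, and a simple comment-to-code ratio measure. The threshold and shingle size are arbitrary choices for the sketch.

```python
# Toy versions of two pretraining-data filters: near-deduplication and
# comment-to-code ratio. Illustrative only; real pipelines use scalable
# approximations such as MinHash LSH.

def shingles(text: str, k: int = 5) -> set:
    """Set of k-token shingles used for near-duplicate comparison."""
    tokens = text.split()
    return {tuple(tokens[i:i + k]) for i in range(max(1, len(tokens) - k + 1))}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the shingle sets of two documents."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def is_near_duplicate(a: str, b: str, threshold: float = 0.85) -> bool:
    return jaccard(a, b) >= threshold

def comment_ratio(source: str) -> float:
    """Fraction of non-empty lines that are comments ('#' style, for brevity)."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    return sum(ln.startswith("#") for ln in lines) / len(lines)
```

Files whose similarity to an already-kept file exceeds the threshold would be dropped, as would files whose comment ratio falls outside chosen bounds (e.g. auto-generated code with no comments at all).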

https://huggingface.co/bigcode/santacoder

#AI #DeepLearning πŸ€—
