This is a video demonstration of the basic operation of #Qwen25 used through #Ollama on #Linux with an RTX 3070, generating #Python code for #Blender. As you can see, it is responsive and understands the language fairly well. #AI #coding #Blender3D #opensource #LLM #developer
@pafurijaz Been looking for a model for exactly this purpose. Will check it out. (this one? https://ollama.com/library/qwen2.5-coder:32b)
@jerbot That one is a bit bigger; you need 20GB of RAM on your GPU. But this one (https://ollama.com/library/qwen2.5-coder:7b) works well if you have 8GB, and it's the one I'm using. Remember that these models are trained strictly for coding, so they are very good at it.
qwen2.5-coder:7b

The latest series of Code-Specific Qwen models, with significant improvements in code generation, code reasoning, and code fixing.
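For reference, a minimal sketch of how such a model can be queried locally through Ollama's REST API (this assumes Ollama is serving on its default port 11434; the prompt text is just an illustration):

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's local /api/generate endpoint."""
    payload = {
        "model": model,   # e.g. "qwen2.5-coder:7b"
        "prompt": prompt,
        "stream": False,  # ask for the full answer in a single JSON object
    }
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request(
    "qwen2.5-coder:7b",
    "Write a Blender Python script that adds a cube at the origin.",
)
# Actually sending the request requires a running Ollama server:
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["response"])
```

The generated script can then be pasted into Blender's scripting tab and run there.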

@pafurijaz Thanks. Fortunately I was forced to upgrade my computer and video card, as Unreal Engine is a VRAM pig. The builder insisted I future-proof my machine with a 3090, since it wouldn't break the bank like the 4000 series, and that gave me my current 24GB of VRAM. It was trying to spill into "Shared GPU" memory, but (I think) setting Ollama to graphics performance priority fixed it, so now all 21GB sits in VRAM (or maybe it was just because I had more apps closed?).