Simple task for testing a local #LLM: "Hello World in modern C++23". I'd expect:

```
#include <print>

int main() {
    std::println("Hello World!");
    return 0;
}
```

Each coding LLM I've tried presents the C++98 solution using `std::cout`.

If asked to use `std::println()`, they implement their own version on top of `std::cout`, or they include `<iostream>` instead of the required `<print>`. If you try to correct these errors, they switch into AIsplaining mode.

Still a long way to go.

#CPlusPlus

@taschenorakel Interesting question. I tried it with all the local models I have. ministral-3-14b-reasoning used std::print, and qwen3-next-80b used std::println. The rest (qwen3-4b-2507, nemotron-3-nano, qwen3-coder-30b, devstral-small-2-2512, gemma-3n-e4b) used std::cout, and because I mentioned C++23 they also threw in std::format, std::source_location and modules. At least nemotron-3-nano noted in its reasoning that std::print was a proposal for C++23.