| Github | https://github.com/ABeltramo |
| GOW | https://github.com/games-on-whales |
| Blog | https://abeltra.me |
@borkdude the problem is that in this case you already know and understand "the solution", so you can compare the two implementations. How can you assess the LLM's output when you don't fully understand the problem or the solution it presents?
Personally, I find it takes longer to clean, review, and properly test auto-generated output than it would to solve the problem myself in the first place. Especially since, for complex problems, the LLM will most likely introduce even more subtle bugs.