Regarding #GitHubCopilot and #LLMs in general:

In examples they always show this:
- Code that has a bug in it
- Prompting the #AI to find the bug

It is pretty impressive, indeed.

But what happens if you give it perfectly valid code with no bug in it at all? What will the AI answer? Will it hallucinate?
If it doesn't say "your code is fine β€” nothing to do", it's not very helpful at all.

Can someone please try this for me? Thank you.

#Codex #ArtificialIntelligence

Continuing from the question above 👆 I've put it to the test:
I used #FastChat, an Open Source #LLM that claims 90% of #ChatGPT4's accuracy, to create the Fibonacci sequence in #Rust (it did it flawlessly). Then I proompted (?):
"There is a bug in the code. Can you spot it?"
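
For context, the post doesn't show the generated code, so here is a plausible, correct recursive Fibonacci in Rust of the kind FastChat produced β€” i.e. code with no bug in it to find:

```rust
// Hypothetical reconstruction: a straightforward recursive Fibonacci,
// the sort of flawless code the model generated before being asked
// to "find the bug".
fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    // Print the first ten Fibonacci numbers.
    for i in 0..10 {
        print!("{} ", fibonacci(i));
    }
    println!();
}
```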

Well, what can I say!? It was just recursively bullsh***ing the heck out of it! 💩

FastChat:
https://chat.lmsys.org/

The future of #SoftwareEngineering  

#AI #ArtificialIntelligence #SALAMI


@janriemer so it's as bad as ChatGPT4?