Just got a cold call email advertising training in “ChatGPT for Accountants”.
Folks, I just want to remind you all that “vibe accounting” is better known as “fraud” or “embezzlement”. Please plan accordingly.
@btanderson So far GPT AI is not bad at telling you how the math needs to be done, but it is exceptionally bad at doing the math itself.
So depending on how that is trained, it might be okay. The problem is people see these tools and just use them for all the things they shouldn't be used for. That's demand, haha. And the industry is so greedy it will supply that demand with shitty AI.
If it's not intentional, it is just sparkling incompetence.
I keep getting ads for “24/7 AI tax advice” and I’m like “I’m not goin’ to jail for you!”
I could use an _extremely limited Eliza_ for tax season; patiently telling me to just do a little of the next bit until it’s done.
I would not trust anyone but me to write such an Eliza, and it looks like maybe I shouldn’t trust myself to use even a carefully written one; they’re too attractive for people to stay conscious of what they are.
An actual expert system/recommender system, based on Rete or some other rule-based production-style system, would be fine, but not, you know, spicy autocomplete.
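(For contrast, here’s a toy forward-chaining sketch of what a rule-based system means here. This is a naive fire-until-fixed-point loop, not an actual Rete network, and the rule names and facts are made up for illustration; the point is that every step is a deterministic rule over explicit facts, with nothing generated.)

```python
# Toy forward-chaining production system (hypothetical tax-prep rules).
# Naive matching, not Rete-optimized: loop until no rule adds new facts.
facts = {"w2_received", "mortgage_interest"}

# Each rule: (name, preconditions, facts it asserts when fired)
rules = [
    ("enter_wages", {"w2_received"}, {"wages_entered"}),
    ("itemize", {"mortgage_interest", "wages_entered"}, {"schedule_a_done"}),
    ("ready_to_file", {"wages_entered", "schedule_a_done"}, {"ready"}),
]

def run(facts, rules):
    """Fire any rule whose preconditions hold, until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for name, pre, post in rules:
            if pre <= facts and not post <= facts:
                facts |= post
                changed = True
    return facts

print(sorted(run(facts, rules)))
```

Unlike an LLM, this either derives `ready` from the facts you gave it or it doesn’t; there is no plausible-sounding middle ground.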
@btanderson What could ever go wrong, eh?
Accountant: "ChatGPT, list all my client's business lunches. I have a tee time so won't be checking your work."
ChatGPT: "Your client owns a Cessna 172 jet helicopter, a Gulfstream 900EX, and the island of Maui. With those deductions, IRS owes the client $64 million USD. Filing automatically now using blockchain to upload to a crypto server in Nigeria."
@btanderson and not a lot of people know this, but in the accounting field there's already a word for using chatgpt as part of work product.
"Malpractice."
I feel compelled to say that "LLMs are inaccurate" is a bit of a cop out.
They are no more inaccurate than the randos you interact with professionally.
Any protocol where accuracy is important has multiple checks and balances.
If you are feeding ChatGPT tokens straight into your brain surgery bot, you are no less of a threat than an LLM.