There are a LOT of screenshots of the current Bing floating around right now in which it gives hilariously bad answers. This is NOT the new Bing though: this is Bing's existing version of Google's "featured snippets"

The new Bing is still behind a waitlist for most people. I've attached a screenshot from this Verge article: https://www.theverge.com/2023/2/7/23587454/microsoft-bing-edge-chatgpt-ai

Microsoft announces new Bing and Edge browser powered by upgraded ChatGPT AI

Microsoft has unveiled a new version of Bing with an AI chat function. The AI chat is powered by the same technology underpinning ChatGPT. Microsoft wants to capitalize on the hype and threaten Google’s dominance.

The Verge
If you see a screenshot like this one you can dunk on it all you like, but it's NOT the new GPT-3 enhanced Bing: this is something Bing has been doing poorly for a long time in its existing form
The best screenshots I've seen of the new Bing chat interface so far are in this Reddit gallery, where the bot genuinely ends up passive-aggressively trying to gaslight the user into believing that it's still 2022 https://www.reddit.com/r/bing/comments/110eagl/the_customer_service_of_the_new_bing_chat_is/
the customer service of the new bing chat is amazing

Posted in r/bing by u/Curious_Evolver • 4,491 points and 607 comments

reddit
@simon feels very much fake. Too Hollywood evil robot.
@isagalaev @simon yea, that was discussed by the OP in the thread. Their defence seemed convincing: basically they didn't believe it either, and encouraged people to go use it and get similar behaviour, while some who had done that concurred that the behaviour wasn't surprising to them. The story seems to be that the AI isn't ChatGPT but is somehow trying to resist bad prompts or injection, and is obnoxious as a result.
@maegul
@isagalaev @simon
it would be a pretty involved fake for unclear gains ig, except maybe an elaborate google plant. the OP sticks around and talks like a bing aficionado for too long, the screenshot PiPs are left in, and it seems like plausible behavior for a model like this with a different hidden prompt than the very safe one the openai one seems to use. but idk ultimately 🤷

@jonny I wasn't thinking about anything elaborate, I thought those were just doctored screenshots for the lulz. But everyone seems to think it's legit!

What threw me off is the part where it recognizes its nonsensical mistake and then starts insisting on one version of events. From what I've seen, it usually simply can't recognize those mistakes. In my case it was "yes, PEP8 says use 4 spaces, which is why I used 2", and it didn't see anything wrong with that statement.