U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban

https://lemmy.ml/post/43860615


Clearly the whole drama with the Pentagon making a big show of trying to force AI companies to build autonomous AI killing machines and spy on citizens is completely manufactured. Anthropic was always going to comply, and the goal is just to create a marketing campaign portraying them as heroically resisting. All the media has been running the story of a plucky Anthropic defying the US military to defend ethical AI and protect humanity.

I switched from ChatGPT to Claude last month and let me tell you, I am not impressed. What are y'all using besides hosting your own model? I don't have the money for GPUs.

EDIT: this comment is not an invitation for the anti-AI circlejerk. I'm only looking for actual recommendations. Calling anything "slop" is hypocritical given how monotonous and low effort the circlejerking is.

I'm running ollama with qwen2.5:3b in Docker on an RTX 3050 8GB. I also use DeepSeek.
Thanks! Can you explain what you just wrote? Do you own these GPUs? Are you in China?

No problem. My desktop has an NVIDIA RTX 3050 card with 8GB of VRAM on it. It's a basic, modernish video card. Ollama is an open source framework for running large language models. The model I'm using is Qwen 2.5; it has 3 billion (3b) parameters (basically the size of the LLM). Docker is a program that lets you basically run smaller dedicated computers on your computer.
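For anyone who wants to try it, here's a rough sketch of that setup. This assumes the standard `ollama/ollama` Docker image and that you have the NVIDIA container toolkit installed for GPU passthrough; exact flags may vary on your system:

```shell
# Start the Ollama container with GPU access, persisting downloaded
# models in a named volume and exposing the API on port 11434.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama

# Pull and chat with the 3-billion-parameter Qwen 2.5 model
# interactively inside the container.
docker exec -it ollama ollama run qwen2.5:3b

# Or hit the HTTP API directly from the host:
curl http://localhost:11434/api/generate \
  -d '{"model": "qwen2.5:3b", "prompt": "Hello", "stream": false}'
```

A 3b model at default quantization fits comfortably in 8GB of VRAM, which is why a card like the 3050 handles it fine.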

I am not in China. I’m an American living in Albania. I recommended DeepSeek because it’s free, works well, and if a company is going to have the information on what you’re chatting about, it might as well be one that isn’t in the same country as you.

Thanks for all the info! I'd love to run a model locally, but I don't have the money for a decent enough setup right now, though I know it's getting close. How effective is the 3b model? Does it do the job for you, or does it feel like it's lacking? Are requests pretty slow on that machine?
It runs pretty well. I didn’t notice a speed difference between it and DeepSeek’s web chat. I haven’t used it for anything big, I’m primarily trying to stay current with the technology so I know what I’m talking about during job interviews.
Cool. I’m looking forward to getting the hardware to run locally. Any alternatives you know about where I can borrow computing to run my own model?
You can check out aihorde.net