New blog post: https://bartwronski.com/2024/01/22/how-i-use-chatgpt-daily-scientist-coder-perspective/
"How I use ChatGPT daily (scientist/coder perspective)."

I recommend it to anyone working with technology, but especially if you think that LLMs are "useless" and are open-minded enough to see how they can be helpful, delightful, and playful.


@BartWronski if you have the GPU for it, there are a few quite good 7b/13b models for use at home. the main advantage of that is that it's open source work, the models are far less likely to generate "I'm sorry but" responses, you're in full control of the system prompt, and privacy is perfectly maintained.

Here's a starting point https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard

look for 4-bit / GGUF quantized models in particular. these fit into 8-12GB almost completely.
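As a sanity check on the "fits into 8-12GB" claim, here's a back-of-envelope VRAM estimate (my own rough approximation, not anything from llama.cpp itself): typical 4-bit GGUF quants land around 4.5 bits per weight because some tensors are kept at higher precision, plus some allowance for KV cache and buffers.

```python
def gguf_vram_estimate_gb(n_params_billion: float,
                          bits_per_weight: float = 4.5,
                          overhead_gb: float = 1.5) -> float:
    """Back-of-envelope VRAM estimate for a quantized model.

    bits_per_weight ~4.5 approximates common 4-bit GGUF quants
    (e.g. Q4_K_M keeps some tensors at higher precision);
    overhead_gb is a rough allowance for KV cache and buffers.
    """
    weights_gb = n_params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

# 7B and 13B models at ~4-bit:
print(f"7B:  ~{gguf_vram_estimate_gb(7):.1f} GB")   # ~5.4 GB
print(f"13B: ~{gguf_vram_estimate_gb(13):.1f} GB")  # ~8.8 GB
```

Both estimates land comfortably inside an 8-12GB card, consistent with the numbers above.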


@lritter thanks, I definitely want to give them a try. The only reason I didn't do it earlier was this initial setup.

@BartWronski the best frontend i've so far encountered is https://github.com/oobabooga/text-generation-webui/ which i can recommend very much.

as a bonus, it can fake an OpenAI web API on port 5000, so anything that interfaces with OpenAI and supports custom servers can connect to it.
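For anyone wondering what "fake an OpenAI web API" buys you in practice: an OpenAI-style client just POSTs JSON to a `/v1/chat/completions` endpoint, so swapping the base URL is all it takes to target the local server. A minimal sketch (the port comes from the post above; the model name is a placeholder, since single-model local servers mostly ignore it):

```python
import json

# The local server stands in for api.openai.com; port 5000 is the
# webui's OpenAI-compatible API as described above.
BASE_URL = "http://localhost:5000/v1"

def chat_request(prompt: str, model: str = "local-model") -> tuple[str, str]:
    """Build the endpoint URL and JSON body for a chat completion.

    Any OpenAI-style client does essentially this under the hood,
    which is why only the base URL needs to change.
    """
    url = f"{BASE_URL}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body

url, body = chat_request("Explain GGUF quantization in one sentence.")
# POST `body` to `url` with Content-Type: application/json
```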

@lritter @BartWronski 7B/13B is a bit lacking in the long run. A 33B model at 4-bit fits into 24GB of VRAM. If you want to try 70B, or more quantization bits, or have less VRAM, you can split the layers and process part of them on the GPU and the rest on the CPU.
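That GPU/CPU split is usually exposed as a single "number of GPU layers" knob (e.g. `n_gpu_layers` in llama.cpp). A toy calculation of how that number might be chosen; the per-layer size here is an illustrative guess, not a measurement:

```python
def gpu_layers_that_fit(vram_budget_gb: float,
                        n_layers: int,
                        layer_size_gb: float) -> int:
    """How many transformer layers can be offloaded to the GPU.

    Layers that don't fit run on the CPU instead -- the split
    described above. Assumes all layers are roughly equal in size.
    """
    fit = int(vram_budget_gb // layer_size_gb)
    return min(fit, n_layers)

# Illustrative numbers for a 70B model at ~4-bit: ~80 layers,
# roughly 0.5 GB per quantized layer (varies by quant scheme).
print(gpu_layers_that_fit(24.0, 80, 0.5))  # 48, the rest on CPU
```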
@wolfpld @lritter I actually have an A6000 48GB GPU in my PC now, so I can try even larger models. (The work one has a 4090, so "only" 24GB, but maybe I could play with local LLMs for work coding; understandably, so far I could not use any service due to internal code and company policies.)
@BartWronski great post! About 50% of your use cases overlap mine (and most of the other 50% are ones I should probably consider...)
@BartWronski good post! A lot of your uses overlap with mine.
@aras @BartWronski What are you people doing to need to write complex regexes and ffmpeg scripts so often!?
@resistor @BartWronski my case is less ffmpeg, and more often "I want this 10-line throwaway script to do something". And since I don't need that every day, I don't have enough knowledge of (js/python/perl/css/bash/whatever) to remember how exactly to do it.

@resistor @aras I work with images and often videos, do a lot of presentations and papers which need videos, my hobbies revolve around images and music and recently involve recording videos for social media :)

And for regexes, it's probably highly domain-specific. There are people who write them every day; I do it just once per month/two months/quarter, which is just enough time to always forget them.

@BartWronski @aras I'm constantly trying to figure out what I'm missing when others are speaking so enthusiastically about using LLMs in their workflows.

I occasionally make an ffmpeg invocation or a regex, but usually of the very simple variety (mix this video with that audio, replace this substring with that substring), which I've done often enough to memorize. Anything more complex than that comes up less often than every six months for me, so I have yet to have a chance to even try it out.
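For reference, the "mix this video with that audio" case is a one-liner; here it is built as an argv list in Python so the flags are easy to inspect (file names are placeholders):

```python
import subprocess

def mux_video_audio(video: str, audio: str, out: str) -> list[str]:
    """ffmpeg argv to replace a video's audio track without re-encoding video.

    -map 0:v takes video from the first input, -map 1:a audio from
    the second; -c:v copy avoids re-encoding the video stream;
    -shortest trims the output to the shorter of the two inputs.
    """
    return ["ffmpeg", "-i", video, "-i", audio,
            "-map", "0:v", "-map", "1:a",
            "-c:v", "copy", "-shortest", out]

cmd = mux_video_audio("clip.mp4", "music.mp3", "mixed.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```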

@BartWronski @aras I don't really have use for most of the NLP-ish ones. I'm not reading or writing academic papers, and I've never wished I had a summary of one.

It is interesting to see that you get use out of it for language learning. I usually have good results using Google for that, but I suppose I could try an LLM for it.

The best use case I've found so far is that ChatGPT 3.5 can explain my daughter's fourth grade math problems, as long as I'm very insistent that it not use algebra.

@resistor @aras The point of my post was not to show that those are the only use cases but rather that for most things I do daily, I found some use that makes them more enjoyable and easier. :) Often very unrelated ones.

You do different things; maybe they can also benefit from those. Or maybe not, but it's worth considering as an additional tool available. :)

And at the very least, it's worth not dismissing other people's use cases and the usefulness they find.

@BartWronski @aras I’m looking for ideas to try, since I don’t find it useful today.

Like I said, the language learning one is something I’ll have to try.

@BartWronski It’s a little disingenuous to suggest that anyone who doesn’t like it hasn’t tried it. I’ve tried it extensively and found its use for coding to be a net decrease in productivity, as it was too often subtly wrong: bugs that took longer to track down than the time the AI saved.

It also might copy GPL-licensed code I’m not allowed to use, if asked for something specific enough.

@BartWronski really enjoyed your post - those are actually useful use-cases I can relate to!
@BartWronski I use it to rewrite paragraphs but don't use it to "highlight and explain mistakes." Thanks a lot for the tip!
@BartWronski I use ChatGPT to generate LaTeX too.
But I still use the free one (3.5), mostly to help explain linear algebra proofs, and sometimes it is hilariously wrong. It got the FOIL method wrong yesterday. Thankfully it can usually be corrected, or at least points in the right direction.

@BartWronski Thanks for sharing your views.

I used to be more anxious about generative AI thanks to startups' annoying and apocalyptic marketing, plus all the lazy takes from LinkedIn gurus that still invade my TL.

But now what makes me anxious is the artists; someone close to me got laid off from a studio and now depends on gigs, and we just don't know if his job will still be a thing in the next few years.

Also, I worry about outsourcing, which in this case impacts developers here too.

@Andre_LA I don't want to be the bearer of bad news, but gigs won't be sustainable long-term, AI or not.
It started a while ago, but even AAA game studios already outsource whole levels to developing countries for pennies, and it will be a race to the bottom - the nature of capitalism...

Making illustrations will become like music, so primarily the domain of a) high-profile super talent, b) people doing it as a hobby, and c) things that need in-person presence, like live gigs - for visuals, e.g., tattoos.

@BartWronski this was insightful, thank you! I haven't used it since the early days so I should try it again.

Hilariously (given my employer) we are not allowed to use it in any capacity at work :P

@longbool we can use it as long as we don't input any company data or code there - and obviously have to use our judgement :) NVIDIA has a great culture of trust in my experience so far
@longbool @BartWronski Huh. Even EA has a GPT instance hooked up to Slack that you're allowed to use for work.
@TheIneQuation @longbool @BartWronski you're not allowed to use its output for anything that will be shipped, though. Intellectual property of AI output is still an open issue.