Overall, I found this experience incredibly fascinating.

GPT-4 is pretty amazing, but also definitely not perfect. I hit a few stumbling blocks that I don't think it could have solved on its own, if I hadn't already had a lot of prior understanding. This is probably because programming with Tor and the Python stem library is incredibly esoteric, so there's not a lot of information about it in its training data. I don't think it would have had these problems with something popular like React, for example.

But I'm still super impressed. I didn't actually write ANY of this code myself, I just prompted GPT-4 to write it for me, and to tweak it as necessary, and I copied and pasted it into my editor to try it out.

Had I not used GPT-4, I would have ended up with something quite similar. But it probably would have taken me an extra hour or so, and definitely longer had I not already been familiar with this stuff.

@micahflee
I know very little about programming and coding, but I still found this thread very interesting.

@micahflee Thanks for taking the time to document this experience. Absolutely fascinating.

PS I also appreciate your tone -- not sensationalist in either direction (i.e. just-a-fancy-autocomplete dismissive OR end-of-all-jobs-aarrgghh panic). Just a well balanced review of an incredible new TOOL available to us.

@micahflee Had a similar experience recently: I had GPT-3 write a bunch of regexes. I could have written them myself, and as it was I had to tweak/fix/try-again for many of them, but it was still like 20x faster. I also asked it for a common-ish but not ubiquitous section of a legal contract, and it gave a perfectly serviceable version. I'm not sure how many white-collar employees it's going to replace, but if I were Stack Exchange or Law Insider I'd be worried.

@micahflee

#aiDrivenDevelopment is a fascinating paradigm shift. I can definitely see it being more revolutionary than #LSP. I just hope that large language models get efficient enough to run sufficiently on consumer hardware (either that, or have an #APU (#AI processing unit) become part of our computers).

I am not scared of AI, but I am scared of having a #Microsoft situation, where the only viable IDEs are closed source.

Hopefully we can get something like LSP and be able to bring this functionality to #Vim and #emacs, as I will always argue that those are better than any IDE out there.

@zbecker @micahflee There is a package, gptel, which puts GPT into an Org mode buffer for you.

@jayalane @micahflee

I am not surprised someone has done that already, but I assume it requires paying OpenAI for API access.

I am a bit of a zealot, and until I can run the LLM on my own PC, I am not going to use it when I program.

@micahflee I'm a software engineer working with large distributed systems, and I use a combo of ChatGPT (best coder), Bing Chat (access to the current internet, with sources), and GitHub Copilot (reads my code as I type it) every day. It's not always right, like you said, but it gets most of it and you just have to fill in the holes. Or if Bing is wrong, Copilot will fix it lol. I get so much done so fast. And having it for Bash is a must.
@eigenman @micahflee I have been using GPT-4 a lot the last couple of weeks. It seems like it saves me time, but if I weren't an expert in the types of systems I am writing, it wouldn't be a success. For this Tor programming in Python, would someone who never knew about it before have been able to quickly make good working code this way?
@jayalane @micahflee Yeah, I agree, you still need to know what you are doing for the most part. It's more like a super IntelliSense. The nice thing with computers is that it's easy to test the solution just by running it. Although there are some gotchas even in that scenario: things may look like they're working when there are actually errors.
@eigenman @micahflee I had it generate a beautiful function to wrap a REST API; it got all the HTTP client stuff right but never used the input parameter to build the REST call. It did fix it when I pointed that out, but it's an odd sort of error.
@jayalane @micahflee I've fed it LeetCode-style programming questions and it gets the logic almost perfectly; however, it seems to mix languages. It ends up being pseudocode, correct for the most part. But when I point this out, it usually fixes it.
@jayalane @micahflee Here's a good use case I just did. Who hates regular expressions? I do lol. I asked ChatGPT to write me a regex that will find the term "error" or a number in the range between 400 and 600.
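For what it's worth, a regex along those lines might look like the sketch below (my own guess at the intent, assuming the 400–600 range means whole numbers, inclusive, matched as standalone tokens):

```python
import re

# Matches the word "error" (case-insensitive), or a whole number
# from 400 to 600 inclusive: [45]\d{2} covers 400-599, plus 600.
pattern = re.compile(r"\b(?:error|[45]\d{2}|600)\b", re.IGNORECASE)

for line in ["disk ERROR on /dev/sda", "status 404", "status 601", "all good"]:
    print(line, "->", bool(pattern.search(line)))
```

The word boundaries (`\b`) keep it from matching inside longer numbers like 4000.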
@eigenman @micahflee I had it successfully tell me what an exit code of 137 from a Go process means, and the syntax for a literal map of strings to int64.
@jayalane @micahflee Nice, yeah, a game changer for sure. I've always wanted to have the Star Trek computer and this is almost it lol😍
@eigenman @micahflee I did try and fail to get it to compute some very large Ackermann function values. It refused, saying they were too large.
@eigenman @micahflee I read the whole thread after finding it, and I would say no; your own expertise made it successful. I have also seen per-language effects, where Go code was pretty good but Java code was concatenating strings to build SQL à la Bobby Tables. A coworker suggested that Java Stack Overflow is full of such low-quality answers.
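To illustrate the Bobby Tables point: string-concatenated SQL lets hostile input escape the query, while parameterized queries bind it safely as data. A minimal sketch using Python's stdlib sqlite3 (not the Java code the poster saw, just the same contrast):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")
conn.execute("INSERT INTO students VALUES ('Alice')")

name = "Robert'); DROP TABLE students;--"

# Vulnerable pattern (what the concatenated Java code did):
#   query = "INSERT INTO students VALUES ('" + name + "')"

# Safe pattern: let the driver bind the value as a parameter.
conn.execute("INSERT INTO students VALUES (?)", (name,))

# The hostile string is stored as plain data; the table survives.
print(conn.execute("SELECT name FROM students").fetchall())
```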

@micahflee

"I didn't actually write ANY of this code myself"

Good point. So is it a copyright violation against the original authors?

Try asking GPT-4 for a list of *all* the humans who own copyright to the code that it provides you with. Code with free-software (copyleft) licences *is* copyrighted. And even permissively licensed code requires attribution.

Of course, GPT-4 will quite likely lie to you like Trump on steroids. Unless LLMs have copyright traceability?