I’ve been building Discord bots and other LLM-powered software recently, and doing some things that I think are fairly new strategies for getting value out of LLMs.
I think the future of LLM-powered software, at least the kind I’m interested in right now, is gonna be “LLM social engineering.” Prompt engineering as first conceived is a real thing, but it’s pretty weak — it can only do so much, because you’re depending on one perfect comprehension of one prompt every time. Not reliable. It’s the butt of many jokes. Basically you’re depending on the LLM itself to do the thought-structuring that leads to a good outcome. But efficient thinking about a specific problem is usually best framed as specific logical strategies interspersed with more organic decision-making and observation.
So, connecting multiple LLMs (which are, of course, highly non-discrete, vector-spacey things) into a logical, discretely structured network, with a different specific prompt shaping each one’s behavior, is going to be a far more efficient solution than handing a higher-end LLM thread one comprehensive prompt and hoping it always gets the logical structuring right. Nah — you can make that structure implicit in the network itself, save compute, and be more reliable.
It’s basically doing end-product Mixture of Experts stuff.
But it’s also very analogous to organization / network / social engineering.
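A minimal sketch of what this network-of-narrow-prompts pattern could look like. Everything here is hypothetical: `call_llm` stands in for whatever client you actually use, and the three-node shape (classifier → router → reviewer) is just one example of encoding the logical structure in code rather than in one giant prompt.

```python
def call_llm(system_prompt: str, user_input: str) -> str:
    """Stub standing in for a real LLM API call, so the sketch runs.
    Swap in your actual client here."""
    if "Classify" in system_prompt:
        # Fake classifier behavior for demonstration purposes.
        return "QUESTION" if "?" in user_input else "CHAT"
    return f"reply({user_input})"


def pipeline(user_message: str) -> str:
    # Node 1: a cheap classifier with exactly one narrow job.
    intent = call_llm(
        "Classify this message as QUESTION or CHAT. Reply with one word.",
        user_message,
    )
    # Discrete routing logic lives in code, not in a mega-prompt.
    if "QUESTION" in intent:
        draft = call_llm("Answer the question concisely.", user_message)
    else:
        draft = call_llm("Reply casually, like a friend.", user_message)
    # Node 2: a reviewer pass before anything ships.
    return call_llm("Tighten this reply without changing its meaning.", draft)
```

The point is that each node's prompt only has to be understood well enough to do one small thing, and the if/else carries the structure that a single comprehensive prompt would otherwise have to reproduce reliably on every call.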