6 Followers
47 Following
21 Posts
Prompt engineer, FS
@baldur @ljrk LLMs alone, yes, but the idea of combining and constraining them with knowledge graphs is quite exciting.
@NoraReed so true. GPT-4 can do it a bit with the right prompt, but you’re always fighting the RLHF
@jasongorman I guess it depends how broad your definition is. Many of the patterns I build are generalisable to multiple domains and often co-developed with SMEs in those domains
@hobs totally.
'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says

The incident raises concerns about guardrails around quickly-proliferating conversational AI models.

@baldur @hobs Every AI company, including EU ones, has this very same problem; most LLMs are trained on The Pile. The OpenAI developer API has much better terms than ChatGPT (30-day retention and no retraining on your prompts, same as the Microsoft Azure OpenAI Service). My personal view is that GPT-4 especially, with its extensive RLHF and superior reasoning capability, is considerably less dangerous than open-source models, e.g. the GPT-J-powered chatbot that was linked to a suicide.
@baldur @hobs not into selection-inference frameworks or Toolformer? LLMs are much better when augmented with other tools, and by making them behave more agentically. Better performance and more ‘explainable’ in a human sense, but more worrying from an alignment POV.
@drewharwell fully expect my job as a prompt engineer in its current form to be obsolete in 3 years. Hell, the amount you have to prompt engineer ChatGPT compared to, e.g., text-davinci-002 is already much less. But by then there will be hundreds of other jobs in this space that don’t even have names yet.
@simon @dahukanna @timrburnham my guess would be QNRs. Graph + embedding. https://www.fhi.ox.ac.uk/qnrs/
Future of Humanity Institute

FHI is a multidisciplinary research institute at Oxford University studying big picture questions for human civilization.

@emilymbender @danmcquillan agree with most of the broader societal points. But you can make LLMs cite their sources by creating embeddings and running a similarity search, passing the results as context in the LLM prompt, then matching that context back to the original text. You can see where the data came from, and the inference the model made. Relying on the base knowledge in the model is not necessary when you have a ground-truth corpus stored as vectors.
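A minimal sketch of that retrieval-with-citations pattern. Everything here is a hypothetical stand-in: the corpus, the document IDs, and the bag-of-words "embedding" (a real system would use a proper embedding model and a vector store); the point is only to show how each retrieved passage keeps a link back to its source.

```python
# Toy retrieval-augmented prompting: embed a ground-truth corpus,
# find the passage most similar to the question, and build prompt
# context that carries the source id of the original text.
import math
from collections import Counter

# Hypothetical ground-truth corpus; ids link back to source documents.
corpus = [
    {"id": "doc-1", "text": "GPT-J is an open source language model"},
    {"id": "doc-2", "text": "Knowledge graphs constrain language model output"},
    {"id": "doc-3", "text": "Embeddings enable similarity search over documents"},
]

def embed(text):
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

index = [(doc, embed(doc["text"])) for doc in corpus]

def retrieve(question, k=1):
    # Rank corpus passages by similarity to the question embedding.
    q = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

hits = retrieve("how does similarity search over embeddings work")
# The context handed to the LLM is tagged with source ids, so any
# answer generated from it can be traced to the original text.
prompt_context = "\n".join(f'[{d["id"]}] {d["text"]}' for d in hits)
print(prompt_context)
```

The same shape scales up directly: swap the toy `embed` for an embedding API and the list scan for a vector database, and the source-id tagging stays identical.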