Nils Durner

293 Followers
72 Following
43 Posts
Software Engineer with a particular interest in #ESignature #eIDAS #PDF #WacomForBusiness #gpt4. Mostly using #cpp, #csharp and #sql these days.
GitHub: https://github.com/ndurner
Thoughts: https://ndurner.github.io
I do use it and I do like it. There are cases where I found it odd/not working well, but can't even remember which. @mroach

My #GenAI workbenches for OpenAI, Anthropic and Amazon #Bedrock are a pastime that has kept me interested for quite some time now, so I'm confident in sharing them more broadly: these tools serve as my personal Swiss Army knife and intelligent utility for diverse applications across work, play, and vacation.

🛠️ Workbench variants
on OpenAI: https://huggingface.co/spaces/ndurner/oai_chat
on Anthropic: https://huggingface.co/spaces/ndurner/claude_chat
on Amazon Bedrock: https://huggingface.co/spaces/ndurner/amz_bedrock_chat

Distinctive features:
💫 ahead of the curve: custom prompting and occasional pre-release model access, with the reliable GPT-4 "classic" available as a fallback, elevate performance beyond what's possible with ChatGPT and similar services.

🌞 accessible: allows access to Claude 3 in the EU, and unlocks higher usage limits (subject to #AWS, OpenAI or Anthropic agreements). Better user experience than handling Google Colab notebooks.

🌟 cost effective: the pay-per-use model using your own API key allows for cost distribution across team members, which can be more economical than individual flat-fee subscriptions.

🌄 vision capabilities to discuss images and photos, a feature not commonly available in typical #LLM Playgrounds.

💡 educate about & experiment with #GenerativeAI, and experience its job-transformative potential beyond #ChatGPT

✨ bonus features: file upload including basic Word file reading, history export/import for reuse or sophisticated prompting techniques, file download, reproducibility, mobile support, custom system prompts for AI personas, … and perhaps more to come.

🔒 mitigation against the AI Assistant snooping attack by Roy Weiss et al.

🚀 self-hosted deployment option or ready-to-run hosted variant.

(🌖 frozen models with static world-knowledge, not internet-enabled like #PerplexityAI)
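Among the bonus features above is history export/import. As a purely hypothetical sketch (the workbenches' actual on-disk format is not specified here), a JSON round-trip of a role/content message list might look like:

```python
# Hypothetical sketch: chat-history export/import as JSON, assuming a
# simple list of {"role": ..., "content": ...} messages. This is an
# illustration, not the workbenches' actual file format.
import json

def export_history(history, path):
    # Write the conversation out as human-readable, diff-friendly JSON.
    with open(path, "w", encoding="utf-8") as f:
        json.dump(history, f, ensure_ascii=False, indent=2)

def import_history(path):
    # Read a previously exported conversation back in.
    with open(path, encoding="utf-8") as f:
        history = json.load(f)
    # Basic sanity check before feeding the history back to a model.
    assert all({"role", "content"} <= set(m) for m in history)
    return history
```

Keeping histories as plain JSON makes them easy to edit by hand, which is what enables the "sophisticated prompting techniques" reuse mentioned above.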

These tools are shaped around my personal and local community's use. I am open to suggestions however, and a very modest write-up to get started with some of the more advanced features is here: https://lnkd.in/eS7xvEGk.

As a Swabian from #TheLänd, I really only pay if I am really, really convinced. 👛🔒,💸🙅🏻‍♂️💯. The professional-grade services underlying these tools justify the effort and time for any dedicated professional, as opposed to the consumer-grade offerings that may simply refuse to work when demand is high and generally only give access to technology that lags well behind today's standards. "You get what you pay for", as the saying goes.
Through the underlying frontier language models, these AI tools can almost be likened to a young apprentice: capable of conceiving fresh & brilliant ideas and eager to tackle the tedious tasks. Yet, it remains crucial for the Maestra or Maestro to diligently check and co-iterate on results.

Nota bene, thus: This tool is an ongoing experiment, provided with no warranties. You are solely responsible for its use. Follow the science and share your experiences.

If you are aware of similar projects or have insights to share, I would appreciate hearing about them.

🔔 Keeping up
on Hugging Face: https://huggingface.co/ndurner
on GitHub: https://github.com/ndurner/

🛫 Recommended high-level background
"Co-Intelligence" by Ethan Mollick: https://www.linkedin.com/posts/emollick_co-intelligence-by-ethan-mollick-9780593716717-activity-7183949270348124160-cR1J

Healthcare AI Build vs. Buy: Lessons on building genAI solutions in house: https://elion.health/resources/webinar-ai-build-vs-buy

AI Index report 2024 by Stanford HAI: https://aiindex.stanford.edu/report/?sf187708151=1

🤿 Recommended deep dive
"Writing Principles for Task-Tuned Prompt Engineering" by Karina Nguyen: https://www.youtube.com/watch?v=6d60zVdcCV4

Anthropic Prompt library: https://docs.anthropic.com/claude/prompt-library

OpenAI Cookbook: https://cookbook.openai.com/

🔬 Recommended research preprints
Sparks of Artificial General Intelligence: Early experiments with GPT-4: https://arxiv.org/abs/2303.12712

A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models: https://arxiv.org/pdf/2401.01313.pdf

A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications: https://arxiv.org/pdf/2402.07927.pdf

An Empirical Categorization of Prompting Techniques for Large Language Models: A Practitioner's Guide: https://arxiv.org/pdf/2402.14837.pdf

Lost in the Middle: How Language Models Use Long Contexts: https://arxiv.org/abs/2307.03172

Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks: https://arxiv.org/pdf/2404.06480.pdf

OpenEQA: Embodied Question Answering in the Era of Foundation Models: https://open-eqa.github.io/assets/pdfs/paper.pdf

AI Attribution in Art offers perspectives for tech workers pondering #GenAI disclosure: from subtle references to detailed disclosures, artists are using various approaches to acknowledging AI contributions in their work.

https://www.linkedin.com/pulse/ai-attribution-art-perspectives-software-engineers-nils-durner-mx68f

My article examines:
🔍 The "AI Assisted" label used by curator Prof. Janet Bellotto

🔍 Segregated disclosure at the Singapore Art Museum

🔍 Integrated disclosure by artist Daito Manabe

🔍 Comprehensive disclosure practices by:

🌠 the Italian art studio fuse* and

👨‍🎨 artist and writer Prof. Lev Manovich

As AI becomes increasingly interwoven with our development lifecycle, tech stacks and products, the #ModernArt community's thoughtful exploration of authorship, originality, and ethical AI use may offer valuable insights for our own practices. While it's unlikely that there's a one-size-fits-all solution, these examples provide food-for-thought as we consider the nuances of transparent and responsible AI attribution in our own fast-paced industry.

Whether you're a software engineer or in any other role shaping tech solutions, I invite you to explore these perspectives and consider how they might inform our work.

#GenerativeAI #SoftwareEngineering #SolutionConsulting #DigitalArt


There are actually two Nano models, according to their paper. Bard is based on (a fine-tuned version of) the Pro model, with an upcoming premium tier featuring the (yet unavailable) Ultra model size.

#Bard users have noticed a fixed conversation preamble prepended to each chat session. It would make sense to hide that behind a dedicated (internal?) Bard Assistant API rather than exposing the full Google Vertex AI API.
I don't know for certain, though.

@matthew_d_green

"previous gen macbook" makes me wonder if you're running the full model or a quantized version. Also, which runtime do you use - llama.cpp?

Thanks for the real-world report!

@Elucidating

The perceived RSA-3000 crypto mandate by the German Federal Office for Information Security (BSI) has been reported on by @heiseonline, highlighting that:
💡 a BSI speaker confirmed that this is a recommendation, not a mandate
💡 the TLS certificate of the #BSI website still uses RSA-2048 as well
💡 the wording, especially across BSI publications, is confusing and could be misleading

This reporting¹ is in the context of TLS (publications TR-02102-2, TR-03116-4), but the same issues are present with the general "Technische Richtlinien" document on cryptographic algorithms and key lengths (TR-02102 part 1), which is cited by sources like keylength.org, often without the nuance from the preamble, such as:
👩🏻‍⚖️ the recommendations do not preempt regulatory approval processes
🧑🏻‍💻 they target developers planning new systems
💫 they may exceed the stated goal of achieving 120 bits of security

The Heise article¹ concludes that "A #signature algorithm for TLS needs to be secure for only as long as the certificate is valid, which is typically one year." It also notes that the US National Institute of Standards and Technology (NIST) "considers #RSA with a key length of 2048 bits to be sufficiently secure for signatures until the year 2030".

¹ Heise article (German): https://www.heise.de/news/BSI-Verwirrung-um-Anforderungen-an-Schluessellaengen-fuer-TLS-Verbindungen-9596072.html

BSI: High requirements for RSA key lengths in TLS connections

The BSI demands extremely large RSA key lengths for TLS connections. Spokespersons put this into perspective, saying these are recommendations. Checklists contradict this.

heise online
Yes, I have used doxygen throughout my career for exactly that. Would perhaps use GPT-4 Turbo in conjunction these days, corporate policy permitting. @Chaos_99
Nothing ready-made, but doxygen used to support XML as an output format. (I used this long ago to auto-generate a C API from a C++ API.) @darkcisum
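For reference, enabling doxygen's XML output is a matter of a couple of configuration switches; a minimal Doxyfile fragment (the option names are standard doxygen configuration, the paths are placeholders) could look like:

```
# Doxyfile fragment: emit machine-readable XML instead of HTML
GENERATE_XML  = YES
XML_OUTPUT    = xml     # written under OUTPUT_DIRECTORY
GENERATE_HTML = NO
INPUT         = src     # placeholder: directory with the C++ headers
RECURSIVE     = YES
```

The resulting XML files carry the full parsed declarations and doc comments, which is what makes downstream tooling (or an LLM pass) over the API surface practical.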

1. Yes, present

2. Works as a second dive computer: Apple points out in various ways that the Ultra should not be used on its own, e.g. with:
"Always use a secondary depth gauge and timer/watch, as well as decompression tables."
https://support.apple.com/en-bn/HT213334 (at the bottom; "Learn more about diving with Apple Watch Ultra")

Also reassuring that a messenger + Apple Pay (+ a mini phone) etc. are with you on your body.

3. No air integration, AFAIK

@fluepke

Use the Depth app on Apple Watch Ultra

Learn how to use the Depth app on Apple Watch Ultra during underwater activities to measure water temperature, duration, and depth to 130 feet (40 meters).

Apple Support

I don't recommend relying on ChatGPT Plus for work: when the #GPT-4 message limit strikes, you'll be divorced from your toolchest for the next several hours. What I use instead is the #OpenAI playground (write-up for a colleague here: https://ndurner.github.io/chatgpt-vs-openai-playground), with this frontend tool that makes it a little more convenient to use: https://ndurner.github.io/geppettos-workbench. The latter also establishes a more suitable-for-work system prompt:
---
You are a helpful assistant. Answer faithfully and factually correct. Respond with "I don't know" if you are unsure. Answer user inquiry in the context of the conversation that preceded it.
---
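For illustration, the prompt above can be wired up programmatically; a minimal sketch using the OpenAI Python SDK (assuming the `openai` package >= 1.0 and an `OPENAI_API_KEY` in the environment; the model name is just an example, not something the tools above prescribe):

```python
# Sketch: using the system prompt above with the OpenAI Chat Completions API.
SYSTEM_PROMPT = (
    "You are a helpful assistant. Answer faithfully and factually correct. "
    "Respond with \"I don't know\" if you are unsure. "
    "Answer user inquiry in the context of the conversation that preceded it."
)

def build_messages(history, user_msg):
    # The system prompt always leads; prior turns preserve conversational context.
    return ([{"role": "system", "content": SYSTEM_PROMPT}]
            + list(history)
            + [{"role": "user", "content": user_msg}])

def ask(history, user_msg, model="gpt-4"):
    # Imported lazily so build_messages() works without the SDK installed.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,
        messages=build_messages(history, user_msg),
    )
    return resp.choices[0].message.content
```

Re-sending the system prompt with every request is what keeps the persona stable across a long conversation, unlike a one-off instruction in the first user turn.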

At least that's the best I have come up with thus far. I would love to hear about possible improvements that make it more like ChatGPT, but without the rampant confabulation, aside from the leaked "Sydney" prompt.

@imrehg

ChatGPT vs. OpenAI Playground

I got the question what the difference between the OpenAI Playground “in chat mode” and chat.openai.com is. So for everybody’s benefit, my answer reproduced below. If you’re not familiar at all, see this screenshot for what I am talking about:

Nils Durner’s Blog