
I prompt injected my CONTRIBUTING.md – 50% of PRs are bots

https://lemmy.world/post/44572752

  • Aggregation of multiple engines
  • Per-engine weight control
  • Good UX
  • Filtering of bad domains from the search results
  • More generally, very customizable

Fair enough.

I decided against web/network-based password managers for my personal needs, since the additional attack surface is a concern. A KeePass database file synced across machines strikes a good balance for me (it requires a password + keyfile to open). It’s also simple to back up and protect.

So yeah, for your use case, I’d recommend Aegis Authenticator.

Aegis Authenticator | F-Droid - Free and Open Source Android App Repository

Free, secure and open source 2FA app to manage tokens for your online services

debian-live-config 5.0.0

https://lemmy.world/post/44095777


I just released v5.0.0 [https://github.com/nodiscc/debian-live-config/releases/tag/5.0.0] of my preconfigured Debian system for personal computers/workstations. It’s “just” a plain Debian system trying to stay close to upstream, with saner (in my opinion) config defaults and default package selection. A 10-minute offline installation provides a ready-to-use system with a lean XFCE-based desktop and applications for common use cases. I’ve used it as a daily driver for years, and also to “fix” machines I don’t really want to support except for the occasional dist-upgrade every few years. This release is based on Debian 13. The live system mode also works but is not the primary use case. The full build tooling is provided for those who want to build their own custom [https://debian-live-config.readthedocs.io/en/latest/custom.html] Debian-based system.

No, I’m not interested in a password manager, thank you

Ok. But since you already use a password manager (right?), why not use its built-in TOTP management? (I use KeePassXC on desktop and KeePassDX on Android.) Why do you need yet another separate app?

If I really had to, I’d recommend Aegis.

But I’ll still recommend using a password manager.

There are better alternatives: podman is daemonless and rootless by default, comes with a docker-compatible CLI, and has far better container network implementations. The only reason to keep using Docker nowadays is if you have a lot of legacy apps that depend on Docker-specific features (e.g. ones that require rootful containers)
  • Small 4B models like gemma3 will run on anything (I have it running on a 2020 laptop with integrated graphics). Don’t expect superintelligence, but it works for basic classification tasks, writing/reviewing/fixing small scripts and basic chat
  • I use github.com/ggml-org/llama.cpp in server mode pointing to a directory of GGUF model files downloaded from huggingface. Then use it from the built-in web interface or API (I wrote a small assistant script)
  • To load larger models you need more RAM (preferably fast VRAM/GPU, but DDR5 on the motherboard will work, just noticeably slower). My gaming rig with a 16GB AMD 9070 runs 20-30B models at decent speeds. You can grab quantized (lower precision, lower output quality) versions of those larger models if the full-size/unquantized models don’t fit. Check out whatmodelscanirun.com
  • For image generation I found github.com/vladmandic/sdnext which works extremely well and fast with Z-Image Turbo, FLUX.1-schnell, Stable Diffusion XL and a few other models
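The “small assistant script” part is easy to reproduce: llama.cpp’s server exposes an OpenAI-compatible endpoint, so plain stdlib HTTP is enough. A sketch, assuming llama-server is listening on its default port 8080 (the host/port and temperature are my choices, not anything the server mandates):

```python
import json
import urllib.request

# Assumption: llama-server is running locally, e.g. `llama-server -m model.gguf`
LLAMA_SERVER = "http://127.0.0.1:8080"

def build_chat_request(prompt: str, system: str = "You are a helpful assistant.") -> dict:
    """Build an OpenAI-style payload for llama-server's /v1/chat/completions endpoint."""
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,  # low temperature: better for classification-style tasks
    }

def ask(prompt: str) -> str:
    """POST the prompt to the local server and return the model's reply text."""
    req = urllib.request.Request(
        f"{LLAMA_SERVER}/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# ask("Classify this commit message as fix/feature/chore: 'update deps'")
# (requires a running llama-server; returns the reply string)
```

That’s the whole integration surface; anything that speaks the OpenAI API can point at it.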
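The “will it fit” question from the list above is mostly arithmetic: parameter count times bits per weight, plus headroom. A back-of-envelope sketch, where the bits-per-weight figures for common GGUF quants and the flat 1.2x headroom for KV cache/activations are my rough assumptions:

```python
# Approximate average bits per weight for common llama.cpp GGUF quants (my estimates).
GGUF_BITS_PER_WEIGHT = {"F16": 16, "Q8_0": 8.5, "Q5_K_M": 5.7, "Q4_K_M": 4.8}

def estimate_gib(params_billions: float, quant: str, overhead: float = 1.2) -> float:
    """Approximate GiB needed to load the weights, with assumed runtime headroom."""
    weight_bytes = params_billions * 1e9 * GGUF_BITS_PER_WEIGHT[quant] / 8
    return weight_bytes * overhead / 2**30

for quant in ("F16", "Q8_0", "Q4_K_M"):
    print(f"20B @ {quant}: ~{estimate_gib(20, quant):.1f} GiB")
```

A 20B model at Q4_K_M comes out around 13 GiB here, which is why it fits a 16GB card while the unquantized version is nowhere close.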

As for the prices… well, the rig I bought for ~1500€ in September is now up to ~2200€ (a once-in-a-decade investment). It’s not a beast but it works; the primary use case was general computing and gaming, and I’m glad it works for local AI, but costs for a dedicated, performant AI rig are ridiculously high right now. It’s not economically competitive yet against commercial LLM services for complex tasks, but that’s not the point. Check old.reddit.com/r/LocalLLaMA/ (yeah, reddit, I know). Think 10k€ of hardware to run ~200-300B models, not counting electricity bills.

Mattermost is no longer Open-Source

https://lemmy.world/post/43061242


Any recommendations for a good XMPP web client?

See my requirements in other comment.

I’m in the same boat, running a Gitlab Mattermost instance for a small team.

GitLab has not yet announced what will happen with the bundled Mattermost, but I guess it will be dropped entirely, or be hit by the new limitations. What will hit us the hardest is the 10000-most-recent-messages limitation: anything older than that will be hidden behind a paywall, including messages sent before the new limitations come into effect. Borderline ransomware if you ask me.

I know there are forks that remove the limitation; I may end up using one of those if the migration path is not too rough.

I used to run a Rocket.Chat instance for another org, became open-core bullshit as well. I’m done with this stuff.

I have a personal Matrix + Element instance that barely gets any use (but allows me to get a feeling of what it can do) - I don’t like it one bit. The tech stack is weird, the Element frontend receives constant updates/new releases that are painful to keep up with, and more importantly, UX is confusing and bad.

So I think I’ll end up switching this one for an XMPP server. Haven’t decided which one or which components around it precisely. I used to run prosody with thick clients a whiiille ago and it was OK.

My needs are simple: group channels, 1-to-1 chat, posting files to a channel, ideally temporary many-to-many chats, and a decent web UI.

Voice capabilities would be a bonus (I run and use a mumble server and it absolutely rules once you’ve configured the client, but it doesn’t integrate properly into anything else, and no web UI), as well as some kind of integration with my Jitsi Meet instance. E2E encryption nice but not mandatory. Semi-decent mobile clients would be nice.

For now, wait and see.