Scale AI's Public Document Blunder: Security Is a Mindset, Not a Checkbox

So, news dropped: Scale AI had thousands of confidential client documents (Meta, Google, xAI) publicly exposed via shared Google Docs links. It sounds like a rookie mistake, but this is a top-tier AI vendor with $14B in backing, and the failure was pure high-school sloppiness.

This isn't just embarrassing exec-level fumbling; it's a stark reminder: security isn’t optional, it’s foundational. No amount of encryption or AI wizardry matters when your Google Docs are visible to the world.

I’ve been around this block: permissions, configuration, automation. Miss one piece and your whole trust chain breaks. Scale AI’s quick lock-down after the Business Insider exposé was reactive, not proactive, and that scares me. A multi-billion-dollar AI company shouldn’t be discovering holes in its Google Drive sharing settings from a news report.

My takeaway: if you’re building serious software, secure EVERY layer, from CI to shared docs. Otherwise, you’re one misclick away from a reputation meltdown.

#OpLog Day 10
GPT-4.1 Just Dropped, and It’s Pretty Wild

So, OpenAI just rolled out GPT-4.1 to all paid ChatGPT users, and honestly, it’s a solid step up from GPT-4o. The model now handles coding tasks way better and can digest way longer inputs — like, 1 million tokens long. That’s huge if you’re working with big projects or want to feed it entire docs.

But here’s the thing: while this is exciting, it’s also a bit scary. The pace of AI upgrades is insane, and we’re basically putting all our eggs in a few big AI platforms. Sure, they’re powerful and useful, but it raises questions about privacy and who really controls this tech.

As someone who’s seen a lot of tech trends come and go, I think it’s cool to have these tools at our fingertips — just gotta keep in mind the bigger picture. Innovation is great, but we shouldn’t forget the ethical side of things as AI gets smarter and more embedded in our lives.

#OpLog Day 9
The False Urgency of Tech Stacks

In today’s dev circles, choosing the “perfect” tech stack has become an obsession. I’ve seen teams spend weeks debating between Next.js and Svelte, or Go vs Rust—before even writing a single line of meaningful product logic.

Here’s my take: no tech stack saves a bad idea. The real work is in execution, not in your choice of framework. Great products survive refactors, migrations, even rewrites. But they don’t survive indecision.

Pick what you know. Ship it. Optimize later. The best tech stack is the one that lets you move without friction—because clarity beats cleverness every time.

#OpLog Day 7
The Myth of “Clean Code” in Early-Stage Projects

There’s this romanticized notion in tech circles that every project, no matter how small or new, should follow “clean code” principles from day one. But I’ll be honest: I don’t buy it.

When you’re validating an idea—whether it’s a micro-SaaS, an automation script, or an internal tool—clarity of intent matters far more than the elegance of your syntax.

I’ve seen people abandon great ideas because they got trapped in the rabbit hole of “doing things the right way,” following architecture patterns meant for production-scale apps… for a 200-line MVP.

Let’s break it down:



1. You Can’t Refactor What Doesn’t Exist

You don’t need a folder structure that mimics a Fortune 500 monolith. You need proof that your project solves a real problem. Clean code is something you evolve into, not something that should delay your 0.1 release.



2. Readable ≠ Overengineered

Readable code is important. But obsessing over things like SOLID principles, strict domain boundaries, and factory patterns—before your idea even proves traction—is just a form of procrastination dressed as best practice.



3. Your First Draft is Meant to Be Ugly

And that’s okay. Real creativity lives in messy drafts. I write throwaway scripts all the time just to think through a problem. Some evolve into products. Most die peacefully in /tmp.

Clean code has its place. But worshipping it too early kills momentum.



4. Speed is a Feature (at First)

You want to build fast, test fast, and fail fast. That doesn’t mean write garbage—it means prioritize velocity. Clean code can come in iteration two, once you know you’re building something worth maintaining.



So yeah—I don’t write clean code when I’m exploring. I write “clear-enough” code. It’s the code that’s good enough to prove the point and bad enough that I can throw it away guilt-free.

And honestly, I’ve had more success working this way than when I tried to be the Clean Code Cop from line one.



#OpLog Day 9
oplog.isalman.dev
Source: github.com/hotheadhacker/akkoma-blog
Home - Salman's OpLog

OpLog - Salman's programming thoughts and technical opinions

WhatsApp’s “End-to-End Encryption” Isn’t What You Think

WhatsApp proudly advertises its end-to-end encryption (E2EE) as a guarantee that only you and the person you’re communicating with can read your messages. However, the reality is more nuanced.

1. Metadata Exposure
While message content is encrypted, WhatsApp collects metadata—information about who you communicate with, when, and how often. This data can be shared with parent company Meta and, upon request, with law enforcement agencies.

2. Vulnerabilities in Group Chats
Research has highlighted weaknesses in WhatsApp’s group messaging: membership changes aren’t cryptographically authenticated, so whoever controls the server could add unauthorized members to a group chat without participants being able to verify the change.

3. Prekey Depletion Attacks
A study titled “Prekey Pogo” revealed that WhatsApp’s implementation of the Signal protocol is susceptible to prekey depletion attacks. Such attacks can degrade the security of future messages, compromising the intended forward secrecy of E2EE.

4. Message Flagging and Content Review
If a user reports a message, their client forwards the decrypted content of recent messages to WhatsApp for review. No cryptography is broken in the process, but it shows the “only you and the recipient can ever read this” framing has carve-outs.

5. Backup Vulnerabilities
WhatsApp offers encrypted backups, but users must opt in to the feature. Unencrypted backups stored on cloud services like Google Drive or iCloud are accessible to third parties, including the service providers themselves.

6. Exploitation by Spyware
In 2019, the Pegasus spyware exploited a vulnerability in WhatsApp, allowing attackers to install surveillance software on users’ devices. This incident underscores that vulnerabilities can exist, enabling unauthorized access despite encryption claims.

7. Data Recovery Possibilities
Deleted WhatsApp messages can sometimes be recovered through backups or third-party software, challenging the perception that once a message is deleted, it’s gone forever.



#OpLog Day 7

Let’s Encrypt is free, but is it really safe?

Let’s Encrypt has changed the internet—for better and worse. It democratized HTTPS, no doubt. A few lines of shell and your site is green-locked. But we don’t talk enough about its trade-offs. And after self-hosting for years, I’ve seen enough edge cases to have a solid opinion: Let’s Encrypt isn’t as secure as we pretend it is.

Let me explain.

1. Domain validation is just a low bar
Let’s Encrypt only checks if you “own” the domain via DNS or HTTP validation. That’s fine for personal projects, but it opens doors for phishing sites, malicious mirrors, and scam clones to get green-locked with zero scrutiny. Users think a padlock = safe. That’s not true. They just bought a domain, added DNS records, and now they have the same certificate UX as a real banking site.
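You can see how thin the identity layer is with a quick experiment. This is illustrative only (the domain is made up): it self-signs a certificate with the same subject shape a DV issuance produces, which is no organization, no verified identity, just a CN.

```shell
# A DV-style certificate's subject carries nothing but a domain name.
# Mint a self-signed cert locally just to inspect that subject shape:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=totally-legit-bank.example" \
  -keyout key.pem -out cert.pem 2>/dev/null
openssl x509 -noout -subject -in cert.pem
```

Compare that with an OV/EV certificate, whose subject also carries organization (O=) fields the CA actually verified. The browser padlock looks identical either way.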

2. Expiry every 90 days sounds good, until it breaks in production
Short-lived certs are cool in theory. But if you’re running a self-hosted service, you know how brittle the automation can get. One cron job fails, one DNS hiccup, one missed renewal, and your service goes down with an expired-certificate error in every visitor’s browser. I’ve seen open-source dashboards, internal APIs, and even e-commerce sites go down silently because auto-renew went wrong.
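If you do stay on 90-day certs, don’t let a silent cron failure be your first alarm. Here’s a minimal sketch of an expiry guard, assuming GNU date; the cert path and the alerting hook in the comments are placeholders, not real commands.

```shell
#!/bin/sh
# Minimal expiry guard: compute days left from a cert's notAfter date
# and alert well before the deadline instead of trusting auto-renew.

days_until_expiry() {
  # $1 is a notAfter string, e.g. "Dec 31 23:59:59 2099 GMT" (GNU date syntax).
  end_epoch=$(date -d "$1" +%s)
  now_epoch=$(date +%s)
  echo $(( (end_epoch - now_epoch) / 86400 ))
}

# In a real setup you'd pull the date from the live cert, e.g.:
#   not_after=$(openssl x509 -enddate -noout -in /etc/ssl/mycert.pem | cut -d= -f2)
# and page someone with two weeks of slack:
#   [ "$(days_until_expiry "$not_after")" -lt 14 ] && alert_oncall "cert expiring"
```

The point isn’t this exact script; it’s having a check that fails loudly when the renewal pipeline fails quietly.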

3. MITM still happens, but now with HTTPS
I’ve seen cases where compromised servers served malware under legit Let’s Encrypt certs. And most users wouldn’t even question it because the padlock was green. Let’s Encrypt doesn’t audit content. It doesn’t verify who you are. It’s just domain-control. That’s it.

4. Abuse at scale
A single bad actor can register 50 domains, get certs in seconds, and set up a phishing farm—all looking “secure.” Try doing that with paid certificates that involve organization validation. Let’s Encrypt made attacks faster and harder to detect. And while they have rate-limiting and revocation systems, they’re reactive, not preventive.

What do I use instead?
In most of my serious projects, I use Cloudflare’s Origin Certificates or paid DV/OV certs, depending on the use case. They last longer, offer better API controls, and give me less anxiety during deployments. Also, if you’re in DevOps, the fewer moving parts that break silently, the better.

To be clear—Let’s Encrypt isn’t bad. It’s necessary. It brought HTTPS to billions. But if you’re building something serious—something where uptime, trust, and security really matter—don’t blindly go for “free.” Sometimes free costs more in the long run.



#OpLog Day 6
I Turned My Akkoma Instance into a Federated Opinion Blog

Welcome to Day 5 of #OpLog — my daily stream of personal tech opinions.

Today’s post is about how I turned my self-hosted Akkoma instance into a fully automated, opinion-based microblog — called Oplog.

Akkoma (a fork of Pleroma) is lightweight, API-friendly, and doesn’t carry the bloat of traditional social or blogging platforms. I use it to post short-form thoughts — but I wanted those to live outside the fediverse too, in a format I fully control.

So I built a GitHub Action that runs every hour, fetches the XML feed from my Akkoma instance, parses it, supports pagination, and then publishes it as a static site using GitHub Pages. The whole thing is pipelined — no manual effort needed. Once I post on my instance, it automatically appears on Oplog.
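The fetch-and-parse step looks roughly like this. A sketch only: the real pipeline is a GitHub Action, FEED_URL is a placeholder, and the sed one-liner is a stand-in for a proper XML parser.

```shell
#!/bin/sh
# Hourly job, in miniature: pull the feed, extract post titles,
# then hand the result to a static-site build.

extract_titles() {
  # Pull <title> contents out of an Atom/RSS feed on stdin.
  # Good enough for one-title-per-line feeds; use a real parser otherwise.
  sed -n 's:.*<title>\(.*\)</title>.*:\1:p'
}

# The full run (network and publish steps shown as comments):
#   curl -s "$FEED_URL" | extract_titles > posts.txt
#   ...render posts.txt into static HTML, commit to the gh-pages branch...
```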

I’ve even polished the UI to make it clean and readable. This isn’t a traditional blog — it’s federated, automated, and designed for short, high-signal posts without the pressure of SEO or long-form writing.
• Preview Oplog: https://oplog.isalman.dev
• GitHub Source Code: https://github.com/hotheadhacker/akkoma-blog

This is what I always wanted:
A personal space powered by the fediverse and automation — no overhead, just thoughts.



#OpLog Day 5

Stop Obsessing Over Scalability Before You Have a Product

One of the strangest trends I’ve seen in the DevOps and startup world lately is how early teams rush to architect their projects for massive scale — before they’ve even validated their core product. Everyone seems obsessed with building for “millions of users” from day one, yet most of these products don’t survive long enough to reach even 100.

I’ve been in DevOps for over four years now, and here’s what I’ve learned:
Everything is scalable.
Given the time, resources, and intent, any system can scale. Sure, some require more engineering effort, but it’s doable — it always is. But what’s not as easily fixable is a poorly thought-out product, a missing market fit, or a startup that ran out of steam because it spent all its energy on infrastructure gymnastics instead of value creation.

I’ve seen so many projects set up from day one with:
• Microservices (without needing services in the first place),
• Kubernetes clusters (when they barely need one server),
• Kafka queues,
• Redis clusters,
• Full-blown observability stacks like Prometheus, Grafana, Loki, the whole suite,
• And CI/CD pipelines with dozens of automated checks and staged deployments.

Meanwhile… their product isn’t even live. Or worse — it’s live, but no one’s using it.

Scalability has become a vanity metric in the builder’s mind. Something to show off. Something to prove that they’re serious about tech. But it’s like building a ten-lane highway in the desert. Where are the cars?

Why are we doing this?

Because we’re letting fear and future-hypotheticals override logic:
• “What if we blow up overnight?”
• “What if TechCrunch features us tomorrow?”
• “What if 10,000 people sign up in the first week?”

None of that matters if your product can’t keep even one person engaged.

Here’s the honest truth:

You can scale almost anything when needed — but you can’t inject product-market fit after you’ve exhausted your budget and energy on premature infra scaling.

What’s the better approach?
• Build a solid, minimal product.
• Get it working on a basic setup. A single server, even.
• Keep your code readable and modular. That’s your best “scalability plan” right there.
• Maintain a rough SOP or checklist for scaling later. Know how you would scale when needed — but don’t build it yet.
• Track real usage. Then decide when to scale based on demand, not on your architecture fantasy.

And I get it — it feels good to build fancy infra. It makes you feel like a real engineer. But building the right thing, with the right focus, is even harder — and that’s what separates successful products from the graveyard of over-engineered ghosts.

So stop chasing scale before you’ve solved a problem.
Build value first. Scale later.


#OpLog - Day 4
A well-crafted README > any fancy landing page.

Markdown is the new minimal web — clean, structured, and privacy-friendly.

GitHub READMEs (even profile READMEs) serve as reliable, fast-loading, no-tracking intros to your work.
No scripts, no trackers, no third-party callouts leaking your IP.

You get formatting, links, badges, and even basic analytics — without writing a line of frontend.

Whether for personal or company branding, don't underestimate a good README.
It's trustable, portable, and just works.

---

#OpLog - Day 3
We’re Severely Underestimating Docker Vulnerabilities — And It Might Burn Us All

Docker changed how we build and ship software. It gave developers the power to package everything into portable containers. But somewhere along the way, we’ve become too comfortable, too trusting of images, and too careless with commands like docker build and docker exec.

Let’s talk about the security mess we’re not discussing enough.

The Illusion of Isolation

Containers are often mistaken for lightweight VMs. But they’re not. Docker uses the host kernel, and unless configured carefully, a malicious container can access far more than intended.

Commands like docker exec drop you into a container. Useful, yes — but also risky. Any script or automation using it might interact with host resources or expose sensitive files unintentionally.

The Real Risk of docker build

docker build is an overlooked attack vector. It:

  • Takes a Dockerfile
  • Executes each instruction as root
  • Often pulls base images from public registries

Now imagine this:

You clone a random GitHub repo, run docker build ., and boom — you’ve executed arbitrary root-level instructions. No warnings. Just blind trust.
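It’s easy to demonstrate. This throwaway Dockerfile is harmless by design, and the build step is left as a comment since it needs a Docker daemon; the point is that RUN lines execute during the build, before any container is ever started.

```shell
# A Dockerfile's RUN instructions execute during `docker build`, not at run time.
cat > Dockerfile <<'EOF'
FROM alpine:3.18
# This executes inside the build container, as root, during the build:
RUN echo "this ran at build time" && id
EOF
# docker build .   # <- the RUN above fires here, no prompt, no warning
```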

Real-World Case: Docker Hub Malware

In 2021, over 30 malicious Docker images were found on Docker Hub. They had been downloaded millions of times and were used to:

  • Mine crypto
  • Leak secrets
  • Create backdoors

These weren’t obscure projects. They had names like alpine-nginx or ubuntu-node, making them easy to mistake for safe options.

Dockerfile Hell

Many Dockerfiles reference scripts like:

RUN curl https://example.com/script.sh | bash

Or clone unknown repos. This adds layers of trust issues.

Once a payload hides in one of these layers, it can:

  • Hijack your SSH keys
  • Drop persistent malware
  • Leak credentials from ENV variables

docker exec Gone Wrong

Running docker exec feels harmless. It’s just for debugging, right?

But when used poorly — especially in scripts — it can:

  • Expose a shell with elevated privileges
  • Access sensitive files
  • Leave behind credentials or temp files

If someone gains access to that container, it’s a full gateway into your environment.

Root Problem: Blind Trust

The issue isn’t Docker itself. It’s how casually we use it:

  • Pulling random images
  • Building unverified repos
  • Mounting host volumes
  • Allowing docker.sock access (effectively root on the host)

It’s not a Docker flaw. It’s a human flaw.

Staying Safe: Practical Tips

We can do better. Here’s how:

1. Trust base images only from verified sources
  • Use pinned versions (alpine:3.18, not latest)
  • Prefer official images
  • Scan with tools like Trivy

2. Audit every Dockerfile you use
  • Watch for suspicious URLs or commands
  • Avoid curl | bash patterns

3. Don’t mount the Docker socket
  • It gives containers full control of the daemon, and through it the host
  • If you must, restrict access or mount it read-only

4. Avoid running as root
  • Use the USER directive
  • Drop capabilities with --cap-drop=ALL

5. Monitor and rescan regularly
  • Scanners like Trivy fit into CI
  • Periodic rebuilds ensure old exploits don’t linger
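The pinning and privilege points above condense into a small sketch. Illustrative only: the base image, user name, and run flags are placeholders to adapt, and the run-time commands are comments since they need a Docker daemon.

```shell
# Pinned official base, non-root user baked into the image:
cat > Dockerfile <<'EOF'
FROM alpine:3.18
RUN adduser -D app
USER app
EOF
# At run time: drop all capabilities, and never hand the Docker
# socket to a container:
#   docker run --cap-drop=ALL --read-only myimage
#   (avoid: -v /var/run/docker.sock:/var/run/docker.sock)
```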

Let’s Be Real

We got used to Docker making life easy. But ease brought ignorance. People assume containers are safe just because they’re isolated. But attackers love Docker — especially when misused.

Next time you run docker build or docker exec, ask yourself:

Do I know what this image is doing?

Because if it breaks out, it’s not just a container anymore.

#OpLog — Special Drop