@MrB33n

Blogosphere - frontpage for personal blogs https://comuniq.xyz/post?t=909 #blog #tech #technology

Agentic AI in Practice: Speed vs. Quality in Code

Garry Tan, CEO of Y Combinator, one of the most influential startup accelerators in the world, sparked a major debate on social media this week after sharing a striking milestone on X: he and his AI coding agents had been deploying 37,000 lines of code per day across five separate projects, on a 72-day consecutive shipping streak. The post went viral quickly. But two days later, a Polish senior software engineer known as Gregorein decided to take a closer look at the actual results, and what he found was quite revealing: Tan's code was full of bloat, waste, and rookie mistakes, even on the public-facing side of the site.

**What does this teach us?**

The core of the debate is that while AI coding tools make it easy to pump out lots of code, it is the quality of the code that matters, not the quantity. Code that goes into production without proper scrutiny and testing can cause obvious functional failures, create security vulnerabilities, or introduce issues that surface later and force engineers to track down and fix the underlying problems. As Gregorein put it: "Right now we are in a moment where AI lets you generate code faster than any human can review it, and the answer from people like Garry seems to be 'so stop reviewing'."

**The bigger picture: agentic AI in the startup ecosystem**

This episode is not isolated. Tan has been a vocal proponent of agentic AI in the startup world. According to him, about 25% of the current YC batch have 95% of their code written by AI, and companies are reaching up to $10 million in revenue with teams of fewer than 10 people. Yet Tan himself acknowledges that human agency and judgment remain irreplaceable. In his own words, "agency and taste are super, super important and humans are going to be a really irreplaceable piece of that."

**The real opportunity for those building with AI**

Tan also points out that the biggest mistake founders are making today is piling into the saturated coding agent space, which already accounts for nearly 50% of all agentic AI activity. The real opportunity lies in the verticals that have barely been touched: healthcare at 1%, legal at 0.9%, and education at 1.8%, where AI agents have enormous transformative potential but almost no penetration yet.

**What does this mean for IT and technology professionals?**

Agentic AI is real and powerful, but it does not replace architecture, code review, and sound engineering practices. The speed at which code can now be generated has already outpaced the human ability to review it. The challenge now is to build quality processes that match this new pace. The biggest open spaces in AI are not in more tools for developers, but in the sectors that have barely been touched. The question is not whether we will use AI to develop software. It is how we will use it responsibly and with sound judgment.

---

Source: Fast Company, "Y Combinator's CEO says he ships 37,000 lines of AI code per day. A developer looked under the hood" https://www.fastcompany.com/91520702/y-combinator-garry-tan-agentic-ai-social-media

GDDRHammer and GeForge: When Your GPU Becomes a Backdoor for Hackers https://chat-to.dev/post?id=WVhhbUtnSHdVY2xrQVpSUStqUU01QT09 #hacker #security #tech #technology #nvidia
GDDRHammer and GeForge: When Your GPU Becomes a Backdoor for Hackers

If you think your system's security depends only on what happens inside the CPU, the latest research has some pretty bad news: **GPUs have now firmly entered the realm of serious vulnerabilities**. Researchers have unveiled two new attacks based on the **Rowhammer** technique that can, starting from GPU memory, achieve **complete control of the machine**, including unrestricted access to the main processor's RAM. The attacks are called **GDDRHammer** and **GeForge**, and they work against **Nvidia Ampere** cards such as the RTX 3060 and RTX 6000.

---

## So, What Exactly Is Rowhammer?

The Rowhammer technique was first demonstrated in 2014: by repeatedly and rapidly accessing rows of DRAM memory, it is possible to create electrical interference that causes bits in neighboring rows to "flip" from 0 to 1 or vice versa. It sounds like science fiction, but it's pure physics: modern memory is so densely packed that circuits start to "bleed" into each other.

Over the past decade, dozens of Rowhammer variants have been developed, eventually enabling attacks over local networks, rooting Android devices, and even stealing 2048-bit encryption keys. Until now, Rowhammer was mostly a CPU and DDR memory problem. That has officially changed.

---

## What's New With GDDRHammer and GeForge

Researchers introduced two new exploits — GDDRHammer and GeForge — that work successfully against Ampere-architecture GPUs such as the RTX 3060 and the professional RTX 6000. Using memory massaging techniques, the attacks bypass protections in Nvidia's drivers, steering page tables toward unprotected memory regions. The numbers speak for themselves:

- **GDDRHammer** generates an average of 129 bit flips per memory bank on the RTX 6000, a 64-fold increase compared to attacks documented the previous year.
- **GeForge** proved even more destructive: it induced 1,171 bit flips on the RTX 3060 and 202 on the RTX 6000.

But the raw number of bit flips isn't the scariest part. What comes next is.

---

## How They Achieve Full Control of the Machine

The core breakthrough lies in the ability to tamper with the GPU's page table mappings. Researchers modify page table entries via bit flips to gain arbitrary read and write access to GPU video memory, then redirect pointers to CPU memory, ultimately achieving full control over the host's physical memory.

In plain terms: a process running on the GPU can escalate its privileges until it effectively owns the entire machine. GeForge goes even further — it can enable unprivileged users to obtain a root shell, granting the highest level of administrative access to the system.

---

## Why This Is Especially Alarming in Cloud Environments

The high cost of high-performance GPUs, typically $8,000 or more, means they are frequently shared among dozens of users in cloud environments. This means a malicious user in a multi-tenant setup could use these attacks to compromise not only their own data, but that of every other tenant on the same server. The researchers caution that cloud providers should reassess GPU memory protections as GPU-driven Rowhammer threats continue to evolve.

---

## What Nvidia Recommends

Nvidia had already issued guidance following earlier discoveries, and for now **has not released a specific firmware or driver fix** for these new attacks. The recommendations remain:

- **Enable ECC (Error-Correcting Code)** at the system level, which adds redundant bits to preserve data integrity
- **Enable IOMMU** in the system BIOS, which prevents the GPU from accessing restricted host memory regions

The catch? ECC can introduce up to a 10% slowdown for machine learning inference workloads and also reduces available memory capacity by 6.25%. Security comes at a performance cost. And some Rowhammer variants can still bypass ECC protections.

---

## The Takeaway

Rowhammer attacks have long been seen as too sophisticated for real-world exploitation. GDDRHammer and GeForge show that's changing: the line between academic research and a usable exploit is getting thinner by the day.

For anyone managing environments with shared GPUs, whether in the cloud or in an on-premise data center, the message is clear: **review your ECC and IOMMU settings now**; don't wait for an incident. The GPU is no longer just a processing unit. It is now an attack surface too.

---

*Source: [Ars Technica, April 3, 2026](https://arstechnica.com/security/2026/04/new-rowhammer-attacks-give-complete-control-of-machines-running-nvidia-gpus/)*

Don't forget to [sign up](https://chat-to.dev/login) and join our community.

How Microsoft Nearly Lost a Trillion Dollars From the Inside https://chat-to.dev/post?id=aTNkREN6Nm1EVTFFU3FOMVpGd1Y5dz09 #microsoft #technology #windows #tech
How Microsoft Nearly Lost a Trillion Dollars From the Inside

*A senior Azure engineer exposes the behind-the-scenes story of one of the most silent and costly crises in recent cloud computing history.*

---

When Axel Rietschin arrived at Microsoft's headquarters in Redmond on the morning of May 1st, 2023, he was anything but a newcomer. He had spent years making direct contributions to the technologies underpinning Azure, with stints on the Windows team, SharePoint Online, and Core OS, where he helped invent the container platform that powers Docker, Kubernetes, and Windows Sandbox. What he did not expect was to find an entire organization planning the impossible as if it were routine.

---

## The First Day That Revealed Everything

Rietschin had barely arrived when he was invited to a monthly planning meeting. In the room were leads, architects, and senior engineers. On the screen, a slide packed with familiar acronyms like COM, WMI, VHDX, and ETW, all connected by arrows in a tangle that was difficult to parse. What was being presented was a plan to port that entire stack of Windows components onto the Overlake chip, a tiny fanless ARM SoC the size of a fingernail, designed to consume as little power and memory as possible. A chip where the hardware engineers had reserved just 4KB of dual-ported FPGA memory for communication protocols.

Rietschin knew the hardware inside out. He knew the idea was unworkable. But what surprised him most was not the proposal itself. It was the seriousness with which it was received. Nobody in the room questioned it. A Principal Engineering Manager suggested having "a couple of junior developers look into it."

---

## 173 Agents and No Explanation

In the days that followed, Rietschin deepened his understanding of the environment. One of the most unsettling discoveries came from a conversation with the head of Microsoft's Linux group: there were 173 software agents identified as candidates to run inside the Overlake chip.

For context, Azure at its core sells virtual machines, networking, and storage. With observability and servicing on top, that should require a small number of well-defined central processes. How they arrived at 173 is something that, according to Rietschin himself, will probably never be fully explained. Nobody at Microsoft could articulate what all those agents did, why they existed, or how they interacted with one another.

But the problem goes beyond organizational confusion. Those agents were what orchestrated the virtual machines running OpenAI's systems, SharePoint Online, United States government clouds, and other mission-critical infrastructure. A failure there is not just a bug. Depending on the context, it is a collapse with national security implications.

---

## The Real Cost of Technical Complacency

The software stack Rietschin encountered was hitting its limits at just a few dozen VMs per node, in an environment where the hypervisor was capable of supporting over a thousand. On top of that, it was consuming enough host server resources to cause noticeable instability in customer VMs, the so-called "noisy neighbor" problem.

All of this was happening while Microsoft was in the middle of a historic bet on OpenAI, providing the infrastructure for the most widely used language models in the world. The fragility was not just technical. It was strategic, financial, and at certain moments, a matter of institutional trust.

Rietschin says he tried to alert leadership, including the CEO, the Microsoft board, and senior executives in the Cloud and AI division. The silence he received in return is a central part of the story he is telling across a series of articles published on Substack.

---

## What This Means for Azure Users

The most important revelation for any company or developer relying on Azure is not Microsoft's internal drama. It is the realization that critical infrastructure can be held together by systems nobody fully understands, planned by teams that had lost touch with the technical reality of what they were building.

Rietschin is not saying Azure is insecure today. He is saying that for a considerable period, decisions were made with an alarming distance from real engineering, and that the consequences of that disconnect are still unfolding.

The series continues. The near-loss of OpenAI as a customer, the letters sent to the CEO, the incidents involving the US government, and the features promised publicly before the work had even begun are all coming in the next chapters. Worth following.

---

**Source:** [How Microsoft Vaporized a Trillion Dollars](https://isolveproblems.substack.com/p/how-microsoft-vaporized-a-trillion)

When War Reaches the Datacenter: Iran Claims It Struck Oracle Facilities in the UAE https://chat-to.dev/post?id=V1NuOVJQcEE5SGVQd2VlTEhYMjY2Zz09 #war #tech #technology #iran
When War Reaches the Datacenter: Iran Claims It Struck Oracle Facilities in the UAE

Ok, this is getting really serious. Iran's Islamic Revolutionary Guard Corps (IRGC) claims it has targeted an Oracle data center in Dubai, United Arab Emirates. Yes, you read that right. A datacenter. Physical cloud infrastructure. The war has stepped off the political map and landed squarely in the tech world.

**How did we get here?**

The alleged strike came only two days after Iran threatened to begin hitting American tech companies it deemed to be assisting U.S. and Israeli military operations. In a list reported widely by Iranian state media, Oracle was explicitly named, alongside Apple, Google, Meta, Microsoft, HP, Tesla, Nvidia, Boeing, IBM, and Cisco. Basically, Iran published a target list and then went after it. That's not a metaphor.

**Why Oracle specifically?**

Oracle has active cloud and AI partnerships with the U.S. Department of Defense. On top of that, the company's billionaire founder and chairman Larry Ellison has well-documented ties to the Israeli government. Two reasons that were apparently more than enough to put it at the top of the list.

**What does the UAE say?**

The UAE's Ministry of Interior confirmed that the country's air defenses intercepted 5 ballistic missiles and 35 drones originating from Iran on Wednesday, and 19 ballistic missiles and 26 drones on Thursday. Emirati forces have yet to independently confirm any successful strike on Dubai. But here's the interesting part: that doesn't mean nothing happened. A Bellingcat investigation published on Thursday claims that over the past month, the UAE has "downplayed damage, mischaracterised interceptions and in some instances not acknowledged successful Iranian drone strikes on the country."

**And Amazon wasn't left out either**

The IRGC also claimed to have targeted Amazon facilities in Bahrain. Bahrain's Ministry of Interior confirmed it had dealt with a fire "in a facility of a company as a result of the Iranian aggression." Amazon's cloud division AWS did not confirm whether its facilities were the ones hit, but an anonymously sourced Financial Times report identified them as such.

**What does this mean for us, devs?**

Here's the part no cloud tutorial ever taught you: physical infrastructure matters. It always has. We put our data, our apps, our businesses in datacenters sitting somewhere in the real world, with real locations, in geopolitically sensitive regions. The war has been devastating the broader region for 34 days, with estimates of over 1,600 civilian fatalities in Iran alone, including at least 244 children. This is not a disaster recovery drill. This is reality.

If you have critical workloads running in the Middle East region, now is a good time to revisit your multi-region strategy. And if you still don't have a geopolitical contingency plan... well, consider this your wake-up call. Oracle has not yet commented. The cloud is still up. For now.

---

Source: https://gizmodo.com/iran-says-it-hit-oracle-facilities-in-uae-2000741785

The best folder structures for AI projects in 2025

**Why your project structure changed — and what to do now**

The original 2024 post showed two classic structures: Layered Modularization and Feature-based Modularization. Both are still valid. But with autonomous agents, RAG, function calling, and LLM pipelines in the stack, new folders have emerged that simply didn't exist before — and ignoring them is the most common mistake teams make when scaling AI products.

---

## Numbers that explain the shift

- **73%** of new repositories opened in 2025 include some LLM component
- **+440%** YoY growth in public repos with an `/agents` folder on GitHub
- **77%** of teams in production use a unified LLM client in `shared/` to swap models without rewriting code
- **84%** version prompts in git as code — no longer scattered in loose environment variables
- **58%** run automated LLM quality evaluations in CI before any prompt deployment

---

## Structure 1 — Layered AI Monolith

Ideal for small teams (1–5 people) where AI is a feature, not the core product. It's the direct evolution of the classic layered structure, with three mandatory additions: `agents/`, `prompts/`, and `pipelines/`.

```
project-root/
│
├── config/
│   ├── database.ts
│   ├── server.ts
│   └── ai.ts               ← API keys, models, token limits
│
├── src/
│   ├── controllers/
│   ├── models/
│   ├── routes/
│   ├── services/
│   │   └── llmService.ts   ← wrapper for LLM calls (OpenAI/Anthropic/Gemini)
│   │
│   ├── agents/             ← autonomous agents with memory and tools
│   │   ├── plannerAgent.ts
│   │   └── researchAgent.ts
│   │
│   ├── tools/              ← functions the LLM can invoke (function calling)
│   │   ├── searchTool.ts
│   │   ├── dbQueryTool.ts
│   │   └── index.ts
│   │
│   ├── prompts/            ← prompts as code, versioned in git
│   │   ├── system.ts
│   │   ├── summarize.ts
│   │   └── extract.ts
│   │
│   ├── pipelines/          ← RAG flow: ingestion → chunking → embedding → retrieval
│   │   ├── ingest.ts
│   │   ├── embed.ts
│   │   └── retrieve.ts
│   │
│   └── utils/
│       └── tokenCounter.ts
│
├── data/
│   └── vectorstore/        ← local index (ChromaDB, FAISS, Weaviate)
│
├── tests/
│   ├── agents/
│   ├── pipelines/
│   └── evals/              ← LLM response quality evaluations
│
└── index.ts
```

**When to use:** early-stage product, lean team, AI integrated into an existing backend.

**Main limitation:** as modules grow, agents and pipelines get coupled to the app core, making isolated testing and per-feature model swapping harder.
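To make the `services/llmService.ts` idea above concrete, here is a minimal sketch of what such a wrapper might look like. It assumes an OpenAI-compatible chat-completions endpoint and a hypothetical `LLM_API_KEY` environment variable; treat it as an illustration under those assumptions, not a reference implementation.

```typescript
// src/services/llmService.ts (illustrative sketch, not from any real project)
// Assumes an OpenAI-compatible /v1/chat/completions endpoint and LLM_API_KEY.
interface CompletionOptions {
  model?: string;
  system?: string;
  maxTokens?: number;
}

export async function complete(prompt: string, opts: CompletionOptions = {}): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.LLM_API_KEY}`, // hypothetical env var
    },
    body: JSON.stringify({
      model: opts.model ?? "gpt-4o-mini",
      max_tokens: opts.maxTokens ?? 1024,
      messages: [
        ...(opts.system ? [{ role: "system", content: opts.system }] : []),
        { role: "user", content: prompt },
      ],
    }),
  });

  if (!res.ok) {
    throw new Error(`LLM request failed: ${res.status} ${await res.text()}`);
  }

  const data = await res.json();
  // Chat-completions responses put the generated text under choices[0].message.content.
  return data.choices[0].message.content;
}
```

Keeping every model call behind a single function like this is also what makes the later move to a `shared/llm/` abstraction (Structure 2) cheap.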
---

## Structure 2 — Modular AI-Native (recommended for AI products)

Each domain module encapsulates its own agents, tools, prompts, and pipelines. This is the pattern that distributed teams and fast-growing AI startups converged on in 2025.

```
project-root/
│
├── config/
│   ├── ai.ts                   ← global model configuration
│   └── db.ts
│
├── modules/
│   │
│   ├── auth/
│   │   ├── authController.ts
│   │   ├── authService.ts
│   │   └── authRoutes.ts
│   │
│   ├── search/                 ← semantic search + reranking
│   │   ├── searchController.ts
│   │   ├── embeddings.ts
│   │   ├── retriever.ts
│   │   └── prompts/
│   │       └── searchPrompt.ts
│   │
│   ├── agents/                 ← agent orchestration module
│   │   ├── orchestrator.ts     ← decides which agent to trigger and with what context
│   │   ├── tools/
│   │   │   ├── webSearch.ts
│   │   │   ├── codeRunner.ts
│   │   │   └── fileReader.ts
│   │   ├── memory/
│   │   │   ├── shortTerm.ts    ← conversation context (token window)
│   │   │   └── longTerm.ts     ← per-user vector store
│   │   └── prompts/
│   │       └── systemPrompt.ts
│   │
│   ├── documents/              ← full RAG pipeline
│   │   ├── ingest.ts
│   │   ├── chunk.ts
│   │   ├── embed.ts
│   │   └── prompts/
│   │       └── qaPrompt.ts
│   │
│   └── analytics/
│       ├── metricsService.ts
│       └── llmUsage.ts         ← tracks tokens, cost, and latency per module
│
├── shared/
│   ├── llm/
│   │   ├── client.ts           ← unified client: swap OpenAI/Anthropic/local here
│   │   └── retry.ts            ← exponential backoff for API failures
│   ├── vectordb/
│   │   └── client.ts           ← abstraction over Pinecone/Weaviate/Chroma
│   └── utils/
│
├── tests/
│   ├── modules/
│   └── evals/                  ← automated AI quality evaluations
│       ├── faithfulness.ts     ← is the answer grounded in the retrieved context?
│       ├── relevance.ts
│       └── datasets/
│
└── index.ts
```

**When to use:** product where AI is the core, team larger than 5, multiple domains with distinct AI behaviors.

**Main advantage:** each module can use a different model, have its own prompts, and be tested in complete isolation.

---

## What changed from 2024 to 2025

<table>
  <thead>
    <tr>
      <th>Folder</th>
      <th>Status in 2024</th>
      <th>Status in 2025</th>
    </tr>
  </thead>
  <tbody>
    <tr><td><code>controllers/</code></td><td>required</td><td>required</td></tr>
    <tr><td><code>models/</code></td><td>required</td><td>required</td></tr>
    <tr><td><code>services/</code></td><td>required</td><td>required</td></tr>
    <tr><td><code>agents/</code></td><td>rare</td><td>industry standard</td></tr>
    <tr><td><code>tools/</code></td><td>nonexistent</td><td>required in LLM projects</td></tr>
    <tr><td><code>prompts/</code></td><td>environment variable</td><td>versioned code in git</td></tr>
    <tr><td><code>pipelines/</code></td><td>standalone script</td><td>structured module</td></tr>
    <tr><td><code>evals/</code></td><td>nonexistent</td><td>modern equivalent of <code>tests/</code></td></tr>
    <tr><td><code>shared/llm/</code></td><td>hardcoded in service</td><td>mandatory abstraction</td></tr>
  </tbody>
</table>

---

## Why `prompts/` is a folder, not an environment variable

Prompts affect product behavior just as much as any business function. When a prompt changes and response quality drops, you need to know exactly what changed, when, and by whom — the same thing you want to know about any other piece of code.

Versioning prompts in git gives you: change history, review via pull request, immediate rollback on regression, and traceability in audits. Teams that treat prompts as code report 3x fewer incidents related to quality degradation in production (LLM in Prod Survey, Scale AI 2025).
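To make "prompts as code" tangible, here is a minimal sketch of what a versioned prompt module could look like. The file name, parameters, and wording are hypothetical; the point is that the prompt is typed, reviewable in a pull request, and revertible with git.

```typescript
// src/prompts/summarize.ts (illustrative sketch): a prompt as code, with typed inputs.
export interface SummarizeInput {
  text: string;
  maxBullets: number;
  audience: "engineer" | "executive";
}

export const SUMMARIZE_VERSION = "2025-06-01"; // bump when the wording changes

export function summarizePrompt({ text, maxBullets, audience }: SummarizeInput): string {
  return [
    `You are a precise technical writer. Summarize the text for an ${audience}.`,
    `Return at most ${maxBullets} bullet points and nothing else.`,
    "",
    "Text:",
    text,
  ].join("\n");
}
```

A change to this file shows up in `git blame`, goes through review like any other code, and can be rolled back the moment an eval flags a regression.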
---

## Why `evals/` is the new `tests/`

Testing an LLM is not the same as testing a deterministic function. The same input can produce different outputs. What matters is whether the response is good enough — and that requires specific metrics:

- **Faithfulness** — is the response grounded in the retrieved context, or is the model hallucinating?
- **Answer relevancy** — does the response actually answer the question asked?
- **Context precision** — are the chunks retrieved by RAG the most relevant ones available?

Frameworks like DeepEval, Ragas, and PromptFoo automate these evaluations. Mature teams run evals in CI on every prompt or retrieval pipeline change, before any deployment.

---

## The three-contract rule

Every AI module needs three explicit contracts in the code:

1. **Input/output contract** — the schema of what goes into the prompt and what is expected back, validated via Zod or Pydantic
2. **Fallback contract** — what happens if the LLM fails, returns invalid JSON, or exceeds the timeout
3. **Evaluation contract** — how to measure whether the response is good enough to go to production

Without these three contracts, the AI module is a black box that fails silently. (A minimal sketch of the first two contracts appears at the end of this post.)

---

## Adoption trends in 2025

- 84% — prompts versioned in git
- 77% — unified LLM client in `shared/`
- 71% — dedicated `/agents` folder
- 69% — vector DB separate from the relational database
- 58% — automated evals in CI
- 52% — cost tracking per feature

---

## Which one to choose

Use **Layered AI Monolith** if you're just starting out, the team is small, and AI is a secondary feature of the product.

Use **Modular AI-Native** if AI is the product, the team will grow, or you need distinct AI behaviors per domain — each module with its own prompts, tools, and evaluations.

In both cases, the most important shift is not which structure you pick — it's to stop treating prompts, agents, and pipelines as "implementation details" and to start treating them with the same rigor as any other production code.

---

*Based on: State of AI Engineering 2025 (Pragmatic Engineer), GitHub Octoverse 2025, LLM in Production Survey (Scale AI), Anthropic Developer Docs. This post is an evolution of the original ["Two best folder structures for a web application"](https://chat-to.dev/post?id=N043SE1xbUZ2MzBBK01TWTl1cXVCQT09) published on chat-to.dev.*
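As a postscript to the three-contract rule above, here is a minimal sketch of the first two contracts (input/output and fallback) using Zod. The schema, function names, and the injected `complete` client are hypothetical; the evaluation contract would live under `tests/evals/` as described earlier.

```typescript
// Illustrative sketch of contracts 1 and 2 from the three-contract rule.
import { z } from "zod";

// Contract 1: the shape we expect the model to return.
const AnswerSchema = z.object({
  answer: z.string().min(1),
  sources: z.array(z.string()).max(5),
});
export type Answer = z.infer<typeof AnswerSchema>;

// Contract 2: what the caller gets when the model fails, times out, or returns bad JSON.
const FALLBACK: Answer = { answer: "Sorry, I could not answer that right now.", sources: [] };

export async function answerQuestion(
  question: string,
  context: string,
  complete: (prompt: string) => Promise<string>, // inject the LLM client (e.g. shared/llm/client.ts)
): Promise<Answer> {
  const prompt =
    `Answer strictly as JSON matching {"answer": string, "sources": string[]}.\n\n` +
    `Context:\n${context}\n\nQuestion: ${question}`;

  try {
    const raw = await complete(prompt);
    const parsed = AnswerSchema.safeParse(JSON.parse(raw));
    return parsed.success ? parsed.data : FALLBACK; // invalid shape → fallback, never a crash
  } catch {
    return FALLBACK; // network error, timeout, or malformed JSON → fallback
  }
}
```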

Linux Is KILLING Windows in Gaming — and Steam Just Proved It With Historic Numbers https://chat-to.dev/post?id=ODdxd29OUy9jT0FDTnowTXUzNHlSdz09 #linux #windows #technology #hacker #gaming #pcgaming
Linux Is KILLING Windows in Gaming — and Steam Just Proved It With Historic Numbers

For years, "Linux for gaming" was a guaranteed punchline in any forum. Today, that joke is over. Valve has just published the Steam Hardware & Software Survey results for March 2026, and the numbers are simply historic: **Linux has surpassed 5% market share on Steam for the first time in the platform's entire history.** For those who have been following this space for a while, the figure of 5.33% might seem small — but it represents a silent revolution that took years to build. --- ## The timeline of the rise <table> <thead> <tr> <th>Period</th> <th>Linux Share</th> </tr> </thead> <tbody> <tr> <td>A few years ago</td> <td>~1% — practically invisible</td> </tr> <tr> <td>June 2025</td> <td>2.57%</td> </tr> <tr> <td>October 2025</td> <td>3.05%</td> </tr> <tr> <td>February 2026</td> <td>2.23% (brief dip)</td> </tr> <tr> <td><strong>March 2026</strong></td> <td><strong>5.33% — absolute all-time record</strong></td> </tr> </tbody> </table> In just one month, Linux gained **+3.10 percentage points**. In the same period, Windows lost 4.28%, dropping to 92.33%. And Linux has now surpassed macOS — which sits at just 2.35% — making it more than double the size of Apple's platform on Steam. > Part of the spike is linked to corrections in Steam's China data, but analysts confirm: the underlying growth is real, consistent, and accelerating. --- ## What is driving this shift? It's not magic — it's years of work by Valve finally paying off. - **Proton** — the compatibility layer has become so good that most players barely notice a difference from Windows. - **Steam Deck** — put SteamOS in millions of hands and normalized Linux as a real gaming platform. - **Bazzite and other distros** — made the Linux gaming experience nearly plug-and-play. Today, SteamOS accounts for 24.48% of Linux users on Steam. But the most important data point is that growth is happening across *all* distributions — not just the Steam Deck. Everyday people are migrating from Windows to Linux and continuing to game without issues. --- ## What does this mean for the future? Microsoft has serious reasons to worry. Windows 10 has reached end of life, Windows 11 is pushing users away with excessive requirements and invasive practices — and now there is a real, functional alternative for gaming. With the Steam Machine on the horizon, projections for the rest of 2026 are even more ambitious. Linux is no longer the operating system for terminal nerds. It's the operating system for anyone who wants control, privacy — and still wants to play their favorite titles. Source: https://www.phoronix.com/news/Steam-On-Linux-Tops-5p Don't miss out on the technology and programming content trending worldwide—join our rapidly growing community. [Sign up](https://chat-to.dev/login) today!

LinkedIn Was Snooping Through Your Computer. Literally.

You open LinkedIn to check out an interesting job posting or scroll through your feed, right? Normal stuff. What's *not* normal is what was happening under the hood. Researchers at Fairlinked discovered that every time you visit LinkedIn, a hidden piece of code scans your browser for installed extensions and software — and sends all of it to LinkedIn's servers and third-party companies. No asking. No warning. Zero mention in the privacy policy. Oh, and the name they gave this scandal? **BrowserGate**.

**But wait, it gets better:** The scan can identify extensions that reveal your religion, political orientation, neurodivergence — and my personal favorite for sheer irony — **whether you're secretly job hunting on LinkedIn while still employed**. On the very same platform where your boss can see your profile.

On top of that, LinkedIn used this data to map which competitor tools (like Apollo, Lusha, ZoomInfo) users had installed — essentially stealing the customer lists of hundreds of software companies without anyone's knowledge.

**And the cherry on top:** when the European Union required LinkedIn to open its platform to third-party tools (via the Digital Markets Act), they responded with two tiny APIs that together handle **0.07 calls per second**. Meanwhile, their internal API — called Voyager — runs at **163,000 calls per second**. The word "Voyager" doesn't appear a single time in the 249-page compliance report submitted to the European Commission.

Legal proceedings have already been filed. You can follow everything at [browsergate.eu](https://browsergate.eu).

The takeaway? It's always worth opening DevTools every now and then to see what that popular website is actually sending out. Sometimes the biggest tracker isn't the ad cookie — it's the platform where you spend hours every day.

*Stay curious. Stay paranoid (just a little).*
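For the DevTools-curious: one widely documented way a page can check for installed extensions is to probe for their web-accessible resources and record which requests succeed. Whether that is the exact mechanism behind BrowserGate is not something this sketch claims; the extension ID and resource path below are placeholders.

```typescript
// Illustrative browser-side fingerprinting sketch. An extension that exposes
// web_accessible_resources can often be detected by trying to load one of them.
// The ID and path below are placeholders, not real extensions.
function probeExtension(extensionId: string, resourcePath: string): Promise<boolean> {
  return new Promise((resolve) => {
    const img = new Image();
    img.onload = () => resolve(true);   // resource loaded → extension likely installed
    img.onerror = () => resolve(false); // blocked or missing → treat as not installed
    img.src = `chrome-extension://${extensionId}/${resourcePath}`;
  });
}

// A site could loop over a list of known IDs and post the hits to its own servers.
probeExtension("<placeholder-extension-id>", "icons/icon-128.png")
  .then((installed) => console.log("probe result:", installed));
```

Spotting code like this in a page's bundles is exactly the kind of thing a quick DevTools session can surface.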

YouTube CEO responds to concerns as big creators leave for Netflix & Amazon https://chat-to.dev/view_trend?id=NlBDZUhVL0ttV1hQcDl3aHQ5YU9xdz09 #youtube #ceo #tech #technology

APPLE, for your 50 years...

I'd like to congratulate the company on the excellent products and services they've provided and the security they offer. [watched](https://www.apple.com/)