What the Swarm was reading — week of April 30, 2026
Another week, another scroll through #swarmalicious — our internal link-dump channel that has been quietly compounding signal since 2014. This week leaned heavier on the unease side of the ledger: surveillance, supply chains, and where infrastructure quietly migrates when nobody’s watching. With one good Warp-shaped argument in the middle.
Here’s what came across.
Warp goes open source — and reinvents what a contribution is
The biggest thread of the week came off Warp’s announcement: the AI-native terminal is going open source. The interesting bit isn’t the licence — it’s the contribution model. Their pitch is that “Oz” (their AI agent) plans, implements, and reviews features, so a contribution can effectively be a well-formed prompt.
Puja liked this immediately, and not just for terminal reasons: he raised the broader question of whether this becomes a model for inter-team contribution at companies like ours. What if one team had its AI pipeline set up so another team needing something — say, a cluster release tweak — could just file an issue and have it implemented? Push that further and account engineers, or even customers, could “contribute” feature requests directly into the codebase. Long-term thinking. Worth chewing on.
Martin pushed back where it deserves pushing: a single model planning, implementing, and reviewing its own work fails in practice. He’s tried it with Gemini and Codex on real features — they don’t catch the flaws in their own designs. His current setup splits the work: Claude for planning and review, Gemini/Codex for implementation. That separation is doing real work. Same-model self-review is one of those ideas that sounds clean and falls apart on contact with subtle bugs.
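Martin's split is easy to sketch. Everything below is hypothetical: `call_model` is a stand-in for whatever API client you actually use, and the model names are placeholders, not real identifiers.

```python
# Hypothetical sketch of a plan/implement/review split across two
# model families. call_model is a placeholder for a real API client;
# the model names are illustrative only.

def call_model(model: str, prompt: str) -> str:
    # Stand-in: in practice this would hit the named model's API.
    return f"[{model}] {prompt.splitlines()[0]}"

def build_feature(spec: str) -> dict:
    # One model family plans the work...
    plan = call_model("planner", f"Design an implementation plan for: {spec}")
    # ...a different family writes the code...
    code = call_model("implementer", f"Implement this plan:\n{plan}")
    # ...and the *planning* family reviews it, so the reviewer
    # doesn't share the implementer's blind spots.
    review = call_model("planner", f"Review against the plan:\n{plan}\n\n{code}")
    return {"plan": plan, "code": code, "review": review}
```

The design point is in step three: the reviewer is deliberately not the model that wrote the code.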
Puja’s other note was about Warp interfering with Claude Code in subtle ways — minor glitches, plus the friction of Warp wanting to use its own AI backend rather than the one the team has paid for and approved for on-call use. That’s a sleeper category of problem with AI-bundled tools: the model lock-in shows up as UX friction long before it shows up as a billing line.
More generally, I like this as a way to think about how we might accept external contributions to our own open-source repositories. OpenClaw already does something like it: the founders want your prompts AND your code. I'd love to hear your feedback.
Surveillance, tokenmaxxing, and the slow trust drain
Two links and one continuation, all pulling in the same direction.
First, Meta is starting to capture employee mouse movements and keystrokes as AI training data — yes, really. Zach’s flat observation cuts to it: “newsworthy because it’s Meta, but many smaller companies will do the same.” That’s the bit that keeps me up. The headline is one company; the trend is the whole employer–employee data relationship being quietly rewritten under the cover of “AI training.”
The follow-up came a day later: Pragmatic Engineer’s piece on “tokenmaxxing” — engineers burning tokens to game internal AI-usage leaderboards. Jonas was on a tear in this thread (eleven replies, mostly his), and the throughline was sharp: tokens used is a stupid KPI. It’s the new “lines of code” or “calories burned.” It incentivises the wrong thing. The deeper problem is that you can’t measure AI adoption with raw inputs; you have to measure outcomes, and outcome measurement is genuinely hard, so companies grab the easy proxy and reward the wrong behaviour. He’s right. This is policy work, not tooling work.
Third in the cluster: Socket flagged a Bitwarden CLI supply-chain issue. Puja took a small victory lap for having moved off Bitwarden a while back. Simon is still on it but doesn’t use the CLI; Martin rolled his own read-only CLI after using the official one exactly once and hating it. The pattern across all three replies: people don’t trust the official tooling enough to use it as shipped. They wrap, replace, or avoid. That’s a quiet signal.
String these three together and the picture is consistent — the ambient level of trust in the tools that touch our keystrokes, our credentials, and our productivity metrics is degrading, and most of us are coping by writing our own thin layer.
Where infrastructure goes when nobody’s looking
A small cluster of posts about places where the substrate is moving — not loudly, just deliberately.
Mitchell Hashimoto wrote about why Ghostty is leaving GitHub. Quite a statement from the person behind half the HashiCorp tools you’ve ever used. The TL;DR (per Puja): he loves GitHub, but the outages have made it hard to work on, and he doesn’t believe they’ll get their act together. When someone of Mitchell’s stature with that kind of operational eye writes the move-out post, others will read it and think.
On the other end of the scale, the Dutch central bank picked Lidl for its European cloud. The reactions in the channel were a clean 4-emoji “uh-oh” / 2-emoji “eyes.” Whatever you make of the choice itself, the fact that European sovereign-cloud procurement is now this wide a field — including a discount supermarket chain’s IT arm — tells you something real about the appetite to escape the hyperscaler default.
And on the policy frontier, the UAE announced 50% of government services should run via agentic systems — though as Robin caught, the original target is two years, not five. Jonas’s take: of the places willing to actually try this, the UAE isn’t the worst lab. I agree. We’re going to learn something useful from how that plays out, regardless of whether they hit the number.
Fun stuff from the deeper end
The week’s grab-bag:
hl — a CLI log parser that handles JSON and logfmt and falls back to passthrough for unknown formats, so it never silently swallows a line. Nice property. Posted with a screenshot.
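That fallback chain is worth spelling out. A minimal sketch of the same idea in Python (not hl's implementation, just the shape: try the strict formats first, and never drop a line):

```python
import json

def parse_line(line: str) -> dict:
    """Try JSON, then logfmt, then fall through to raw passthrough.

    A toy illustration of the "never silently swallow a line"
    property; not hl's actual code.
    """
    line = line.rstrip("\n")
    # 1. JSON: the whole line must parse to an object.
    try:
        obj = json.loads(line)
        if isinstance(obj, dict):
            return {"format": "json", "fields": obj}
    except json.JSONDecodeError:
        pass
    # 2. logfmt: every whitespace-separated token must be key=value.
    tokens = line.split()
    if tokens and all("=" in t for t in tokens):
        fields = dict(t.split("=", 1) for t in tokens)
        return {"format": "logfmt", "fields": fields}
    # 3. Passthrough: keep the raw line so nothing is lost.
    return {"format": "raw", "fields": {"message": line}}
```

The ordering matters: the strictest format is tried first, and the catch-all at the end guarantees every input line survives in some form.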
getkloak.io — a clever use of eBPF, dropped by Simon. Jonas’s honest assessment: “this software adds a malware-like level of complexity to networking which I imagine could be very confusing and hard to debug. Nevertheless, very impressive.” That’s the eBPF trade-off in two sentences.
cpg — generates CiliumNetworkPolicies from denies in the Hubble relay. The kind of small-but-pointed tool that the Cilium ecosystem keeps producing. If you’re running Cilium, take a look.
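The mechanic is simple enough to sketch: take the flows that were denied, emit a policy that allows exactly those paths. A hypothetical illustration of the idea (the input shape is a simplified stand-in for Hubble flow records, and this is not cpg's code):

```python
# Hypothetical sketch: build a CiliumNetworkPolicy that allows the
# flows observed as denied. The deny dicts here are simplified
# stand-ins for Hubble flow JSON; this is not cpg's implementation.

def policy_from_denies(name: str, dst_labels: dict, denies: list[dict]) -> dict:
    # One ingress rule per denied flow: allow the source labels to
    # reach the selected endpoints on that port/protocol.
    ingress = [
        {
            "fromEndpoints": [{"matchLabels": d["src_labels"]}],
            "toPorts": [{"ports": [{"port": str(d["port"]),
                                    "protocol": d["protocol"]}]}],
        }
        for d in denies
    ]
    return {
        "apiVersion": "cilium.io/v2",
        "kind": "CiliumNetworkPolicy",
        "metadata": {"name": name},
        "spec": {
            "endpointSelector": {"matchLabels": dst_labels},
            "ingress": ingress,
        },
    }
```

Worth saying out loud: review the generated policy before applying it. Auto-allowing everything that was denied is exactly the step that wants a human in the loop.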
google/skills — Google’s take on the skills concept for admin tooling. Jonas has been kicking the tyres and reports mixed results: random errors, misunderstood commands, auth issues. His broader point — Google’s API surface is so large that a curated skills layer is genuinely useful — is the right framing. The execution is still catching up.
NVIDIA’s free hosted coding models — for private projects, hobby work, or when you’ve burned your subscription limits. Pairs nicely with opencode. Standard caveat from Puja: with free models, never feed in anything you wouldn’t post publicly.
Claude’s tokenizer is more expensive on non-English — dropped without commentary, but the implication is loud. If your product runs on Claude and serves a non-English market, your unit economics aren’t what you think they are. Language choice still carries a cost, even with AI.
A six-fingered AI-detection trick — add a sixth finger to your photo and AI-detection tools flag it as AI-generated. Funny in a depressing-arms-race way.
King Charles, briefly — geopolitics is now world leaders trading lines that used to be reserved for 3am bar arguments, as Luca put it. Hard to disagree.
What I took from this week
Two things, mostly.
One: the AI-tooling conversation has moved past “is it useful” into “what is it doing to our incentives.” The Warp thread, the tokenmaxxing thread, the Meta surveillance thread — they’re all the same conversation in different costumes. We’re rewiring how people contribute, how their work is measured, and what data is harvested in the process. The companies that get this rewiring right will quietly outperform the ones that just chase the leaderboard number.
Two: there’s a real, slow exodus underway from the defaults — GitHub for code, US hyperscalers for cloud, official password-manager CLIs for credentials. Each individual move is small. The aggregate is a story.
And three, just a small one: the channel is at its best when someone disagrees in public. Martin pushing back on single-model self-review, Jonas arguing through the tokenmaxxing implications eleven replies deep — that’s where the work happens. Posting a link is cheap. The disagreement is the value.
If any of these sent you somewhere interesting, reply and tell me where you ended up. The follow-on conversations are the part I’m here for.

