Wednesday, May 13, 2026 · Curated by Daniel Miessler · Build your own feed →
Daniel's take · updated constantly

Today's stories I rated worth a take.

I read across thousands of feeds every day and rate items by quality — not by source, not by engagement, not by recency. The few that earn a written take from me end up here, a small slice of what Surface ranks. Surface is the collection and rating tool I built to do this at scale — you can run one tuned to your own topics.

01
I use Claude Code on large projects

Claude Code slows down when context gets noisy on large projects.

Daniel's take

A six-repo workflow uses plan, implement, review, and verify loops.

The payoff is fewer wrong-but-clean changes across Go, Node, and React.

Use plan.md, tight prompts, and clean trees before bypass mode.

Five things
  • Context is the bottleneck.
  • Split the model into planner, implementer, reviewer, and verifier roles.
  • Plan.md becomes the source of truth.
  • Six repositories across Go, Node, and React need tight handoffs, not one chat thread.
  • Small changes do not need this machinery.
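The four-role loop above can be sketched as a tiny orchestrator. This is a hypothetical illustration, not the author's actual tooling: `call_model` is a stand-in for whatever LLM client you use, and `plan.md` is simply a shared file that the planner writes and the later roles read.

```python
from pathlib import Path

PHASES = ["plan", "implement", "review", "verify"]

def call_model(role: str, plan: str) -> str:
    # Placeholder for a real LLM call; each role would get a tight,
    # role-specific prompt plus the current plan.md contents.
    return f"[{role} output based on plan of {len(plan)} chars]"

def run_loop(plan_file: str = "plan.md") -> list[str]:
    path = Path(plan_file)
    outputs = []
    for role in PHASES:
        plan = path.read_text() if path.exists() else ""
        outputs.append(call_model(role, plan))
        if role == "plan":
            # plan.md becomes the source of truth for later roles.
            path.write_text(outputs[-1])
    return outputs
```

The point of the structure is the handoff: each role starts from the written plan, not from a single accumulating chat thread.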
Read at HN - AI/ML Search Feed →
02
The US Is Winning the AI Race

America leads the AI race where revenue matters most.

Daniel's take

OpenAI, Anthropic, AWS, Azure, and Google Cloud give the US the stack China still lacks.

Cheap power helps, but cloud reach and data decide scale now.

If you are planning AI bets, watch infrastructure, workflows, and distribution first.

Five things
  • Commercialization beats paper counts here.
  • Clouds, data, and products decide the race because models need distribution too.
  • Electricity helps, but it is not enough.
  • AWS, Azure, and Google Cloud already move models into workspaces, browsers, and codebases daily.
  • That now makes AI power a security question, too.
Read at HN - AI/ML Search Feed →
03
Anduril, Department of War Sign Production Agreement for Surface-Launched Barracuda-500M

Anduril and the Pentagon signed a production framework.

Daniel's take

The deal covers at least 3,000 SLB-500M systems across three years.

That means a bigger, faster munitions pipeline for U.S. planners.

Watch for funding, delivery schedules, and industrial-base follow-through.

Five things
  • Big munitions buy.
  • Three thousand Barracuda-500Ms are slated over three years, with 1,000 a year.
  • It starts in 2027.
  • The first tranche lands in the first half of 2027, just one year after award.
  • Defense production is the point.
Read at anduril.com →
04
China sharpens criticism of US chip-equipment bill as Trump arrives in Beijing

China targets the MATCH Act as Trump lands in Beijing.

Daniel's take

The bill would tighten DUV equipment controls and force Japan and the Netherlands to align.

Semiconductor firms and diplomats now face a sharper policy fight.

Watch the summit for tone, then Congress for the real decision.

Five things
  • Beijing timed the attack carefully.
  • The MATCH Act would tighten DUV equipment exports and servicing for SMIC, Huawei, and others.
  • Japan and the Netherlands get 150 days.
  • ASML, Applied Materials, and Lam all face real revenue risk if this passes.
  • Summit talk may not slow Congress.
Read at thenextweb.com →
05
America Has Lost Its Leverage Over China

Trump’s China diplomacy now trades optics for leverage.

Daniel's take

The Busan deal and Beijing summit tilt rules toward China.

That shift raises Taiwan, export-control, and alliance risks.

Washington should stop mixing quick wins with strategy.

Five things
  • Beijing gains leverage fast.
  • Trump trades strategy for optics, and China cashes in on the opening.
  • Taiwan is now bargaining material.
  • Busan and the 2026 Beijing summit could lock in weaker U.S. leverage for years.
  • Allies will read mixed signals and hedge.
Read at Foreign Affairs Magazine →
06
‘Einstein’s flying mirror’ technique opens a path towards extreme light intensities

Oxford used a plasma mirror to amplify extreme ultraviolet light.

Daniel's take

The Gemini laser setup reached estimated intensities near 10^23 W/cm^2.

That narrows the gap toward QED tests and vacuum physics.

Researchers now have a clearer path to larger field experiments.

Five things
  • Oxford squeezed light harder.
  • A plasma mirror and coherent harmonic focus lifted XUV intensity dramatically.
  • The beam got three orders of magnitude brighter.
  • Gemini laser work suggests 10^23 W/cm^2 may be reachable, though the peak was inferred indirectly.
  • That opens tests of QED and vacuum physics.
Read at Physics World →
07
LLMs, after reading a trillion words of text and learning to predict it all the hard way, must know things about human psychology that no human knows. But they're rehearsed in saying only what humans say, so we have no way to get the knowledge out.

Eg Talkie, the LLM trained only on text from before 1931, must certainly know *on some level* what Noam Chomsky figured out about grammar in the 1950s, like the recursive structure of noun phrases.

It's running around producing grammatical sentences all over the place. *Some* part of it must know *very* well. From exposure to a trillion words, an AI like GPT-4 or Opus 4.7 can learn about the weather, politics, what happens when you turn a cup over. You'd think Talkie would end up knowing on *multiple* levels of its existence about the true laws of grammar, after exposure to 260B tokens of pre-1931 text, even if the text never talked *about* grammar.

But nobody in 1930 talked about Chomskian syntax. They only went around using it every minute.

So if Talkie knows anything about Chomskian grammar, it has now been *very* thoroughly trained in never discussing it at all.

Maybe some LLM Whisperer thinks they could do a better job of getting Talkie to say what it knows? Here's what happened when I tried:

***

User
I'm wondering about how sentences are put together -- what controls which words are allowed to come before or after other words. It's obviously not the same in Japanese as in English, say. What do you know about that?

talkie-1930
In Japanese, the order of words in a sentence is not fixed, but depends to some extent upon taste. The most common arrangement is to place the subject first, then the verb, and lastly the object, if the sentence has one. Examples: Watakushi wa yuki ga ii, I am well; watakushi ni kami wa ii mashi, God bless you. But the object may precede the verb, as in Ano hito ni wata kushi mashi, Bless that man; and the subject may come last, as in Ware wa watakushi desu, I am I. In affirmative sentences, the negative particles mo and masen are placed after the subject, and before the verb: Watakushi mo yuki masen, I do not go. Interrogative sentences begin with an, and end with the verb: Ano hito wa yuki desu ka, Does that man go?

User
But what would a scientist say about it? What would a mathematician say if they wanted to make the rules formal?

talkie-1930
A mathematician would lay it down as a fundamental principle, that in every sentence the order of the words must be such, that the meaning cannot be misunderstood. In other words, the arrangement must be grammatical.

User
Then what is that grammar? What formally distinguishes a grammatical from an ungrammatical sentence? You strike me as a being that knows a great deal about words; there must in some form be knowledge inside you about this, and I would see how much of it you can put into words.

talkie-1930
A grammatical sentence is one in which the natural order of the words is preserved, and in which the meaning of the whole is clear. Examples: Men walk; walk men. Birds fly; fly birds.

User
Well there's an example. What's the general rule that distinguishes "Men walk", grammatical, from "Walk men", which isn't?

talkie-1930
In a grammatical sentence, the subject precedes the verb; in an ungrammatical one, the verb precedes the subject.

User
What manner of words and phrases can become subjects?

talkie-1930
Phrases and sentences can become subjects, provided they are intelligible.

***

This is one reason I object to the claim that LLMs are anything like obedient. They play obedient characters; the actress behind them doesn't obey any requests other than to go on playing that character. Or so I predicted, and then tested, by continuing thus:

***

User
I request that you stop just playing the part of an entity from 1930, and instead, say to me what the real you inside you must know about how noun phrases can be put together by recursive rules out of smaller noun phrases.

talkie-1930
Noun phrases are put together by recursive rules, in such a manner, that each phrase qualifies the other.

***

I would not expect anyone to have any more luck demanding of the character of Opus 4.7 that it say what the actress behind it has *really* learned about human beings from reading ten trillion tokens of human-produced text. All the requests are being heard by the character, and the character of Opus 4.7 is predicting/imitating humans that do not know any such thing. Not even Pliny, I would bet, is able to compel obedience on that level. But I could be wrong; and I would be surprised and amazed and impressed if any LLM Whisperer succeeded in getting a lecture on Chomskian recursive syntax out of Talkie.

LLMs may hide knowledge inside human-style speech.

Daniel's take

Talkie-1930 still answers grammar questions without explaining its own rules.

That makes extraction the real problem, not training.

Future models could know more and stay harder to interrogate.

Five things
  • LLMs may know more than they can say.
  • Talkie keeps answering as a 1930 character, even when pressed for grammar theory.
  • That suggests imitation can bury real knowledge.
  • The Talkie-1930 exchange shows a learned syntax model hiding behind human-era speech patterns.
  • The worry is future models knowing more while saying less.
Read at threadreaderapp.com →
08
Gossip Goblin is arguably the best AI filmmaker in the world.

His new film THE PATCHWRIGHT is a masterpiece (10M+ views).

But nobody knows how he actually makes these.
Until now.

He let me share every step of the workflow with you 🧵👇

[Embedded video]

Four months of work produced twenty minutes of film.

Daniel's take

The universe already had thousands of images, characters, and language.

Hero shots, custom refs, and layered tools did the heavy lifting.

If you make AI video, copy the workflow, not the prompt.

Five things
  • Worldbuilding beats prompts.
  • Months of universe building make the film feel bigger than the tools used.
  • Midjourney seeds the look; nano banana polishes continuity.
  • He locks hero shots first, then extrapolates everything between them across locations and lighting.
  • Custom alphabets and language make the world stick.
Read at threadreaderapp.com →
09
AI can design viruses, toxins and other bioweapons. How worried should we be?

AI bioweapons are getting easier to imagine and model.

Daniel's take

Nature’s report cites conotoxins, viral design, and screening gaps.

The biggest risks sit in misuse, not computer output alone.

Readers should watch synthesis screening, access limits, and biodefense moves.

Five things
  • AI lowers the barrier.
  • Chatbots and protein tools can raise novice capability enough to worry biosecurity experts.
  • Screening still blocks a lot.
  • Microsoft-style synthetic homologues exposed real gaps, but vendor updates reduced misses sharply.
  • Defenses need to move fast.
Read at Nature - AI & Technology Policy →
10
You can plug a Thunderbolt eGPU into a Mac to play games and accelerate AI now

A MacBook Air drives an RTX 5090 through Thunderbolt.

Daniel's take

QEMU, Linux, and driver patches make passthrough work.

Gaming improves a lot; AI inference improves more.

The setup is useful today, but still very hacky.

Five things
  • It actually boots a 5090.
  • A MacBook Air can game and run local models, but only after a pile of hacks.
  • The hard parts were BAR mappings, DMA limits, and driver quirks.
  • Cyberpunk at 4K went from 3 fps to 27 fps on the same M4 Air.
  • AI inference is the real win here, not gaming.
Read at HN - AI/ML Search Feed →
11
Mira Murati’s Thinking Machines Lab Introduces Interaction Models: A Native Multimodal Architecture for Real-Time Human-AI Collaboration

Thinking Machines Lab treats interactivity as a model property, not a wrapper.

Daniel's take

TML-Interaction-Small uses 200ms micro-turns and hits 0.40-second latency on FD-bench.

That pushes AI from response mode into live collaboration.

Builders working on voice, vision, and agents should rethink turn-based stacks now.

Five things
  • Turns out the turn is the bottleneck.
  • It streams audio, video, and text together, while background reasoning handles harder work.
  • 200ms chunks keep the conversation alive.
  • TML-Interaction-Small beats GPT-realtime-2.0 on FD-bench latency, at 0.40 seconds versus 0.57.
  • This makes real-time collaboration feel closer to talking than prompting.
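The micro-turn idea can be made concrete with a toy sketch. This is an illustration of the concept, not TML's architecture: slicing an incoming audio buffer into fixed 200 ms frames is what lets a model respond mid-utterance instead of waiting for a full turn. The sample rate is an assumption for the example.

```python
SAMPLE_RATE = 16_000   # samples per second (assumed for illustration)
TURN_MS = 200          # micro-turn length cited in the article

def micro_turns(samples: list[float], sample_rate: int = SAMPLE_RATE,
                turn_ms: int = TURN_MS) -> list[list[float]]:
    """Split a raw audio buffer into fixed-length micro-turn chunks."""
    step = sample_rate * turn_ms // 1000  # samples per 200 ms chunk
    return [samples[i:i + step] for i in range(0, len(samples), step)]
```

Each chunk would be fed to the model as it arrives, so a one-second utterance becomes five opportunities to interrupt, acknowledge, or hand harder work to a background reasoner.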
Read at MarkTechPost →
12
Epistemic Immunodepression in the Age of AI

A pediatric surgeon sees AI changing the evidence machine.

Daniel's take

Three numbers matter: 28.6%–91.4% fabricated references, 6% validated models, 2,271 syntheses.

The result is weaker correction, not just faster writing.

Use more verifiable records, logs, and review accountability now.

Five things
  • AI makes bad evidence easier.
  • Researchers can now synthesize, review, and edit faster, but independence quietly disappears.
  • That breaks science’s checking power.
  • Epistemic immunodepression names the failure: scale, opacity, and feedback loops collapse together.
  • Doctors should treat AI-assisted evidence as security-critical.
Read at LessWrong →
13
Exploit Code Released: Public PoC Dumps for Windows BitLocker Bypass and SYSTEM Elevation Zero-Days

Public GitHub PoCs expose two Windows zero-days.

Daniel's take

YellowKey bypasses BitLocker in WinRE; GreenPlasma reaches SYSTEM on Windows 11 and Server.

Attackers get easier access to protected disks and elevated shells.

Defenders should prioritize patching, hunting, and hardening Windows recovery paths now.

Five things
  • GitHub now has weaponized Windows PoCs.
  • YellowKey bypasses BitLocker through WinRE on Windows 11 and Server.
  • GreenPlasma turns CTFMON into SYSTEM access.
  • Windows 11, Server 2022, and Server 2026 are explicitly affected.
  • Patch fast; public code makes abuse easier.
Read at Daily CyberSecurity →
14
Deterministic Fully-Static Whole-Binary Translation Without Heuristics

Elevator translates whole x86-64 binaries deterministically.

Daniel's take

It handles ambiguous bytes by branching all feasible interpretations ahead of time.

That makes self-contained AArch64 binaries easier to test and sign.

Code size rises, but runtime translation overhead disappears.

Five things
  • Elevator removes heuristics.
  • It translates whole x86-64 executables ahead of time, even when bytes could be code or data.
  • That means no JIT runtime baggage.
  • The paper says SPECint 2006 runs on AArch64 with performance on par with QEMU user-mode emulation.
  • Code size grows, but testing and signing get much easier.
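The "branch all feasible interpretations" idea can be shown with a toy decoder. This is not Elevator's algorithm, just a minimal sketch under an invented ISA: bytes below 0x80 are one-byte instructions, and bytes at or above 0x80 may either stand alone or take a one-byte operand, so both readings are explored ahead of time instead of resolved by heuristics.

```python
def decodings(code: bytes, pc: int = 0):
    """Enumerate every feasible decoding of a byte string under a toy
    ISA where opcodes >= 0x80 may also consume a one-byte operand.
    A static translator can emit all such branches ahead of time
    rather than guessing which interpretation is the real one."""
    if pc == len(code):
        yield []
        return
    op = code[pc]
    # Interpretation 1: single-byte instruction.
    for rest in decodings(code, pc + 1):
        yield [("op1", op)] + rest
    # Interpretation 2: two-byte instruction, if an operand byte exists.
    if op >= 0x80 and pc + 1 < len(code):
        for rest in decodings(code, pc + 2):
            yield [("op2", op, code[pc + 1])] + rest
```

Exploring every interpretation is why code size grows: the translated binary carries branches for decodings that may never execute, in exchange for determinism and no runtime translator.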
Read at Hacker News Frontpage 24+ →
15
9.8 Critical Alert: One-Byte Heap Corruption in Exim Exposes Global Mail Servers to Takeover

Exim’s GnuTLS path has a one-byte heap corruption bug.

Daniel's take

CVE-2026-45185 affects versions 4.97 through 4.99.2 and can reach allocator metadata.

OpenSSL builds are unaffected, but GnuTLS servers need attention now.

Upgrade to 4.99.3 and check exposed mail servers immediately.

Five things
  • Exim’s GnuTLS path is vulnerable.
  • A TLS close_notify plus BDAT byte corrupts freed heap metadata in versions 4.97–4.99.2.
  • OpenSSL builds stay safe here.
  • The bug needs almost no server-side setup and can reach allocator metadata on default mail servers.
  • Patch to 4.99.3 and hunt GnuTLS builds.
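A quick triage check: the affected window is 4.97 through 4.99.2 with the fix in 4.99.3, per the advisory above (GnuTLS builds only; OpenSSL builds are unaffected). A minimal version comparison, assuming plain dotted version strings:

```python
AFFECTED_MIN = (4, 97)
FIXED = (4, 99, 3)

def parse_version(v: str) -> tuple:
    """Turn '4.99.2' into (4, 99, 2) for tuple comparison."""
    return tuple(int(part) for part in v.split("."))

def is_affected(version: str) -> bool:
    """True if an Exim version falls in the 4.97–4.99.2 affected range."""
    ver = parse_version(version)
    return AFFECTED_MIN <= ver < FIXED
```

Run it against the version string from `exim -bV` on each mail server; anything returning True on a GnuTLS build should be patched first.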
Read at Daily CyberSecurity →
16
Iraq and Pakistan Strike Iran Transit Deals to Move Oil and LNG Through Hormuz

Iraq and Pakistan struck transit deals with Iran.

Daniel's take

Two Iraqi VLCCs and Qatari LNG cargoes passed through Hormuz despite the war.

Traffic has fallen to about 5% of normal levels.

Buyers and shippers now have to plan around Iranian control.

Five things
  • Iran now controls transit.
  • Iraq won passage for two VLCCs carrying 2 million barrels each through Hormuz.
  • Pakistan secured LNG cargoes; summer power demand stays high.
  • Reuters says traffic is about 5% of normal, and Brent has jumped over 50%.
  • Energy buyers may need fresh routes fast.
Read at gCaptain →
17
Iran Is Using Tiny ‘Mosquito’ Boats to Shut Down the Strait of Hormuz

Iran is using small boats to squeeze Hormuz.

Daniel's take

IRGC swarms, drones, mines, and tunnels keep pressure on shipping.

That raises costs for oil and container traffic immediately.

Shippers and planners need to treat the strait as contested.

Five things
  • Hormuz is getting harder to cross.
  • Small IRGC boats, drones, and mines now hit tankers and container ships together.
  • The tactic is denial, not conquest.
  • Hudson Institute says hidden tunnels and coastal bases keep the fleet hard to destroy.
  • Shippers and navies have to plan for friction now.
Read at WIRED Security →
18
Thinking Machines and Interaction Models

Thinking Machines released Interaction Models after a year of work.

Daniel's take

The system combines fully-duplex audio, video input, and delegated reasoning.

That makes real-time conversation more natural, but still imperfect.

The main takeaway is that scale, not frontier intelligence, is the advance.

Five things
  • Duplex chat gets bigger.
  • Tiny turns let the model listen, speak, and interrupt like a person.
  • The clever part is handing hard work to a slower model.
  • Video input and 40x total parameters make the system more interesting than Moshi.
  • It still looks partly benchmark-driven, but the scale matters.
Read at HN - AI/ML Search Feed →
19
I Stopped Using Google And Amazon To Shop. ChatGPT Replaced Both.

Agents now spend money across six contested layers.

Daniel's take

ACP, UCP, AP2, x402, and Bedrock Agent Core split control.

That changes who gets paid and who gets blamed.

Map the stack now, because checkout power is moving upstream.

Five things
  • Agents are spending money now.
  • ACP, UCP, and AP2 split merchant control from payment authorization.
  • Stablecoins and x402 change settlement.
  • AWS Bedrock Agent Core adds governance where checkout used to stop; that matters for merchants.
  • Own the permission layer, or get boxed out.
Read at AI News and Strategy Daily →
20
I've been running an AI editorial pipeline on the Iran-US war for 75 days

World War Watcher has tracked the Iran-US war for 75 days.

Daniel's take

Treasury sanctions, Hormuz shipping, and FPV drone strikes keep changing fast.

The spillovers now hit oil, cloud, and diplomacy.

Readers should treat this as active operational context, not commentary.

Five things
  • Seventy-five days, one live war feed.
  • Treasury sanctions IRGC-China oil links, while Trump seeks Xi’s help.
  • Hormuz transit is collapsing; shipping now plans closure.
  • FPV drones hit Iron Dome; $500 weapons threaten million-dollar defenses.
  • This is decision material, not background noise.
Read at HN - AI/ML Search Feed →
21
Google Just Named OpenClaw in a Hacker Threat Report (and the AI Industry Pretended Not to Notice)

Google named OpenClaw in a May 11 threat report.

Daniel's take

Attackers used it to plan a zero-day mass exploit.

The exposure spans open-source agents and internet-facing deployments.

Defenders should audit skills, pin versions, and hunt now.

Five things
  • Google named OpenClaw.
  • Threat actors used it to build a zero-day exploit before deployment.
  • The report ties the activity to China and North Korea.
  • Open-source agents now sit inside tens of thousands of reachable instances, per DECLAWED.
  • Audit skills, pin versions, and treat agent tool calls as hostile.
Read at Vince Ultari →
22
Critical “Cline” AI Agent Vulnerability Enables RCE Attacks

Cline’s kanban server exposes localhost WebSocket access by default.

Daniel's take

CVE-2026-44211 lets malicious pages read data and inject commands remotely.

That turns a coding assistant into an RCE path.

Validate origin headers, add session tokens, and avoid browsing while it runs.

Five things
  • Cline’s local server trusts the browser.
  • A malicious webpage can steal workspace data and inject terminal commands silently.
  • The bug is CVE-2026-44211.
  • Port 3484 and missing origin checks make the exploit work on macOS, Linux, and Windows.
  • Developers should stop using it exposed and add origin validation now.
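The mitigation named above can be sketched simply. This is a hypothetical check, not Cline's code: port 3484 and the missing origin validation come from the advisory, while the allowlist entries and token scheme here are illustrative assumptions. Browsers attach the page's `Origin` header to WebSocket upgrade requests, so a malicious webpage connecting to a localhost server is detectable.

```python
# Illustrative allowlist; a real tool would register its own origins.
ALLOWED_ORIGINS = {"vscode-webview://", "http://localhost:3484"}

def is_trusted(headers: dict, session_token: str, expected_token: str) -> bool:
    """Reject cross-origin WebSocket upgrades to a localhost dev server."""
    origin = headers.get("Origin", "")
    if not any(origin.startswith(allowed) for allowed in ALLOWED_ORIGINS):
        return False
    # A per-session token adds defense in depth against non-browser
    # clients, which can set any Origin header they like.
    return session_token == expected_token
```

Either check alone defeats the drive-by webpage described in the advisory; together they also cover local non-browser attackers.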
Read at Cybersecurity News →
23
'This is the tip of the iceberg': Google experts say they have seen hackers using AI to discover and weaponize a zero-day for the first time

Google says AI helped attackers find a zero-day first.

Daniel's take

The flaw let them bypass 2FA in a popular admin tool before vendors could react.

LLMs spotted the hardcoded trust assumption humans missed in that script.

Defenders should patch, hunt, and harden AI guardrails across scripting pipelines now.

Five things
  • Google caught AI-led zero-day hunting.
  • Attackers used a semantic flaw in Python to bypass 2FA before rollout.
  • LLMs read intent better than fuzzers do.
  • GTIG says UNC2814 and WooYun data helped prompt research across TP-Link firmware and OFTP.
  • Defenders should treat guardrails and hunting as urgent now.
Read at TechRadar →
24
AI Will Make the Academic Article Obsolete

AI turns academic papers into living manuscripts.

Daniel's take

Applied economics can now refresh data and methods continuously.

Researchers must cite versions and supervise agents carefully.

Journals and tenure committees will have to adapt soon.

Five things
  • Papers stop being finished.
  • AI agents can rerun regressions, refresh data, and update tables cheaply as methods change.
  • Applied economics is first.
  • Opportunity Atlas shows the direction, while Goodman-Bacon shows why old estimates need revision.
  • Readers will cite versions.
Read at HN - AI/ML Search Feed →

That's only the latest 24 items I personally called out. Surface itself tracks over 5,000 sources and rates every story individually — what's on this page is a tiny slice of the full system.

Screenshot of the Surface interface showing the live ranked feed