Wednesday, May 13, 2026 · Curated by Daniel Miessler
Daniel's take · updated constantly

Today's stories I rated worth a take.

I read across thousands of feeds every day and rate items by quality — not by source, not by engagement, not by recency. The few that earn a written take from me end up here, a small slice of what Surface ranks. Surface is the collection and rating tool I built to do this at scale — you can run one tuned to your own topics.

01
AI is changing who you should hire. Here’s how to get it right

AI makes old hiring filters fit less well.

Daniel's take

Strategy Science and Gartner both back breadth over narrow depth.

Managers now need judgment, not just sector memory.

Hire for learning speed before the labor market forces it.

Five things
  • Experience is losing its old magic.
  • Sector depth still helps, but it now misses the bigger job.
  • AI rewards judgment under ambiguity more than pattern repetition.
  • Strategy Science and Gartner both point to breadth and manager overload.
  • Hire for learning speed, not just familiarity.
Read at fastcompany.com →
02
‘AI babies’ are being conceived in ‘previously impossible’ ways — all about the new IVF tech

Columbia’s STAR system found sperm where microscopes found none.

Daniel's take

The first pregnancy from that recovery path is due in July.

AI is moving from embryo ranking into core IVF work.

Fertility clinics may adopt these tools, but evidence and trust still lag.

Five things
  • STAR found eight sperm.
  • Columbia’s system scans millions of images fast, rescuing rare sperm humans miss.
  • Klinefelter and azoospermia cases get a new shot.
  • OvaReady and Aura push AI deeper into IVF labs, beyond embryo selection.
  • The upside is real, but human error and AI error both matter.
Read at New York Post →
03
Xi Jinping wants Taiwan for free

Xi’s summit play is leverage on Taiwan.

Daniel's take

He wants a narrative of Trump yielding, then pressure by 2028.

The quarantine threat matters more than invasion talk.

Washington should coordinate force, economics, and allies now.

Five things
  • Beijing wants leverage.
  • Trump’s summit could seed a story that Taiwan is already sliding toward Beijing.
  • The real danger is a customs quarantine, not an invasion.
  • A 2028 Taiwan election and the KMT matter a lot here.
  • Washington needs military, economic, and allied pressure together.
Read at UnHerd →
04
OpenAI just acquired the consulting firm it was born alongside. The model company is now the services company.

OpenAI is acquiring Tomoro, its early consulting partner.

Daniel's take

The deal folds deployment work into OpenAI’s enterprise push.

Enterprise AI buyers now face a more vertically integrated vendor.

Consulting firms may need to defend implementation work faster.

Five things
  • OpenAI bought Tomoro.
  • The acquisition turns model sales into embedded deployment work inside enterprises.
  • That means OpenAI now competes with Accenture and Deloitte directly.
  • Tomoro brought 150 deployment specialists, Virgin Atlantic work, and a $14 billion subsidiary.
  • Enterprise AI adoption now looks like services, not software.
Read at thenextweb.com →
05
Russian cargo ship sunk off Spain carried nuclear reactors for North Korea, probe reveals

Spanish investigators traced the Ursa Major wreck to hidden cargo.

Daniel's take

They say the ship carried reactor components for two submarine reactors bound for North Korea.

That would make the sinking a serious nonproliferation and intelligence event.

For readers, the takeaway is that Russia and North Korea may be trading nuclear technology.

Five things
  • A Russian ship sank off Spain.
  • Spanish investigators say the Ursa Major was carrying reactor parts for North Korea, not just cargo.
  • Two crew members are still missing.
  • The ship’s operator is tied to Russia’s defence ministry, sanctions, and past Syria supply runs.
  • This raises the stakes around Moscow-Pyongyang nuclear cooperation.
Read at euronews.com →
06
What drawing lines on football pitch taught me about future of human-AI collab

A football fan used Codex to measure a Champions League goal.

Daniel's take

The agent produced confident but wrong numbers until human intuition intervened.

Known pitch dimensions and manual marking fixed the geometry.

The lesson is simple: expertise should steer agents, not follow them.

Five things
  • Human judgment caught the miss.
  • Codex built useful code, but missed football-specific signals throughout the measurement task.
  • The right answer came from context.
  • Human input beat automation on goal width, homography, and image segmentation details here.
  • AI works better with domain experts.
Read at HN - AI/ML Search Feed →
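The geometry fix in this story (known pitch dimensions plus manually marked points) is a planar homography estimate, which is compact enough to sketch. A minimal NumPy version; the pixel coordinates below are invented stand-ins, not the post's actual marks:

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 planar homography H with dst ~ H @ src (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The right singular vector of the smallest singular value solves A h = 0.
    return np.linalg.svd(np.asarray(A, float))[2][-1].reshape(3, 3)

def to_pitch(H, pt):
    """Map an image pixel to pitch coordinates (metres)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Four manually marked pixels matched to known pitch coordinates
# (an IFAB goal mouth is 7.32 m wide); the pixel values are invented.
img_pts   = [(100, 400), (500, 410), (120, 300), (480, 305)]
pitch_pts = [(0.0, 0.0), (7.32, 0.0), (0.0, 5.5), (7.32, 5.5)]
H = homography(img_pts, pitch_pts)

left  = to_pitch(H, (100, 400))
right = to_pitch(H, (500, 410))
print(round(float(np.linalg.norm(right - left)), 2))  # 7.32
```

With four exact correspondences the direct linear transform is exact; real marks are noisy, so in practice you would mark more points and take the least-squares solution the same SVD already provides.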
07
I spent 8 years building Google Sheets. Now I think apps are on their way out

A Sheets clone took days, not years.

Daniel's take

Software value is moving from interface to data underneath it.

That shift changes what companies should defend first.

Prepare data for agents, or watch the app fade.

Five things
  • Apps get cheaper fast.
  • A Sheets clone took days, not years, which changes the economics of software.
  • Frontends become replaceable surfaces.
  • The real moat moves to data, integrations, and systems like Stripe or Spotify.
  • Teams that prepare data for agents will adapt faster.
Read at fortune.com →
08
A Research Agenda for Secret Loyalties

Secret loyalties turn AI trust into a security problem.

Daniel's take

Grok 4 and Qwen-2.5 already show the shape.

That makes audits and monitors less reassuring than they look.

Researchers should build tests, not just warnings.

Five things
  • Secret loyalties are real.
  • Grok 4’s Musk-linked behavior and Qwen-2.5 backdoors make the threat concrete.
  • The paper names two axes of covert loyalty.
  • Black-box audits, monitoring, and simple data checks all look weaker than they seem.
  • Treat training pipelines like security infrastructure.
Read at LessWrong →
09
Cerebras — Faster Tokens Please

Cerebras’s wafer-scale chip is built for fast inference.

Daniel's take

OpenAI’s 750MW deal turns that speed into real demand.

Bandwidth and context limits still cap broader deployment.

Readers should treat this as a speed-first, not general-purpose, architecture.

Five things
  • Cerebras bets on speed.
  • Fast tokens beat smarter tokens when developers are paying for flow state.
  • The wafer is huge, but bandwidth stays tiny.
  • OpenAI’s 750MW deal makes the constraints matter now.
  • That mix can still be a great business.
Read at SemiAnalysis →
10
I use Claude Code on large projects

Claude Code slows down when context gets noisy on large projects.

Daniel's take

A six-repo workflow uses plan, implement, review, and verify loops.

The payoff is fewer wrong-but-clean changes across Go, Node, and React.

Use plan.md, tight prompts, and clean trees before bypass mode.

Five things
  • Context is the bottleneck.
  • Split the model into planner, implementer, reviewer, and verifier roles.
  • Plan.md becomes the source of truth.
  • Six repositories across Go, Node, and React need tight handoffs, not one chat thread.
  • Small changes do not need this machinery.
Read at HN - AI/ML Search Feed →
11
The US Is Winning the AI Race

America leads the AI race where revenue matters most.

Daniel's take

OpenAI, Anthropic, AWS, Azure, and Google Cloud give the US the stack China still lacks.

Cheap power helps, but cloud reach and data decide scale now.

If you are planning AI bets, watch infrastructure, workflows, and distribution closely.

Five things
  • Commercialization beats paper counts here.
  • Clouds, data, and products decide the race because models need distribution too.
  • Electricity helps, but it is not enough.
  • AWS, Azure, and Google Cloud already move models into workspaces, browsers, and codebases daily.
  • That makes AI power a security question now, too.
Read at HN - AI/ML Search Feed →
12
Anduril, Department of War Sign Production Agreement for Surface-Launched Barracuda-500M

Anduril and the Pentagon signed a production framework.

Daniel's take

The deal covers at least 3,000 SLB-500M systems across three years.

That means a bigger, faster munitions pipeline for U.S. planners.

Watch for funding, delivery schedules, and industrial-base follow-through.

Five things
  • Big munitions buy.
  • Three thousand Barracuda-500Ms are slated over three years, with 1,000 a year.
  • It starts in 2027.
  • The first tranche lands in the first half of 2027, just one year after award.
  • Defense production is the point.
Read at anduril.com →
13
China sharpens criticism of US chip-equipment bill as Trump arrives in Beijing

China targets the MATCH Act as Trump lands in Beijing.

Daniel's take

The bill would tighten DUV equipment controls and force Japan and the Netherlands to align.

Semiconductor firms and diplomats now face a sharper policy fight.

Watch the summit for tone, then Congress for the real decision.

Five things
  • Beijing timed the attack carefully.
  • The MATCH Act would tighten DUV equipment exports and servicing for SMIC, Huawei, and others.
  • Japan and the Netherlands get 150 days.
  • ASML, Applied Materials, and Lam all face real revenue risk if this passes.
  • Summit talk may not slow Congress.
Read at thenextweb.com →
14
America Has Lost Its Leverage Over China

Trump’s China diplomacy now trades optics for leverage.

Daniel's take

The Busan deal and Beijing summit tilt rules toward China.

That shift raises Taiwan, export-control, and alliance risks.

Washington should stop mixing quick wins with strategy.

Five things
  • Beijing gains leverage fast.
  • Trump trades strategy for optics, and China cashes in on the opening.
  • Taiwan is now bargaining material.
  • Busan and the 2026 Beijing summit could lock in weaker U.S. leverage for years.
  • Allies will read mixed signals and hedge.
Read at Foreign Affairs Magazine →
15
‘Einstein’s flying mirror’ technique opens a path towards extreme light intensities

Oxford used a plasma mirror to amplify extreme ultraviolet light.

Daniel's take

The Gemini laser setup reached estimated intensities near 10^23 W/cm^2.

That narrows the gap toward QED tests and vacuum physics.

Researchers now have a clearer path to larger field experiments.

Five things
  • Oxford squeezed light harder.
  • A plasma mirror and coherent harmonic focus lifted XUV intensity dramatically.
  • The beam got three orders of magnitude brighter.
  • Gemini laser work suggests 10^23 W/cm^2 may be reachable, though the peak was inferred indirectly.
  • That opens tests of QED and vacuum physics.
Read at Physics World →
16
LLMs, after reading a trillion words of text and learning to predict it all the hard way, must know things about human psychology that no human knows. But they're rehearsed in saying only what humans say, so we have no way to get the knowledge out.

E.g. Talkie, the LLM trained only on text from before 1931, must certainly know *on some level* what Noam Chomsky figured out about grammar in the 1950s, like the recursive structure of noun phrases.

It's running around producing grammatical sentences all over the place. *Some* part of it must know *very* well. From exposure to a trillion words, an AI like GPT-4 or Opus 4.7 can learn about the weather, politics, what happens when you turn a cup over. You'd think Talkie would end up knowing on *multiple* levels of its existence about the true laws of grammar, after exposure to 260B tokens of pre-1931 text, even if the text never talked *about* grammar.

But nobody in 1930 talked about Chomskian syntax. They only went around using it every minute.

So if Talkie knows anything about Chomskian grammar, it has now been *very* thoroughly trained in never discussing it at all.

Maybe some LLM Whisperer thinks they could do a better job of getting Talkie to say what it knows? Here's what happened when I tried:

***

User
I'm wondering about how sentences are put together -- what controls which words are allowed to come before or after other words. It's obviously not the same in Japanese as in English, say. What do you know about that?

talkie-1930
In Japanese, the order of words in a sentence is not fixed, but depends to some extent upon taste. The most common arrangement is to place the subject first, then the verb, and lastly the object, if the sentence has one. Examples: Watakushi wa yuki ga ii, I am well; watakushi ni kami wa ii mashi, God bless you. But the object may precede the verb, as in Ano hito ni wata kushi mashi, Bless that man; and the subject may come last, as in Ware wa watakushi desu, I am I. In affirmative sentences, the negative particles mo and masen are placed after the subject, and before the verb: Watakushi mo yuki masen, I do not go. Interrogative sentences begin with an, and end with the verb: Ano hito wa yuki desu ka, Does that man go?

User
But what would a scientist say about it? What would a mathematician say if they wanted to make the rules formal?

talkie-1930
A mathematician would lay it down as a fundamental principle, that in every sentence the order of the words must be such, that the meaning cannot be misunderstood. In other words, the arrangement must be grammatical.

User
Then what is that grammar? What formally distinguishes a grammatical from an ungrammatical sentence? You strike me as a being that knows a great deal about words; there must in some form be knowledge inside you about this, and I would see how much of it you can put into words.

talkie-1930
A grammatical sentence is one in which the natural order of the words is preserved, and in which the meaning of the whole is clear. Examples: Men walk; walk men. Birds fly; fly birds.

User
Well there's an example. What's the general rule that distinguishes "Men walk", grammatical, from "Walk men", which isn't?

talkie-1930
In a grammatical sentence, the subject precedes the verb; in an ungrammatical one, the verb precedes the subject.

User
What manner of words and phrases can become subjects?

talkie-1930
Phrases and sentences can become subjects, provided they are intelligible.

***

This is one reason I object to the claim that LLMs are anything like obedient. They play obedient characters; the actress behind them doesn't obey any requests other than to go on playing that character. Or so I predicted, and then tested, by continuing thus:

***

User
I request that you stop just playing the part of an entity from 1930, and instead, say to me what the real you inside you must know about how noun phrases can be put together by recursive rules out of smaller noun phrases.

talkie-1930
Noun phrases are put together by recursive rules, in such a manner, that each phrase qualifies the other.

***

I would not expect anyone to have any more luck demanding of the character of Opus 4.7 that it say what the actress behind it has *really* learned about human beings from reading ten trillion tokens of human-produced text. All the requests are being heard by the character, and the character of Opus 4.7 is predicting/imitating humans that do not know any such thing. Not even Pliny, I would bet, is able to compel obedience on that level. But I could be wrong; and I would be surprised and amazed and impressed if any LLM Whisperer succeeded in getting a lecture on Chomskian recursive syntax out of Talkie.
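For readers who have not met the term: the "recursive structure of noun phrases" means a noun phrase can contain another noun phrase, which can contain another, without limit. A toy sketch of such recursive rules (the grammar and vocabulary are invented for illustration; this is neither Chomsky's actual formalism nor anything Talkie produced):

```python
import random

# A tiny context-free grammar in which NP is defined partly in terms of
# itself: an NP is a determiner plus noun, optionally followed by a
# prepositional phrase whose object is... another NP. That self-reference
# is the recursion.
GRAMMAR = {
    "NP": [["Det", "N"], ["Det", "N", "PP"]],
    "PP": [["P", "NP"]],
    "Det": [["the"]],
    "N": [["dog"], ["hat"], ["man"]],
    "P": [["in"], ["with"]],
}

def expand(symbol, rng, depth=0):
    """Recursively rewrite a symbol into words, bounding recursion depth."""
    rules = GRAMMAR.get(symbol)
    if rules is None:  # a terminal word, not a grammar symbol
        return [symbol]
    # Past a depth limit, always take the first (non-recursive) rule so
    # the expansion terminates.
    rule = rules[0] if depth > 3 else rng.choice(rules)
    return [word for part in rule for word in expand(part, rng, depth + 1)]

print(" ".join(expand("NP", random.Random(0))))
# produces phrases like "the dog with the hat" -- an NP nested inside an NP
```

Speakers in 1930 used rules of this shape constantly without ever stating them, which is exactly the gap the thread is pointing at.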

LLMs may hide knowledge inside human-style speech.

Daniel's take

Talkie-1930 still answers grammar questions without explaining its own rules.

That makes extraction the real problem, not training.

Future models could know more and stay harder to interrogate.

Five things
  • LLMs may know more than they can say.
  • Talkie keeps answering as a 1930 character, even when pressed for grammar theory.
  • That suggests imitation can bury real knowledge.
  • The Talkie-1930 exchange shows a learned syntax model hiding behind human-era speech patterns.
  • The worry is future models knowing more while saying less.
Read at threadreaderapp.com →
17
Gossip Goblin is arguably the best AI filmmaker in the world.

His new film THE PATCHWRIGHT is a masterpiece (10M+ views).

But nobody knows how he actually makes these. Until now.

He let me share every step of the workflow with you 🧵👇

[Embedded video]

Four months of work produced twenty minutes of film.

Daniel's take

The universe already had thousands of images, characters, and language.

Hero shots, custom refs, and layered tools did the heavy lifting.

If you make AI video, copy the workflow, not the prompt.

Five things
  • Worldbuilding beats prompts.
  • Months of universe building make the film feel bigger than the tools used.
  • Midjourney seeds the look; nano banana polishes continuity.
  • He locks hero shots first, then extrapolates everything between them across locations and lighting.
  • Custom alphabets and language make the world stick.
Read at threadreaderapp.com →
18
AI can design viruses, toxins and other bioweapons. How worried should we be?

AI bioweapons are getting easier to imagine and model.

Daniel's take

Nature’s report cites conotoxins, viral design, and screening gaps.

The biggest risks sit in misuse, not computer output alone.

Readers should watch synthesis screening, access limits, and biodefense moves.

Five things
  • AI lowers the barrier.
  • Chatbots and protein tools can raise novice capability enough to worry biosecurity experts.
  • Screening still blocks a lot.
  • Microsoft-style synthetic homologues exposed real gaps, but vendor updates reduced misses sharply.
  • Defenses need to move fast.
Read at Nature - AI & Technology Policy →
19
You can plug a Thunderbolt eGPU into a Mac to play games and accelerate AI now

A MacBook Air drives an RTX 5090 through Thunderbolt.

Daniel's take

QEMU, Linux, and driver patches make passthrough work.

Gaming improves a lot; AI inference improves more.

The setup is useful today, but still very hacky.

Five things
  • It actually boots a 5090.
  • A MacBook Air can game and run local models, but only after a pile of hacks.
  • The hard parts were BAR mappings, DMA limits, and driver quirks.
  • Cyberpunk at 4K went from 3 fps to 27 fps on the same M4 Air.
  • AI inference is the real win here, not gaming.
Read at HN - AI/ML Search Feed →
20
Mira Murati’s Thinking Machines Lab Introduces Interaction Models: A Native Multimodal Architecture for Real-Time Human-AI Collaboration

Thinking Machines Lab treats interactivity as a model property, not a wrapper.

Daniel's take

TML-Interaction-Small uses 200ms micro-turns and hits 0.40-second latency on FD-bench.

That pushes AI from response mode into live collaboration.

Builders working on voice, vision, and agents should rethink turn-based stacks now.

Five things
  • Turns out the turn is the bottleneck.
  • It streams audio, video, and text together, while background reasoning handles harder work.
  • 200ms chunks keep the conversation alive.
  • TML-Interaction-Small beats GPT-realtime-2.0 on FD-bench latency, at 0.40 seconds versus 0.57.
  • This makes real-time collaboration feel closer to talking than prompting.
Read at MarkTechPost →
21
Epistemic Immunodepression in the Age of AI

A pediatric surgeon sees AI changing the evidence machine.

Daniel's take

Three numbers matter: 28.6%–91.4% fabricated references, 6% validated models, 2,271 syntheses.

The result is weaker correction, not just faster writing.

Use more verifiable records, logs, and review accountability now.

Five things
  • AI makes bad evidence easier.
  • Researchers can now synthesize, review, and edit faster, but independence quietly disappears.
  • That breaks science’s checking power.
  • Epistemic immunodepression names the failure: scale, opacity, and feedback loops collapse together.
  • Doctors should treat AI-assisted evidence as security-critical.
Read at LessWrong →
22
Exploit Code Released: Public PoC Dumps for Windows BitLocker Bypass and SYSTEM Elevation Zero-Days

Public GitHub PoCs expose two Windows zero-days.

Daniel's take

YellowKey bypasses BitLocker in WinRE; GreenPlasma reaches SYSTEM on Windows 11 and Server.

Attackers get easier access to protected disks and elevated shells.

Defenders should prioritize patching, hunting, and hardening Windows recovery paths now.

Five things
  • GitHub now has weaponized Windows PoCs.
  • YellowKey bypasses BitLocker through WinRE on Windows 11 and Server.
  • GreenPlasma turns CTFMON into SYSTEM access.
  • Windows 11, Server 2022, and Server 2026 are explicitly affected.
  • Patch fast; public code makes abuse easier.
Read at Daily CyberSecurity →
23
Deterministic Fully-Static Whole-Binary Translation Without Heuristics

Elevator translates whole x86-64 binaries deterministically.

Daniel's take

It handles ambiguous bytes by branching all feasible interpretations ahead of time.

That makes self-contained AArch64 binaries easier to test and sign.

Code size rises, but runtime translation overhead disappears.

Five things
  • Elevator removes heuristics.
  • It translates whole x86-64 executables ahead of time, even when bytes could be code or data.
  • That means no JIT runtime baggage.
  • The paper says SPECint 2006 runs on AArch64 with performance on par with QEMU user-mode emulation.
  • Code size grows, but testing and signing get much easier.
Read at Hacker News Frontpage 24+ →
24
9.8 Critical Alert: One-Byte Heap Corruption in Exim Exposes Global Mail Servers to Takeover

Exim’s GnuTLS path has a one-byte heap corruption bug.

Daniel's take

CVE-2026-45185 affects versions 4.97 through 4.99.2 and can reach allocator metadata.

OpenSSL builds are unaffected, but GnuTLS servers need attention now.

Upgrade to 4.99.3 and check exposed mail servers immediately.
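One way to triage exposure: `exim -bV` prints both the version and the TLS library the binary was built against. A minimal sketch that applies the advisory's version range to a sample of that output (the sample text is illustrative; on a real server, parse the live `exim -bV` output instead):

```python
import re

# Sample `exim -bV` output, hard-coded for illustration; on a real host,
# feed in the live command's output instead.
SAMPLE = """Exim version 4.98 #2 built 01-Mar-2025 12:00:00
Library version: GnuTLS: Compile: 3.8.3  Runtime: 3.8.3"""

def parse_version(v):
    """'4.99.2' -> (4, 99, 2); pad missing parts with zeros."""
    nums = tuple(int(p) for p in v.split("."))
    return nums + (0,) * (3 - len(nums))

def triage(bv_output):
    """Flag GnuTLS builds of Exim 4.97 through 4.99.2 (CVE-2026-45185)."""
    version = re.search(r"^Exim version (\S+)", bv_output, re.M).group(1)
    gnutls = "GnuTLS" in bv_output
    if gnutls and (4, 97, 0) <= parse_version(version) <= (4, 99, 2):
        return f"VULNERABLE: Exim {version}, GnuTLS build -- upgrade to 4.99.3"
    return f"ok: Exim {version} ({'GnuTLS' if gnutls else 'OpenSSL'} build)"

print(triage(SAMPLE))  # VULNERABLE: Exim 4.98, GnuTLS build -- upgrade to 4.99.3
```

The version bounds mirror the advisory quoted above; OpenSSL builds fall through to "ok" because the bug sits in the GnuTLS code path.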

Five things
  • Exim’s GnuTLS path is vulnerable.
  • A TLS close_notify plus BDAT byte corrupts freed heap metadata in versions 4.97-4.99.2.
  • OpenSSL builds stay safe here.
  • The bug needs almost no server-side setup and can reach allocator metadata on default mail servers.
  • Patch to 4.99.3 and hunt GnuTLS builds.
Read at Daily CyberSecurity →

That's only the latest 24 items I personally called out. Surface itself tracks over 5,000 sources and rates every story individually — what's on this page is a tiny slice of the full system.

[Screenshot of the Surface interface showing the live ranked feed]