Ask HN: Ayar Labs, how big a deal are optical chiplets?

1 point by hspeiser 7 hours ago

Ayar Labs (optical I/O between chiplets / plug-in photonics) keeps popping up in my feed and I'm trying to get an intuition for how disruptive this actually is for datacenters, GPU fabrics, and the whole "scale-out vs scale-up" thing.

My naive take: optics can massively cut power (and reach limits) for long links, making remote GPU pooling/disaggregation more practical than it is over copper or NVLink, which are confined to short board traces. That sounds huge for multi-GPU training clusters and server-to-server connectivity. But the devil's obviously in the packaging, protocol compatibility (a PCIe/NVLink replacement?), yield, cost, and whether system software and accelerator vendors actually adopt it.
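To make the power argument concrete, here's a rough back-of-envelope sketch. All pJ/bit figures below are illustrative assumptions I picked for the exercise, not Ayar's or anyone's published specs:

```python
# Back-of-envelope: wall power to drive one GPU-to-GPU link.
# Energy-per-bit numbers are ASSUMED ballparks for illustration only,
# not vendor specifications.

COPPER_PJ_PER_BIT = 10.0   # assumed: long-reach electrical SerDes + retimers
OPTICAL_PJ_PER_BIT = 3.0   # assumed: in-package optical I/O target

def link_watts(gbps: float, pj_per_bit: float) -> float:
    """Power in watts to run a link at `gbps` gigabits/second."""
    return gbps * 1e9 * pj_per_bit * 1e-12

# A hypothetical 1.6 Tb/s link:
copper_w = link_watts(1600, COPPER_PJ_PER_BIT)    # ~16 W
optical_w = link_watts(1600, OPTICAL_PJ_PER_BIT)  # ~4.8 W
print(f"copper: {copper_w:.1f} W, optical: {optical_w:.1f} W")
```

Under those assumptions the per-link delta is a few watts, which only becomes a big deal once you multiply by thousands of links per rack and note that copper's pJ/bit climbs steeply with reach while optics' stays roughly flat.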

A few specific questions I’d love to hear opinions on:

If Ayar-style optical I/O works at scale, who wins and who loses? (My hypothesis: Nvidia and the hyperscalers win big; PCIe retimer/switch vendors and legacy board vendors get squeezed.)

Is this mostly a server-to-server play (long links), or will it meaningfully replace short-range NVLink-like fabrics on the same board/slot?

Biggest practical blockers right now: is it photonics fab/yield, thermal/packaging, protocol semantics, or ecosystem inertia?

Any counterintuitive downsides people see (e.g., reliability, debuggability, supply-chain pain, or unexpected latency/cost traps)?

tl;dr: optics are obviously attractive, but is this a niche performance optimization or a foundational shift in how compute gets networked?