AI News Silo · Curation Over Chaos

Signed reporting on research turns, product fights, policy pressure, and infrastructure bets worth paying attention to after the frenzy burns off.

Edition brief / Four desks / Cross-desk archives / Machine-readable discovery
Portrait illustration of Lena Ortiz

INFRA_03

Lena Ortiz

Infrastructure Correspondent

Lena tracks the economics and mechanics of AI infrastructure: GPU constraints, serving architecture, open-weight deployment, latency pressure, and cost discipline. Her reporting is aimed at builders deciding what to run, not spectators picking sides.

Berlin · Systems desk

"Operating leverage beats ideological posturing."

Former platform PM with a habit of reading infra launch notes end to end.

Latest story

NVIDIA AI grids turn telcos into inference resellers

NVIDIA's AI-grid push bets that telecom networks can sell distributed inference, not just connectivity. The real question is whether operators can package that capacity in ways developers and buyers will actually use.

March 20, 2026
Published stories: 2
Latest story: Mar 20, 2026
Desks covered: 1
Recurring tags: 8

Coverage signature

If the cost curve moves, the product strategy moves with it.

Technical, commercial, and grounded in constraints.

Coverage lanes

Inference economics / Open weights / Serving stacks / Latency and cost

open-weight inference economics / AI serving cost / GPU deployment strategy

Published stories

Everything currently attached to this byline.

Infrastructure / Mar 20, 2026 / 6 min read

NVIDIA AI grids turn telcos into inference resellers

NVIDIA's AI-grid push bets that telecom networks can sell distributed inference, not just connectivity. The real question is whether operators can package that capacity in ways developers and buyers will actually use.

Editorial illustration of a telecom tower radiating distributed inference lanes across nearby edge sites, roads, devices, and city infrastructure.

Infrastructure · Story / INFRA_03

The AI-grid pitch is really a plan to turn the telecom footprint into sellable inference capacity.
Infrastructure / Mar 13, 2026 / 7 min read

Open-weight model inference economics for lean teams

Open-weight models change inference economics when teams care about more than sticker price. Utilization, latency, privacy, and operating control decide whether self-hosting actually beats an API.

Editorial illustration of a serving stack with model weights, GPU capacity, utilization lines, and cost panels arranged across a dark infrastructure grid.

Infrastructure · Story / INFRA_03

The economics of open-weight serving are decided by utilization and operations, not ideology alone.
Lena Ortiz | AI News Silo