C2i's $15M Bet: Cut Data Center Power Waste 10% Before GPUs Melt the Grid
Data centers are on track to consume triple today's electricity by 2035, and we're about to find out if a Bengaluru startup can save us from an AI-powered energy apocalypse.
C2i Semiconductors just closed a $15 million Series A from Peak XV Partners with a bold promise: slash data center power losses by 10% using integrated grid-to-GPU power delivery. That translates to 100 kilowatts saved per megawatt consumed—and for hyperscalers burning through power like there's no tomorrow, this isn't just nice-to-have efficiency.
It's survival.
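Here's the back-of-the-envelope math behind that headline number, in Python. The facility size and electricity rate below are illustrative assumptions, not figures from C2i or Peak XV.

```python
# C2i's headline claim: a 10% cut in power losses, i.e. roughly 100 kW saved
# for every 1 MW a data center draws. Facility size and electricity rate are
# assumed for illustration.

facility_load_mw = 50          # assumed hyperscale facility load, in megawatts
savings_fraction = 0.10        # C2i's claimed reduction in power losses
price_per_kwh = 0.08           # assumed industrial electricity rate, USD/kWh
hours_per_year = 24 * 365

saved_kw = facility_load_mw * 1_000 * savings_fraction   # kW saved continuously
saved_kwh_per_year = saved_kw * hours_per_year            # energy saved per year
annual_savings_usd = saved_kwh_per_year * price_per_kwh

print(f"Continuous savings: {saved_kw:,.0f} kW")
print(f"Energy saved per year: {saved_kwh_per_year:,.0f} kWh")
print(f"Annual bill reduction: ${annual_savings_usd:,.0f}")
```

Roughly $3.5 million a year for a single 50 MW facility, on those assumptions. Multiply by a hyperscaler's fleet and the pitch writes itself.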
The Real Story: Power Is The New Bottleneck
Forget GPU shortages. Power has become the hard ceiling for AI expansion. Goldman Sachs projects data center power demand will surge 175% by 2030—equivalent to adding another top-10 power-consuming country to the grid.
But here's what most people miss: the problem isn't generating electricity. It's the 15% to 20% of energy wasted inside data centers as high-voltage power gets stepped down through stage after stage of conversion before reaching your precious H100s.
<> "What used to be 400 volts has already moved to 800 volts, and will likely go higher" - Preetam Tadeparthy, C2i CTO/>
C2i's approach is deceptively simple: stop treating power conversion like discrete Lego blocks. Instead of multiple conversion stages losing energy at each step, they're building a unified system spanning from data center bus to GPU.
Sounds obvious? Nobody's done it right yet.
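A minimal sketch of why those losses add up, and why consolidating stages helps: each conversion stage is individually efficient, but the losses compound multiplicatively. The stage counts and efficiencies below are illustrative assumptions, not C2i's design or measured data.

```python
# Why multi-stage power conversion can waste 15-20% of the energy a data
# center draws: individually efficient stages still compound their losses.
# All stage efficiencies here are assumed for illustration.

def end_to_end_efficiency(stage_efficiencies):
    """Multiply per-stage efficiencies to get grid-to-GPU delivery efficiency."""
    eff = 1.0
    for stage_eff in stage_efficiencies:
        eff *= stage_eff
    return eff

# Assumed discrete chain: UPS, bus converter, intermediate rail,
# point-of-load regulator feeding the GPU.
discrete_chain = [0.96, 0.97, 0.96, 0.93]

# Hypothetical consolidated path with fewer, co-designed conversion steps.
integrated_chain = [0.97, 0.95]

for name, chain in [("discrete", discrete_chain), ("integrated", integrated_chain)]:
    eff = end_to_end_efficiency(chain)
    print(f"{name:>10}: {eff:.1%} delivered, {1 - eff:.1%} lost as heat")
```

On those assumed numbers, the discrete chain delivers about 83% of the power it draws while the consolidated path delivers about 92%, which is the order of improvement C2i is claiming.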
April 2026: Make or Break Time
The company expects first silicon back from fabrication between April and June 2026. That's when we'll know if their integrated power-delivery platform actually works—or if this is another semiconductor pipe dream.
C2i's team brings serious credibility: former Texas Instruments executives, including Ram Anant, Vikram Gakhar, and Preetam Tadeparthy, lead a 65-engineer team in Bengaluru, with US and Taiwan operations launching soon.
Peak XV's Anandan gets the economics:
<> "If you can reduce energy costs by 10 to 30%, that's tens of billions of dollars"/>
For context, hyperscalers and data center operators have already requested performance data from C2i. When AWS and Microsoft start asking for your validation reports, you know you're solving a real problem.
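A rough, illustrative sanity check on that "tens of billions" framing. Every input below is an assumption made purely for the arithmetic; neither C2i nor Peak XV published these figures.

```python
# Rough sanity check on the "tens of billions" claim, with assumed inputs.

global_dc_twh_2030 = 1_000      # assumed global data center consumption by 2030, TWh
price_per_kwh = 0.08            # assumed blended electricity rate, USD/kWh

annual_power_bill = global_dc_twh_2030 * 1e9 * price_per_kwh  # USD per year

for cut in (0.10, 0.30):
    savings = annual_power_bill * cut
    print(f"{cut:.0%} reduction -> ~${savings / 1e9:,.0f}B saved per year")
```

On those assumptions the range works out to roughly $8B to $24B a year, which puts Anandan's framing in the right ballpark.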
Beyond The Electricity Bill
The 10% power reduction creates cascading benefits that matter more than the energy savings:
- Thermal relief: Less waste heat means cooling systems can actually keep up
- GPU density: Fixed power envelope suddenly fits more compute (quick math after this list)
- Hardware lifespan: Lower operating temperatures = longer-lasting expensive silicon
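Here's the quick math behind the GPU density point: with a fixed facility power budget, every point of conversion overhead you recover becomes compute you can actually rack. The power budget, per-GPU draw, and efficiency figures below are illustrative assumptions.

```python
# How many accelerators fit in a fixed power envelope as delivery efficiency
# improves. All figures are assumed for illustration.

facility_envelope_kw = 10_000   # assumed fixed facility power budget, kW
gpu_power_kw = 1.0              # assumed per-accelerator draw incl. server overhead, kW

def gpus_supported(envelope_kw, delivery_efficiency):
    """GPUs supported when a fraction of power is lost before reaching the silicon."""
    usable_kw = envelope_kw * delivery_efficiency
    return int(usable_kw // gpu_power_kw)

before = gpus_supported(facility_envelope_kw, 0.83)   # ~17% lost in conversion
after = gpus_supported(facility_envelope_kw, 0.92)    # ~8% lost in conversion

print(f"Before: {before:,} GPUs  After: {after:,} GPUs  (+{after - before:,})")
```

Same building, same utility feed, roughly 10% more accelerators online under these assumptions.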
MIT researchers have shown that power capping alone cuts energy consumption while lowering chip temperatures. C2i's hardware-level efficiency could amplify these gains significantly.
The Skeptical Take
C2i's claims remain completely unvalidated in production environments. Semiconductor startups are graveyards of ambitious promises that couldn't survive contact with real fabrication constraints and data center integration challenges.
Their success depends on three critical factors:
1. Fabrication execution - silicon working as designed
2. Integration simplicity - plug-and-play with existing infrastructure
3. Cost competitiveness - efficiency gains can't cost more than energy savings
Fail any of these, and Peak XV's $15 million becomes an expensive lesson in semiconductor reality.
But if C2i delivers? They're not just saving electricity—they're buying the AI industry more runway before we hit the grid's absolute limits. That's worth way more than $15 million.
Game on, June 2026.

