
The NYAD AI Capital Thesis:
Why Now, Why This Team

We didn't start NYAD AI Capital because venture capital in AI seemed like a good business. We started it because we kept seeing the same gap: technical founders building infrastructure that the ecosystem genuinely needed, with no investors at the table who could have a real technical conversation about what they were building. The generalist funds were writing checks based on pitch narrative. The infrastructure founders were getting undervalued or passed over because the investors evaluating them didn't understand the technology.

That gap is the fund.

The Infrastructure Moment

When we closed our first fund in 2022, the conventional wisdom was that the money in AI was in applications — the SaaS products built on top of models. Infrastructure was seen as too capital-intensive, too dependent on the pace of hardware progress, and too close to the work being done by the hyperscalers for a small fund to find differentiated opportunities.

We disagreed for three reasons.

First, the application layer wasn't stable enough to produce durable businesses yet. Foundation models were changing faster than application-layer products could adapt to them. Building a product on the assumption that a specific model's capabilities would stay roughly constant was a risky bet when those capabilities were improving every few months. Infrastructure, by contrast, sits below the model layer — it serves whatever models emerge.

Second, the infrastructure gap was genuinely large and largely unaddressed by the frontier labs. The labs were focused on training and research. The work of making trained models reliable, observable, and economically deployable at scale was happening in startups, often without much capital behind it. That's the setup for significant value creation.

Third, we had the right team to evaluate technical differentiation. The NYAD partners collectively have backgrounds in ML research, AI safety, LLM infrastructure engineering, and model development. We can read the papers, run the evals, and have substantive conversations about architecture choices. That's a sourcing and diligence advantage in a market where most deals are evaluated by generalists.

What We Back

The six areas we invest in — LLM inference infrastructure, AI agents, MLOps and model lifecycle, AI safety engineering, foundation model tooling, and vertical AI — aren't random. They represent a map of the critical path from training a capable model to deploying it reliably and economically at production scale.

Every one of those areas is infrastructure by our definition: it's something other AI companies depend on. We're not in the consumer AI application business. We're backing the plumbing.

That doesn't mean we ignore applications entirely. Several of our portfolio companies are building vertical AI products that are application-layer from one angle and infrastructure from another — they're building a specialized model and serving stack that other workflows in their domain depend on. The test we apply is: if this company disappears, what breaks? If the answer is "a lot of things, for a lot of organizations," that's infrastructure.

How We Work With Founders

We make fast decisions. At the Seed stage, we can get to a term sheet in two weeks from a first meeting if the diligence is clean. We think of speed as a form of respect — founders evaluating capital should know where they stand.

We don't do co-investment theater. We take meaningful positions and we take board seats when we lead. Our value to portfolio companies comes from the technical network we've built — access to ML researchers, infrastructure engineers, and operators at the leading AI organizations who can help founders think through hard problems and open doors that matter.

We also try to be direct about what we don't know. None of us have managed a sales organization at scale. We have limited operating experience in regulated industries. For founders who need that kind of help, we'll tell them that and help them find investors or advisors who can provide it. Pretending to have expertise we don't have doesn't help anyone.

Why Now Remains True

Three years in, the infrastructure thesis has held. The companies that got in early on LLM serving, MLOps tooling, and AI safety engineering are doing well. The market for that infrastructure is larger than we projected and growing faster than even optimistic estimates suggested.

The specific investment opportunities have shifted — some categories that were wide open in 2022 are now more crowded. But new gaps keep opening as the technology matures. Each time the foundation model capability frontier moves, it creates a new set of infrastructure requirements that nobody has yet built for. That's been true for three years. We expect it to remain true for at least five more.

If you're building in one of our focus areas, we'd like to hear from you early. Not after you've closed a seed round elsewhere, but at the stage where you're deciding who you want to build this with.

Building AI infrastructure at the earliest stage? This is exactly when we like to talk.