Specialized generalist, eternal pain and the future of AI
A conversation with Kiran Mysore, Principal at UTEC - The University of Tokyo Edge Capital Partners
Kiran is an AI researcher turned VC at UTEC, a deep-tech VC firm in Japan with $1B+ in AUM. He focuses on AI’s real-world applications, with a penchant for Asian startups innovating for global markets. In this interview, he shares lessons on sourcing, founder feedback, enduring pain points, and how to time the AI adoption curve across bits, atoms, and cells.
How did you decide to become a VC in the first place?
How I became a VC is tied closely to my interest in the intersection of AI and startups. I see myself as a “specialized generalist.” I’m not someone who wants to focus on one thing for ten years like an entrepreneur, but I also don’t want to be a mercenary jumping from project to project.
I found a beautiful middle ground in deep tech VC, particularly working with strong technologists who also have a spark for business. These founders may not be fully formed entrepreneurs, but they’re exceptional within their domains. I struggled with this myself when I tried to start two startups, and I realized I could help others bridge that gap. I found my calling in helping technical founders excel at building successful businesses.
There’s also a personal alignment: I want to do work that doesn’t feel like work. People talk about 9-9-6, but it’s irrelevant if you truly love what you do. Even if I weren’t paid, I’d still be doing what I’m doing (this is why we’re doing this interview on a Saturday!). In the quadrant of value created versus value captured, startups let creators capture the value they create, and that’s a powerful economic structure I want to help build.
Any regrets?
Do I regret not being a researcher or technologist? Not really. I knew I wouldn’t be in the top 1% of AI researchers (there are many very smart researchers out there). But I believed I could be in the top 10% in two or three overlapping fields, and that led me to the world of startups.
It’s like hill climbing: early on, you take a random walk. Sometimes you’re on a hill that looks promising, but as you walk up, you realize it’s not the one you want to summit. So you climb down and pick another. It may not feel obvious, but it’s what you need to do to climb your mountain. That’s how I navigated from research and consulting to venture capital.
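Kiran’s metaphor mirrors the classic random-restart hill-climbing algorithm from optimization: greedily climb until you reach a local peak, and if that peak disappoints, restart from somewhere new and keep the best peak found. A minimal Python sketch (the landscape and step size here are made-up, purely illustrative):

```python
import random

def hill_climb(f, start, neighbors, max_steps=1000):
    """Greedy ascent: move to the best neighbor until stuck at a local peak."""
    x = start
    for _ in range(max_steps):
        best = max(neighbors(x), key=f)
        if f(best) <= f(x):  # no neighbor improves: this is a local optimum
            return x
        x = best
    return x

def random_restart_hill_climb(f, random_start, neighbors, restarts=10):
    """Climb several random hills and keep the highest peak found,
    i.e. the 'climb down and pick another hill' part of the metaphor."""
    peaks = [hill_climb(f, random_start(), neighbors) for _ in range(restarts)]
    return max(peaks, key=f)

# Toy 1-D landscape with multiple local peaks (illustrative only).
f = lambda x: -((x - 3) ** 2) * ((x + 2) ** 2) + 10 * x
neighbors = lambda x: [x - 0.1, x + 0.1]
random_start = lambda: random.uniform(-5.0, 5.0)

best = random_restart_hill_climb(f, random_start, neighbors, restarts=20)
print(f"best x ≈ {best:.1f}, f(x) ≈ {f(best):.1f}")
```

A single climb can get trapped on a mediocre hill; the restarts are what let you eventually find a mountain worth climbing.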
Could I have started an AI company? Perhaps. If I were starting now, I probably would. But in 2017, the ecosystem was different. Timing matters.
Any key lessons learned from being a VC?
When you’re new to VC, you have a lot of time on your hands. You can go deeper into engagements and give more tailored, honest feedback to founders. Saying no is hard, but you have the time to do it thoughtfully. As you gain experience, the demands of the portfolio grow, and time for sourcing and winning new investments shrinks.
There are really four core VC functions: sourcing (including picking and winning investments), value-add, risk control, and exit. As time goes on, the sourcing piece shrinks, and many firms begin to outsource parts of it to junior staff. But I don’t believe in that. Regardless of seniority, everyone should stay involved in every step of the VC functions, because venture is a full-stack game. The beginning (sourcing to winning), middle (value-add), and end (exit and risk control) all matter.
Senior VCs must stay involved in at least two of the three core sourcing activities: identifying, picking, or closing. If you don’t, you risk falling out of touch with what’s actually happening on the ground. You can’t learn by watching; you only learn by doing. The world is changing too fast to operate on outdated instincts.
I was reminded of this recently: we led a seed investment in Maieutic Semiconductors, a GenAI startup revolutionizing analog chip design. Years ago, I had passed on their previous startup but gave them detailed, respectful feedback. They came back to us, years later, in part because of that transparent feedback and our long-standing relationship. That was a big lesson for me. The time you spend saying no thoughtfully compounds over time. I now try to carve out time every week to do it properly.
What do you focus on in today’s world of AI hype?
We’re living through a wave similar to 2021. There’s a lot of short-term hype. But long-term, AI will become more embedded, not less. As investors, we need to focus on momentum: not just on velocity (market momentum) but also on mass (fundamental value). Momentum = mass × velocity. Without mass, the velocity doesn’t sustain.
A helpful mental model is “pain–technology fit” before product–market fit. Is there real customer pain? Is this the right technology to solve it? If so, even if velocity slows down, the startup will have substance.
Companies like Google are great examples: they started with a single algorithm (PageRank) and kept solving evolving problems (text, video, maps, etc). Early-stage startups don’t need to solve everything on day one, but they must evolve the pain they’re solving over time.
In deep tech and science-based ventures, we focus on what I call “eternal pain” problems: labor productivity, aging populations, antimicrobial resistance, and the revitalization of legacy industries. These are not one-year problems; they’re 10- to 20-year venture time-horizon problems.
What is your framework for AI investing?
I invest at the intersection of AI and three categories:
Bits: Pure software
Atoms: Physical world, robotics, semiconductors
Cells: Biology and life sciences
Each has different dynamics. In software (bits), evolving pain and fast iteration matter. In atoms and cells, the pain is enduring, and the risk profile inverts: technology risk is high while market risk is low. In life sciences, the market is all but guaranteed if the science works. You don’t need to convince people to buy a cancer drug.
Any examples?
One of my earliest investments, Tricog Health, focuses on AI-driven cardiac diagnosis. They started by using AI to diagnose heart attacks in the emerging markets of Asia and Africa, and are now expanding to a full-stack care platform covering heart failure and stroke in the USA and Japan.
Tricog has diagnosed over 30 million patients in 12,500 healthcare centres. That proprietary data is now powering generative models for cardiac care at home, like turning single-lead ECGs into full 12-lead outputs. The core problem hasn’t changed. The tools and technology to solve it have evolved.
In a field that’s evolving this quickly, what kinds of AI startups do you believe can be defensible?
I think full-stack AI companies are quite defensible. Let me illustrate with an example. One of my portfolio companies, Eureka Robotics, which operates in the Physical AI space, began by building AI-powered “brains” (models) for industrial robots. Over time, they moved full-stack by developing the “eyes” of the robot (a 3D camera) in-house, and partnered with robotic-arm manufacturers to provide the “body” as well. This full-stack evolution made Eureka’s products far stickier for clients and helped propel the company to market leadership in precision robotics, with over 30 million real-world tasks performed.
With the rapid progress in Physical AI, the next generation of AI companies will be able to go beyond full-stack: they can become fully vertically integrated from the ground up. The future “Eurekas” of the world can build their own lights-out factories, staffed entirely by robots powered by AI-native hardware and autonomous agents, allowing them to compete directly with incumbent manufacturers. We’re already seeing early parallels in the digital world with AI-native legal and accounting firms, which don’t just provide tools but directly deliver labor and outcomes.
How do you see the future of AI evolving?
Most of the major breakthroughs in recent years have come from advances in the pre-training stack, driven by autoregressive models like Transformers trained on vast amounts of internet data. But the next frontier is emerging in paradigms that help models learn from the real world through experience (reinforcement learning), simulation (sim-to-real), and vision (what humans rely on the most). Dr. Fei-Fei Li refers to this as “World Models.”
From a startup perspective, the center of gravity has shifted from pre-training to post-training, specifically fine-tuning and inference-time compute. I believe that model memory and in-context learning will be crucial for unlocking real-world enterprise use cases. I recently led a seed investment in a multimodal AI startup (still in stealth) here in Tokyo that is tackling exactly this challenge of memory and context for video data, and I’m incredibly excited about what they’re building.
What’s the role of humans in such AI-simulated and trained environments?
Should humans stay in the loop? Absolutely. It’s not that we’re smarter; rather, the question shouldn’t only be “can AI outperform humans?” It’s “do we want a world where AI makes final decisions on life-and-death matters?”
Where humans are still key is in multi-step reasoning and long-term planning. Augmented intelligence, humans plus machines, is where the near-term value lies, especially in startups.
What should we do then?
Open-source infrastructure and open science. We need to democratize experimentation, not just outcomes. That means compute, data, and infrastructure should be accessible globally. Otherwise, we’ll have an elite group experimenting while the rest of the world deals with the consequences.
China is pushing forward in open science. The rest of the world needs to match that with shared foundations, not just proprietary models. Transparency builds resilience.
If you could give one piece of advice to a younger version of yourself just entering the AI space, what would it be?
I’d start in a physical-world domain that’s still in the experimentation phase. Perhaps materials, spatial data, or neural computing. Get involved with one of the 5–6 global hotspots doing meaningful work.
Catch the wave early, during experimentation, not scaling. That way, by the time the technology reaches the harvesting stage, you’re already very well positioned.
Think in 5- to 10-year cycles, not 6-month hype windows.