Impact Supporters
Podcast Description
Weekly podcast and newsletter with deep-dives, reflections, research, and interviews on key topics for impact VCs impactvc.substack.com
Podcast Insights
Content Themes
The content addresses critical topics in the impact VC space, with themes such as balancing financial returns with social impact, innovative VC model reinventions, and healthtech's role in impact investing. Episodes feature interviews that explore unique methodologies for impact measurement and the concept of aligning stakeholder interests, illustrated by insights from guests like Eric Gossart and Marie Ekeland.

Greetings to 3k+ Impact Supporters! 🌍 This is Jonas writing 👋 In this episode, I sit down with Arnau Tibau Puig, co-founder and CTO of TetraxAI, and one of my co-workers at Footprint, Will Nunn, to explore how AI is actually being deployed inside infrastructure and energy systems to accelerate the green transition, what defensibility really looks like when everyone has access to frontier models, and why reliability might end up mattering more than raw intelligence 🤔
AI has moved from novelty to necessity in record time. Valuations are soaring. Data centers are multiplying. Every second pitch deck claims to be AI-enabled.
But beneath the excitement sits a quieter and more important question: Where does intelligence actually create real economic value, and where does it quietly fail? And will AI ultimately benefit or harm the green transition?
Our conversation felt less like debating the future of AI and more like stress-testing its current reality, especially in industries where mistakes are expensive and systems are a bit more complex.
If you are building with AI, allocating capital into it, or simply trying to understand how it intersects with climate and infrastructure, I think you’ll find this one particularly interesting 💡
📋What’s inside
🤖 AI Beyond the Hype Cycle – Why model access is no longer a moat and where real defensibility now lives.
🌍 Operational Intelligence in Infrastructure – How AI can accelerate permitting, financing, and project deployment in energy and climate systems.
🛡️ The Reliability Problem – Why verification and safety standards matter more than AGI timelines in critical industries.
⚡ The AI and Energy Paradox – Data centers, electricity demand, and whether AI becomes a burden or an accelerator for the energy transition.
🧠 The Future of Knowledge Work and Human Potential – What changes for legal teams, junior roles, and decision making when intelligence becomes abundant.
👋 Meet Arnau
Arnau is a data scientist by training with over 13 years of experience in tech and a PhD in electrical engineering and computer science. He spent years in big tech in California before moving into the start-up world.
But what stood out to me most was not his technical background. It was the clarity behind his shift in focus. At some point, he realized he was solving interesting technical problems but not necessarily meaningful ones. Climate change and the energy transition felt different. He described it as a “big, beautiful problem.” Complex, systemic, imperfect. Worth committing to.
That perspective led him to co-found TetraxAI alongside Marta Vizcaíno Martín and Ekaterina Filina 🚀
TetraxAI focuses on operational intelligence for infrastructure investors and developers. In practice, this means processing enormous volumes of regulatory, legal, technical, and financial documentation so projects can move faster and with greater clarity.
It overlaps with legal tech, but it is far more vertical and infrastructure specific. This is not about generic document summarization. It is about embedding intelligence directly into the workflow of energy and infrastructure deployment.
🤖 AI Beyond the Hype Cycle
One of the most practical parts of our discussion centered on defensibility 🔐 Frontier models are increasingly accessible. The intelligence layer itself is becoming quite commoditized.
But Arnau framed it quite clearly:
“Defensibility used to be about how hard it was to build what you built. Now it is about how hard it is for your customers to leave.”
The moat is no longer the model. It is the integration, the embedded workflow, and the absolutely necessary layer of trust.
He also offered an analogy that I have not been able to shake:
“The strongest CPU or the smartest human is not very intelligent in the middle of the ocean. Whereas a shark, with much more limited cognitive capacity compared to humans, is extremely intelligent in that context.”
Raw intelligence without context is essentially useless. The shark understands its environment. It processes signals in ways shaped by evolution and adaptation. Its intelligence is inseparable from its domain 🦈
AI, Arnau suggests, is quite similar in that regard. Intelligence becomes valuable only when it is deeply embedded into a specific context.
So, when founders talk about building intelligence, the real question becomes: Are we selling general intelligence, or are we carefully shaping it into something operationally useful inside a system?
That is a very different ambition.
🏗️ Operational Intelligence in Infrastructure
With the increasing complexity of infrastructure projects, the need for operational intelligence has never been greater. Infrastructure is rarely elegant. It involves long permitting cycles, public authorities, layered regulations, engineering constraints, financial models, and political dynamics. A single renewable energy project can involve thousands of pages of documentation and years of back and forth.
This is where TetraxAI comes in 🤖
Arnau described how project teams often spend disproportionate amounts of time navigating a plethora of documentation rather than making actual decisions. AI can help structure, interpret, and cross-reference this information far faster than manual processes.
But he was careful not to frame AI as a magic solution.
AI does not automatically eliminate regulatory complexity. It does not remove risk. It does not replace accountability. It augments decision-making. And that distinction is critical.
If you are building in climate or energy infrastructure, AI may not necessarily be the product. It may just be the accelerant that makes the product viable at scale.
This is a more humble framing of AI. But perhaps a more durable one 💡
🛡️ The Reliability Problem
This part of our conversation felt especially important given how fast AI is being deployed across organisations. We briefly touched on AGI and autonomous systems, but Arnau quickly redirected the focus toward reliability standards in critical industries.
He referenced sectors like aviation and nuclear energy, where systems are engineered to meet extremely low failure tolerances. In those environments, probabilistic errors are simply unacceptable ⛔
When you compare current large language models (LLMs) against those standards, the gap is substantial. The limitation is not intelligence in the abstract, but rather reproducibility, verification and failure tolerance.
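To make that gap concrete, here is a toy calculation of my own (not from the episode): if failures are independent, per-step accuracy compounds across a multi-step workflow, which is why error rates that sound impressive in isolation fall far short of aviation- or nuclear-grade tolerances.

```python
# Illustrative only: how per-step error rates compound across a
# multi-step AI workflow. The numbers are hypothetical.

def end_to_end_reliability(per_step_accuracy: float, steps: int) -> float:
    """Probability that every step in a chain succeeds,
    assuming independent failures at each step."""
    return per_step_accuracy ** steps

# A model that is right 99% of the time per step drops to ~82%
# reliability over a 20-step workflow.
print(round(end_to_end_reliability(0.99, 20), 3))  # ≈ 0.818
```

Under this simple independence assumption, "99% accurate" per step is nowhere near good enough once outputs feed into each other.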
LLMs are probabilistic by design. They generate outputs based on likelihood distributions, not deterministic reasoning chains. That makes them powerful for many tasks. But it also makes them quite unpredictable in edge cases.
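As a rough sketch of that probabilistic design (my own illustration, with made-up token probabilities), temperature-based sampling picks the next token by weighted chance rather than always choosing the top candidate, which is why the same prompt can yield different outputs run to run.

```python
import random

# Hypothetical next-token distribution, for illustration only.
next_token_probs = {"approve": 0.55, "reject": 0.30, "defer": 0.15}

def sample_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    # Temperature reshapes the distribution: low values concentrate
    # probability mass on the most likely token, high values flatten it.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = random.random() * total
    for token, weight in weights.items():
        r -= weight
        if r <= 0:
            return token
    return token  # numerical safety net

random.seed(0)
# Fifty samples of the same distribution rarely agree on one answer.
print({sample_token(next_token_probs) for _ in range(50)})
```

At very low temperature the sampler behaves almost deterministically; at the default it does not, and edge cases land in the tail of the distribution.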
In consumer applications, a hallucination might be inconvenient. But in infrastructure finance or grid management, it could be critical and costly 🚨
This is why Arnau emphasized that deployment standards matter much more than AGI speculation. Before asking when machines will surpass human intelligence, we should ask: Can they meet the safety thresholds required for the systems we are embedding them into?
In many critical industries, the answer today is no.
That does not diminish AI’s usefulness… It just reframes it.
Instead of full autonomy, we see augmentation. Instead of replacing experts, we see systems that compress information and support judgment and better decision-making.
Reliability is not a sexy topic. But in regulated and high stakes industries, it may just be the defining one 💡
⚡ The AI and Energy Paradox
There is an obvious tension at the heart of the AI boom 💥
AI systems require vast amounts of compute. Data centers are expanding rapidly. Electricity demand projections in multiple regions are being revised upward. At the same time, we are trying to decarbonize the grid and accelerate the energy transition.
So, is AI actually helping or hurting? 🤔
Arnau approached this question with a sense of clarity rather than defensiveness. The trade-off is real and unavoidable – AI will consume energy. The infrastructure footprint is not trivial.
But he reframed the debate in a way that stuck with me:
“It’s not about whether AI uses energy. It’s about whether it helps us use energy better.”
That distinction matters.
On one side, AI increases demand through training runs, inference, and the physical expansion of data centers. On the other side, it can dramatically improve grid optimization, energy forecasting, demand response, infrastructure planning, and operational efficiency across industrial systems.
The outcome is not automatic. It depends on design choices, where data centers are located, how grids are decarbonized, and whether AI is deployed to optimize energy systems or simply layered on top of them as another consumer.
In other words, AI can either amplify strain on the system or actually become a tool that makes the entire energy network more intelligent and efficient.
The technology itself is neutral. Its impact depends on where we point it 🎯
🧠 The Future of Knowledge and Work
Our discussion about knowledge and work started with legal teams and more junior analysts, but it quickly widened into something more structural.
For years, many entry level roles have centered on gathering information, synthesizing documents, preparing briefs, and escalating insights to senior decision makers. That informational bottleneck defined hierarchy. AI compresses that layer dramatically. Tasks that once took days now take minutes. Thousands of pages can be structured and cross referenced almost instantly ⚡
Naturally, that raises questions about displacement. But Arnau’s perspective was a bit more nuanced. As information processing becomes abundant, the scarce resource shifts. It is no longer access to data. It is judgment.
Client relationships become more valuable, not less. Accountability becomes clearer, not blurrier. Contextual risk assessment, ethical reasoning, and long-term decision making remain deeply human responsibilities.
In fact, as AI systems generate outputs at scale, the need for experienced professionals who can interpret, validate, and stand behind decisions only increases 📈 The routine layer compresses. The strategic and relational layer expands.
Arnau also shared his hope that AI could eventually enable far more tailored education, adapting to individual learning speeds, styles, and gaps in understanding in ways traditional classrooms struggle to achieve. The same principle applies in healthcare, where more personalized diagnostics and treatment pathways could significantly improve outcomes.
In that sense, AI is not just about efficiency inside firms. It has the potential to widen access to high quality, individualized support across society.
But even there, the same pattern holds.
Technology can surface insights.
Humans must decide what to do with them.
If anything, the evolution of AI does not reduce the importance of human expertise. It sharpens it.
The future may belong not to those who can process the most information, but to those who can exercise the best judgment once that information is at their fingertips 🧑‍💻
✨ Closing Thoughts
What I appreciated most about this conversation was its sobriety. Arnau is building in one of the most hyped technological environments in decades. Yet his focus remains on systems, context, and verification.
AI is powerful. But power without reliability, without domain integration, and without thoughtful deployment can create fragility rather than progress.
For those of us working at the intersection of technology and impact, the opportunities out there are significant to say the least.
AI can accelerate infrastructure. It can compress complexity. It can unlock capacity.
But only if we treat it not as magic, but as integrated infrastructure. And infrastructure only works when it is built carefully 👷
📥 Tell us what you think
Will AI ultimately accelerate or complicate the energy transition?
Reply directly or drop us a note at [email protected]
👋 Thanks for reading,
Jonas
📚 Links to articles and books mentioned:
Sustainable Energy – Without the Hot Air by David MacKay
Good Strategy/Bad Strategy by Richard Rumelt
Strategy: A History by Lawrence Freedman
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit impactvc.substack.com
