[iframe style="border:none" src="//html5-player.libsyn.com/embed/episode/id/40488495/height/100/width//thumbnail/no/render-playlist/no/theme/custom/tdest_id/4139994/custom-color/87A93A" height="100" width="100%" scrolling="no" allowfullscreen webkitallowfullscreen mozallowfullscreen oallowfullscreen msallowfullscreen]
Over the last couple of years, most of my conversations around AI have been about capability.
How fast models are improving.
How agents are becoming more autonomous.
How enterprises can adopt GenAI safely.
How teams can redesign workflows around intelligence.
But this week, I found myself thinking about something deeper.
Not what AI can do.
But what AI costs.
And I don’t just mean money.
I mean energy.
I mean infrastructure.
I mean the hidden assumptions underneath the current AI boom.
Because when we talk about the future of AI, most people immediately jump to models, chips, data centers, agents, and software stacks.
But as someone who works closely with organizations trying to operationalize AI in the real world, I keep coming back to a harder question:
What happens when the current compute model itself becomes the bottleneck?
This is not a question most teams are asking yet.
But it is a question serious builders should start paying attention to.
This week, while reviewing different enterprise AI patterns and thinking through long-term architecture choices, I realized that much of the current AI conversation still happens within the assumptions of silicon, scale, and software abstraction.
But what if the next major shift is not a better model?
What if it is a different computing substrate altogether?
That’s exactly why today’s conversation is important.
Because this episode is not about another AI app.
It is not about another wrapper.
It is not about another productivity layer.
It is about something much more fundamental:
What might come after silicon, and how should we think about it today?
Chapters:
00:00 Introduction to Ewelina Kurtys and FinalSpark
00:52 Understanding Living Neurons and Their Potential
02:44 The Vision Behind FinalSpark
05:34 Current Progress and Future Goals
08:27 Collaborations and Research Opportunities
11:17 Programming Living Neurons
14:02 Ethical Considerations in Biocomputing
16:59 Benefits of Biocomputing for Society
19:39 Advice for Aspiring Bioengineers
22:30 Commercial Aspects of FinalSpark
24:24 Investor Insights and Future Directions
Episode #184
Today’s Guest:
Dr. Ewelina Kurtys, Scientist from FinalSpark
- Website: FinalSpark
What Listeners Will Learn:
- Why the future of AI may require rethinking computation itself, not just models
- How energy efficiency is becoming a core strategic issue in AI
- What biocomputing means in simple terms
- How living-neuron-based computing differs from traditional silicon-based systems
- Why future AI progress may depend on alternative hardware paradigms
- Why emerging scientific computing trends matter to enterprise AI leaders today
- Why staying ahead in AI means looking beyond current tools and architectures