AI with Alec. Get smarter on AI. The easy way.

Podcast Description
Conversations with leading technical minds in Artificial Intelligence - from CTOs of pioneering AI startups to AI architects at Fortune 500 companies. We explore their strategies, implementations, and innovations to help you better understand and deploy AI in the real world. Hear enterprise AI insights and practical perspectives you won't find anywhere else. If you're a technical leader, business executive, AI practitioner, or innovation strategist, this is for you.
Podcast Insights
Content Themes
The podcast revolves around essential themes in AI, such as enterprise AI strategy, innovative implementations, and ethical considerations in technology. Examples of episodes include Igor Jablokov discussing the evolution of AI technologies like Siri and Alexa, as well as insights on AI's role in creative fields. Other episodes explore practical applications of AI in finance and productivity tools, emphasizing actionable takeaways for listeners.

Is AI thinking or just imitating our thinking?
A new Apple research paper, "The Illusion of Thinking," suggests that today's reasoning models are doing the latter: sophisticated mimicry at a previously unimaginable scale.
In an AI with Alec fireside chat, I spoke with Henry Saba to get his perspective on the paper and his take on what it really means for businesses and builders in the near term.
Having built data infrastructure, automation, and intelligent systems for Citadel and Lazard before founding his own company, Specialized Data Company, Henry offered a perspective that was insightful and clear.
Exactly the kind that is most helpful when trying to make sense of the exponential change all around us:
1️⃣ Biggest takeaway from the paper is “their inability to solve generalizable problems”
2️⃣ Think of LLMs as pattern-matching machines (aka massive idioms / Mad Libs, shout-out to Jack Clark and Rick Rubin) vs LRMs as systems that appear to think through problems step-by-step
3️⃣ Put “AI at the core of the systems that you build,” but be sure you’re asking it to solve problems it has seen before
4️⃣ Near-term, “tightly scoped” and “very refined, very purpose-built agents for specific use cases” is where a ton of value will be generated
5️⃣ Gap between AI marketing hype and product capabilities is getting really wide; critical to know the difference
Research link: https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.