Law://WhatsNext
Law://WhatsNext
Podcast Description
How are leading practitioners leveraging emerging technologies and ways of working to pursue their passion and objectives, and as a by product what are the implications for the future of legal practice? Let’s explore this together. What to expect:
- Focused conversations with leading practitioners; technologists and educators
- Deep dives into the intersection of law, technology, and organisational behaviour
- Practical analysis and visualisation of how AI is augmenting our potential- Insights from adjacent industries that might inform our own
Podcast Insights
Content Themes
The podcast explores themes such as responsible AI governance, geopolitical risk, access to justice, and innovative legal technology. Episodes include in-depth discussions on AI ethics with experts like Hadassah Drukarch and AI applications in the justice system with guests such as Steph Needleman, showcasing practical analysis on how AI can augment legal practices.

How are leading practitioners leveraging emerging technologies and ways of working to pursue their passion and objectives, and as a by product what are the implications for the future of legal practice? Let’s explore this together. What to expect:
– Focused conversations with leading practitioners; technologists and educators
– Deep dives into the intersection of law, technology, and organisational behaviour
– Practical analysis and visualisation of how AI is augmenting our potential
– Insights from adjacent industries that might inform our own
We sit down with Rok Popov Ledinski — an independent legal AI and data consultant whose background spans high-security enterprise engineering through to advising law firms on their AI and security strategy. Our initial interest in Rok's work was sparked by his YouTube channel, where he's been producing sharp, accessible breakdowns of the real risks underpinning today's AI tools.
Within minutes, we're into a forensic dissection of Anthropic's Claude Cowork — the agentic tool pitched at non-developers that launched earlier this year. Rok walks us through the contradictions in Anthropic's own technical documentation: a tool demonstrated by its creators as a way to organise your desktop, while the same support pages advise against granting it access to sensitive local files. A tool marketed for running tasks autonomously in the background — while its activity isn't captured by audit logs. A tool whose safety guidance asks users to watch for ”suspicious actions that may indicate prompt injections” — aimed at an audience that, as Rok points out, has largely never heard of prompt injections.
Rok explains, in terms accessible to non-technical listeners, how hidden instructions embedded in an innocuous document can hijack an AI agent into exfiltrating sensitive client data. His hypothetical attack vector for law firms is disarmingly simple: find lawyers on LinkedIn who are openly using Cowork, send a document to their publicly available email address containing concealed instructions, and let the agent do the rest.
But this isn't an anti-AI conversation. Rok is emphatic that these tools should be used — just not naively. Drawing on enterprise security frameworks from companies like Cisco, he advocates for a practical middle ground: map what your AI has access to, create sanitised copies of sensitive folders, scope permissions tightly, vet your MCP servers and plugins, and understand (physically, not just contractually) how data flows through your systems.
Key Takeaways
The Cowork Paradox: Anthropic's own documentation reveals a tension between how Cowork is marketed (autonomous, background task execution) and how it should be used (limited permissions, no sensitive files, manual monitoring for prompt injections).
Security attacks are now a ”When,” Not an ”If”: Unlike traditional cybersecurity breaches, prompt injection attacks exploit a fundamental limitation of large language models — they can't distinguish instructions from data. Research shows success rates as high as 90% for some proprietary LLMs. Claude is among the more resistant, but not immune.
Practical Security for Legal Teams: Rok's actionable advice for in-house teams and law firms includes: creating clean data environments separate from originals; using self-hostable workflow tools like n8n; scoping AI permissions to the minimum necessary; and conducting genuine due diligence on every plugin and MCP server before connecting it to your systems.
Key References
- Rok's YouTube Channel: where our interest in Rok's work began, and a recommended follow for anyone wanting to stay across the security dimensions of legal AI adoption
- Rok's LinkedIn — he hosts weekly live sessions every Saturday with a security expert specialising in air-gapped, offline AI deployments in regulated industries
- The Art of Modern Legal Warfare — Rok co authors with a former guest and friend of the show Anna Guo and Sakshi Udeshi a series of vulnerability types specific to legal AI use cases.
If you enjoyed this conversation please do share it with someone or a community who you feel would benefit from listening. If you have any more time do tell us what resonated; what didn't; and, rate the show (it helps us grow the audience and get great guests like Rok)!

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.