Halyard Consulting

Key Takeaways

  • Jonathan Goodman discusses practical AI implementation in businesses, highlighting a shift from chatting to doing.
  • He emphasizes an ethics-first approach, treating AI as a tool that augments human judgment rather than replacing it.
  • Goodman identifies AI pitfalls like hallucinations and compliance issues, advocating for clear boundaries and verification.
  • A voice automation demo showcases how conversational AI can effectively achieve defined business outcomes.
  • He advises leaders to embrace creativity and stay ahead by partnering with trustworthy providers and establishing sound financial controls.

Mark S. Cook welcomes Jonathan Goodman (Founder & CEO, Halyard Consulting) to Bold Encounters for a candid conversation on the future of work and the practical realities of implementing AI inside real businesses. Jonathan explains how modern AI is shifting from “chatting” to doing: voice and chatbot automation, workflow connectors, and agent-like systems that move information between tools, reduce repetitive admin burden, and create faster, more consistent service experiences.

Throughout the episode, Jonathan emphasizes an ethics-first approach: AI should be treated as a tool that augments people, not a replacement for judgment. He also acknowledges where AI can go wrong, from hallucinated outputs and imperfect accuracy to compliance blind spots, and argues for a disciplined model: clear boundaries, verification, and escalation to humans when the stakes rise.

A key moment is the voice automation demo, which shows how conversational AI can sound natural while still executing a defined business outcome (intake → qualification → scheduling). The conversation then widens to leadership: how to stay “on the edge” without getting swept up in hype, why creativity becomes more valuable as automation expands, and what leaders should do next to prepare their organizations, starting with trustworthy partners, sound financial controls, and second-opinion review.

Have a specific question or project in mind?

We work with organizations exploring practical, responsible AI systems.
If you think there’s a fit, we’re happy to start a conversation.