This article is sponsored by NLP Logix and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.

In today’s rapidly evolving technological landscape, governance is emerging not as an optional enhancement but as a foundational necessity for artificial intelligence (AI) applications. This need is accentuated by the recognition among government and industry leaders that generative and predictive AI systems increasingly influence pivotal public sector decisions.

The Colorado Office of Information Technology’s recent guidance on generative AI reveals some concerning statistics: nearly 25% of organizations have encountered inaccuracies in AI outputs, while about 16% face cybersecurity challenges attributed to unregulated AI deployment. These figures highlight a critical disparity—often, the rate of AI adoption can significantly outpace the establishment of regulatory frameworks and governance.

Furthermore, a report by the OECD underscores ongoing issues that hinder effective AI adoption within governmental structures. Fragmented data, outdated legacy systems, and inadequate impact assessments frequently trap government AI projects in pilot phases, impeding scalability and effectiveness. The report emphasizes that robust governance structures must articulate accountability and impact measurement quite early in the deployment process.

NLP Logix elaborates on AI governance within a framework of ethics, policy, and rigorous testing, laying out practical steps for organizations. Documenting AI models, implementing human oversight in sensitive operational workflows, and applying standardized bias and robustness assessments pre- and post-deployment are integral to fostering a dependable AI ecosystem. This perspective elevates governance from mere oversight to a catalyst for scalable, responsible AI applications.

In a special series sponsored by NLP Logix, Emerj Editorial Director Matthew DeMello recently spoke with key figures: Naveen Kumar, Head of Insider Risk, Analytics, and Detection at TD Bank; Matt Berseth, Co-founder and CIO at NLP Logix; and Russell Dixon, Strategic Advisor at NLP Logix. Their insights revolve around how organizations can effectively harness AI tools while balancing innovation and stringent governance measures, framing the conversation around measurable business outcomes.

Their discussions converge on a critical realization: AI initiatives frequently stumble when risk controls, necessary training, and robust measurement practices are relegated to afterthoughts. Let's explore three core insights that emerged during their conversations, each of which can fundamentally influence an organization's approach to AI deployment.

AI Governance as a Built-in Control Layer

Episode: Governing AI for Fraud, Compliance, and Automation at Scale – with Naveen Kumar of TD Bank

Guest: Naveen Kumar, Head of Insider Risk, Analytics, and Detection, TD Bank

Expertise: Regulatory Compliance, Fraud and Threat Detection

Brief Recognition: Naveen brings over 16 years of specialized experience in AML, Insider Risk, and Fraud management. Previously associated with PwC and Stellaris Health Network, he holds a Master of Science in data modeling.

According to Kumar, effective AI governance is anchored in traceability. Organizations must know the specifics of the data they use, who is authorized to access it, and how AI systems interact with that data. He describes role-based access controls as essential, comparing them to a polite bouncer at a club that only provides information appropriate to one's role. This foresight can prove crucial during insider investigations, ensuring that sensitive data remains protected.

“Role-based AI is like a polite bouncer. It only provides information based on the role—if there’s an insider investigation going on, finance has nothing to know about it.”

– Naveen Kumar
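To make the "polite bouncer" idea concrete, the sketch below shows role-based filtering applied before an AI assistant's output ever reaches a user. The roles, classification tags, and records are illustrative assumptions for this article, not details of TD Bank's systems.

```python
# A minimal sketch of role-based access control for AI outputs:
# each role is cleared for a set of data classifications, and anything
# outside that set is filtered out before the response is returned.

ROLE_CLEARANCE = {
    "finance": {"safe"},
    "compliance": {"safe", "sensitive"},
    "insider_risk": {"safe", "sensitive", "critical"},
}

def filter_for_role(records: list[dict], role: str) -> list[dict]:
    """Return only the records a given role is cleared to see."""
    allowed = ROLE_CLEARANCE.get(role, set())
    return [r for r in records if r["classification"] in allowed]

records = [
    {"id": 1, "classification": "safe", "text": "Quarterly headcount summary"},
    {"id": 2, "classification": "critical", "text": "Open insider investigation"},
]

# Finance sees only the safe record; the open investigation never reaches them.
print(filter_for_role(records, "finance"))
```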

Kumar advocates for a phased rollout strategy that starts with narrowly defined use cases and minimal data access. This method allows organizations to prove their governance controls before incrementally expanding permissions. He further stresses the role of data classification—tagging data as safe, sensitive, or critical—so that critical information is excluded from initial iterations, effectively managing risk.
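A simplified illustration of that phased approach appears below: each rollout phase widens the set of data classifications an AI use case may touch, and nothing tagged critical enters scope until earlier phases have proven the controls. The phase definitions and dataset names are assumptions made for the example.

```python
# Phased rollout sketch: data classification tags gate which datasets
# an AI use case can access at each stage of deployment.

PHASES = {
    1: {"safe"},                           # narrow pilot, minimal data access
    2: {"safe", "sensitive"},              # expanded once controls are proven
    3: {"safe", "sensitive", "critical"},  # full scope, with human oversight
}

def data_in_scope(datasets: dict[str, str], phase: int) -> list[str]:
    """Return dataset names whose classification is permitted in this phase."""
    allowed = PHASES[phase]
    return [name for name, tag in datasets.items() if tag in allowed]

datasets = {
    "branch_transactions": "sensitive",
    "public_product_docs": "safe",
    "open_investigations": "critical",
}

print(data_in_scope(datasets, 1))  # ['public_product_docs']
print(data_in_scope(datasets, 2))  # adds 'branch_transactions'
```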

Striking a balance between automation and oversight is a significant challenge. Kumar highlights that different domains require different approaches: while more aggressive AI deployment may be justifiable in retail, compliance-related AI applications demand a conservative touch. His approach favors keeping high-risk cases under human scrutiny, positing that AI should function as an efficiency enhancer rather than a fully autonomous solution.

Practical steps that Kumar suggests include starting with secure sandboxes, developing a comprehensive inventory of both internal and supplier AI tools, and engaging compliance teams early in the process. The overarching goal is to maintain clear visibility into available models, their data interactions, and compliance governance.
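One hedged sketch of what such an inventory might look like in practice is shown below: a single record per internal model or supplier tool, noting what data it touches and whether compliance has signed off. The field names and entries are illustrative, not a prescribed schema.

```python
# A simple AI tool inventory: one record per internal model or vendor tool,
# with its data classifications and compliance review status.

from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str
    owner_team: str
    source: str                     # "internal" or "supplier"
    data_classifications: set[str]  # e.g. {"safe", "sensitive"}
    compliance_reviewed: bool = False

inventory = [
    AIToolRecord("copilot-pilot", "IT", "supplier", {"safe"}, compliance_reviewed=True),
    AIToolRecord("fraud-triage-llm", "Insider Risk", "internal", {"safe", "sensitive"}),
]

# Flag anything touching sensitive data that compliance has not yet reviewed.
needs_review = [t.name for t in inventory
                if "sensitive" in t.data_classifications and not t.compliance_reviewed]
print(needs_review)  # ['fraud-triage-llm']
```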

Plan, Govern, Train, and Measure AI for ROI

Episode: Making Microsoft Copilot and ChatGPT Enterprise Work for You – with Matt Berseth and Russell Dixon of NLP Logix

Guest: Russell Dixon, Strategic Advisor, NLP Logix

Expertise: Technology Innovation, Business Transformation

Brief Recognition: With over two decades in information technology, Dixon specializes in AI integration and global operational strategy, focusing on how businesses can optimally deploy AI tools to drive ROI.

During his podcast appearance, Dixon stresses that while generative tools like Microsoft Copilot and ChatGPT can be widely beneficial, their success hinges on structured deployment that encompasses user training, established guardrails, and clear metrics for measuring productivity.

Failure to implement these frameworks, he warns, can lead to user frustration, counterproductive behaviors, and ultimately a perception that these tools lack value. Governance must be established prior to actual deployment to ensure responsible use within the organization.

“You have to ask yourself how you’re going to measure productivity. Relying solely on user feedback may be inadequate without formal measurement tools in place.”

– Russell Dixon
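A minimal sketch of what formal measurement could look like follows, in the spirit of Dixon's point that surveys alone are not enough. The metric definitions, field names, and figures are assumptions for illustration, not NLP Logix's methodology.

```python
# Tracking adoption and estimated time savings from usage data rather than
# relying solely on user feedback.

from dataclasses import dataclass

@dataclass
class CopilotUsage:
    user: str
    sessions_this_month: int
    minutes_saved_estimate: float  # from task-level time tracking, not surveys

def adoption_rate(usage: list[CopilotUsage], licensed_users: int) -> float:
    """Share of licensed users who actually used the tool this month."""
    active = sum(1 for u in usage if u.sessions_this_month > 0)
    return active / licensed_users

def avg_minutes_saved(usage: list[CopilotUsage]) -> float:
    """Average estimated minutes saved per active user."""
    active = [u for u in usage if u.sessions_this_month > 0]
    return sum(u.minutes_saved_estimate for u in active) / max(len(active), 1)

usage = [
    CopilotUsage("a.lee", 14, 220.0),
    CopilotUsage("b.khan", 0, 0.0),
    CopilotUsage("c.diaz", 9, 150.0),
]

print(f"Adoption: {adoption_rate(usage, licensed_users=3):.0%}")
print(f"Avg minutes saved per active user: {avg_minutes_saved(usage):.0f}")
```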

Dixon readily acknowledges the correlation between well-chosen use cases and project success rates. The more general-purpose the use case, the higher the chances of meaningful outcomes: initiatives that apply tools like ChatGPT to everyday workplace productivity can yield significant results, while highly specialized applications might fall short.

He aligns with his colleague, Matt Berseth, in suggesting that a typical success rate of approximately 80% in AI proof-of-concept initiatives reflects a reasonable expectation, thereby allowing room for failure in the pursuit of innovation.

However, he insists on the importance of early success indicators. If organizations do not witness prompt adoption and intended results, it becomes crucial to reassess—not the technology itself, but rather user engagement and the alignment of the tool with the intended use case.

Enforce Strategic Planning and Metrics for AI Success

Episode: Making Microsoft Copilot and ChatGPT Enterprise Work for You – with Matt Berseth and Russell Dixon of NLP Logix

Guest: Matt Berseth, Co-founder and CIO, NLP Logix

Expertise: AI, Data Science, Software Engineering

Brief Recognition: Berseth leads the development of advanced ML solutions across sectors such as healthcare and finance, leveraging extensive experience from previous roles in leading tech firms.

Berseth views successful AI deployment as an ongoing process of measurement and learning rather than a simple rollout. He maintains that understanding user behavior is crucial and warns against conflating user adoption with actual value generation—how users engage with the tools is paramount to effecting high-impact outcomes.

Identifying “tool creep,” where organizations might accumulate AI tools without deriving real value, is a significant concern. Berseth argues that without a cohesive plan encompassing clear goals and metrics, organizations may find themselves navigating a tangled web of underused technologies.

“You need to see these tools as a strategic part of your enterprise AI strategy—and if you don’t, you may end up trying to fix a rollout that started off on the wrong foot.”

– Matt Berseth

Berseth also frames a reasonable expectation of failure as part of innovation: roughly 80% of proofs of concept reaching production is, in his view, a realistic benchmark for these technologies. When initiatives stumble, the issue typically lies in the selection of use cases rather than in the AI tools themselves. Berseth concludes that as AI tools like ChatGPT become increasingly user-friendly, organizations ought to leverage them to tackle the right problems.

In summary, robust governance structures, strategic planning, and appropriate metrics are paramount for ensuring the responsible and effective deployment of AI tools within organizations, ultimately enabling them to harness AI’s transformative potential. By focusing on these elements, businesses can navigate the complexities of AI governance while realizing tangible benefits.
