
Understanding the ROI Challenge of Generative AI in Enterprises

Generative AI (GenAI), exemplified by enterprise platforms such as ChatGPT Enterprise and Microsoft Copilot, has attracted a remarkable increase in capital allocation over the past few years. However, research from respected institutions, including MIT and Gartner, reveals a troubling trend: business leaders frequently find the return on investment (ROI) from these tools difficult to quantify.

The Disconnect Between Investment and Value

Numbers from the MIT Project NANDA State of AI in Business 2025 report highlight a critical issue: 95% of organizations report achieving “zero return” from their GenAI investments. This doesn’t stem from a lack of deployment; instead, it points to structural challenges—such as brittle workflows and limited feedback loops—that hinder these tools from delivering sustainable value at scale. According to NANDA, organizations often miss the connection between GenAI’s usage and measurable business outcomes.

Unpacking the Executive Confidence Dilemma

Many GenAI programs encounter skepticism at the executive level. While spending on licenses and platforms is easy to track, few organizations have measurement models tied to day-to-day operations, which leaves leaders unable to demonstrate productivity gains. Consequently, decision-makers often struggle to justify continued investment in what they perceive as underperforming technologies.

Insights from NLP Logix

In a conversation on Emerj’s ‘AI in Business’ podcast, Matt Berseth, Co-Founder and CIO of NLP Logix, and Russell Dixon, Strategic Advisor at NLP Logix, discuss why AI assistant deployments often stall and what can be done to operationalize and measure ROI effectively.

Berseth emphasizes that tools like Copilot and ChatGPT Enterprise are often perceived merely as “collaboration apps” rather than critical assets requiring a strategic approach. This oversight leads to weak outcomes, inadequate ownership, and shallow metrics that fail to capture the full potential of these technologies.

Treating AI Assistants as Strategic Systems

Berseth advocates for treating AI assistants as managed capabilities. Instead of viewing them as tools to be turned on, organizations need a comprehensive operational plan detailing how these systems will fit into existing workflows. Specifically, he highlights the importance of defining:

  • Priority Workflows: Identify which processes the AI assistants are expected to enhance significantly.
  • Usage Beyond Logins: Track not just how often employees log in, but how they’re actually using features within those workflows.
  • High-Leverage Users: Recognize and replicate the practices of users who achieve the best results with the AI tools.

This model encourages organizations to move beyond superficial adoption metrics toward a deeper analysis of usage patterns, ultimately focusing on how GenAI can create tangible business impact.
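To make that shift concrete, the sketch below shows one way such an analysis might look in practice. It is a minimal illustration only: the event log and its column names (user_id, workflow, feature, duration_min), the example workflows, and the aggregations are all hypothetical assumptions, not a method prescribed by Berseth or NLP Logix.

```python
# Minimal sketch: analyzing assistant usage beyond login counts.
# The event-log schema (user_id, workflow, feature, duration_min) is a hypothetical assumption.
import pandas as pd

# Each row represents one assistant interaction captured by telemetry or audit logs.
events = pd.DataFrame([
    {"user_id": "u1", "workflow": "claims_triage",   "feature": "summarize",     "duration_min": 4},
    {"user_id": "u1", "workflow": "claims_triage",   "feature": "draft_reply",   "duration_min": 6},
    {"user_id": "u2", "workflow": "claims_triage",   "feature": "summarize",     "duration_min": 3},
    {"user_id": "u3", "workflow": "contract_review", "feature": "extract_terms", "duration_min": 9},
])

# Usage beyond logins: how deeply each priority workflow is actually exercised.
workflow_depth = events.groupby("workflow").agg(
    interactions=("feature", "count"),
    distinct_features=("feature", "nunique"),
    active_users=("user_id", "nunique"),
)
print(workflow_depth)

# High-leverage users: people whose usage patterns are worth studying and replicating.
user_activity = events.groupby("user_id")["feature"].count().sort_values(ascending=False)
print(user_activity.head(3))
```

Even a simple breakdown like this distinguishes workflows where the assistant is genuinely embedded from those where it is merely switched on, which is the distinction Berseth argues superficial adoption metrics miss.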

Establishing Goals, Guardrails, and Measurement

Dixon expands on the essential steps that need to take place before deploying AI assistants. He argues that without defining clear goals, realistic use cases, governance structures, and a measurement framework, organizations risk launching tools that are ineffective and unmeasurable.

When determining a use case, it’s crucial to specify:

  • Workflow Intent: Clarify what specific work products the AI will enhance.
  • Expected Changes: Define the anticipated improvements in turnaround time or quality.
  • Roles Impacted: Understand which employee roles will interact with the GenAI system.
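One lightweight way to force that clarity before launch is to capture each use case as a structured record that can be reviewed and revisited. The sketch below is only an illustration of the idea; the field names and example values are assumptions, not a template from Dixon or NLP Logix.

```python
# Minimal sketch: recording a GenAI use case before deployment.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    workflow_intent: str   # the work product the assistant is expected to enhance
    expected_change: str   # anticipated improvement in turnaround time or quality
    roles_impacted: list[str] = field(default_factory=list)  # who interacts with the system

example = UseCase(
    workflow_intent="First-draft responses to routine customer support tickets",
    expected_change="Reduce median turnaround from 4 hours to 1 hour without lowering quality scores",
    roles_impacted=["support agent", "team lead"],
)
print(example)
```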

Dixon also emphasizes that governance isn’t just a regulatory formality; it’s foundational in ensuring that data and operational risks are well-managed. Without clear guidelines, employees may either hesitate to use new tools or misapply them in ways that compromise security.

Creating a Framework for Measurement

Once the groundwork is set, effective measurement systems must be put in place. Deciding whether to rely on informal user feedback, formal measurement methodologies, or both is a crucial early choice, and the tools and processes for monitoring adoption and usage must be established early so that progress can be tracked consistently throughout deployment.

Dixon underlines that successful deployments don’t merely materialize from releasing technology; they require intentional design that incorporates:

  • Tool-Workflow Fit: Align the AI assistant with workflows that benefit from its capabilities.
  • Measurement Strategies: Outline how productivity and usage will be monitored, ensuring that leaders can access trustworthy data to assess success.
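As one illustration of what a measurement strategy can reduce to in practice, the sketch below compares baseline and post-deployment turnaround times for a single workflow against the improvement target agreed before launch. The figures and the 20% target are invented for the example and are not drawn from the podcast.

```python
# Minimal sketch: checking an expected change against measured outcomes.
# Baseline and post-deployment figures are invented for illustration.
baseline_hours = [4.2, 3.8, 5.1, 4.6, 4.0]     # turnaround times before the assistant
post_deploy_hours = [3.1, 2.9, 3.6, 3.3, 2.8]  # turnaround times after the assistant

baseline_avg = sum(baseline_hours) / len(baseline_hours)
current_avg = sum(post_deploy_hours) / len(post_deploy_hours)
improvement = (baseline_avg - current_avg) / baseline_avg

target_improvement = 0.20  # the goal agreed before deployment (assumed)
print(f"Average turnaround: {baseline_avg:.1f}h -> {current_avg:.1f}h ({improvement:.0%} faster)")
print("Target met" if improvement >= target_improvement else "Target not met")
```

The point is not the arithmetic but the discipline: when the expected change was defined up front, leaders can look at trustworthy before-and-after data rather than anecdotes.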

Guarding Against Shadow AI

A significant challenge arises in environments where “shadow AI” is prevalent. When employees can easily reach consumer-grade tools, sanctioned enterprise solutions can lose ground to the lure of familiarity. Organizations must establish clear training and usage guidelines that make the value of their approved AI investments visible and compelling.

By adhering to a structured framework—defining tool-workflow fit, goals, use cases, and measurement strategies—business leaders can effectively evaluate and improve their AI assistant initiatives over time. This structured approach not only justifies ongoing investment but also establishes a solid foundation for leveraging the full potential of Generative AI in the workplace.
