Navigating AI’s Gray Areas: Why Human Oversight Still Matters

Artificial intelligence is changing the way organizations identify risk, monitor compliance, and make policy decisions. But as the technology evolves, so does the challenge of knowing where its limits lie. During a recent panel at Marketplace Risk NYC, LegitScript's Tom Cook, Chief Product and Technology Officer, and Ted James, Associate General Counsel, explored how AI operates in the “gray areas” of regulation and why human oversight remains essential to responsible use.

November 12, 2025 | by LegitScript Folks

The “Well-Read Intern”

Today’s AI agents are powerful but imperfect. They can process enormous amounts of data and recognize complex patterns, yet they often lack the context needed for nuanced decisions.

Cook compared them to “a well-read intern”: knowledgeable and fast, but in need of guidance.

AI can efficiently surface information and highlight potential risks, but without clear policies and human review, it can drift away from an organization’s intended risk posture. Treating AI like that intern means giving it explicit instructions and clear boundaries, then letting human judgment make the final call.
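
What “explicit instructions and clear boundaries” can look like in practice is easier to see in code. The sketch below is purely illustrative: the Finding type, the category names, and the confidence threshold are hypothetical assumptions, not any particular product's API.

```python
from dataclasses import dataclass

# Hypothetical guardrails for an AI screening step. The threshold and
# category names are assumptions for illustration only.
AUTO_RESOLVE_CONFIDENCE = 0.95                 # assumed bar for clear-cut cases
ALWAYS_ESCALATE = {"intent", "health_claims"}  # gray areas reserved for humans

@dataclass
class Finding:
    category: str      # e.g., "keyword", "image_match", "intent"
    confidence: float  # model's reported confidence, 0.0 to 1.0

def route(finding: Finding) -> str:
    """Let AI act alone only on high-confidence, clear-cut categories;
    anything ambiguous goes to a human for the final call."""
    if finding.category in ALWAYS_ESCALATE:
        return "human_review"
    if finding.confidence >= AUTO_RESOLVE_CONFIDENCE:
        return "auto_resolve"
    return "human_review"

print(route(Finding("keyword", 0.98)))  # auto_resolve
print(route(Finding("intent", 0.99)))   # human_review: intent is a gray area
```

The point of the structure is that the boundary is declared up front, so the model never gets to decide the scope of its own authority.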

When AI Meets the Gray Area

AI performs well when tasks are straightforward, such as identifying keywords, matching images, or flagging policy violations. However, when decisions depend on intent or reasonableness, the gray area begins to show.

James noted, “The days of simple keyword lists are behind us. AI isn’t a reasonable person.” Regulators often apply the reasonable person standard when evaluating whether something is misleading or noncompliant. AI, on the other hand, lacks the intuition to weigh context, interpret subtle cues, or distinguish between legitimate and problematic behavior.

For instance, a system might identify a product image or description correctly but fail to understand the intent behind it. In those moments, common sense and experience—qualities machines do not have—must guide the outcome.

The Trouble with AI-Only Research

Another challenge arises when organizations rely on AI to conduct regulatory research. Even when systems are instructed to favor official sources, the internet’s broader information ecosystem can still feed them biased or inaccurate data. AI models can unintentionally amplify misinformation or circular references, especially in emerging or lightly regulated areas.

“You really need a human being to evaluate that research,” James said. “Otherwise, AI can end up reinforcing the wrong conclusions.”

Building a Balanced Approach

The future of effective risk management lies in hybrid systems that balance machine efficiency with human expertise. AI should be used to surface data, spot trends, and speed up workflows, but the ultimate responsibility for interpreting gray areas must stay with people.

This approach not only improves accuracy but also preserves accountability. AI can handle the scale, but humans provide the sense-making and the ability to decide when something feels off or when more review is needed.
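
One way to picture that division of labor is a triage loop: the model processes every item at scale, but anything in the middle band lands in a human queue with the AI’s evidence attached. This is a minimal sketch under assumed thresholds; score_listing is a hypothetical stand-in for whatever model call an organization actually uses.

```python
# Hybrid triage sketch: AI handles volume, humans handle ambiguity.
# score_listing() is a hypothetical stand-in for a real model call.

def score_listing(listing: dict) -> tuple[float, str]:
    """Placeholder model: returns (risk_score, evidence summary)."""
    return listing.get("risk", 0.5), "matched terms: " + listing.get("text", "")

def triage(listings: list[dict]) -> dict:
    queues = {"cleared": [], "blocked": [], "human_review": []}
    for listing in listings:
        score, evidence = score_listing(listing)
        if score < 0.10:                       # clearly fine: handled at scale
            queues["cleared"].append(listing)
        elif score > 0.90:                     # clearly violating
            queues["blocked"].append(listing)
        else:                                  # the gray area: humans decide,
            listing["ai_evidence"] = evidence  # with the model's work attached
            queues["human_review"].append(listing)
    return queues

print(triage([{"risk": 0.05}, {"risk": 0.97}, {"risk": 0.5, "text": "detox"}]))
```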

The Path Forward

As AI continues to evolve, new challenges will emerge, including security risks, biased data, prompt manipulation, and overconfidence in automation. The key is transparency and control: knowing what AI is doing, how it makes decisions, and when to intervene.
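
Transparency of that kind usually starts with an audit trail: record what the model decided, on what basis, and whether a person intervened. Here is a minimal sketch, with hypothetical field names rather than a real schema.

```python
import json
from datetime import datetime, timezone

def log_decision(item_id: str, model_output: str, rationale: str,
                 human_override: str | None = None) -> str:
    """Serialize one AI decision so it can be reviewed or audited later.
    All field names are illustrative assumptions."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "item_id": item_id,
        "model_output": model_output,      # what the AI concluded
        "rationale": rationale,            # the evidence it relied on
        "human_override": human_override,  # set when a reviewer steps in
    })

# A reviewer overrules the model on a gray-area call:
print(log_decision("listing-123", "flag", "matched restricted keyword",
                   human_override="approved: keyword used in a compliant context"))
```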

Cook emphasized, “The goal isn’t to eliminate human involvement. It’s to make sure humans and AI are working together, each doing what they do best.”

The Takeaway

AI has revolutionized how organizations detect risk and enforce compliance, but it is not a substitute for human reasoning. As regulations and enforcement priorities shift, human oversight ensures that decisions remain informed, ethical, and aligned with policy.

AI may be the well-read intern, but humans still need to be the mentors who provide guidance, context, and judgment in the gray areas where technology alone cannot see clearly.
