Navigating AI’s Gray Areas: Why Human Oversight Still Matters

Artificial intelligence is changing the way organizations identify risk, monitor compliance, and make policy decisions. But as the technology evolves, so does the challenge of knowing where its limits lie. During a recent panel at Marketplace Risk NYC, Tom Cook, Chief Product and Technology Officer, and Ted James, Associate General Counsel, explored how AI operates in the “gray areas” of regulation and why human oversight remains essential to responsible use.

November 12, 2025 | by LegitScript Folks

The “Well-Read Intern”

Today’s AI agents are powerful but imperfect. They can process enormous amounts of data and recognize complex patterns, yet they often lack the context needed for nuanced decisions.

Cook compared them to "a well-read intern": knowledgeable and fast, but in need of guidance.

AI can efficiently surface information and highlight potential risks, but without clear policies and human review, it can drift away from an organization’s intended risk posture. Treating AI like that intern means giving it explicit instructions and clear boundaries, then letting human judgment make the final call.
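In code, that mentoring relationship might look something like the minimal sketch below: the model works from the organization's written policy, is instructed to answer "uncertain" whenever intent or context is in question, and a person always makes the final call. The schema, policy text, and function names are illustrative assumptions, not a description of any particular system.

```python
# A minimal sketch of the "well-read intern" pattern: the model only ever
# recommends, never enforces. The policy text, categories, and names here
# are illustrative assumptions, not any vendor's actual schema.
from dataclasses import dataclass
from enum import Enum

class Recommendation(Enum):
    LIKELY_COMPLIANT = "likely_compliant"
    LIKELY_VIOLATION = "likely_violation"
    UNCERTAIN = "uncertain"          # gray area: always goes to a human

@dataclass
class AIReview:
    recommendation: Recommendation
    rationale: str                   # the model must explain itself

POLICY_BRIEF = """You are a compliance screening assistant.
Apply ONLY the written policy below. If intent, context, or
reasonableness is in question, answer 'uncertain'.
Policy: <organization's written risk policy goes here>"""

def human_decides(listing_id: str, review: AIReview) -> str:
    """The final call stays with a person; the AI output is advisory input."""
    print(f"{listing_id}: model recommends {review.recommendation.value}")
    print(f"  rationale: {review.rationale}")
    return input("Analyst decision (approve/reject/escalate): ")
```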

When AI Meets the Gray Area

AI performs well when tasks are straightforward, such as identifying keywords, matching images, or flagging policy violations. However, when decisions depend on intent or reasonableness, the gray area begins to show.

James noted, "The days of simple keyword lists are behind us. AI isn't a reasonable person." Regulators often apply the reasonable person standard when evaluating whether something is misleading or noncompliant. AI, on the other hand, lacks the intuition to weigh context, interpret subtle cues, or distinguish between legitimate and problematic behavior.

For instance, a system might identify a product image or description correctly but fail to understand the intent behind it. In those moments, common sense and experience—qualities machines do not have—must guide the outcome.

The Trouble with AI-Only Research

Another challenge arises when organizations rely on AI to conduct regulatory research. Even when systems are restricted to official sources, the internet’s information ecosystem can still be polluted with biased or inaccurate data. AI models can unintentionally amplify misinformation or circular references, especially in emerging or lightly regulated areas.

“You really need a human being to evaluate that research,” James said. “Otherwise, AI can end up reinforcing the wrong conclusions.”
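One way to put that human checkpoint into practice is to vet the sources an AI research tool cites before anyone acts on its conclusions. The sketch below assumes a simple allowlist of official domains; the domains and field names are examples for illustration, not a real product's configuration.

```python
# Illustrative only: cross-check the sources an AI research tool cites
# against an allowlist of official domains, so a human evaluates anything
# else before it informs a conclusion. Domains here are example assumptions.
from urllib.parse import urlparse

OFFICIAL_SOURCES = {"ecfr.gov", "federalregister.gov", "ftc.gov"}  # example allowlist

def vet_citations(cited_urls: list[str]) -> dict[str, list[str]]:
    """Split citations into official sources and everything that needs
    human verification; an empty 'official' list is itself a red flag."""
    official, needs_review = [], []
    for url in cited_urls:
        host = urlparse(url).hostname or ""
        host = host.removeprefix("www.")  # so www.ftc.gov matches ftc.gov
        (official if host in OFFICIAL_SOURCES else needs_review).append(url)
    return {"official": official, "needs_human_review": needs_review}
```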

Building a Balanced Approach

The future of effective risk management lies in hybrid systems that balance machine efficiency with human expertise. AI should be used to surface data, spot trends, and speed up workflows, but the ultimate responsibility for interpreting gray areas must stay with people.

This approach not only improves accuracy but also preserves accountability. AI can handle the scale, but humans provide the sense-making and the ability to decide when something feels off or when more review is needed.
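As a rough illustration, a hybrid pipeline can automate only the clear ends of the confidence spectrum and route everything in between to analysts. The thresholds below are placeholders; in practice they would be tuned against measured error rates, not chosen by hand.

```python
# A sketch of the hybrid split described above: automate only the clear
# ends of the confidence spectrum and queue the middle band for analysts.
# Threshold values are illustrative assumptions.
AUTO_CLEAR_BELOW = 0.10   # model is confident the item is fine
AUTO_FLAG_ABOVE = 0.95    # model is confident it violates written policy

def route(item_id: str, violation_score: float) -> str:
    if violation_score < AUTO_CLEAR_BELOW:
        return f"{item_id}: auto-cleared"
    if violation_score > AUTO_FLAG_ABOVE:
        return f"{item_id}: queued for enforcement, pending human sign-off"
    return f"{item_id}: gray area, sent to analyst review queue"
```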

The Path Forward

As AI continues to evolve, new challenges will emerge, including security risks, biased data, prompt manipulation, and overconfidence in automation. The key is transparency and control: knowing what AI is doing, how it makes decisions, and when to intervene.

Cook emphasized, “The goal isn’t to eliminate human involvement. It’s to make sure humans and AI are working together, each doing what they do best.”

The Takeaway

AI has revolutionized how organizations detect risk and enforce compliance, but it is not a substitute for human reasoning. As regulations and enforcement priorities shift, human oversight ensures that decisions remain informed, ethical, and aligned with policy.

AI may be the well-read intern, but humans still need to be the mentors who provide guidance, context, and judgment in the gray areas where technology alone cannot see clearly.
