Proof of Human Performance: A Process Focus for AI Detection


Students and organizations alike are embedding GenAI into their work without understanding the consequences for others and themselves.

Simultaneously, we’re only just beginning to understand how others can use AI to influence, defraud, and shape our world in subtle but important ways.

A common cliché I encounter is the assertion that "AI is just another innovation, like the calculator." While I like having a calculator wherever I go, this comparison does not leave me feeling at ease.

Why?

Well, here’s a question for you… Who among you still does mental math? Or math on paper?

That’s right, many can’t. The calculator DE-SKILLED them. 🚫

Imagine Duolingo follows through with its AI-first mandate. What if its people succeed in using AI for every task?

Yes, after they get done firing all their contractors, they’ll face a new problem — the de-skilling of the rest of their workforce. 🚫

And then what? Play this all the way out!

I’m not going to moralize about the role of humanity. Instead, I’ll just point at the consequences. You know, after they eventually fire and replace the rest of their workforce with agentic AI or whatever…

These unsupervised systems WILL screw up! 💥

The sloppy road we took to get to the AI utopia will collapse beneath us, and there will be pain. Lots of pain. (Ah yes, AI handling Social Security benefit payments. What could go wrong?)

I assume we’ll recover, eventually. (But at what cost?)

And after we recover, I expect that attestation of human performance will have a critical function in compliance going forward.

That is, only after we experience enough large-scale, high-impact disasters will regulators or the market (or firms with a self-preservation instinct) begin to care whether a human did or did not do specific things in a process.

How will all this work? How can we prove a human performed a task?

I posit that it will hinge on GenAI detection, but not in the way we do it now.

Currently, GenAI detection focuses on evaluating outputs after the fact (i.e., analyzing paragraphs and examining pixels). I think that’s misguided.

I would like to explore a process focus instead, one that defines those much-needed boundaries between human-only, human-AI, and AI domains.
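To make that concrete: one entirely hypothetical sketch of a process-focused attestation is a tamper-evident log in which each step of a workflow is tagged with its domain (human-only, human-AI, or AI) and chained to the step before it. Every name below (`attest_step`, `verify_chain`, the domain labels) is illustrative, not an existing standard or tool.

```python
import hashlib
import json

# Hypothetical domain labels, matching the boundaries proposed above.
DOMAINS = {"human-only", "human-ai", "ai"}

def attest_step(prev_hash: str, actor: str, domain: str, action: str) -> dict:
    """Create one attestation record. Each record commits to its
    predecessor's hash, so rewriting 'who did what' after the fact
    breaks the chain."""
    if domain not in DOMAINS:
        raise ValueError(f"unknown domain: {domain}")
    record = {"prev": prev_hash, "actor": actor,
              "domain": domain, "action": action}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_chain(records: list) -> bool:
    """Recompute every hash and check each record points at its predecessor."""
    prev = "genesis"
    for r in records:
        if r["prev"] != prev:
            return False
        body = {k: r[k] for k in ("prev", "actor", "domain", "action")}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != r["hash"]:
            return False
        prev = r["hash"]
    return True
```

The point of the sketch is the shift in emphasis: instead of inspecting a finished paragraph for AI "fingerprints," an auditor verifies a record of the process, and any after-the-fact relabeling of a human-only step as AI-assisted (or vice versa) is detectable.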

I have much work to do, and plenty of existing literature to learn from. The implications for education, employment, and outsourcing are far-reaching.

Could your firm be the home for this project? If your organization is in the European Economic Area, I would like to speak with you! Please message me for a 1-pager and to schedule a call.

Thanks for reading! (This post used ChatGPT 4o in a limited role as a thesaurus. None of its outputs were directly used.)

