Discussion about this post

Pawel Jozefiak:

The "AI theater" concept is painfully accurate. I see it constantly - teams bolting a chatbot onto their product and calling it AI-first.

The test I use: does the AI make decisions that affect outcomes, or does it just present options? If every output still requires a human to review, approve, and execute - that's theater.

In my own setup, the shift from theater to real happened when I stopped reviewing every agent output and started reviewing outcomes instead. Let the agent decide how, measure what it produces. Huge difference in what becomes possible once you make that leap.

Dean Peters:

I agree with this so much. That's why I've been teaching my AI product management classes about the need for contracts and constitutions being put in place before a line of code is vibed, so the agentic system slinging the software into existence has some boundaries. It's also why I teach product managers to take a step back and look at both internal and external risks in just the areas you identified, and then some. I know, yeah: have some conversations with legal and finance before you seek sponsorship from the CPO and CFO. It'll save you from getting vibe-fired.
