There may be no clearer sign of our era’s cognitive dissonance than the way we treat artificial intelligence. Walk into any boardroom, newsroom, classroom, or training seminar, and you’ll hear the same urgent message: learn AI or fall behind. Entire industries are being reshaped by it, and institutions are scrambling to prepare their people. At the same time, however, the very act of using AI — especially in writing or ideation — remains taboo.

A student who submits a clearly AI-assisted paper may face discipline. An employee who drafts a report with ChatGPT risks reputational damage if discovered. AI is hardly a secret, but using it, to a striking degree, still is. This is a paradox with real consequences, and it reveals something deeper: we have not yet come to terms with what AI is for, what it should and shouldn’t do, and how we value work, skill, and originality in an AI-integrated world. We are stuck in a transitional moment, asking people to learn tools they’re not supposed to admit using. That tension — between the desire for progress and the need for authenticity — is proving difficult to resolve.

Business schools offer courses on prompt engineering. Employers from McKinsey to Deloitte are reskilling their workforces: Wall Street is training analysts to use AI for modeling and compliance; law firms, for case summaries; healthcare systems, for diagnostics; marketers, for content. A school or company that ignores AI risks irrelevance.

Yet when that same student or employee submits a polished, AI-assisted piece of work, the reaction is often suspicion. “Did you write this?” becomes an accusation rather than a question. If the answer is “not really,” consequences can follow, no matter how clever and numerous the prompts that shaped the work. Schools still threaten expulsion for “AI plagiarism,” and companies implicitly expect employees to hide their use of generative tools — even as they train them to be proficient. What’s going on?

We are applying old standards to a new reality while wrestling with the tension between two instincts: the drive for efficiency and the reverence for human originality. Moreover, we have a societal need to judge one another – for grades, hiring, performance appraisals, accreditation, and promotion. Prompting talent alone seems an insufficient basis for those judgments.

So we want the best AI has to offer — speed, cost-cutting, scalability — but we still judge work based on how much of it seems “authentic.” We want AI to make us smarter and faster, but only behind the scenes. That contradiction is not sustainable.

To be fair, institutions are not ignoring the challenge. Universities are refining honor codes. Companies are drafting AI-use policies. Even humanities faculties now explore AI’s intersection with creativity, ethics, and authorship. And it seems likely that we will make advances in coming years.

It seems clear that AI should not be embraced without scrutiny. When students use AI to bypass the learning process, they lose vital skills. When workers rely on it too heavily, they risk diminishing their own judgment and expertise. And in high-stakes fields like law, medicine, or journalism, errors in AI-generated content (whose frequency can be expected to decline) can have serious or even catastrophic consequences. There is also the matter of trust: presenting AI-created work as one’s own can feel deceptive.

These concerns are real and important, but they should not lead us to demonize AI use across the board. Instead, we should refine how we evaluate effort and originality. We should develop standards that recognize degrees and types of AI involvement. If a student uses AI to brainstorm, organize, or refine an essay, that’s a world apart from using a tool to generate the entire submission. Likewise, an employee who summarizes a complex legal document with AI may still contribute valuable insights and strategic framing. The tool may assist, but the thinking is still human.

Transparency is key. Encouraging people to disclose how they used AI allows for honest conversations about what counts as misuse versus smart augmentation. Institutions — academic and professional — can lead the way by creating environments where such disclosures are normalized and even rewarded. Instead of asking “Did you use AI?” we should ask: How did you use it? What parts did you contribute? What judgments did you make?

Treating AI collaboration as a skill 

We need to treat AI collaboration as a skill in its own right – a recognition that is already taking hold. Knowing how to prompt, question, verify, and reframe AI output is not trivial. It requires creativity, clarity, and critical thinking. Just as using a calculator doesn’t mean someone can’t do math, using a chatbot doesn’t mean someone lacks ideas. In fact, using AI effectively — balancing its capabilities with your own judgment — will become a hallmark of high competence in the near future.

That said, limits are critical, especially in education. No system can function if it doesn’t assess whether students can perform basic tasks on their own. Grades and degrees must reflect a person’s actual knowledge and capabilities – and in most fields, rudimentary knowledge combined with AI skills should not be enough. That means maintaining testing environments where no digital assistance is allowed: no AI, no devices, no auto-complete. Not because we fear AI, but because we need baselines for unassisted human performance. Just as a pilot must demonstrate the ability to fly without autopilot, a student must be able to reason and write without AI.

If we abandon those baselines, we risk something greater than declining standards. We risk allowing AI to become a counterfeit version of ourselves — standing in for skills and thought processes we no longer bother to develop. A society full of people who rely on tools they don’t understand or control is not empowered. It is a hollow and brittle one.

So we need to evolve our cultural expectations around authorship and effort. That means distinguishing between mindless automation and thoughtful collaboration. AI is here to stay, and the institutions that figure out how to integrate it with integrity will have a distinct advantage. If we get this balance right, AI can make us sharper, faster, and more capable. If we get it wrong, it will yield mediocre humans with diluted standards. The choice is not between mastery and honesty. We need them both.

Dan Perry is the former Cairo-based Middle East editor and London-based Europe/Africa editor of the Associated Press; he holds a master’s degree in computer science from Columbia University and publishes danperry.substack.com. Veteran entrepreneur Ronni Zahavi is the founder and CEO of HiBob, an HR technology innovator and AI-driven unicorn.