Professor Gibson
[Image: an AI-generated neon sign reading "AI," surrounded by red and blue lights.]
AI and the Human

A Case for Not Disclosing AI Work

Hello Human Friends,

This is a highly unpopular opinion, but hear me out.

AI comes with challenges, and one of those is knowing what was produced by AI and what was not.

This challenge, fueled by fear of the unknown, has led to many discussions about how to disclose the use of AI.

I believe the real question isn't whether and how to disclose at every turn; it's about understanding our audience.

AI doesn't need to come with a neon label in every interaction. What matters more is whether the audience can clearly grasp the nature of the interaction and whether it affects the outcome. In fact, when AI is used well, the boundary between its contribution and the human's is hard to draw, and that is exactly what leads to lengthy, convoluted disclosure statements.

Instead of focusing solely on disclosures, let’s consider context.

If concealing AI's involvement would mislead someone into thinking they're interacting with a 100% human creation, transparency is essential. But if the involvement is obvious or irrelevant to the outcome, maybe less is more.

This is a nuanced approach that requires training and understanding. But, for me, it beats the negative "gotcha" approach of over-disclosing. Lengthy, complex disclosures often distract from the message of the original creation.

Ultimately, trust is built on how well we understand and respect our audiences’ expectations—not just on labeling.

The Bigger Challenge: Understanding the Fear

The fear of AI stems from a broader concern: it feels like something we can’t fully control or predict. This fear is often why people push for over-disclosure. They believe labeling AI involvement will create clarity, but it can also reinforce misunderstandings about how AI works.

Take the classroom, for example. A student sees me use AI to help with brainstorming or lesson planning, but because the process isn't fully explained, they assume it's a lazy shortcut. The truth is, AI-assisted work done well often requires deep critical thinking, iteration, and refinement. Without understanding that, students dismiss it as less valuable than purely human effort and label me a lazy professor.

Similarly, in professional contexts, creators may feel compelled to over-disclose their use of AI, leading others to question the originality or effort behind their work. A neon sign saying “AI was involved here” can overshadow the creativity and strategy that went into the process, leaving audiences fixated on the tool rather than the result. It can also harm the intent of the message.

Moving Toward Understanding

To address this challenge, we need to shift the conversation from labeling to education. Instead of relying on disclaimers, we should focus on helping people understand how AI is being used and why it matters.

  • Teach the Process: Show that AI isn’t an end but a means—a tool that enhances human creativity and efficiency rather than replacing it.
  • Contextualize the Work: Explain the role AI played when it’s relevant, like helping generate ideas or refine drafts, so the audience can appreciate the collaborative effort between human and machine.
  • Foster Critical Thinking: Encourage audiences—students, professionals, and consumers alike—to evaluate outcomes, not just origins. Did the work solve the problem, inspire, or communicate effectively? That’s what matters most.

By focusing on understanding rather than over-disclosure, we can build trust in a way that respects both the audience and the creative process.

Disclosures themselves are not a bad thing; disclosing just for the sake of disclosing is. If the audience would feel betrayed to learn the message came from AI, then let's disclose.

AI isn’t the villain here, nor is it a magical solution. It’s a tool—a powerful one, yes—but only as valuable as the intent and thought behind its use. Let’s shift the narrative from fear and skepticism to thoughtful engagement and transparency that fits the context.

In the end, trust comes not from lengthy labels but from how well we meet the expectations and needs of those we’re trying to reach.


The image was created by Midjourney using the following prompt: Show an overuse of AI disclosures like neon signs pointing to a work that are distracting and overboard.