Discerning the Digital Dilemma: Sports Illustrated, Vanderbilt, Grammarly, and Artificial Words
Hello, Human Writers,
When can you use AI ethically?
I am asked that question almost every day. Here is the 30,000-foot answer.
I first heard about ChatGPT a year ago. As with any new technology, apprehension and concern quickly surfaced. In the case of AI, questions about human creativity, privacy, and misinformation rose to the top. If you’ve learned anything about me at this point, it’s no surprise that I am an early adopter. I came by it naturally; my mom became an advocate for composing digital music on early Macs in the ’80s. However, I am also the daughter of a lawyer. My father always taught me that privacy was foundational to autonomy; without true autonomy, we would lose our freedom. I am very comfortable working in the space between rapid adoption and deep concern.
If you consider yourself cautious about technology, that is okay. Fear and apprehension regarding new and emerging media are a normal part of the process. There are stories of people who jumped out of the way when seeing a train projected on a movie screen for the first time, or who considered telephones a way to communicate with the dead. These examples extend to the printing press, trains, computers, and even Xerox machines. To a certain degree, our worry shows that our critical thinking skills are working. According to David Henkin, in an article published in Forbes, rapidly evolving technology can lead to dramatic change that leaves us feeling like we are losing control: “We may feel like we are losing our privacy and autonomy.” That feeling is what unsettles us about the future.
Every advance or change comes with apprehension, but it is our job to weigh the cost against the benefit. By agreeing to allow Google to track my location, I definitely give up some privacy. I also gain the ability to use Google Maps to navigate. And for someone like me, who is directionally challenged, the benefit outweighs the cost. On one level, these decisions are personal, but they also have societal ramifications. In every decision, we have to acknowledge that there is a cost for society.
So, whether you are an early adopter or a reluctant follower, let’s approach this topic with a healthy mix of enthusiasm and skepticism. Take a moment, just for this article, to sit in the tension between the two.
Class is now in session.
Side Note
I will address a critical side note at the end of the article, but I have a moral obligation to place the caveat here as well. This article explores using AI in a professional setting. While these guidelines can apply to students, students should ALWAYS follow their professor’s rules. See more about this at the end of the article.
The Dilemma
When do words, and the sentences they form, belong to the author? When machine-assisted material is combined with human creativity and critical thinking, at what point does it become plagiarism? At what point does a work cease to be credited to human thought and become a product of machine output based on pattern recognition and advanced coding?
When you boil it down, the dilemma is between human critical thought and machine pattern output. It hits on the question of what it means to be human.
When I was in 6th grade, my teacher, Mrs. Cole, had us write a series of short stories. I am a very creative individual, but any author can attest that nothing is scarier than a blank page. The hardest part of the writing process is going from nothing to the first draft. In Bird by Bird, Anne Lamott says, “Almost all good writing begins with terrible first efforts. You need to start somewhere. Start by getting something – anything – down on paper.”
I remember sitting at our early Mac and staring at the blank page. I couldn’t think of what to write or even where to start. As it became apparent that my frustration level was rising, my dad called me over. He made me leave the “scary blank” page to sit with him.
He proceeded to ask me a series of questions. “Who is this story about?” “Tell me more about them.” “What trouble do they get into?” As he prompted me, the story became clear. I quickly left to type out the inspiration he had instilled in me.
When I found myself stuck again, he was ready to discuss the next points in my plot. He coached me through the process by asking thought-provoking questions or suggesting changes in the narrative, fueling my creativity. After completing the first draft, he helped me rewrite, rework, and review my story.
While this discussion concerns using AI to modify works, it is worth noting that there is much debate about whether using AI-generated content is considered plagiarism. It comes down to whether plagiarism is unique to copying human works or can also apply to copying the output of a machine. Furthermore, in 2023, a US district court ruled that works of art created by artificial intelligence cannot be copyrighted under US law without substantial human input.
The Response
AI is currently a divisive topic. Some are ready to fully embrace the technology, and others warn that its rise will lead to humanity’s demise. As in many cases, the answer is somewhere in between. Let’s examine the arguments on both sides.
In March of 2023, more than 1,000 leaders in the technology industry signed an open letter warning that AI poses “profound risks to society and humanity.” They called on all AI labs to “immediately pause for at least six months the training of AI systems more powerful than GPT-4.” The letter currently has over 33,000 signatures. It argues that the potential outcomes of such a significant shift in life must be planned for, managed, and predicted, rather than left to the “out-of-control race to develop and deploy ever more powerful digital minds.”
Their concern built on a statement from OpenAI admitting that “at some point, it may be important to get an independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” OpenAI laid out three principles it was considering.
- AI needs to “empower humanity to flourish.”
- Benefits, access, and governance of AI should be shared.
- Risks need to be mitigated by releasing less powerful versions to observe potential unintended consequences in practice.
The open letter argues that “humanity can enjoy a flourishing future with AI,” but we must pause and consider what is coming next.
In March 2024, the UN General Assembly adopted a resolution to promote the “safe, secure, and trustworthy” use of AI systems. Backed by more than 120 Member States, the resolution acknowledged areas in which AI could assist humanity in achieving its goals. But a balance must be struck: as Linda Thomas-Greenfield, US Ambassador and Permanent Representative to the UN, said, society must learn how to “govern this technology rather than let it govern us.”
AI researcher Timnit Gebru, speaking at RightsCon in 2023, said, “Ascribing agency to a tool is a mistake, and that is a diversion tactic. And if you see who talks like that, it’s literally the same people who have poured billions of dollars into these companies.”
Many voices in the public share these concerns about AI.
However, not everyone dismisses the potential doom of AI. Geoffrey Hinton, an Emeritus Professor of Computer Science at the University of Toronto, left Google in 2023 so he could speak freely about the technology’s risks.
Frederike Kaltheuner, Human Rights Watch’s tech and human rights director, suggests we lean into the risks we know AI has rather than spending our energy speculating on what risks might come. According to an article in MIT Technology Review, well-documented risks include:
- Increased Misinformation
- Biased Outputs
- Erosion of Privacy
Examples of AI Scandals
Sports Illustrated
In December 2023, Sports Illustrated’s publisher, The Arena Group, announced the firing of CEO Ross Levinsohn after months of speculation and reports about the website publishing AI-generated articles.
To go one step further, these AI-generated articles were attributed to fake AI “authors,” complete with photos and bios. Futurism sounded the alarm on the content and on the AI authors, whose bios included where they lived and fun facts about them. Sports Illustrated initially did not respond to Futurism’s questions but did delete the authors in question from its website. When the company finally responded, it blamed the error on a third party, AdVon, which had provided the content to the website.
This issue is not unique to Sports Illustrated. Futurism claims it has caught other organizations, including CNET, BuzzFeed, USA Today, and Gannett, publishing AI content, sometimes disclosed, containing factual mistakes and even plagiarism.
Regardless of how the articles came to be on the site, the scandal deepens when they are published under fake AI writers. When the site pairs these AI writers with photos and biographies, it is attempting to trick the audience. That level of deceit and dishonesty takes the scandal to a dark place; it starts to constitute fraud.
Vanderbilt University
In February of 2023, the Office of Equity, Diversity, and Inclusion at Vanderbilt University’s Peabody College sent a mass email to its student body and posted a condolence letter online in response to the mass shooting at Michigan State University.
Although the statement disclosed “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023,” it attracted national headlines, and Vanderbilt students reacted negatively. The letter also contained a few inaccuracies, including referring to “shootings” when there was a single incident.
According to an article published by The Hustler, Vanderbilt’s student newspaper, a university spokesperson stated, “We believe all communication must be developed in a thoughtful manner that aligns with the university’s core values.”
Many students noted the irony of using AI to write a heartfelt message in response to such a tragedy “because you can’t be bothered to reflect on it yourself,” as Bethanie Stauffer, a 2022 graduate, put it. Others pointed out that some faculty would consider similar use on a paper to be cheating.
The university released a statement to multiple news organizations claiming that “the development and distribution of the initial email did not follow Peabody’s normal processes providing for multiple layers of review before being sent” and that university administrators were “unaware of the email before it was sent.” Dean Camilla P. Benbow expressed her personal concern for the Michigan tragedy, but by that point, the damage had been done.
In this instance, the issue boils down to the message’s intent. The message was meant to bring a hurting community together and offer support in a time of need. However, no one wants condolences from a machine.
Grammarly
In 2023, University of North Georgia junior Marley Stevens claimed that using Grammarly to edit her paper led to a failing grade and academic probation. Grammarly has been aggressive in adding AI capabilities to its platform; according to Stevens, this led to her “unintentionally cheating.” The university claims her use of Grammarly constituted an unauthorized use of AI. It is important to note that AI-detection systems have been shown to unintentionally flag Grammarly-edited papers as AI-generated content.
Jenny Maxwell, the head of Grammarly for Education, admitted that AI detection across the board is faulty. Because the software is new, the company has yet to perfect its place within the platform. She continued, “These tools are here. They are not going away, and education has a moral responsibility to teach students to use these tools both responsibly and effectively.”
Grammarly makes a compelling case that we need to redefine academic integrity. However the policy is adapted, it needs to leave room for context. Matthew Nimeth, director of instructional technology at Colorado Christian University, says, “These are tools that are just accepted in the workplace. Now more than ever, students need to know how to use Grammarly as they emerge with their college degrees.”
The University of North Georgia released a statement to Fox 5 Atlanta stating, “The inappropriate use of AI is addressed in our Student Code of Conduct. Everything we do at UNG is grounded in our shared value—students come first, always. This guides all our decision-making, with no exceptions.”
In the end, Grammarly and Stevens partnered to create educational videos to help other students understand the use of AI within Grammarly software.
Detecting AI Content
It is worth pointing out that AI content is extremely difficult to detect. In the summer of 2023, Turnitin and OpenAI both acknowledged a “low rate of accuracy” in their AI detectors. This is because generated content is built from complex pattern recognition rather than carrying any distinct, machine-specific signature. Faculty need to be aware of these inaccuracies.
As Plagiarism Today points out, “There’s a great deal of confusion about AI in the classroom and, while there’s a lot of promise there, there’s also a great deal of fear and anxiety, on all sides…It is important to both ensure students are learning what they are supposed to be, not simply using AI to ape that knowledge. Still, it’s also important to teach and familiarize students with AI as, whether we want it to or not, it is going to be a tool that they will have to use at some point in their lives.”
I run everything I write through plagiarism detectors to understand what they flag and where potential issues might lie. However, I have also found them to be highly unreliable. GPTZero often flags human content as AI or AI content as human. I have also attempted to give ChatGPT back its own content and ask if it wrote it; I’ve been surprised by how many times it has denied writing the content because it shows “too much creativity,” something the system claims it cannot do.
So, what is the answer for the classroom?
Rather than detection, the focus will shift to prevention. Faculty members need to address the factors that lead students to rely on AI and modify their assignments to emphasize the things AI is not good at. GPTZero’s new feature, Request Edits, shows promise: it allows students to write in a monitored environment where edits are made and tracked in detail.
Professionally, the MLA-CCCC Joint Task Force on Writing and AI released a paper warning faculty about faulty AI detectors. The task force recommended focusing on student support rather than punishment to “promote a collaborative rather than adversarial relationship between teachers and students.” It went on to warn that because we know the detection tools are unreliable, we must be mindful of the consequences of false accusations, particularly against marginalized groups.
The Gradient
The dilemma comes down to two things: the disclosure of the information (transparency) and the expectation of human creativity (authenticity). No one wants to be “Willy Wonka’d” (as my students say, referring to the Glasgow Willy Wonka experience). Acting ethically and with integrity means moving forward with authenticity and transparency. The more a project leans toward factual accuracy, the more the audience expects authenticity and disclosure of AI use. How do we balance these two things while using AI? Let’s examine the Gradient Scale and break down our cases.
The scale has three primary considerations the content creator must acknowledge.
- Be aware of the “deal” your content makes with the audience.
- The audience’s understanding and sophistication can/will change over time.
- Elements of visual and non-visual storytelling occasionally convey unstated messages to the audience.
Be aware of the “deal” your content makes with the audience. (Authenticity)
A “deal” represents a nonverbal agreement between the content creator and the audience. In the case of writing, many readers expect that your words were created by you; this is the expectation of authenticity.
I use ChatGPT and Grammarly in my writing. I use them as partners: I develop the initial ideas, approve the suggestions, and review and deliver the final product. How is that different from using a writing coach, or from having a PR professional write a speech for a client? You know the President has a speechwriter, right? And compared to the President’s, my thoughts and ideas are not nearly as important, nor do they carry as many implications for people.
If I put my name on an article, there is an expectation that I wrote it and not someone else. If I list a co-author, we are equal contributors to the words, thoughts, and ideas. AI cannot and should never be your co-author. Most of the words, thoughts, and ideas should come from you as the primary author. That is a fine, nuanced line, but it is one that we must maintain.
The audience’s understanding and sophistication can/will change over time. (Authenticity and Disclosure expected)
It is important to note that over time, as the public becomes more aware of AI and it is integrated more deeply into our work, the expectation of authenticity and the required level of disclosure will change. With that understanding stated, let’s look at how I define what constitutes a significant AI contribution to a creative project.
In this case, I define a co-author or collaborator as someone contributing at least 41% to the project. The human, then, should contribute at least 60% of the words, thoughts, and ideas, which caps AI’s share at 40%.
Why allow AI to contribute 40%?
Many will argue that this number is too high. I understand that, but I also want to recognize that AI is on the road to becoming commonplace. Our audience’s sophistication is not there yet, but at the pace at which AI is developing, I anticipate the public’s understanding of these tools will develop quickly as well.
So, while 40% seems shocking now, as the audience’s sophistication increases, it will become a reasonable expectation. A 50/50 partnership with a machine, though, seems inappropriate; at that level, you would need to credit the machine as a collaborator.
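Since we are talking in percentages, here is a minimal sketch in Python of how these thresholds fit together. It is entirely illustrative: the thresholds are the ones above, but the `ai_share` input is a hypothetical estimate, since there is no agreed-upon way to measure a contribution percentage.

```python
# Illustrative thresholds from the discussion above.
CO_AUTHOR_THRESHOLD = 0.41  # at 41%+, a contributor counts as a co-author
HUMAN_MINIMUM = 0.60        # the human should supply at least 60%

def classify_ai_role(ai_share: float) -> str:
    """Classify AI's role given its estimated share of a project's
    words, thoughts, and ideas (0.0 to 1.0)."""
    if not 0.0 <= ai_share <= 1.0:
        raise ValueError("ai_share must be between 0 and 1")
    if ai_share >= CO_AUTHOR_THRESHOLD:
        # A machine should never hold this role; rethink the workflow.
        return "co-author territory"
    if 1 - ai_share < HUMAN_MINIMUM:
        return "human contribution too thin"
    return "assistant (human remains the primary author)"

print(classify_ai_role(0.25))  # assistant (human remains the primary author)
print(classify_ai_role(0.45))  # co-author territory
```

The point is the ordering of the checks, not the precision of the numbers; nobody can actually measure a 41% share of an idea.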
Elements of visual and non-visual storytelling occasionally convey unstated messages to the audience. (Disclosure)
Disclosing the use of AI is a non-visual element that conveys to the audience that the author is not solely responsible for the content. But that level of transparency is not black and white.
I use AI openly. Do I need to add “assisted by AI” at the end of every email and blog post? Probably not. I did not disclose that my dad helped me write my story in the 6th grade—sorry, Mrs. Cole.
The Insight
In each case we examined in this article, the audience’s expectations were mismatched with the creator’s delivery. As we’ve seen before, an ethical line is crossed whenever this inconsistency is present. The more a project leans toward factual accuracy, the more the audience expects authenticity and disclosure of AI use.
In the case of Sports Illustrated, the company went to great lengths to hide that the articles were generated by AI, acting without authenticity or transparency.
Vanderbilt University provided transparency; however, the weight of the tragedy left the audience wanting authenticity from human creators rather than superficial machine sentiments.
Finally, with Grammarly, Marley Stevens claims she was acting authentically (the work was hers) and only used Grammarly for sentence structure and spellchecking. The school, however, claims that her lack of transparency constituted an ethical breach given the assignment’s intended outcome.
The difference between using AI ethically and unethically is the “re-.” By using AI to re-something, you are building upon your own creativity and critical thinking. For instance, using AI to reimagine, rework, rewrite, review, refuel, reconceptualize, reignite, refashion, reinvent, rethink, revitalize, reexplore, reenvision, research, recreate, refresh, reposition, etc., puts AI in the ‘in-between step,’ bridging the gap between the initial idea and the final product.
Human creativity births the initial idea, and human revision shapes the final product. Like a writing coach, AI is merely the facilitator, enabler, intermediary, bridge, or enhancer. To maintain the ethical line, it cannot be the co-creator or collaborator. We must keep AI in the nuanced role of extending human capabilities rather than replacing humanity. This stance prioritizes human agency, judgment, and responsibility throughout the creative process. In the decision-making era of AI, machines should never become autonomous entities with creative agency.
AI has some distinct limitations. To learn more about current best practices for generating content with AI, visit my blog post on the topic.
What it is good at.
- Brainstorming and generating first-draft content.
- Analyzing writing style and tone (e.g., active vs. passive voice).
- Recommending changes.
- Seeing patterns and outlining content.
What it is bad at.
- Critical thinking.
- Making targeted changes; it tends to want to change everything, as if wielding a thesaurus.
- Staying grounded in fact; AI tends to hallucinate.
- Staying current; AI lacks up-to-date information unless plugged into search engines.
- Fact-checking.
According to Gary Smith and Jeffrey Funk, in an article published in the Chronicle of Higher Education, “When asked to solve problems that necessitate critical thinking, AI’s responses are consistently confident, verbose, and incorrect. AI is incapable of the critical-thinking abilities required to offer reliable advice or ‘intelligent contributions’ — the kind of critical-thinking skills that educators should promote.”
This extends to AI’s ability to navigate nuanced decisions. Not only are machines not good at them, but they are notorious for being unable to understand the gray areas where humanity lives.
So here is my takeaway: if you use AI the way you would use a human being (asking for suggestions, searching for ideas, getting unstuck creatively, or even wordsmithing phrases), then it is ethically okay. But, just like a writing coach, AI alone should never write your final product.
Classroom Sidenote
I told you we would return to the huge caveat in this entire discussion: the purpose of assignments within the classroom. The expected outcome changes in a learning environment, especially in an English class. Rather than focusing merely on the final product, a classroom is a place to learn and hone the craft of writing. To do that effectively, AI technology has to be removed from the process. You must crawl before you walk.
I tell my students never to hand AI something and say, “Rewrite this.” You get odd results. But asking it to list potential changes or to highlight passive voice is where it excels (see the sketch below). You have to maintain control over the process.
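As an illustration, here is roughly what that kind of controlled request looks like through OpenAI’s Python library. This is a minimal sketch: the model name, prompt wording, and sample draft are my own placeholders, not a prescription.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft = (
    "The report was written by the committee, and the results were "
    "reviewed by the board before the decision was made."
)

# Ask for observations, not a rewrite: every change stays in the author's hands.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are an editing assistant. Do NOT rewrite the text. "
                "Only list sentences written in passive voice and suggest "
                "possible changes, leaving every decision to the author."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```

The point of the prompt is the constraint: the model reports and recommends, and the human decides what, if anything, to change.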
So, the big caveat is there are times when the assignment’s learning outcome is to improve as a writer. In that case, you have to remove AI from the process. You cannot use a calculator to do math in elementary school because the objective is to understand the basics. But at some point, using a calculator becomes an acceptable practice when the objective shifts.
Critical thinking is foundational to using AI well. You can only evaluate AI’s output effectively if you understand the fundamentals yourself, and to develop that critical thinking, you must practice without the technology. We see this with other technologies as well.
My 3rd grader is learning multiplication and division. Even though I use a calculator at work, it is unethical for her to use one on her homework. The difference is the expected result of the task. I am producing a product for my employer. I am tasked with creating a high-quality product. I should utilize the tools available to boost productivity and deliver the best product possible. On the other hand, she is tasked with learning the foundations of math.
It’s not only about the fundamentals; it’s also about finding your own style. I am starting to see students who use AI tools to improve their writing (ethically), but who are starting to write like the AI. So, it is a balance between learning to write, developing your own style, and knowing how to harness the power of AI. It is a journey, and the target will be a moving one for a while.
Sometimes, we must first understand what the machine is doing so we can utilize our critical thinking skills to assess whether it is correct or incorrect. You can only do this by removing your reliance on the machine. So, if you are a student, understand your faculty’s expectations for the use of technology on each individual assignment.
Next time, we will explore the use of AI to generate headshots, including my own personal experience.
What other topics would you like me to look at?