Are you one of those people who have asked things of ChatGPT or other text generators and used “please” in your prompts, or even “thank you” after getting your results? Amusing, right? We humans often treat inanimate things as if they were human, and as if they could appreciate our politeness. Almost like patting a dog on the head when it has performed a trick and smiling when it wags its tail.
There are many sound reasons to resist the temptation to do this, to anthropomorphize generative AI. Personally, I want to be careful about what terminology I am using when talking about various aspects of this AI. Here’s why this is especially important for educators aiming to promote and model AI literacy.
Artificial Intelligence
As many people have pointed out, what we now call “artificial” or “machine intelligence” (a term that has existed since the days of Alan Turing to describe the capabilities of software) is a bit of a misnomer, especially in what it insinuates: it essentially attributes human qualities to a machine. A human can be intelligent, intellectually capable of reasoning, thought, planning and creativity. When generative AI mimics certain processes and produces suggestions based on algorithms, calling that “intelligence” is not accurate, and it may make some people feel that the software is more capable, i.e. more intelligent, than they are. That is not the case. The software is making predictions on the basis of a training model and producing output according to its programming. Even when machines become capable of autonomous processes, let us refrain from calling that “reasoning” or “making decisions”. The term “artificial intelligence” has now entered the mainstream, however, and it is difficult to extricate it from the general discourse. Lately, I’ve used the abbreviation AI (in German KI) or “generative AI”, having no better alternative at the moment.
Thinking, believing, feeling
We have also seen users describing the AI’s actions as “thinking”, or becoming angry and disappointed with a chatbot after reading its outputs or not getting the expected results, or concluding that it is “lying” or being “woke”. Software cannot feel emotion, harbour nefarious intent, or pursue hidden personal agendas. Of course, each chatbot is trained on certain models, and many companies add restraints to pre-empt abuse or misuse. But the programme itself cannot be held responsible for anything it produces. As such, I’ve gotten into the habit of using neutral language when describing outputs.
Creativity vs generating
In the same vein, I try to encourage my students to see the creative process as human: when they use AI, it plays an assistive role, generating options that should be reviewed by a human. Young learners still acquiring key writing and cognitive skills should especially be aware of the primary role that they themselves play as authors, responsible for what they submit or publish. As such, I say the software “suggests” or “generates” results.
Quality of text
We educators need to impress certain truths on our students. Do they know how texts (and other outputs) are generated? How it works? Chatbots produce synthetic text: machine-generated, mimicking human expression and style, but still synthetic. Because a chatbot is declarative (e.g. “here is the answer”), it is easy for the inexperienced to accept its “confident” statements at face value, overestimating the AI’s capabilities or believing its biases. Let us create activities in which our students interact with chatbots, so that they become familiar with these pitfalls and learn to see these tools in their subordinate role.
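For readers who want a concrete intuition for the point above, here is a purely illustrative toy sketch in Python (my own example, not from any real chatbot): a tiny bigram model that only counts which word tends to follow which, then “generates” text by chaining the most frequent continuation. Real large language models are vastly more sophisticated, but the principle is the same: statistical continuation, not thought.

```python
from collections import Counter, defaultdict

# A toy "language model": it counts which word follows which in a
# tiny corpus, then predicts the statistically most likely next word.
# No understanding, no intent -- just frequency counts.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

def generate(start, length=5):
    """Chain predictions, one word at a time, to 'generate' text."""
    words = [start]
    for _ in range(length):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))
```

The output looks superficially fluent because it echoes patterns in the training text, which is exactly why synthetic text can be mistaken for understanding.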
Don’t believe the hype
While we have all been surprised at what generative AI can do, it is important to discern how companies frame their products and services, because they are, at their core, companies vying for market share. Whether it’s Google showing off Gemini or Microsoft touting the potential of Copilot, they are demonstrating prototype versions that are not immediately available to regular users and in some cases won’t be for some time. Many in the media are guilty of repeating their PR talking points. The podcast Mystery AI Hype Theater 3000 has been doing a great job dissecting over-hyped articles on generative AI. It is imperative that we adopt a wait-and-see attitude so that we can remain healthily critical and use common sense.
So, while we educators navigate this brave new world and collaborate to create norms, rules and wise ways of working, let’s be mindful of how we frame what AI is doing to us and for us so that we can remain balanced.
Recommended reading: AI and the Rise of Mediocrity by Ray Nayler https://time.com/6337835/ai-mediocrity-essay/
Picture: DALL-E3
Prompt:
A whimsical image of a single cute robot in a Pixar-like animation style, set against the backdrop of a futuristic library. The library is filled with towering wall-to-wall bookcases, brimming with colorful digital books. The robot, with a round, friendly shape and bright colors, strikes a thoughtful pose reminiscent of Rodin's 'The Thinker', with one hand on its chin. It has expressive digital eyes conveying a sense of wonder and contemplation. The scene combines elements of advanced technology with the charm of a classic library, creating a unique and imaginative setting.
One other thing I saw recently that feels like it is in line with your post's theme is this piece in The Guardian:
https://www.theguardian.com/commentisfree/2023/dec/29/the-guardian-view-on-the-ai-conundrum-what-it-means-to-be-human-is-elusive
A lot of food for thought in this post. I agree with a lot of your points, especially those about how educators describe and talk about AI with students. I think definitions of AI and its "intelligence" are a bit more of a grey area, or an area that might be shifting under our feet. I think there is more to this, but I like this overview from an IBM course on AI Fundamentals I'm taking:
What's the difference between AI and augmented intelligence?
AI and augmented intelligence share the same objective, but have different approaches.
Augmented intelligence has the modest goal of helping humans with tasks that are not practical for them to do on their own, for example, reading 1,000 pages in an hour.
In contrast, artificial intelligence has a lofty goal of mimicking human thinking and processes.
Artificial intelligence is the ability for machines to perform tasks that normally require human intelligence such as reasoning, natural communication, and problem solving.
**AI**
> Replaces the need for a human.

**Augmented intelligence**
> Machines and humans working together to enhance each other's efforts when completing tasks.
There's more, but it's in graphic and table form that doesn't work here.