ChatGPT: thinking out of the box

Posted on January 23, 2023 | Eduardo Oliveira

My own reflections on ChatGPT #2
(perspectives and opinions are my own)

In my previous post on Dec 9 (2022), I spoke a little about what ChatGPT is and briefly discussed a few challenges this technology poses to education. Since then, we've all been bombarded by ChatGPT articles, social media posts, and so on. At this point, most of the articles I've read about ChatGPT and education focus on raising concerns about assessments, academic cheating, the originality of essays and so on. That's truly important, no doubt. But it's not the only place our attention should be. Apart from a select few, we are missing the opportunity to focus on the immeasurable value this technology brings to education.


The OpenAI/ChatGPT team is aware of (and concerned about) these issues and plans to release watermarks to let academics know when texts were generated by the tool (while also acknowledging that there may be ways to overcome that). Additionally, heaps of new models are under development to try to detect ChatGPT use in texts. There is a lot happening in this space (academic cheating, text originality, ...)!




Personally, I believe we have a fantastic opportunity to reflect on our teaching and assessment-for-learning methods: new forms of assessment, new taxonomies, new conversations and reflections on honour codes/ethics and so on… what a wonderful and really exciting time to be an educator. Remember the famous quote from Christa McAuliffe? "I touch the future. I teach". That's it! This post is about the future, of which ChatGPT is a part.

As educators, we may feel intimidated and even a bit panicked by the challenges and changes that ChatGPT demands of us. In response to the surprise (we were not ready for ChatGPT's release), news such as "NY has blocked ChatGPT access in schools" and "state schools in QLD and NSW will also ban ChatGPT for students" has become normal. Educational institutions are divided: some avoid the technology, while others say it is unavoidable and should be explored. Perhaps these are temporary bans to buy some time to figure out how to incorporate these technologies into teaching and learning journeys/processes. Who knows?

Professor Silvio Meira brilliantly asked us to imagine ourselves in the Middle Ages, in a place where there was no paper, pen and ink. In his post here, he says... "Suddenly, the three (paper, pen and ink) appear and someone who already knows how to read and write begins to enable many more people to do the same. From there to books appearing and alternative versions of scriptures is just a jump in social space-time; even if it takes decades, in previous centuries nothing had happened. Technology (like writing, paper, pen and ink) liberates. And, as NYC has interpreted it (*here, his text makes a reference to NYC blocking ChatGPT use in schools*), it threatens the status quo. It is not by chance that, at the beginning of the information revolution -with Gutenberg's press, from 1454 onwards- and when it had the power to do so, the Catholic Church controlled the publication of books with its imprimatur [bit.ly/3WUfGWM]. Smartphones, which have the potential to free students from classroom walls, are still banned in most schools around the world, under the assumption that their ban reduces bullying and student dispersion [bit.ly/3GPupfZ]. But what does science say? That there is no impact of the cell phone ban on student performance [bit.ly/3Cw1eMI]. The case, of course, is not solved by science or technology, but by a combination of habits, behaviours, politics and… learning. One day, the school learns that it makes no sense to ban things. Not smartphones, not ChatGPT, not… books. None of them".


I fully agree with Professor Meira. Schools that do not embrace these technologies and adapt their methods will do their students a disservice by denying them access to technologies that may well be essential to their present and future employability.


Educators

There is a gigantic opportunity to reinvent ourselves (and our T&L spaces). Imagine these new teaching possibilities:

  • ChatGPT can impersonate historical figures such as Freud, Gandhi, Turing, Darwin, … and promote engagement and learning among students. We can let students ask these historical figures how things happened back in their time, have them explain ideas, and receive contextualised/immersive answers to their questions. "Freud, can the unconscious ever truly be conscious?";

  • ChatGPT can also impersonate people from different cultures/backgrounds so we can explore different perspectives in different contexts. "What is it like to live as an Indigenous person in the Amazon forest?";

  • ChatGPT can describe things in different ways to people with visual or cognitive impairments (promoting inclusion). The same content can be delivered in various ways, according to different students' needs. "Can you explain Einstein's theory of relativity to someone with limited vocabulary?";

  • ChatGPT can help your students validate ideas/products: how would this or that group of people like to use a product with these features and characteristics? This could be incorporated into students' learning process when designing projects, prompting many reflections along the way. "How do you think Indigenous people in the Amazon forest would benefit from the use of Apple AirPods?";

  • ChatGPT can help to expose people to different perspectives. AI can describe how people feel, how they experience things, and so on. Think about ChatGPT helping you in the design of personas, for example. We can do this in completely innovative ways: ask it to create a story told from the perspective of a refugee, someone with lived experience of XYZ, an athlete, someone with a particular personality, whoever we want;

  • Digital ethics? Yes. ChatGPT has us covered. Students could chat with Aristotle, Plato, Socrates… We can also use it to create ethical problems incorporating the different perspectives of those involved. Think about this (real-world) case study:

The Boeing 737 is a jet airliner that has been progressively updated to improve fuel and operational efficiency, and airlines around the world were eager to order each new variant. Earlier 737s had significant flaws in the rudder control system, which were implicated in several air accidents. On the 737 MAX, Boeing fitted larger, repositioned engines and introduced the MCAS system to compensate for the resulting change in handling characteristics. However, the new system was not mentioned in the 737 MAX flight manuals or training programs, which presented the aircraft as handling like the 737NG series; this left many pilots unaware of the system's existence. Flaws in MCAS, combined with that lack of awareness, contributed to fatal air accidents, and the reasons are very complicated.


ChatGPT can be used to promote reflection on, and understanding of, the causes of accidents in which software development decisions played a role, for example.

  • Incorporating AI literacy into learning provides opportunities to develop a culture of academic integrity (such as an honour code). These reflections, which align with 21st-century skills, can help students commit to ethical practices in their studies. The appropriate, mindful, conscious use of AI (and similar technologies) can promote an ethical mindset among students and encourage the habit of thinking about the right thing to do rather than simply following a set of rules, which is crucial to their future as responsible citizens;

  • ChatGPT can help us with the creation of teaching content and examples that are diverse and inclusive;

  • And much more. This great post from The University of Sydney highlights how we can use ChatGPT to generate lesson plans, to generate/promote discussions on discussion boards, to help us create assessment rubrics on specific content and for specific audiences, to provide feedback to students and so on;
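To make the "impersonate a historical figure" idea concrete, here is a minimal sketch of how an educator might structure such a prompt programmatically. It only builds the message list; the client call at the end is shown as a comment because the exact client, model name and parameters are assumptions for illustration, not a prescription:

```python
# Sketch: building a persona-style chat prompt for a historical figure.
# The message format (system/user roles) follows the common chat-API
# convention; the model name and client usage in the comment below are
# illustrative assumptions only.

def build_persona_messages(persona: str, question: str) -> list[dict]:
    """Build a chat message list asking the model to stay in character."""
    system_prompt = (
        f"You are {persona}. Answer in the first person, in {persona}'s "
        "voice and historical context. If asked about events after your "
        "lifetime, say you could not have known them."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

messages = build_persona_messages(
    "Sigmund Freud",
    "Can the unconscious ever truly be conscious?",
)

# Hypothetical client call (adapt to your API client and model of choice):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-3.5-turbo",
#                                          messages=messages)
```

The system message does the pedagogical work here: it constrains the model to the figure's voice and era, which is what turns a generic chatbot into the kind of immersive, in-context conversation described above.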


Students

How wonderful that you can now learn and access information in various innovative ways! Imagine these:

  • Opportunity to use technologies that will be (largely) available in the future;

  • Opportunity to access diverse and inclusive content;

  • Opportunity to learn content from different perspectives;

  • Opportunity to learn how to better perform coding tasks;

  • Opportunity to communicate, interact and learn from impersonated historical figures;

  • Opportunity to improve writing and communication skills: how to structure articles? essays? working plans? training programs? recipes? meeting minutes? how to explain things in different ways? endless possibilities.

  • Opportunity to think out of the box and to explore content in creative/innovative ways: "explain what a comet is to a year 1 school child, as if you were a bee";

  • And much more. Students also have the opportunity to learn how to give better feedback to each other, to reflect on their essays and tasks, to work in collaboration with an AI assistant so they don't feel isolated (plus they can validate their ideas, which helps them build confidence in the related topics) and so on.


ChatGPT is one of the most significant innovations of our time. No doubt! However, the technology is far from perfect! It's important (during this learning period) that we acknowledge the tool's limitations and potential risks/issues as well. Many articles have raised concerns about it generating biased, racist or inappropriate content, for example. ChatGPT (and similar AI models) can also invent facts (while presenting them with great confidence and sophisticated vocabulary) without providing reliable sources or transparency. OpenAI Chief Executive Officer Sam Altman suggested people hit the "thumbs down" on these kinds of results to help ChatGPT improve.



As this post from The Decoder suggests, "the core question controversially discussed in the educational context is: Should AI-supported writing tools be used proactively in the sense of generators of draft texts in the classroom to ultimately generate higher quality work via the automated creation of initial draft texts and the subsequent “manual” optimization of the texts? From our standpoint, the answer is: Yes. Or rather: Yes, but. It is what it is: AI language models and systems are a fact of life in knowledge work. Hiding and ignoring are not appropriate tactics. But if it’s no longer a matter of “whether” to use AI tools, then the question must be: How should we use these tools in the future? What knowledge, what competencies do students (teachers, pupils) need?"

In this context, I believe it's time for us to start planning, designing, learning, promoting and developing AI literacy among academics and students, and to discuss and reflect on the opportunities these technologies offer to education (while considering their risks and challenges). We have a chance to significantly change the way we've been thinking about education. Let's do it!


In the meantime, I hope others will find the information above useful. Please reach out to me with your own reflections and suggestions. I would love to hear from you.