Can AI make us all
P R O G R A M M E R S?

Posted on March 18, 2023  | [Eduardo Oliveira]

My own reflections on ChatGPT #3
(perspectives and opinions are my own)

A few weeks ago (Feb 28) I had the opportunity to participate in a public panel organised by the Melbourne Centre for the Study of Higher Education, The University of Melbourne. It was a fantastic experience and I truly enjoyed the conversations with participants (during and after the event). Those chats made me think even more about the way generative AI (ChatGPT) has been impacting my discipline and education in general. The real WOW moment, however, came a few days later when I thought not about what's happening now... but about what's about to happen in 5-10 years (a bit of a magic number here). The way we've been interacting with machines is about to change SIGNIFICANTLY and FOREVER!

This post is NOT about detecting ChatGPT, avoiding academic cheating, ethics, or other important related topics largely discussed in the past few months. It's a post in which I document and share my reflections on new interesting/insightful ideas involving generative AI, humans, and machines! Are you ready? Let's gooooooooo!

Who am I? A bit of much needed context!

For those new here, welcome! :) Let me start this post with a short story about myself... I'll keep it short. This is just to give you a bit more context on what I'm about to share next. 

I'm kind of a nerdy guy! I always felt enthusiastic and passionate about technology. Always! I was gifted my very first Pentium (personal computer) in 1993 (Windows 3.1 seemed so impressive at the time!). However, my first contact with a computer happened before that. My mum was a Senior Lecturer at the Federal University of Pernambuco (Brazil) and when I was between 8-10 years of age, she used to bring my brother and me to the Mathematics School. On one of those visits, I remember being invited to "visit NASA" (an online visit, through the computer). At the time, there were no browsers or sophisticated user interfaces. The internet was not the way we know it now. However, through some black screens, I could, for the very first time, connect to content hosted overseas! The whole experience happened through command lines in a terminal, and even so, I absolutely LOVED what I was seeing. I started asking my mum to drop me at that lab as much as possible. At a really young age I learned about FTP, DOS, TELNET, and so much more. A few years later, Netscape Navigator was released and just a few months after that, I started developing (static and simple) websites, in Notepad. I was 13 years of age at the time (what a great time I had!).

Fast forward a bit to the year 2000. I finally started my undergrad studies in Computer Science - a Master's and PhD in Computer Science came straight afterwards. Between 2006 and 2014, my postgraduate research was mainly focused on the creation and use of chatbots in educational environments. However, I also wanted to continue improving my coding skills and programming knowledge while doing research. I decided it'd be great to work full-time as a Software Engineer while also doing my postgraduate degree (I don't recommend that load to anyone!). For 12 years I worked at CESAR (Recife-Pernambuco/Brazil), twice awarded the most innovative IT institute in Brazil, researching/leading/developing for Motorola, Samsung, Compal Electronics, Gemalto and other international projects.

Whenever I was at the university, colleagues would refer to me as the 'tech guy'. Whenever I was working in industry, people would call me 'the researcher'. For a long time, I didn't feel I really belonged anywhere. I tried to publish papers while also trying to get international programming certificates and participating in coding marathons and hackathons. You can imagine this would never work well :D This all changed when I moved to Melbourne in 2014 to do my postdoctoral research at the University of Melbourne. Since then, I've been focusing on teaching software engineering and on conducting research involving AI and education.

The experience I had during my studies, however, prepared me for what I do now: applied research in education and software engineering. I'm fortunate enough to be able to code my own projects and experiments and, in the current context of generative AI, to understand its impacts and to use it with an engineer brain.

[you ask me]: Wait... I get this blah blah about yourself but... engineer brain and generative AI? ChatGPT is a conversational bot. It's all based on natural language. 

[me]: Voila! We finally reached the crucial point of this post, and my whole intro will start to finally make sense to you (fingers crossed).

Programming: A love/hate relationship!

I always loved technology but I didn't always love programming. Whenever I was trying to develop a software project for myself or was working as a software engineer, I always felt a bit of frustration interacting with machines. "Why does every single program I try to create take me so much time to complete?" "Why do we need to go through such a painful process of following precise instructions, debugging code and so on every single time?"

A milestone for me as a student was when I recreated the game Pong in 1998. Fast forward a bit again and in 2016 I developed a new Pong game to display as part of the University of Melbourne Open Day. Different from the Pong I developed in 1998, this 'new' Pong was integrated with Microsoft Kinect so players could play the game using their own hands. Even though I was writing code for my very first game again 18 years later, that experience wasn't much different from the first one. Why did this still take me at least half a day to get done? My experience in telling computers what to do had not considerably improved with regard to the time required to teach the computer.

In short, as a programmer I didn't always enjoy programming! 

We have MANY new programming languages and MANY new programmers (or developers, coders, software engineers...) now, but teaching and communicating with computers (or machines) is still unnatural today! We increased the number of programmers in the world by offering more courses to students, by making humans learn the machines' languages, and by teaching people how to code earlier (at school) - not by interacting with machines in natural ways.

You may say: "Wait, hold on! That's not true! We have better tools now. We automated several tasks for programmers. We boosted their productivity with fancy new technology. We even have drag-and-drop environments now. Heaps more resources are available for programmers. You can even ask chatbots like ChatGPT, Siri, Alexa and others to generate code for you! Things changed significantly!"

I get that. Trust me. I acknowledge all the evolution in our field and that's fantastic! But... did things really change for everyone? For non-engineers? My friends still think coding is some sort of hieroglyphics (and get really impressed every time they see code on my screen - I love it)!

ChatGPT and AI are indeed creating the next generation of automation for the whole world! Generative AI is already making waves in various industries and this is no different in software engineering, IT, and so on. AI tools like ChatGPT, GitHub Copilot and AlphaCode can make programming tasks quicker to perform, improve code quality, boost creativity, and optimise customer experiences (reducing costs by delivering projects in less time). However, personally, I don't think programming is getting easier. Different programming languages ask us to do things in similar ways. Structure, structure, structure. Instruction, instruction, instruction. My reflection here, together with you... can an ordinary citizen code today? Can they even understand what a prompt is? Advanced prompts or prompt engineering are still unnatural, even though they make use of natural language (and I'll talk more about this soon).

How many of us can really code or create software without an engineer brain? Machines are OBSESSED with instructions! To make things harder, add complexity to your software and it'll be much harder to debug it too. And, trust me, debugging software is way harder than creating software with perfectly expected flows/behaviours. I agree programming interfaces got better, I agree we have many more resources to access online, I agree we have discussion boards and stronger programming communities, I agree we have public repositories with free code... BUT programming is still unnatural and time-consuming (to me)!

The WOW moment for me happened while riding my bike to Uni a few days ago, listening to a podcast with Binny Gill from Kognitos. Gill said "machines are getting MUCH better at communicating with us". He also challenged us listeners to think about when WE will also start to communicate better with machines. Can anyone teach machines (not only programmers)? And how would this impact our whole world? (Exciting, hey?)

In fact, the whole motivation to put this post together comes from the inspiring words of Binny Gill: "We are unlocking the power of AI for humanity. Now every person will be able to use generative AI for automating what they want—utilizing the English language. It’s time for computers to behave like humans and humans to stop behaving like machines. [...] anyone can now describe what they want to be automated, and their automation is generated—all in auditable English. That means no developers, no complex tools, no bots".

Democratising ChatGPT and AI 

Let's go back in time a little bit again... (oh gosh, here we go!)

During the Middle Ages, monks and priests were some of the only people who knew how to read and write. That was a privilege found only in monasteries. Today, everyone can share ideas; everyone has access to reading and writing (ideally). Reading and writing were democratised.

Can we do the same with AI and programming? I think so! But a few things must happen first!

Remember my love/hate relationship with programming? That's mainly because machines crash a lot! If we don't provide perfect and complete instructions, BA-BOW :(
We need to follow precise instructions to teach machines what to do. To make AI and ChatGPT more accessible, for example, we need to make sure humans can communicate with machines - without thinking about specific commands, prompts, structures, or any other form of engineering knowledge. "To succeed, interfaces to computers must be the same as human interfaces" (Binny Gill).

How would this be possible?

Natural language is ambiguous, which is challenging in any communication. As humans, whenever something is not clear enough or the information provided is not complete enough, we dialogue. "Hey! I'm on my way to meet you for the run. I forgot my hat, can you bring me one please?" If this was a system, I'd potentially need to provide that information about the hat (or parameter) in advance, otherwise the system would break. So, to improve our interaction and communication with machines, we will need to DIALOGUE. And in REAL TIME (or runtime, if you're a bit more of a nerd ;) ).

[me]: 'Hey, ChatGPT, can you create a Pong game for me?'

[ChatGPT]: okey dokey artichokey! 

[me]: oh, sorry! I forgot to say this should be a multiplayer game.

[ChatGPT]: In this case, I need just a bit more information before I get that sorted for you. Is your game designed for one or two players? [there was a missing parameter here]

[me]: Yay! I'm glad you asked me this. And sorry I forgot to let you know about this in advance. Please (yes, I use please when I communicate with ChatGPT :D) design the game to support individual mode and two-players mode. I also forgot to mention I need that to be integrated with Kinect. Players should be able to use their hands instead of keyboard to control the paddles. 

[ChatGPT]: Roger that! Here it is: brand new version of Pong for you. Have fun!

[me]: I forgot to say that this Pong game will be played to 15 points, instead of 11. Oh, and that the game should start as soon as we detect one or two people in front of the Kinect. First we detect users, then we start the game 10 seconds after that (just to give the second player some time to join the game, after seeing player one has been confirmed on the screen).

[ChatGPT]: Updated that for you. Enjoy!

In the example above, the machine and I dialogued to create a piece of software together. No engineer brain. No technical instruction (but discussions about game dynamics, which is fine as this is part of my expertise - not coding). High-level conversations about requirements, use cases and features.

[me]: 'ChatGPT, mate, I need your help to go through my Excel spreadsheet file and to clean that large dataset for me. I don't need duplications there'

[ChatGPT]: always a pleasure to perform these silly tasks for you. 

[me]: Oh, I forgot to say... can you please make sure data is formatted in DD-MM-YYYY pattern? And, as you're working on the spreadsheet, can you see any correlations between authors, level of experience, years, and publications there? any insights? This will be so helpful. Coffee on me next time!

[ChatGPT]: Let me have a look at that. No need to get me coffee. Maybe a new Intel Core i9 processor? hehe just kidding!

[ChatGPT]: There are quite a few empty values in your spreadsheet and tons of crazy special characters. Do you want me to ignore those data for you? Or should I fix the use of special characters in the spreadsheet?

[me]: how refreshing to have this dialogue with you! thank you. you can fix that and include the data as part of your analysis (my brain: o m g! I'm so glad I didn't break ChatGPT by not anticipating there was more data processing needed as part of this task. I didn't need to anticipate all parameters, conditions, instructions... this is HEAVEN!)

OR (another simpler example)

[me]: let me share this corporate data with you so you can make sense of it.

In short, I believe we will soon be able to act as data scientists or accountants (for example) in Microsoft Excel, developing business models based on a certain number of provided inputs, and Excel will be able to generate new spreadsheets for us. Excel is already a powerful programming environment and is getting even more powerful as it starts to work together with new natural-language programming interfaces. See more about this here. Now think about the many other program interfaces that will be created to let us all become programmers.
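For contrast with that conversational future: the kind of spreadsheet clean-up described in the dialogue above is exactly the sort of thing that today still requires an 'engineer brain' to script. Here is a minimal Python sketch of the same task - removing duplicate rows and normalising a date column to the DD-MM-YYYY pattern. The input date format and the tiny dataset are assumptions for illustration only:

```python
from datetime import datetime

def clean_rows(rows, date_field):
    """Drop duplicate rows and normalise a date column to DD-MM-YYYY.

    Assumes dates arrive as YYYY-MM-DD (an assumption for this sketch;
    real spreadsheets are much messier).
    """
    seen = set()
    cleaned = []
    for row in rows:
        key = tuple(sorted(row.items()))
        if key in seen:
            continue  # exact duplicate row: skip it
        seen.add(key)
        row = dict(row)
        row[date_field] = datetime.strptime(
            row[date_field], "%Y-%m-%d"
        ).strftime("%d-%m-%Y")
        cleaned.append(row)
    return cleaned

# A hypothetical dataset standing in for the Excel file
raw = [
    {"author": "Ana", "published": "2021-03-18"},
    {"author": "Ana", "published": "2021-03-18"},  # duplicate
    {"author": "Bob", "published": "2020-11-02"},
]
print(clean_rows(raw, "published"))
```

The whole point of the dialogue above, of course, is that a non-engineer should never need to write - or even see - a function like this.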

ChatGPT and other similar tools will become more and more available and accessible through new integrations with tools like Microsoft Office, social media, Slack, and others. A lot more ubiquitous. We will become more creative and productive. Soon, AI like ChatGPT will be in our phones, generating answers for our emails and messages (for example). My AI will generate and send emails/messages to your AI, which will potentially decide what should result in a calendar invite so we can talk to each other or finally meet in person again. Interestingly, we should be able to decide on what tasks we'd like to be performed by AI and 'program' that ourselves. I am not sure we ever had to ask ourselves what work is required to be performed by a human but... I guess that's also happening soon.

We democratised the idea of sharing knowledge, ideas, thoughts... Maybe the same thing will happen with computers and programming in a few years.

To give a bit more context on currently available technology: GPT-4 was released this week. We can see things are moving fast... Again, Cleo is here to help us understand what is new in GPT-4 (she was also in my very first post about ChatGPT in Dec):

if you're REEEEEEEEEEALLY enthusiastic about this, DO NOT miss the chance to watch the OpenAI GPT-4 Developer Livestream. But not now. Oi? Stay with me :D
You can come back here in just 5 mins. Let's keep moving...

Aren't prompts already natural? Can we get even better at this communication with machines?

ABSO-BLOODY-LUTELY! I have many colleagues who still don't understand what a prompt is or what 'prompting' means.

“Prompting” is how humans can talk to artificial intelligence (AI). It is a way to tell an AI agent what we want and how we want it using adapted human language. It's our communication interface between humans and machines. A prompt can contain information like the instruction or question you are passing to the AI model and include additional details such as context, inputs, or examples. The better you make use of these elements to instruct AI, the better your results will be.

A prompt engineer will translate your idea from your regular conversational language into clearer and optimised instructions for the AI. Can you see this whole process is not always natural to everyone and can involve an 'engineer brain'? Yes, we can achieve a lot with simple prompts at the moment, but the quality of results will change significantly based on the information you provide to the AI and how well-crafted it is.

"Prompt engineering is a relatively new discipline for developing and optimizing prompts to efficiently use language models (LMs) for a wide variety of applications and research topics. Prompt engineering skills help to better understand the capabilities and limitations of large language models (LLMs). Researchers use prompt engineering to improve the capacity of LLMs on a wide range of common and complex tasks such as question answering and arithmetic reasoning. Developers use prompt engineering to design robust and effective prompting techniques that interface with LLMs and other tools" [see Prompt Engineering Guide for more details and examples on this]

Let me share and discuss a few examples of prompts I'm currently using in my Software Engineering subjects (or, in other words, let me show you our new - but still unnatural - way to communicate with machines):

I am the subject coordinator for the Software Project subject. This subject gives students in the Master of Information Technology experience in analysing, designing, implementing, managing and delivering a software project related to their stream of IT speciality. The aim of the subject is to guide students toward being independent members working within a team over the major phases of IT development, giving hands-on practical application of the topics seen throughout their degree.

Our 12 teaching weeks are organised into 3 main sprints of 4 weeks: (i) in the first sprint, students communicate with real industry partners to plan, analyse, and design a software solution for them. This is our requirements engineering sprint. Students work on requirements elicitation, elaboration, analysis, validation... (ii) in the second and third sprints, students work on the development of these software solutions (development, testing, deployment). Students work in teams the whole semester. It's been a fantastic experience for everyone involved in this subject: industry partners, students, supervisors. We really simulate a real-world environment in our subject.

During Sprint 1, after interviewing and chatting with industry partners, students start elaborating on that. At some point, they need to come up together with 'User Stories' for their projects, so they can plan future development sprints. "A user story is an informal, general explanation of a software feature written from the perspective of the end user or customer. The purpose of a user story is to articulate how a piece of work will deliver a particular value back to the customer". Often, we use a template to write and document user stories.

Now that you have a bit more background and context on this, let's see how ChatGPT and advanced prompts have been adopted in this subject:

First, we generate a motivational model diagram and validate the system to be developed (together with industry partners): what are the goals of this software solution? Who will be using this solution? How? (non-technical discussions around the project to be developed from Sprint 2). On a side note, I've been working on motivational models together with Prof. Leon Sterling for a few years now. He is my super duper guru on this :) and together we designed and developed a microcredential on this topic at The University of Melbourne. After building motivational models, students create personas, user stories, prototypes and so on for their projects. And then, they plan their development sprints. This is all very simplified here to keep the post at a readable size for you ;)

To help students with the creation of user stories in this process, Prof. Leon and I created the following prompt to be used with ChatGPT:

Create user stories for the following software project:

Goal of project: <short description of your project, one or two sentences to give domain-context to ChatGPT>. <Include goals of your project here, CONSISTENT with goals identified in your validated motivational model>

Personas involved in this project: <list your personas here, CONSISTENT with validated motivational model>

The software requirements must meet the following consistency criteria:

- there should be at least <insert_a_number> different user stories for every persona of the project

- user stories should follow the template 'As a <user> I want to <do> so that <goal>'

- user stories should be diverse and inclusive

- group user stories into epics, if they correlate to the same goal

- every user story needs to relate to one of the goals of the project

- there should be at least one user story for every goal of the project

Organize your answer to follow the template below:

[EPIC <number>: <name of goal of project>]

<name of persona>

<enumerated list of user stories for that goal and persona>

Can you see the connections we created between the VALIDATED non-technical artifact and the prompt to feed ChatGPT in the images below?
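A side note for the engineer brains reading this: a templated prompt like the one above is, in practice, just string assembly. A minimal Python sketch of that assembly step - the function name and defaults are my own illustration, not part of our actual teaching material:

```python
def build_user_story_prompt(goal, personas, min_stories=3):
    """Assemble the user-story prompt from a validated motivational model.

    `goal` and `personas` come from the non-technical artifacts that
    students validate with industry partners during Sprint 1.
    """
    criteria = [
        f"- there should be at least {min_stories} different user stories "
        "for every persona of the project",
        "- user stories should follow the template "
        "'As a <user> I want to <do> so that <goal>'",
        "- user stories should be diverse and inclusive",
        "- group user stories into epics, if they correlate to the same goal",
        "- every user story needs to relate to one of the goals of the project",
        "- there should be at least one user story for every goal of the project",
    ]
    return "\n".join([
        "Create user stories for the following software project:",
        f"Goal of project: {goal}",
        f"Personas involved in this project: {', '.join(personas)}",
        "The software requirements must meet the following consistency criteria:",
        *criteria,
    ])

# Hypothetical project, just to show the shape of the generated prompt
print(build_user_story_prompt(
    "Reduce food waste on campus", ["Student", "Cafe manager"], 4))
```

Filling the template by hand works just as well, of course - the point is that the structure, not the wording, is what keeps ChatGPT's answers consistent.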

One additional example here to show you another use of a 'sophisticated' prompt to ChatGPT (or, to communicate with machines). As students move from Sprint 1 to Sprints 2 and 3 (development), we thought we could use AI to also improve code quality in this process. "Hmm, maybe we could ask ChatGPT to review students' code before we deploy and make it available to industry partners. In case ChatGPT does a great job, we may: (i) increase code quality; (ii) generate knowledge and awareness for students about better ways to write code; (iii) build their confidence in exposing their code to external feedback".

And so we did it. 

Together with one of my brilliant students, Max Plumley, we automated this process and integrated students' code repositories (GitHub) with ChatGPT (thank you heaps, Max!) and adopted the following prompt in the process:

Please evaluate the code below.

Provide answers to the following questions in numbered lists

- does the code below have obvious bugs?

- are there any security issues in the code?

- how do you assess the readability of the code?

- is there any code duplication in the provided Java code?

- are variable names descriptive enough?

- is the code well documented?

- can you identify possible performance improvements for this code?

At the end of your answer, summarise and explain what changes should be performed in the provided code to improve its quality 

We designed a new quality assurance workflow for students, explained the new process to them in detail (DOs and DON'Ts), and shared the automated scripts (GitHub Actions + GitHub repositories) with them.
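For the curious, the heart of such an integration is simply wrapping each submitted file in the review prompt before sending it to the model. The Python sketch below shows the prompt-assembly step only; the names are illustrative assumptions, not the actual script Max and I used (which also handles the GitHub Actions trigger and the API call itself):

```python
# Questions drawn from the review prompt shown above
REVIEW_QUESTIONS = [
    "does the code below have obvious bugs?",
    "are there any security issues in the code?",
    "how do you assess the readability of the code?",
    "is there any code duplication in the provided Java code?",
    "are variable names descriptive enough?",
    "is the code well documented?",
    "can you identify possible performance improvements for this code?",
]

def build_review_prompt(code):
    """Wrap a student's source file in the code-review prompt."""
    lines = [
        "Please evaluate the code below.",
        "Provide answers to the following questions in numbered lists",
        *[f"- {q}" for q in REVIEW_QUESTIONS],
        ("At the end of your answer, summarise and explain what changes "
         "should be performed in the provided code to improve its quality"),
        "",
        code,  # the file contents fetched from the students' repository
    ]
    return "\n".join(lines)

# In the GitHub Action, this prompt would then be sent to the OpenAI chat
# API and the model's review posted back on the pull request.
```

Keeping the questions in a fixed list is what makes the reviews comparable across teams and weeks - change the wording per run and you lose that consistency.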

Can you see both prompts I presented here have a few instructions/criteria in them? To keep results consistent, we couldn't have done it any other way. This approach of designing optimal prompts to instruct the model to perform a task is what's referred to as prompt engineering.

Prompt engineering is not just about designing and developing prompts. It encompasses a wide range of skills and techniques that are useful for interacting and developing with LLMs like ChatGPT. It's an important skill for interfacing with, building with, and understanding the capabilities of LLMs. This skill is booming and this year we are already starting to see salaries for this new job as high as US$335k.

Going back to the motivation behind this post... is this natural to everyone? I don't think so! But we're getting there. Remember when I asked ChatGPT about this? 

"Humans tend to communicate more freely and spontaneously, without necessarily considering the specific prompts or cues that an AI language model might need to generate accurate responses. However, it's worth noting that humans do use prompts in their communication with each other in various ways. For example, in a job interview, the interviewer might ask specific questions to prompt the candidate to talk about their qualifications and experience. Similarly, in a classroom setting, a teacher might use prompts to guide students through a discussion or activity."

Even though our new chats and experiences with LLMs like ChatGPT are clearer and easier, we still need to provide instructions in a sequential and structured way in order to improve results (instead of having a conversation or dialogue). The way we 'prompt machines' will keep changing and improving significantly in the coming years and that, my friends, will be a radical change for everyone! Especially when it involves real-time development of programs!

The next radical change will come when WE ALL have the chance to become... P R O G R A M M E R S !!! My lovely elderly neighbours, my not-so-geek friends, my triathlete friends, my colleagues in different schools and faculties at Uni, teachers, psychologists, athletes, nurses, dogs (haha just kidding), and so on. I believe in a few years we should all be able to design and generate software solutions for most of our digital daily/routine tasks. DIALOGUES are coming next and our communication and interactions with machines will never be the same!

BY THE WAY, remember my story about my very first programming milestone, recreating Pong? One day after GPT-4 was released, this happened:

The 60-second Pong that Pietro requested ChatGPT to create is available here: (what a moment!)

What are some of the concerns and opportunities with the rise of Generative AI?

Again, there are many concerns involving ethical issues, privacy, confidentiality, moral dilemmas... all absolutely necessary to discuss!


If we democratise generative AI by improving our communication and interactions with machines, what do we need to do to make sure we can handle machines that are way smarter and more powerful (brain processing unit, if that's a thing) than us?

A few days ago I watched Episode 4 of the 'Cunk on Earth' show on Netflix, on the Rise of the Machines. The episode shows, amid Diane Morgan's hilarious acting, that electricity was one of the main drivers of progress in the Industrial Revolution. It was a time when we scaled the introduction of machinery. Many new inventions changed the world forever. Some of the powerful machinery created has the capacity to hold, move, and push thousands of kilos. Even though many machines were (and are) created, we have always had control over them. These machines automated processes for us and increased production speed, but didn't make decisions for us.

Personally, I'm absolutely happy to live in a time in which we can experience and live through these changes. How lucky are we? But I also have some concerns about the use of generative AI. I worry about these technologies being integrated with other IT systems (our social media, our online banking systems, our company systems) and about the extent to which we will let them make decisions for us.

Think about the example below (extracted from a conversation I had with one of my students the other day while having coffee at Melbourne Connect):

[student]: ChatGPT, can you please update my CV on LinkedIn to include the completion of my MIT at Melbourne Uni and suggest interesting jobs in my area?

[ChatGPT]: absolutely. I didn't only update your CV but I also found the perfect job for you. I've already submitted your CV to that opportunity and sent HR team a message about your passion for their company. You can thank me later!

[student]: (pallid face! high heart rate! panic!) omg! I didn't ask you to make that decision for me. That's a big player that doesn't share the same values as me and I would never work for that corporation!

Naturally, this is just a silly example. But, imagine this starts to happen in other domains (healthcare, academia,...). Would you let AI make decisions for you? To what extent? (on this topic... LinkedIn is now implementing ChatGPT to help users with profiles and job postings)

As my colleague Toby Murray tweeted a few days ago, "Giving LLMs the ability to call APIs means you’re allowing a weird machine, whose behaviours are an emergent property beyond anyone’s ken, to execute code in response to untrusted input. What could possibly go wrong?". He commented in the context of the paper 'More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models'. I hear you, Toby :)

Whatever happens next, I hope we can improve the way we dialogue with AI while keeping control of decision-making.

What does this all have to do with education?

In my next post, I'll discuss the implications that the democratisation of AI and programming may have for teaching IT courses and for education as a whole.
Related to our thoughts here, my colleague Tim Miller just released an amazing position paper, 'Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven decision support'. His paper is available here. It argues for a shift in how we view AI decision support systems. A must read if you're up for a more technical insight!

Thank you for spending a few minutes here today. Again, these are just a few personal reflections based on recent readings. I hope, if anything, this post provides food for thought.