How Should Universities Respond to the Challenges of ChatGPT?

(The opinions expressed in this article are those of the author and do not necessarily reflect those of Al-Fanar Media).

On November 30, 2022, the California research company OpenAI released an artificial intelligence chatbot called ChatGPT that can generate answers to questions, among other capabilities.

Using the AI chatbot, anyone can enter a question and have a unique answer generated. This is different from an internet search, which identifies already existing text. ChatGPT draws upon available information and generates a unique response, including to multi-part and complex questions, along with references. There are limitations; however, these are likely to be addressed swiftly.

Alongside amazement, these developments in AI present serious challenges. For education, there are integrity challenges: short answers and longer essays can be AI-generated, and the plagiarism-detection software now in use fails to catch them. As with other technological developments, such as facial recognition, AI also appears to replicate problematic biases and produce unethical recommendations.

What should universities and professors do?

Re-Envisioning Assessments

Today, we must re-envision assessments. From short answers to final essays, AI chatbots can generate answers in seconds. Educators might assume that tasks could be creatively re-designed to reduce the potential for AI-generated submissions. However, AI chatbots can be instructed to generate content in a wide range of formats (e.g., write in the style of a speech or poem).

Some of the ways assessments will need to be changed draw on familiar approaches (e.g., in-person tests, oral examinations), while others will require new forms (e.g., develop a concept map or systems map using points discussed in class, produce a podcast or a vlog).

Forms of teaching that have been experimental, such as flipped classrooms, might take greater prominence, along with other creative formats, such as regular short presentations in the style of the Three Minute Thesis or a TED Talk.

Advancing the ongoing shift of educators from instructors to learning facilitators might be one of the key pedagogical areas where AI and machine learning force an educational transformation.

In the coming academic year, universities will need to consider whether learning objectives and programmes of study remain appropriate. Employers will need to adjust interviewing practices so that candidates demonstrate their skills as part of the hiring process.

Additionally, new academic-integrity policies may need to be considered, particularly for assessment types that cannot be redesigned to make the use of AI and machine learning less feasible.

The Inevitable Shift    

As AI increasingly becomes a tool everyone can use, universities need to ensure that learning outcomes and skills are transformed so that they complement developments in AI and machine learning. The alternative is an endless game of catch-up that leaves students behind peers who have learned to integrate and utilise new technologies.


What might an example of this be? AI could be integrated into the learning and assessment process. Assignments of this sort would move away from one-time answers to iterative portfolio development.

More importantly, the student experience needs to shift toward the knowledge and skills that are complementary to technological advancements.

What might the skills be that learners will need for the future, and that educators should adapt to enable? Here are five:

Critical Thinking. Students need to develop the skill of asking the right questions and engage with AI critically. They need to ask questions like: Are any of the AI-generated answers incorrect or incomplete? Is all the relevant data being considered? If some data is not considered, why not? How would you identify bias? Some of these questions could themselves be generated by AI. However, a portfolio approach allows the learner to convey the journey, which might then be communicated to peers in class, or in a range of other forms.

Ethical Thinking. AI and machine learning systems have built-in biases because of the limitations of the data they draw on. These biases have been identified in past technological advancements, such as facial recognition software, as well as in currently available AI systems for content generation. All learners and educators will need a much deeper understanding of, and critical engagement with, ethics and ethical thinking, so that when a multitude of AI-generated answers are provided, questions of ethics are at the fore: not only questions of ethical theories and contested ethical issues, but also of AI systems, their function, and their utilisation. Assessing what is unjust, or setting bounds on AI, requires grappling with a wide range of ethical thinking; hence the positioning of ethics as a core skill for education in the age of AI and machine learning.

Systems Thinking. The financial crisis, the Covid-19 pandemic, and the impacts of conflict have all highlighted the interconnectedness of the global economy, governments around the world, and humanity more broadly. These events have also shown how unpredictable some of the causes, connections, and consequences can be. One challenge of anticipating, recognising, and addressing complex problems of this type is that doing so requires not only a lot of information from many domains, but also insight into how systems may interact in the future. Giving learners the experiences and opportunities to understand complex system dynamics, and to gain the skills of insight and prioritisation, including with the use of AI and machine learning, may allow the best capabilities of both to be utilised (without suggesting that uncertainty and unpredictability will necessarily be reduced).

Creativity. AI and machine learning systems do create, but that creation is a directed process taking place largely within the bounds of the data given or available to them. Creativity requires the ability to imagine what does not exist and to see horizons beyond existing parameters, and cultivating it requires entirely new ways of teaching and learning. Consider questions like when, how, in what form, and with whom a conflict resolution and peace agreement might be sought. AI could generate a long list of ideas, but creative thinkers will be better placed to identify a particular common ground within a window of opportunity for intervention. This requires learning to move from content and theory to experimentation and experiential learning in much more substantive ways.

Future-Oriented Solution-Creation. AI will be able to assess and predict anticipated biodiversity loss, but it will probably not be able to predict the way in which people will respond to specific events that have the potential to change the direction of that anticipated future. Consider the global movement that erupted following the actions of a single protesting student in Sweden, or the wave of revolutions across an entire region sparked by a single protester in Tunisia. These moments captured individuals, communities, and societies in unpredictable ways, and changed (the anticipated) future.

To utilise the strengths of AI and machine learning, future learners and leaders will need to have a problem-solving and solution-creation orientation to the future, wherein AI and machine learning play an important complementary role to their future-oriented problem solving. 

A Matter of Integrity

The age of AI and machine learning is still in its infancy. What becomes possible in the coming months and years will continue to amaze and challenge. At the most basic level, AI and machine learning present an integrity challenge for education. This is not necessarily a new challenge, and it is one that intentional course and assessment design, alongside revisions of learning objectives, can address.

However, education institutions face a much more substantive challenge of suitability and relevance in the age of AI and machine learning. If the opportunities of AI and machine learning are not embraced, integrated, and viewed as complementary tools, education itself runs the risk of teaching obsolete skills in ineffective ways. As education has transformed and transitioned in the past, the age of AI and machine learning presents a clarion call to transform again.

Logan Cochrane is an associate professor in the College of Public Policy at Hamad Bin Khalifa University.
