Teaching In The Age Of Artificial Intelligence And ChatGPT

Artificial intelligence empowers increasingly complex interactions between humans and machines. This technology, currently popularized by ChatGPT, provides a potentially enormous opportunity for business professionals, for business students, and for business educators. However, these same technologies pose similarly enormous challenges for teachers attempting to illustrate, explain, and apply these ideas to students. How can teachers intentionally and openly incorporate AI into their classrooms while reducing the risk of students substituting AI output for their own (also known as “cheating”)?

The term “metaverse” was coined by Neal Stephenson in his 1992 science fiction novel “Snow Crash.” It is fiction no longer. The idea gained momentum with online multiplayer games like World of Warcraft. It has now evolved to allow all kinds of human interactions, including commerce, in two and three dimensions. Perhaps the “person” you are talking to is not actually a person but instead a machine. When combined with artificial intelligence (AI), blockchain, and decentralized autonomous (i.e., human-less) organizations, collectively known as Web3, the metaverse is the next frontier for our students to find business opportunities.

This is also a new frontier for education, building on the online and hybrid formats employed during the pandemic to reach students in remote areas with immersive, practical experiences. This could be the dawn of the “death of distance,” where learning can occur anywhere with internet access.

As a professor at Harvard, Stanford, and Hult, I ask graduate students studying innovation to employ AI for their major assignments. Here’s the method to my madness. First, I ask teams of students to envision new businesses that either employ multi-sided platform marketplaces or generate revenues in the purely virtual world of Decentraland. They record these presentations on video. Second, I randomly assign students to critique these team presentations. In the template I provide, I pose several different questions: for example, how did the team design its network effects, and how did it mitigate some of the headwinds that undermine those effects? The last question in the series requires that students ask ChatGPT to write its own critique of the team’s idea. This requires that students iteratively improve the query until AI provides an optimally useful answer. This skill, asking the right question, is one the next generation of business leaders needs to learn.

I also require that the students independently verify the accuracy of ChatGPT’s response. Borrowing an expectation from Ethan Mollick, an Associate Professor at Wharton, I declare that students are responsible for the final conclusions that they extract from AI. They must find sources to bolster or reject AI’s response. Just as with any tool or external resource, students must include an accurate citation of their use of ChatGPT. Because I do not know the ideal format for such a citation, I suggest that they ask ChatGPT how to cite ChatGPT.

There are several objectives that underlie this portion of the assignment. First, students must learn to leverage AI to improve the breadth, depth, and articulation of their knowledge. AI is out of the box. We cannot put it back. Instead, just as with fire, iron, light, and other mind-bending innovations, professionals must learn to best deploy technology to solve customer and societal problems. Universally accessible AI will also change the skills that business graduates will be expected to deploy soon after graduation. Instead of gathering and manipulating data to support business decisions, professionals must master the art of posing the right question such that Web3 scans and organizes all existing public knowledge to produce a useful answer. Instead of building a sustainable competitive advantage on intellectual property protected by patents or trademarks, professionals in the future will earn profits through the speed of adaptation.

Second, AI is not infallible. It is not only flawed by the data upon which it bases its replies. It is also slanted by the people who ask the question and how they ask it. The only way to improve its conclusions, and therefore the data that future AI machines consider, is to make human individuals responsible for the accuracy of AI’s conclusions. This is the same method that academia employed with the proliferation of books, with the development of statistical methods, and with reliance on the internet: authors are accountable for their conclusions.

Third, this assignment forces students to contrast their own wisdom with that of AI, comparing their own experiences and logic personally and directly with the responses of artificial intelligence. This reinforces the importance of human creativity and intuition as the driver of innovation.

AI can be good news for education. There is also a dark side: Web3 will pose challenges for teacher and student alike. The anonymity and vastness of virtual worlds, which could drive equity and diversity, may lead to distraction and inappropriate interactions. Decentraland, for example, is accessible in a two-dimensional browser to anyone, with no requirement for identity verification. An avatar, a fake name, and poor impulse control could lead students down unproductive paths.

AI is now fully capable of constructing a student’s assignment. I asked ChatGPT, “How can a teacher tell if a student wrote an essay himself or used ChatGPT?” It responded:

There are several ways that a teacher can tell if a student wrote an essay themselves or used a tool like ChatGPT to generate the text. One approach is to use plagiarism detection software, which can identify if the essay matches any existing texts on the internet. This can be a reliable way to determine if a student used ChatGPT or another text generation tool, as such tools often produce output that is similar to existing text.

Another approach is to ask the student directly. A teacher could ask the student to explain their thought process and reasoning behind the essay, and compare their response to the essay itself. If there are significant discrepancies between the student’s explanation and the essay, it could be a sign that the student used a tool like ChatGPT to generate the text.

Additionally, a teacher could also look for telltale signs of text generation in the essay itself. For example, the essay might contain repetitive or nonsensical phrases, lack coherence or logical structure, or use language or terminology that is not typical for a student at the appropriate level. All of these could be indicators that the essay was generated by a tool like ChatGPT.

This response itself is almost impossible to distinguish from one written by a real person! Note that it even inserted gender-neutral language and transition words. Even after a decade of reading student submissions, I cannot distinguish this AI response from that of a human MBA.

A new tool, GPTZero, endeavors to use AI to determine if an essay was written by AI by calculating its score for “perplexity,” which measures the consistency (typical of AI) or inconsistency (typical of humans) of sentence structure and word choice. A high-perplexity essay, characteristic of human writing, has many such inconsistencies. The obvious hole in this tool is to ask ChatGPT to construct an essay with high perplexity.
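GPTZero’s actual detector scores perplexity under a large language model, which cannot be reproduced in a few lines. As a toy illustration of the underlying idea only, the sketch below estimates perplexity from a simple unigram model built from the text itself: repetitive, predictable word choice yields a lower score, while varied word choice yields a higher one. The function name and approach are illustrative assumptions, not GPTZero’s implementation.

```python
import math
from collections import Counter

def unigram_perplexity(text: str) -> float:
    """Toy perplexity: exp of the average negative log-probability of each
    word under a unigram model estimated from the text itself.
    Lower values indicate more repetitive, predictable word choice."""
    words = text.lower().split()
    counts = Counter(words)
    n = len(words)
    avg_neg_log_prob = -sum(math.log(counts[w] / n) for w in words) / n
    return math.exp(avg_neg_log_prob)

# Repetitive text scores lower than text with all-distinct words.
repetitive = "the cat sat on the mat the cat sat on the mat"
varied = "a quick brown fox jumps over one lazy sleeping dog"
```

Under this toy model, a text of N distinct words each used once scores exactly N, while any repetition pulls the score down; a real detector applies the same exponentiated-average-log-probability formula, but with probabilities from a trained language model.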

It is a pesky axiom that technology moves faster than culture. The release of ChatGPT a few months ago has already changed the landscape of education. My purpose as a servant of learning is not to be a sycophant to academic tradition but instead to discover new ways to prepare MBAs for the professional world they will soon join. This includes ChatGPT. More to come.
