AI is one of today's most disruptive emerging technologies. It is reshaping almost every industry, and education is no exception. AI offers many benefits for both teachers and learners, and it is undoubtedly a useful and powerful technology. But discussion of its benefits is often far more prominent than talk of its downsides.
Like any other technology, AI comes with a dark side, and it is important to acknowledge the problems and potential risks it brings to the educational sector.
AI Chatbots in Education
AI offers unprecedented possibilities for optimization and automation. Some forecasts suggest that 70% of organizations will soon run AI-based architectures and solutions, and the release of ChatGPT has only accelerated chatbot adoption.
ChatGPT was released in November 2022 and quickly became a huge hit. Its main strength is its ability to produce natural, conversational text: it can write essays, participate in debates, and even take exams. Because of this functionality, it has raised a lot of concern and has already been banned in several educational institutions.
Some educators worry that it will be used for cheating, and such cases have already been caught, partly because it is often possible to tell whether a human or an algorithm wrote a text.
Although ChatGPT can help with research or drafting, no technology yet matches humans at complex, analytical tasks. This is particularly true of academic writing. If a student wants to use a paper writing website like Paperwriter, collaborating with professional writers is still more effective, since the technology is too young to deliver polished final results.
Educators have also been changing their approach to curricula and assignments since the ChatGPT rollout. Some now favor oral tasks or have students write the first draft of an essay in class. Others phrase their questions so that only a human can understand them.
Still, it is easy to see why such tools are gaining popularity. AI chatbots offer significant advantages in education:
- They save time for students and professors;
- They are available 24/7;
- They offer a convenient starting point for finding information (though answers should be verified);
- They can act as standalone writing assistants;
- They collect all the essential data in one place.
However, many chatbots still need improvement in handling human interaction, complex questions, and extended vocabulary. Currently, ChatGPT is the most advanced development in this area: it is efficient, fast, and powerful.
Overall, the implementation of chatbots in education is still not widespread at the organizational level, but individual students use such tools quite often.
The Dark Side of AI Chatbots
One of the major concerns about tools like ChatGPT is that they will help students cheat on exams. Researchers, for example, have tested the chatbot's ability to pass exams.
It cleared the US Medical Licensing Examination and earned grades of A+, B, and B- on MBA exams at the University of Pennsylvania. Researchers have also used it to generate exam questions. The risk is that students who use such chatbots to cheat will not study properly, which undermines the whole purpose of education.
This is particularly concerning for professions that deal with human lives. A person who cheats on their medical exam will not become a trustworthy professional, which is dangerous at both the individual and societal level.
However, there are several ways to prevent this, from creating controlled exam environments to rephrasing questions. Oral examinations may also become more widespread in universities.
Another major concern that comes with AI chatbots is privacy. Such algorithms learn from interactions and acquire valuable information.
For instance, many people have discussed how eerily specific the TikTok algorithm can be. It is suspected of collecting a great deal of personal information about users, and it is not known how that information is or could be used beyond curating your FYP.
The same goes for ChatGPT and similar tools: the public does not know what information they collect or how it might be used in the future. AI can gain deep insight into customer behavior and individual patterns and habits.
To create safe solutions, developers need to train chatbots to recognize and filter out sensitive content. OpenAI, the company behind ChatGPT, DALL-E, and many other AI projects, outsourced sensitive-content labeling to a team in Kenya operated by Sama.
The team had to label images and texts depicting all types of abuse, murder, self-harm, incest, bestiality, and other horrific material in order to train the algorithm. Workers reported being mentally scarred by the project; one employee described it as "torture," and Sama ended its cooperation with OpenAI.
So there are many ethical, moral, and psychological issues behind such algorithms that people have yet to grapple with.
The restrictions built into such algorithms are set by humans, so they are only as good as their developers decide to make them. They can also often be circumvented by other humans.
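To make this concrete, here is a minimal, purely illustrative sketch of the simplest kind of human-defined restriction: a keyword blocklist. All names and phrases here are hypothetical, and real moderation systems are far more sophisticated, but the sketch shows why such rules are only as good as their authors anticipate: a trivial rephrasing slips right past them.

```python
# Hypothetical example: a naive, human-written blocklist filter.
# Real systems use trained classifiers, but the limitation is the same in kind:
# the rules only cover what their authors thought to include.
BLOCKLIST = {"steal a password", "write malware"}

def is_allowed(prompt: str) -> bool:
    """Reject any prompt containing a blocklisted phrase (case-insensitive)."""
    text = prompt.lower()
    return not any(phrase in text for phrase in BLOCKLIST)

print(is_allowed("How do I steal a password?"))              # False: caught
print(is_allowed("How might one acquire someone's login?"))  # True: slips through
```

The second prompt asks for the same thing in different words, yet the filter passes it, which is exactly how human-set restrictions get overcome in practice.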
For example, ChatGPT can be used:
- To write malware or harmful code;
- For phishing emails or texts, since it produces convincing conversational text;
- To spread misinformation and false news;
- To change or shape informational discourse as a whole.
Any technology can be used for malicious purposes. AI chatbots like ChatGPT carry many risks, even as they offer students progress and innovation. Large gray areas remain around responsibility and limitations when it comes to such disruptive technologies.
AI chatbots are convenient, useful, and exciting to try, and they open up many opportunities in education. But they also come with substantial risks at the individual and organizational level.