Artificial Intelligence: The Rise of ChatGPT and Its Implications

In November 2022, an event occurred that revolutionized how we find, interact with, and record information. The release of ChatGPT was an immediate sensation. While it caught many by surprise, ChatGPT is the result of a series of developments in chatbots (short for “chatter robots”), computer programs that simulate and process human conversation, both written and spoken. These chatbots are now combined with Large Language Models (LLMs), which are trained on extensive data from sources such as Wikipedia, public forums, and programming-related websites like Q&A sites and tutorials. LLMs draw on this vast amount of information to generate their responses.

Within the name ChatGPT, GPT stands for Generative Pre-trained Transformer. The transformer architecture allows the program to predict not just the next word or two but entire paragraphs. However, while impressive, these types of programs are not always accurate, since their output is not necessarily fact-checked. The following discussion uses ChatGPT to refer to this specific AI (artificial intelligence) program and the more general term AI for the growing number of new programs.
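For readers curious about the underlying mechanics, the short Python sketch below illustrates this kind of next-word prediction using a small, openly available transformer model (GPT-2, via the Hugging Face transformers library). It is a minimal illustration only, not ChatGPT itself, and the prompt text is an arbitrary example.

```python
# A minimal sketch of transformer-based text generation, assuming the
# Hugging Face "transformers" package (pip install transformers) and the
# small, openly available GPT-2 model. Illustrative only; ChatGPT itself
# is a far larger, proprietary model accessed through OpenAI's service.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence in the classroom can"  # arbitrary example prompt
# The model repeatedly predicts a likely next token, extending the prompt
# into a longer passage (capped here at 40 tokens).
result = generator(prompt, max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```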

The impact

There are both positive and negative aspects of ChatGPT. Some academics see these tools as a shortcut and a danger to the full spectrum of learning; students may not develop the skills necessary for original ideas and critical thinking. However, other professors are enthusiastic and see possibilities for brainstorming, breaking through writer’s block, and creating first drafts, essentially preparing students for a world in which these technologies are part of everyday life. Most importantly, the one thing academics cannot afford to do is ignore what is happening.

The development

The observation that ChatGPT is a logical development from word processors, spell checkers, grammar checkers, autocompletion, and predictive text programs is valid. However, there is an essential difference between machine learning (ML) or predictive text programs and AI-generated writing: predictive text programs are a subset of AI programs and tend to be single-purpose, whereas AI programs such as ChatGPT are capable of a far wider range of tasks. The critical issue remains the quality of the input, or, as it was referred to in the past, “GIGO”: garbage in, garbage out.

Potential

As with any new development, advantages often come with disadvantages. Wu (2023) pointed this out in a report entitled “Professors published a paper on AI with a ‘plot twist’ — ChatGPT wrote it.” The paper passed three peer reviewers, who reported that they believed it had been written by a person; only after it was revealed to have been generated by ChatGPT were they able to detect several errors (p. 1). A report by Paul (2023, p. 1) noted that ChatGPT wrote academic abstracts that passed through the peer review process 32% of the time, even after reviewers had been told that some of the abstracts were fake.

Many programs can reduce the work of proofreading for spelling, grammar, and citation errors; however, these programs are not like ChatGPT, which produces complete texts. Most recently, a study showed that ChatGPT fooled scientists nearly one-third of the time (Bushard, 2023, p. 1). However, AI detection tools such as the GPT-2 Output Detector are available, and services like Turnitin have now incorporated AI detection capabilities.

Future

The medical profession is among the fields already using AI programs, and its applications provide a framework of safeguards. In a systematic review of 60 records on ChatGPT, the author found that benefits were cited in 85% of the records (51 records); conversely, more than 96% of the records (58) expressed concerns (Sallam, 2023, Abstract). The review also found that the program could offer diagnoses that provided a different perspective.

The discourse regarding the place of AI-generated writing in academia is in its early stages. As with any emerging technology, it would be wise for educators not only to coexist with this innovation but also to consider integrating it into their teaching methods. One area to consider is teaching students how to use these tools, not only for their time-saving benefits but also with attention to the ethical concerns associated with plagiarism. This preparation will help equip students for a future that will include AI.

In 2023, The Chronicle of Higher Education reported on a virtual forum, attended by 1,600 people, on how ChatGPT affects education. The forum’s recommendations included:

  • Communicating with students about ChatGPT
  • Being cautious about detection tools
  • Using other tools to bolster academic integrity
  • Using ChatGPT as an educational aid
  • Knowing and understanding the term digital literacy
  • Starting a conversation on your campus or in your discipline

Applications

Encouraging students to develop critical reading and editing skills is an ambitious objective that will necessitate a significant change for many. However, it is a challenge that should be embraced sooner rather than later. Because digital literacy is more important than ever, there are many potential ways to promote academic integrity and critical thinking skills when using ChatGPT.

Positive uses of ChatGPT:

  • Brainstorming to help students generate ideas and overcome writer’s block
  • Writing assistance, such as creating a first draft or proofreading for spelling, grammar, and citation errors
  • Research assistance in finding relevant sources and summarizing information
  • Promotion of digital literacy skills, such as evaluating the trustworthiness of information and using external, reliable sources
  • Teaching academic integrity by detecting instances of academic dishonesty and encouraging students to develop critical thinking skills and original ideas
  • The often-overlooked “Regenerate response” feature, which allows users to iterate and refine the conversation until they obtain a satisfactory response or steer the dialogue in the desired direction (a brief sketch of this iterative use follows this list)
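
As an illustration of this iterative “regenerate” pattern, the sketch below simply re-sends the same prompt until the user is satisfied with the draft. It assumes the official openai Python package (version 1.x) and an OPENAI_API_KEY environment variable; the model name and prompt are arbitrary examples, not a prescribed setup.

```python
# A minimal sketch of "regenerating" a response: re-sending the same prompt,
# with some randomness, until the user is satisfied with the draft.
# Assumes the official "openai" Python package (v1.x) and an OPENAI_API_KEY
# environment variable; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = "Draft a one-paragraph course policy on the use of AI writing tools."

while True:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.9,  # higher temperature yields more varied drafts
    )
    print(response.choices[0].message.content)
    if input("Regenerate? (y/n) ").strip().lower() != "y":
        break
```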

Negatives of ChatGPT:

  • May lead to a reduction in critical thinking and originality
  • Potential for abuse, such as creating fake academic articles
  • Does not always generate accurate or trustworthy information
  • Users can rely too heavily on AI-generated content, leading to a lack of human engagement and creativity
  • May contribute to a growing reliance on technology over human abilities

Making the negatives positive:

  • Encourage students to use AI-generated writing tools as a supplement, not a replacement, for critical thinking and originality
  • Establish policies that prohibit the use of AI-generated content in academic research or publications
  • Provide education on how to evaluate the trustworthiness of information generated by AI, and the importance of verifying information with external, reliable sources
  • Emphasize the importance of human engagement and creativity in the writing process
  • Encourage the use of technology as a tool, but not a replacement, for human abilities

Steps that improve the use of ChatGPT

Most users of ChatGPT will have experience using various search engines and are familiar with the term “keywords.” Keywords help search engines understand the user’s intentions, motivations, and expectations for the results. The same concepts are useful with ChatGPT: carefully chosen verbs can express what the user wants to occur, as in the examples below and the brief sketch that follows them:

  • “Explain” or “Elaborate” will provide a longer response than just stating, “What is…,” “Provide,” or “Give”
  • “Discuss” or “Talk about” indicates that the user seeks an interactive exchange of thoughts or opinions
  • “Explore” or “Consider” can indicate a desire for a comprehensive analysis or evaluation
  • “Summarize” or “Expand” will provide, respectively, an abbreviated or an expanded version
  • “Include sources” will make it easier to verify what was accessed
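
To make the verb comparison concrete, the short Python sketch below sends the same topic with different instruction verbs so the responses can be compared side by side. It assumes the official openai Python package (version 1.x) and an OPENAI_API_KEY environment variable; the model name and topic are illustrative assumptions rather than recommendations.

```python
# A minimal sketch comparing instruction verbs in prompts. Assumes the
# official "openai" Python package (v1.x) and an OPENAI_API_KEY environment
# variable; the model name and topic are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

topic = "academic integrity in the age of AI"
prompts = [
    f"What is {topic}?",                       # tends to yield a short definition
    f"Explain {topic}.",                       # tends to yield a longer, structured answer
    f"Summarize {topic} in three sentences.",  # constrains the length
    f"Discuss {topic} and include sources.",   # invites an exchange plus sources to verify
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\n{response.choices[0].message.content}\n")
```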

ChatGPT is an artificial intelligence program that uses Large Language Models to simulate and process human conversation. In its simplest form, it is an information tool for human use, with both positive and negative aspects. Some see it as a shortcut and a danger to critical thinking skills, while others see it as a helpful tool for generating ideas and creating first drafts. As with any source of information, verification is needed to avoid errors. As ChatGPT itself notes, “ChatGPT may produce inaccurate information about people, places, or facts.”


Dave E. Balch, PhD, is a professor at Rio Hondo College and has published articles in the areas of ethics, humor, and distance education. Balch has been recognized for excellence in teaching by the University of Redlands and the University of La Verne and has received the “Realizing Shared Dreams: Teamwork in the Southern California Community Colleges” award from Rio Hondo College.

Appendix: AI-type programs

ChatGPT 3.5. This is the free version of ChatGPT developed by OpenAI. It was released in November 2022 and is accessible at https://chat.openai.com/. While it is a powerful language model, it may not have accurate information beyond 2021 due to its knowledge cutoff.

ChatGPT 4.0. This is the newer version of ChatGPT, available for a fee. Like ChatGPT 3.5, it is not connected to the internet, but it tends to provide more accurate responses. You can access it at https://chat.openai.com/.

Bing AI. This LLM is connected to the internet. Three modes are available: “Creative,” which uses the same advanced AI model as ChatGPT 4.0, while “Balanced” and “Precise” use less powerful models (https://www.bing.com/new).

Google Bard. Google’s LLM is not connected to the internet and is not as capable as the other models (https://bard.google.com/).

Caktus. This program uses a large language model along with the CORE database to draw on academic sources and is intended specifically for students (https://www.caktus.ai/).

(Mollick & Mollick, 2023, p. 1)

References

Bushard, B. (2023, January 12). Fake scientific abstracts written by ChatGPT fooled scientists, study finds. Forbes. Retrieved March 24, 2023, from https://www.forbes.com/sites/brianbushard/2023/01/10/fake-scientific-abstracts-written-by-chatgpt-fooled-scientists-study-finds/

Mollick, E., & Mollick, L. (2023, April 26). Let ChatGPT Be Your Teaching Assistant. Harvard Business Publishing Education. Retrieved May 4, 2023, from https://hbsp.harvard.edu/inspiring-minds/let-chatgpt-be-your-teaching-assistant

Paul, M. (2023, January 10). When ChatGPT writes scientific abstracts, can it fool study reviewers? Northwestern Now. Retrieved May 1, 2023, from https://news.northwestern.edu/stories/2023/01/chatgpt-writes-convincing-fake-scientific-abstracts-that-fool-reviewers-in-study/

Ramshore, A. (2022, July 7). Eliza: The chatbot who revolutionised human-machine interaction [an introduction]. Medium. Retrieved March 5, 2023, from https://medium.com/nerd-for-tech/eliza-the-chatbot-who-revolutionised-human-machine-interaction-an-introduction-582a7581f91c

Sallam, M. (2023). ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare, 11(6), 887. https://doi.org/10.3390/healthcare11060887

The Chronicle of Higher Education. (2023, March 16). What you need to know about ChatGPT. The Chronicle of Higher Education. Retrieved March 24, 2023, from https://www.chronicle.com/newsletter/teaching/2023-03-16

Wu, D. (2023). Professors published a paper on AI with a ‘plot twist’ – ChatGPT wrote it. MSN. Retrieved March 24, 2023, from https://www.msn.com/en-us/news/us/professors-published-a-paper-on-ai-with-a-plot-twist-%e2%80%94-chatgpt-wrote-it/ar-AA18YDXr

