ChatGPT: faced with the artifices of AI, how media education can help students

Who has never heard of ChatGPT, the generative artificial intelligence capable of producing complex texts in response to users' queries? The December 2022 release of this software, designed by the company OpenAI, sparked a flood of articles veering between visions of catastrophe and utopia, producing a media panic, as illustrated by the March 2023 open letter calling for a moratorium on the development of this type of system, signed by a thousand researchers.

As a study by the Columbia Journalism Review shows, the panic did not begin in December 2022 with OpenAI's launch but in February 2023, with the announcements from Microsoft and Google, each unveiling a chatbot integrated into its search engine (Bing Chat and Bard, respectively). Media coverage muddies the picture, focusing more on the potential replacement of humans than on the very real concentration of AI ownership in the hands of a few companies.

As with any media panic (the most recent concerned virtual reality and the metaverse), its purpose and effect are to open a public debate that actors beyond the media and tech sectors can seize. For media and information literacy (MIL), the stakes are high in terms of social and school interactions, even if it is still too early to measure the consequences for teaching of these language models, which automatically generate texts and images and make them available to the general public.

In parallel with regulatory action, MIL enables citizens to protect themselves against the risks associated with these tools, by developing their critical thinking and adopting appropriate, responsible usage strategies. Algo-literacy, the sub-field of MIL concerned with what data does to the media, makes it possible to apply these interpretive keys to AI. Here are four directions in which MIL can help us navigate these chains of algorithmic interactions, from their productions to their audiences.

Take into account the geopolitics of AI

It is the companies controlling search engines, and therefore access to information, Google and Microsoft, that have the most to gain from the development of generative AI. They are organized, American-style, as a duopoly, with a (false) challenger, OpenAI LP, which is in fact the commercial arm of the initially non-profit OpenAI lab (largely funded by Microsoft).

EU and AI: regulation on the menu for MEPs (TV5 Monde, June 2023).

Another story could be told, especially by the media: that of the staggering concentration of power and money in the hands of a very small number of Silicon Valley companies. They are granting themselves a monopoly over access to information and over everything produced from it, and they fuel head-on competition between the United States and China on the subject. Google and Microsoft's strategy is indeed meant to pull the rug out from under the Chinese government, which makes no secret of its ambitions for the development of AI.

Faced with what amounts to an arms race, the option of a pause or moratorium is a pipe dream. The inventors themselves, repentant sorcerer's apprentices, including OpenAI CEO Sam Altman, proposed “AI governance” in May 2023. But wasn't that in the hope of escaping the full brunt of government regulation, which would slip out of their hands and put a damper on their commercial intentions? The European Union has anticipated this by preparing an AI Act to regulate the uses of this new stage of digital technology.

Question the quality of the texts and images provided

Not everything that sounds plausible is necessarily meaningful. The AI that drives ChatGPT makes suggestions based on queries, and they appear quickly... in rather polished, well-groomed language! But it can also generate errors, as a New York lawyer realized to his chagrin after filing a brief full of false legal opinions and false legal citations.

So be wary of AI-generated pseudo-science. The content on offer can carry biases, because it comes from the mining of huge databases, datasets drawing on sources of all kinds... including social media! The latest free version of ChatGPT relies on data that stops in early 2022, so it is hardly up to date on current events.

Many of these databases come from English-speaking countries, with the algorithmic biases that entails. ChatGPT therefore risks producing misinformation, lending itself to malicious uses, or amplifying the beliefs of those who use it.

It should therefore be used like any other instrument: like a dictionary, for doing research or working out a draft... without entrusting it with secrets or personal data. Asking it to cite its sources is good advice, but even that is no safeguard: the chatbot tends to produce a list of sources that look like citations but are not all real references.

Nor should we forget the copyright issues that will soon come into play.

Beware of the imaginaries surrounding AI

The term “artificial intelligence” is not really appropriate for what is, at bottom, pre-trained data processing (the meaning of the acronym GPT: generative pre-trained transformer).

This anthropomorphism, which leads us to attribute thought, creativity and feelings to a non-human agent, is harmful on two counts. It stirs up all the anxiety-inducing myths warning that any porosity between the living and the non-living is untenable, from the Golem to Frankenstein, complete with fears of the extinction of the human race. And it hinders a calm understanding of the real usefulness of these large-scale transformers. Science fiction does not help us understand science, and therefore does not help us formulate ethical, economic and political benchmarks.

These imaginaries, however potent they may be, must be demystified. The so-called “black box” of generative AI is in fact rather simple in principle. Large language models are algorithms trained to reproduce the codes of written (or visual) language. They crawl through thousands of texts on the Internet and convert an input (a sequence of letters, for example) into an output (their prediction for the next letter).

What the algorithm generates, at very high speed, is a series of probabilities, as you can verify by running the same query twice and noticing that the results differ. There is no magic there, and no sentience either, even if the user has the feeling of holding a “conversation”, another word borrowed from the human vocabulary. The toy model sketched below makes this mechanism concrete.
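
To make that concrete, here is a minimal sketch in Python of the same principle at its smallest possible scale: a character bigram model that counts which letter follows which in a tiny corpus, then samples the next letter from those counts. The corpus and function names are invented for this illustration; real models use neural networks trained on vastly more data, but the predict-then-sample loop is the same idea.

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for the web-scale text a real model is trained on.
corpus = "the cat sat on the mat. the cat ate the rat. "

# For each character, count which characters follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_char_probabilities(char):
    """Turn the raw counts into a probability distribution over the next character."""
    counts = following[char]
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def generate(seed, length=40):
    """Sample one character at a time; the randomness makes each run differ."""
    text = seed
    for _ in range(length):
        probs = next_char_probabilities(text[-1])
        chars = list(probs)
        weights = list(probs.values())
        text += random.choices(chars, weights=weights)[0]
    return text

print(next_char_probabilities("t"))  # {'h': 0.4, ' ': 0.3, '.': 0.2, 'e': 0.1}
print(generate("the "))  # two calls will almost surely print different strings,
print(generate("the "))  # just as re-running the same prompt in a chatbot does
```

Running generate() twice will usually produce two different strings: the randomness of sampling from probabilities, not any thought, explains why the same prompt yields varying answers.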

And it can be fun, as with the BabyGPT models created by the New York Times, which work on small closed corpora to show how to write like Jane Austen, William Shakespeare or J.K. Rowling. Even ChatGPT isn't fooled: asked how it feels, it replies, quite bluntly, that it is not programmed for that.

Vary the tools

Audiences of AI, especially at school, must therefore develop knowledge and skills around the risks and opportunities of this kind of so-called conversational robot. Beyond understanding the mechanisms of the automated processing of information and disinformation, several other precautions lend themselves to education:

  • beware of any monopoly over online queries, the prize Bing Chat and Google Bard are competing for, by regularly using several different search engines;

  • require labels, color codes and other markers indicating that a document was produced by an AI or with its help; this is plain common sense, and some media outlets have already anticipated it;

  • ask producers to use reverse engineering to build AIs that monitor AI, as is already the case with GPTZero (a toy sketch of one signal such detectors rely on follows this list);

  • initiate legal proceedings in cases of ChatGPT “hallucination”, another anthropomorphized term, this time used to mark an error in the system!

  • and remember that the more you use ChatGPT, in its free or paid version, the more you help it to improve.
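
To illustrate the idea behind such detector tools: GPTZero's creators describe using signals such as “perplexity”, roughly how predictable a text is to a language model, on the premise that machine-generated prose tends to be more statistically predictable than human prose. The sketch below is a deliberately naive illustration of that single signal, using a tiny character bigram model; the reference corpus and function names are invented for the example, and real detectors are far more sophisticated.

```python
import math
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count character-pair frequencies in a reference corpus."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(corpus, corpus[1:]):
        counts[cur][nxt] += 1
    return counts

def perplexity(text, counts, vocab_size=100):
    """Average 'surprise' of a text under the bigram model.

    Laplace smoothing keeps unseen character pairs from having zero
    probability. Lower perplexity = more predictable, which is the
    single, naive detection signal sketched here.
    """
    log_prob = 0.0
    for cur, nxt in zip(text, text[1:]):
        total = sum(counts[cur].values())
        p = (counts[cur][nxt] + 1) / (total + vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(text) - 1, 1))

reference = "the quick brown fox jumps over the lazy dog. " * 50
model = train_bigram(reference)
print(perplexity("the quick brown fox jumps over the lazy dog.", model))  # low
print(perplexity("zq xv kj wp gnarl fizz quux", model))                   # high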

In the educational field, EdTech marketing touts the benefits of AI for personalizing learning, facilitating data analysis, increasing administrative efficiency, and so on. But these metrics and statistics can in no way substitute for the validation of acquired skills or for young people's own productions.

However intelligent it claims to be, AI cannot replace students' need to develop their critical thinking and their own creativity, and to train and inform themselves by mastering their sources and resources. As EdTech, particularly in the United States, rushes to bring AI into classrooms, from primary school to higher education, the vigilance of teachers and decision-makers remains essential to preserve the core missions of school and university. Collective intelligence can thus prevail over artificial intelligence.

Divina Frau-Meigs, Professor of Information and Communication Sciences, The Conversation France

This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image credit: Shutterstock / Pixel Hunter

 

