ChatGPT is an advanced language processing technology developed by OpenAI
ChatGPT is an advanced language processing technology developed by OpenAI. It was trained on text databases drawn from the internet, with some 300 billion words fed into the system. The result is a chatbot that can seem eerily human while commanding encyclopedic knowledge. However, academics, cybersecurity researchers and AI experts warn that ChatGPT could be used by bad actors to sow dissent and spread propaganda on social media. Because language models can rival human-written content at low cost, they may offer distinct advantages to propagandists: expanding access to a greater number of actors, enabling new tactics of influence, and making a campaign's messaging far more effective.
AI systems could also improve the persuasive quality of that content, making it difficult for ordinary internet users to recognise it as part of a coordinated disinformation campaign. Josh Goldstein, a research fellow at Georgetown's Center for Security and Emerging Technology and co-author of a January 2023 paper on the threat, says that generative language models can produce a high volume of content that is original each time, so propagandists no longer need to copy and paste the same text across social media accounts or news sites.

Nor is access to these systems likely to remain the domain of a few organisations: as more actors invest in state-of-the-art generative models, the odds increase that propagandists will gain access to them too. Gary Marcus, an AI specialist and founder of Geometric Intelligence, an AI company acquired by Uber in 2016, points to the problem of sheer volume: even if platforms such as Twitter and Facebook take down three-quarters of what propagandists spread on their networks, at least ten times as much misleading content as before would still remain online.
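Marcus's figures imply a dramatic scale-up in output. Here is a minimal sketch of the arithmetic, assuming (purely for illustration) a 40-fold increase in generated content, a number not given in the article:

```python
# Back-of-the-envelope arithmetic behind Marcus's claim. The 40x scale-up is
# an assumed illustrative figure; the article states only the 75% takedown
# rate and the "ten times as much as before" outcome.
baseline = 1_000              # hypothetical pre-AI volume of misleading posts
generated = baseline * 40     # assumed 40-fold increase from cheap generation
surviving = generated * 0.25  # platforms remove three-quarters of it
print(surviving / baseline)   # 10.0 -> ten times the original volume survives
```

In other words, once the cost of generating content approaches zero, even an aggressive takedown rate cannot keep the surviving volume below what circulated before.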
The surge of fake social media accounts has already caused a major headache for Twitter and Facebook, and the rapid maturation of language model systems like ChatGPT can only exacerbate the problem. Both Mr Goldstein's paper and a similar report from security firm WithSecure Intelligence warn that generative language models can quickly and efficiently create fake news articles for distribution across social media, adding to the deluge of false narratives that could sway voters ahead of a decisive election.
If misinformation and fake news emerge as an even bigger threat, should social media platforms be as proactive as possible in confronting it? Some experts suspect the platforms will be lax in enforcing rules against such posts. Luís A. Nunes Amaral, co-director of the Northwestern Institute on Complex Systems, argues that fake posts are designed to infuriate and divide people, which drives the very engagement the platforms depend on.