
AI Writes the Future - A Looming Threat to (Mis)Information

By Daniel Kim
AI, writing

Opening the AI Gate: Text Generation Services

When OpenAI opened up its APIs, a wave of startups began building text generation services on top of them. Their pitch: "summarize an article" or "write an article" in a minute. Copy.ai and Jasper are just two of the hundreds of startups aiming to generate content automatically. Simply provide a title and a few keywords, and AI will do the rest.
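To get a sense of how thin the layer between such a service and the underlying model can be, here is a rough sketch of what one of these tools might do behind the scenes. This is my own illustration, not any startup's actual code; the model name, prompt wording, and parameters are assumptions.

```python
# Minimal sketch of a "title + keywords in, article out" service built on
# OpenAI's chat completions API. Model choice and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_article(title: str, keywords: list[str]) -> str:
    """Ask the model to draft a short article from a title and keywords."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[
            {"role": "system", "content": "You are a helpful writing assistant."},
            {
                "role": "user",
                "content": (
                    f"Write a short article titled '{title}'. "
                    f"Work in these keywords: {', '.join(keywords)}."
                ),
            },
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content


print(draft_article("AI Writes the Future", ["GPT", "disinformation", "writing"]))
```

A few dozen lines like these, wrapped in a web form, are enough to sell "AI-written articles" as a product.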

The Evolution of Writing: From Analog to Digital

Writing has always been a uniquely human trait. When two people write about the same subject, their texts are rarely alike: a combination of past experiences and personal characteristics leads each of us to write differently. When the primary medium for writing was analog, on paper, plagiarizing a text required access to an existing physical copy. Copying a text was rudimentary and slow until Gutenberg’s printing press came around. Then, in the digital era, our writing was converted into 0s and 1s that could travel over cables. With digital computers and the internet, our words could suddenly cross the world in seconds. As network speeds and storage capacity grew, humans began generating more text in a few years than our ancestors had produced over thousands of years.

The AI Takeover: GPT and the Future of Writing

Then, GPT came into the picture. Now AI can generate text faster, and often better, than many humans can. Can it write PhD-level papers? Not yet, but given the pace of AI development in 2023, that future may be near. As GPT’s writing improved, many people who typically hired writers began considering AI instead. This shift of writing work from humans to AI is already happening, and we’re only at the beginning.

The Double-Edged Sword: AI's Role in Scams and Disinformation

As OpenAI, Google, and other competitors strive to improve AI, they are uncovering new use cases every week. Just this week, Google enhanced Bard, its AI chatbot, which can now access the internet, and OpenAI started sending out invites for ChatGPT plugins. This rapid release of AI technologies is a boon for technologists, but it also benefits scammers and disinformation actors. If you thought Twitter’s bot problem was bad before, what comes next may be quite a surprise. With AI, it’s possible to scam or spread disinformation at scale, at nearly zero marginal cost. Previously, hackers had to spend hours writing malicious code and crafting phishing emails to break into systems. Despite these hurdles, hackers proliferated because the rewards (ransom, stolen funds) often outweighed the effort. With AI, the cost of clearing those hurdles is racing toward zero.

The Unforeseen Consequences of Going Digital

When we first switched from writing on paper to writing on computers, the primary motivations were speed and convenience. What once required mailing a letter could be accomplished via email in minutes. We could also write with consistent font and formatting, unlike our handwriting. However, I doubt many of us considered that one day our writing online would teach machines to write. Now, with this power in the hands of many, AI might sometimes write for you, and sometimes against you.

Thanks for reading.
© Daniel Kim, 2023 | All rights reserved.