ChatGPT and I wrote this article
The OpenAI logo is seen on a large screen with the ChatGPT website displayed on a nearby mobile device in this illustration taken in Brussels, Belgium, Jan. 8, 2023. (Getty Images Photo)

This piece, co-authored by OpenAI's ChatGPT and me, showcases the potential of cooperation between humans and AI language models in the publishing industry. It is, in a sense, a look into the near future.

Am I about to lose my job? OpenAI has signaled that I am. On Tuesday, my colleague here at Daily Sabah, Alen Lepan, introduced me to something I had been hearing a lot about. "It is not clickbait," he said, trying to convince me, "but something that will revolutionize the publishing industry."

What he was talking about was OpenAI's ChatGPT, "a revolution leader," according to him. At first, I was skeptical because of the hype surrounding artificial intelligence (AI) and related topics in the media sector. It seemed possible that the San Francisco-based company had invested heavily in media to manipulate its target audience through these platforms, or that there were other weighty influences we were unaware of.

As a journalist, it is natural to approach new information with a healthy dose of skepticism. However, this mindset can also lead to paranoia and mistrust, which can negatively affect personal relationships. For example, when a family member or friend shares something with you, you may feel compelled to double-check and fact-check the information before responding. (Dear editors of The Economist, this part may sound unfamiliar to you, just keep reading the piece, there is no problem, everything is just fine.)

When Alen was trying to convince me to sign up for ChatGPT, my initial reaction was skepticism. I was hesitant to share personal information, such as my email address, with the website, fearing that my data would somehow be accessed and misused. But I eventually gave in and logged in to the website. Wow! What is this?

I immediately researched OpenAI, ChatGPT and other models, and while I found informative descriptions, I wanted to share ChatGPT's own explanation of what it is.

"ChatGPT is a large language model that uses deep learning to generate human-like text. It was trained on a dataset of billions of words and can be fine-tuned for a variety of natural language processing tasks, such as language translation, conversation generation, and question answering."

A screenshot from the ChatGPT website showing the answer given by the language model to the question of its definition.
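For curious readers, the same question can also be posed to the model programmatically rather than through the website. The short Python sketch below is only an illustration of that idea, assuming access to OpenAI's Python SDK, an API key set in the environment and the availability of a chat model such as "gpt-3.5-turbo"; it is not how this article was produced.

from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

# Ask the model the same question posed on the ChatGPT website.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; any available chat model would do
    messages=[{"role": "user", "content": "What is ChatGPT?"}],
)

print(response.choices[0].message.content)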

ChatGPT demonstrated impressive capabilities when I put it to the test. I asked it to write a short story in the style of Fyodor Dostoyevsky, and it produced the following piece. Despite being an AI model, ChatGPT was able to emulate the distinct writing style and themes of the famous Russian novelist, highlighting the versatility and potential of language models in mimicking human-like writing.

"Once upon a time, there was a young man named Ivan. He was a man of great wealth and status, but he was also a man plagued by a deep sense of emptiness and despair. He spent his days indulging in lavish parties and material possessions, trying to fill the void in his soul.

"One day, Ivan met a poor, homeless man on the streets of the city. The man had nothing, yet he seemed to radiate a sense of contentment and peace. Ivan was struck by the contrast between the man's humble circumstances and his own luxurious life."

It sounds too Dostoyevsky-ish, doesn’t it?

A screenshot from the website of DALL-E, a new AI system developed by OpenAI to "create realistic images and art from a description in natural language," shows "Ivan," the Dostoyevsky short story character created by ChatGPT above.

I posed a challenge to ChatGPT by asking it to generate catchy headline alternatives for an article about the vibrant life of Salvador Dali. Without hesitation, it provided the following options:

"The colors of Dali: A look into the surreal life of a master," "A world of color: The life and art of Salvador Dali" and "Dali's canvas of dreams: An exploration of the colorful mind of a surrealist genius."

I was curious whether it could make predictions, so I asked: "Will Fenerbahçe be the champion in the Turkish football league this year?"

"I am not able to predict the outcome of future events, as my knowledge cutoff is 2021. Additionally, predicting the outcome of a sports league is highly uncertain as it depends on many factors, such as the performance of individual players and teams, injuries, and luck," it answered.

I asked it question after question, from soft topics to hard politics, from ironic issues to funny ones, and it answered. It was relatively successful.

This technology has the potential to bring significant changes to the publishing industry, both displacing jobs and creating new opportunities. It looks like we have around 10 years, or less, to turn ourselves into superhuman beings if we want to change the answer to the question I asked at the very beginning of this article. Otherwise, considering that ChatGPT wrote a whole article in around 30 seconds (the time it took me to open a Word document and start thinking about the article's framework and logic), the future of internet journalism seems obvious.

I was originally planning to write about political correctness, cancel culture and new media, but I was so captivated by ChatGPT's intelligence that I changed my topic. However, I do not have to limit myself to one topic when I can draw on both my own ideas and ChatGPT's writing skills in my article.

The article below is the result of an experiment in which I directed, and tried to manipulate, ChatGPT to explore the topic of political correctness, cancel culture, and the impact of social media and the internet. The title, "Navigating the complexities of political correctness and cancel culture in the age of social media and the internet," was also suggested by ChatGPT. This relative co-authorship showcases the potential of cooperation between humans and language models in the publishing industry. I asked the same questions several times, split paragraphs apart and asked again. Over the course of this exchange, it gradually refined its style, and when I finally asked it to write an article on the topic, it produced the piece below. Here it is, without any changes:

"Political correctness and cancel culture have become increasingly prominent in recent years, particularly with the rise of social media and the internet. Political correctness refers to the idea that certain words, phrases and behaviors should be avoided in order to avoid offending marginalized groups. This can include using inclusive language, avoiding stereotypes and being mindful of cultural sensitivity. Supporters of political correctness argue that it promotes respect and equality, while critics argue that it can be used to silence dissenting voices and restrict freedom of speech.

"Cancel culture, on the other hand, refers to the practice of boycotting or 'canceling' individuals or groups who are deemed to have said or done something offensive or problematic. This can include public shaming, social media campaigns, and calls for boycotts or firings. Supporters argue that cancel culture is a way for marginalized communities to hold those in power accountable for their actions, while critics argue that it can lead to a culture of fear and censorship.

"The rise of new media has played a significant role in the growth of political correctness and cancel culture. Social media platforms like Twitter, Facebook and Instagram have made it easier for individuals to share their thoughts and opinions with a wider audience. This has led to more public discourse and more opportunities for marginalized groups to speak out against injustice and discrimination. However, it has also led to more opportunities for harassment and bullying, as well as the spread of misinformation and false information.

"In conclusion, political correctness and cancel culture are complex issues that raise important questions about freedom of speech, equality and accountability. While these concepts may have good intentions, they can also lead to negative consequences. It's important to have a healthy dialogue and to find a balance between protecting marginalized groups and allowing for the free exchange of ideas."