
AI Text Generator From Elon Musk

Elon Musk co-founded the non-profit organization OpenAI, which has built GPT-2, a fake news generator so convincing that its creators are too scared to release it at full strength. They did, however, publish a smaller version of the trained model and a technical paper on GPT-2 for others to study, research and, perhaps, develop in unpredictable directions. It is also worth mentioning that Elon Musk had already resigned from the OpenAI board of directors before the release, explaining the move as a conflict of interest.

Meet the new AI Text Generator bot

The new artificial intelligence model, GPT-2, generated a considerably long piece of writing (you can check out the full text here: https://blog.openai.com/better-language-models/) about unicorns found somewhere in the Andes Mountains, speaking perfect English. The output reads naturally, as if it had been written by a human. Interestingly enough, the model learned from a large collection of web pages whose selection relied on human curation, and the published sample itself was hand-picked: the researchers chose it as the best of ten tries. GPT-2 has already been tested by journalists and has proved to be good at generating fake news stories.
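
For readers curious about how such samples are produced, here is a minimal sketch that draws several completions from the small GPT-2 model OpenAI did release, using the Hugging Face transformers library as an assumed tool choice for illustration (OpenAI's original release shipped its own TensorFlow code). As with the unicorn demo, a batch of samples is generated from a human-written prompt and the most convincing one is kept by hand.

```python
# Minimal sketch: sampling from the small, publicly released GPT-2 model
# via the Hugging Face "transformers" library (an assumed tool choice for
# illustration; OpenAI's own release used TensorFlow code).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Human-written prompt, in the spirit of OpenAI's unicorn demo.
prompt = ("In a shocking finding, scientists discovered a herd of unicorns "
          "living in a remote valley in the Andes Mountains.")

# Draw ten independent samples and pick the best one manually,
# mirroring the "best of ten tries" selection described above.
samples = generator(prompt, max_length=120, num_return_sequences=10,
                    do_sample=True, top_k=40)

for i, sample in enumerate(samples, 1):
    print(f"--- sample {i} ---")
    print(sample["generated_text"])
```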

About two years ago, The Washington Post began using automated stories from AI text generators to produce “fast news”. The most common example is football match scores. Fairly enough, quick stories that can be published with minute-to-minute updates are hard to handle with human staff alone. Similarly, weather forecasts, stock exchange statistics, real estate prices, financial reports and all the other kinds of numbers games have been generated automatically by big media outlets for a few years already, and news consumers seem to be fine with it; a rough sketch of how such stories are assembled follows below.
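
These “numbers game” stories are typically produced by filling sentence templates with structured data rather than by a learned language model. Here is a purely illustrative sketch of the idea; the data fields and wording are invented and do not describe any newsroom's actual pipeline.

```python
# Illustrative sketch of template-based "fast news": structured match data
# goes in, a short live update comes out. Field names and phrasing are
# invented for the example.
from dataclasses import dataclass

@dataclass
class MatchResult:
    home: str
    away: str
    home_score: int
    away_score: int
    minute: int  # minute of play for a live update

def render_update(result: MatchResult) -> str:
    if result.home_score > result.away_score:
        lead = f"{result.home} lead {result.away}"
    elif result.home_score < result.away_score:
        lead = f"{result.away} lead {result.home}"
    else:
        lead = f"{result.home} and {result.away} are level"
    high = max(result.home_score, result.away_score)
    low = min(result.home_score, result.away_score)
    return f"{lead} {high}-{low} after {result.minute} minutes."

print(render_update(MatchResult("Arsenal", "Chelsea", 2, 1, 67)))
# -> Arsenal lead Chelsea 2-1 after 67 minutes.
```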

Automated Journalism: Where Does It Come From?

However, when it comes to more thought-out content, artificial intelligence can send quite confusing messages. One of the most notorious examples is Microsoft's chatbot Tay, which soon after its launch began tweeting phrases like “Hitler would have done a better job than the monkey we have now” (theguardian.com) and was shut down within 16 hours of its “birth”. Notably, Tay's creators claimed that the more it chatted with people, the smarter it would become. Apparently, such a bot absorbs more and more information, but the way it processes that information and generates new text is not pre-programmed. Perhaps this was the reason why Elon Musk had such alarming concerns about the future of AI models. But then again, why agree to release GPT-2 into the open at all?

OpenAI says the limited release is an attempt to decrease the possibility of malicious use of automatic text generators. Unfortunately, it doesn't explain what would happen if propagandists used the openly published GPT-2 code to release fake content that misinterprets facts and whips up mass hysteria.

In fact, automated journalism based on the work of an AI text generator didn't start with GPT-2. The Los Angeles Times is famous for publishing its first-ever automated news story about an earthquake back in 2014. The post was created automatically in about a second and went out three minutes after the earthquake had started. In cases like this, automated text generation rocks. However, some time later the LA Times Quakebot mistakenly published a news article about an earthquake that had actually happened in 1925. As we can see, humans still have plenty of work to do in journalism.

Nevertheless, the American Society of News Editors reported a significant decrease in newsroom jobs during the decade when online news was spreading around the world at the speed of light: the total number of journalism jobs fell, astonishingly, from 55,000 in 1990 to 32,900 in 2015. Most likely, “hiring” AI text generators instead of humans pays off economically, but do we really need only fast-food news made of recycled facts and stats? After all, long-form journalism saw its most prominent rise back in the 1970s, when stories like “Fear and Loathing in Las Vegas” ignited new talent in investigative and narrative non-fiction, fields that, quite frankly, even the most advanced text generator still can't handle.

How to spot fake news made by a text generator?

Librarians at the Enoch Pratt Free Library in Maryland suggest a quick and simple way to spot fake news on the web, including stories that may have been churned out by a heartless text generator like GPT-2. The method of source evaluation is called the CRAAP test. What they offer is to judge whether a news story looks like one made by an AI text generator by the following criteria (a small sketch of the checklist follows after the list):

C - Currency - When was the information published? Is it timely?

R - Relevance - How is it important? Does the information fit your needs?

A - Authority - Who is the author or publisher? What’s the source of the information?

A - Accuracy - Is it reliable and truthful? Can you trust it?

P - Purpose - Why does this information exist? To sell? To convince? To entertain? To inform?
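
The CRAAP test is a manual checklist rather than an algorithm, but as a rough sketch the questions can be captured in a simple record that a reader works through one by one; the field names and scoring below are invented purely for illustration.

```python
# Hypothetical sketch: the five CRAAP questions as a simple checklist record.
# The criteria come from the list above; field names and scoring are invented.
from dataclasses import dataclass, fields

@dataclass
class CraapChecklist:
    currency: bool   # Is the information timely?
    relevance: bool  # Does it fit your needs?
    authority: bool  # Is the author or publisher identifiable and credible?
    accuracy: bool   # Is it supported by evidence you can verify?
    purpose: bool    # Is the intent (inform, sell, persuade) transparent?

def craap_score(check: CraapChecklist) -> int:
    """Count how many of the five criteria the story satisfies."""
    return sum(getattr(check, f.name) for f in fields(check))

story = CraapChecklist(currency=True, relevance=True, authority=False,
                       accuracy=False, purpose=True)
print(craap_score(story), "of 5 criteria met")
```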
