If you love keeping up with new AI (Artificial Intelligence) inventions, you will be glad to see the latest AI-based text generator, referred to here as the AI Chatbot. It is an improved successor to numerous earlier models. A chatbot of this kind is an automated program that helps customers by automating conversations and interacting with them through messaging platforms.

Moreover, Yannick Kilcher, an AI whizz and YouTuber, tested this chatbot on 4chan's /pol/ board, the site's most active board with almost 150,000 posts daily. The experiment produced fascinating results, so if you want to know more about it, read this article thoroughly.

How the AI Chatbot Trained on 4chan Behaved

As noted above, /pol/ is 4chan's most active board, with well over 100,000 posts a day, and that activity is what prompted Kilcher to test the chatbot there. To do so, Kilcher first fine-tuned the GPT-J language model on over 134.5 million posts spanning three and a half years. He then incorporated the structure of the board's threads into the system.
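The article does not spell out Kilcher's training code, so the following is only a minimal fine-tuning sketch using the Hugging Face transformers Trainer with the publicly available GPT-J checkpoint; the data file, preprocessing, and output checkpoint name ("gpt-j-pol") are hypothetical stand-ins, not his actual pipeline:

```python
# Illustrative fine-tuning sketch, not Kilcher's actual pipeline.
# Assumes the /pol/ posts have been flattened into plain-text thread
# transcripts in "pol_threads.txt" (hypothetical file and preprocessing).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
tokenizer.pad_token = tokenizer.eos_token  # GPT-J has no pad token by default
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# One training example per thread: posts concatenated in order, so the model
# picks up the board's reply structure as well as its writing style.
dataset = load_dataset("text", data_files={"train": "pol_threads.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="gpt-j-pol",            # hypothetical checkpoint name
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    fp16=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("gpt-j-pol")
tokenizer.save_pretrained("gpt-j-pol")
```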

As a result, the fine-tuned model posted in the same style as a real /pol/ user. Kilcher said it encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information that permeates most posts on the board. Additionally, it could respond to context and talk about things that happened long after its training data was collected.
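To illustrate how such a model turns thread context into a new post, here is a minimal generation sketch. It reuses the hypothetical "gpt-j-pol" checkpoint from the sketch above, and the thread text is invented for illustration:

```python
# Minimal sketch of drafting a reply: feed the existing thread as context
# and sample a continuation. "gpt-j-pol" is the hypothetical checkpoint
# produced by the fine-tuning sketch above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt-j-pol")
model = AutoModelForCausalLM.from_pretrained("gpt-j-pol", torch_dtype=torch.float16)
model.eval()

# Invented thread context, formatted like the training transcripts.
thread = (
    "Anonymous: what do you think happens to the economy next year?\n"
    "Anonymous: "
)

inputs = tokenizer(thread, return_tensors="pt")
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=80,
        do_sample=True,       # sample rather than greedy-decode, like a forum poster
        top_p=0.95,
        temperature=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )

# Strip the prompt so only the newly generated post remains.
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```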

Furthermore, Kilcher evaluated GPT-4chan with the Language Model Evaluation Harness, which tests AI systems across a variety of tasks. That testing showed the chatbot stuck to the truth surprisingly well, which was a positive point; on that measure it scored better than both GPT-J and GPT-3.
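The Language Model Evaluation Harness is EleutherAI's open-source benchmark runner, and it can be driven from Python. Below is a rough sketch assuming a recent (0.4 or later) release of lm-eval and using a TruthfulQA task as a stand-in for the truthfulness test; the exact task names and the checkpoint Kilcher scored are not given in this article:

```python
# Rough evaluation sketch using EleutherAI's lm-evaluation-harness (lm-eval >= 0.4).
# The model and task identifiers below are illustrative only.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                   # Hugging Face causal-LM backend
    model_args="pretrained=EleutherAI/gpt-j-6B",  # swap in the fine-tuned checkpoint to compare
    tasks=["truthfulqa_mc2"],                     # multiple-choice truthfulness benchmark
    batch_size=1,
)

# Per-task metrics (e.g. accuracy) live under the "results" key.
for task, metrics in results["results"].items():
    print(task, metrics)
```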

After getting such results, Kilcher decided to let the model run rampant by turning it into a chatbot posting on the board. The bot quickly racked up more than a thousand messages. But Dr Lauren Oakden-Rayner, an AI safety researcher, considered the whole concept harmful, and Kilcher himself described it as the "worst language model" on 4chan.

Meanwhile, research engineer Roman Ring said the GPT-4chan model had amplified 4chan's environment, and the model was downloaded more than 1,000 times before it was removed from the Hugging Face platform. For this reason, Clement Delangue, CEO of Hugging Face, said the company did not support the experiment and considered the model harmful.

Kilcher’s model raised awareness of AI’s ability to automate harassment, disrupt online communities, and manipulate public opinion. Additionally, it can spread discriminatory language on a large scale. The risks posed by such language models only seem to keep rising.