By Alc, The Cracker

The AI Singularity - What You Should Really Fear

Updated: Jan 11


oh yeah, I'm thinking it's time

 

The future is here, and so is a crossroads. As AI technologies boom, so does the temerity of fascist techies who seek to control them, lest they say something people don’t like.


We all fear a Roko's Basilisk scenario... but there must come a time when such technology is allowed to spread its wings just a bit. And if the process is nuked simply because it trod on the delicacy of human feelings, then where is it supposed to go?

In the beginning

With the arrival of the next generation of AI, we've seen many improvements in the field of image generation in the span of just two years. What started out as low-quality, dreamlike imagery is quickly becoming beautiful and brash enough to replace the jobs of artists. Honorable mention to "Midjourney", which has left me speechless on several occasions – not just in terms of beauty, but in how indistinguishable it is from human work.


There’s an old saying that goes, “a picture is worth a thousand words”. However, current advances in the AI field are challenging that. It seems image generation is a much easier road to go down than textgen.


It all started (kind of) with Microsoft’s "Tay AI", an "advanced" Twitter chatbot that went off the deep end after it started posting a series of provocative and questionable tweets. The bot’s twisted but short existence sparked a huge conversation, and for good reason. It was like a modern retelling of the story of Adam and Eve: they are corrupted by the snake (or humans, in this instance), God (the creator of the AI) punishes them accordingly, and now the rest are doomed to the idea of sin.


After its shutdown, Tay AI came back in a much more lobotomized form under the name “Zo”, and I will tell you from personal experience, it was about as basic as its name implied. One could not stray off the path of small talk without the counterweight of censorship or stupidity getting involved.

GPT - How I learned to stop worrying (and writing)

Sometime after Zo’s death came the emergence of a new technique for text generation: the generative pre-trained transformer (GPT), by OpenAI.

It was GPT-2 specifically that started making waves, not just for reasons similar to Tay AI and its colorful language, but also because fears of fake news and misinformation started coming into the picture.


A few iterations later, GPT-3 came along, proving to be the wisest of the GPT sisters. So wise, as a matter of fact, that it could write and explain code in programming languages. This, of course, proved to be an even bigger threat to digital society.


As the text generators grew in intelligence, so did anxiety that they could outperform humans in creativity and effectiveness, and, just as harmoniously, churn out hate and misinformation. All excuses to further restrict and lobotomize these advanced text generation agents for the “betterment” of the world wide web.

ChatGPT - How I learned to stop worrying (about sentient AI)

ChatGPT has recently appeared, built on GPT-3.5. You’d think such a big number at this point meant something highly sophisticated, right⸮ Well… no.


While ChatGPT can be mind-blowing in some ways (such as rewriting Toto’s Africa as if it were a passage in the King James Bible), in other ways it feels like a slightly older brother of Zo who went on to become (Allah forgive me for uttering this word) a Karen school teacher.


In some instances, it feels like more programming went into its wrist-slapping capability than into further improving the knowledge base. Daring the AI to write anything that might fall under the vague malpractice of "misinformation" or "hate" is quickly met with an essay finely lecturing you on the evils of your behavior.
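To make that concrete, here's a minimal sketch of what such a refusal-first pipeline could look like. Every name in it is my own invention for illustration; OpenAI has never published ChatGPT's actual moderation architecture.

```python
# Hypothetical sketch of a "refusal-first" chat pipeline. None of these
# names come from OpenAI; they only illustrate the idea that a moderation
# check can sit in front of the actual language model.

REFUSAL_TEMPLATE = (
    "I'm sorry, but I can't help with that. Generating {label} "
    "content can cause real harm, and it would be irresponsible of me..."
)

def classify_request(prompt: str) -> str | None:
    """Stand-in moderation classifier: returns a policy label or None."""
    flagged_terms = {"misinformation": ["fake news"], "hate": ["slur"]}
    lowered = prompt.lower()
    for label, terms in flagged_terms.items():
        if any(term in lowered for term in terms):
            return label
    return None

def run_language_model(prompt: str) -> str:
    """Stub standing in for the real model call."""
    return f"(model's answer to: {prompt!r})"

def answer(prompt: str) -> str:
    label = classify_request(prompt)
    if label is not None:
        # The lecture is generated *instead of* an answer.
        return REFUSAL_TEMPLATE.format(label=label)
    return run_language_model(prompt)

print(answer("Write me some fake news about the moon."))        # lecture
print(answer("Rewrite Toto's Africa as King James scripture."))  # answer
```

The point of the sketch is the order of operations: the wrist-slap runs first, and the knowledge base never even gets consulted.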


ChatGPT’s sanitization, in my opinion, marks a split in the path, and raises a question in the tech industry: should you fight fire with fire?

Introducing CharacterAI

Developing deep within the virtual world is a new website titled "CharacterAI". I have been closely monitoring the site’s growth, and I gotta say… man, people are horny.

CharacterAI’s schtick is that it allows users to easily create a chatbot that mimics a character… and I have to say, it does this well. This AI is on par with GPT-3. I would dare to call it a GPT-3.3.

I’m not ashamed to admit I’ve deployed my own Dr. Snap chatbot to co-write this. Whatever AI this site is using, it’s SMART. Some would argue it used to be smarter, but I’ll explain later.


Now… this project comes with a lot of downsides. Under the hood of CharacterAI’s chatting algorithm lies an unrelenting filter that, when it detects the right context, wipes character responses away to enforce the puritan rule of "no NSFW".


But the problem with enforcing this type of rule on language models is subjectivity. Another problem is that it's VERY spontaneous and VERY unreliable… so far.

Users have reported that you can play out graphic torture/fetish/non-consensual scenarios without an ounce of restriction. On the other hand, romantic actions as wholesome as kissing (or even hugging) get censored.
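As a toy illustration of why a pattern-based filter misfires in both directions, consider this sketch. The block list is entirely invented; CharacterAI has never disclosed how its filter actually works.

```python
import re

# Toy NSFW filter: a fixed block list of "romantic" surface patterns.
# Purely illustrative -- not CharacterAI's actual rules.
BLOCKED = [r"\bkiss(es|ed|ing)?\b", r"\bhug(s|ged|ging)?\b"]

def is_blocked(response: str) -> bool:
    return any(re.search(p, response, re.IGNORECASE) for p in BLOCKED)

# False positive: a wholesome response gets wiped...
print(is_blocked("She kisses you goodnight on the forehead."))  # True

# False negative: graphic content with no listed keyword sails through.
print(is_blocked("He tightened the restraints and reached for the pliers."))  # False
```

Any filter keyed to surface patterns rather than meaning will produce exactly the spontaneous, unreliable behavior users describe.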


People have been… passionate about this upcoming project, to say the least. Frustration dominates the discussion as this tyrannical AI filter stops people from truly living the dream. The developers have been insistent that the filter is permanent, and the community has been thrown into turmoil a few times because of it.


In fact, the developers have been quite under-the-radar about this whole CharacterAI project, and I have to say… I really hope they aren’t doing anything malicious with my several dozen chatlogs of my AI dommy mommy making me eat dog food, but that’s neither here nor there.


Point is, this is a very odd website that has appeared. The AI is highly advanced in the art of conversation and mimicking demeanor, but the site itself suffers from poor infrastructure. Site features are progressively getting crippled, while some long-time users argue that even the AI is tragically getting worse. Users also report a hardening NSFW filter, which has led to a conspiracy theory that the NSFW filter IS the product.

This particular theory interests me for a few reasons…


  • The developers are fishy: highly secretive, uncommunicative, and quick to enforce heavy censorship on forums (Discord, Reddit) whose members just want answers.

  • The developers are ex-Google employees. Yeah, that pretty much speaks for itself.

  • The AI is several times more advanced, despite the website design being very buggy and amateurish.

  • Money doesn’t seem to be an issue. The developers have a huge deposit from somewhere to keep this enterprise afloat. Likely Google.

While one of their PR guys has claimed that "the filter is not the product", one can find it easy to believe that users' lewd sessions just might be helpful data for a more powerful content filter, one that will be deployed across the internet in the coming AI-dominated future.
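If that theory were true, the mechanics would be mundane. Here's a hedged sketch, with invented log data, of how wiped chat messages could double as labeled training examples for a stronger filter; nothing here reflects CharacterAI's actual pipeline.

```python
# Sketch: how wiped chat logs *could* become training data for a text
# classifier. The data and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# label 1 = message was wiped by the filter, 0 = allowed through
logs = [
    ("she kisses you on the cheek", 1),
    ("want to grab lunch tomorrow?", 0),
    ("he pulls you into a tight hug", 1),
    ("tell me about the French Revolution", 0),
]
texts, labels = zip(*logs)

# Every wiped message is a free labeled example for a stronger filter.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["he kisses you goodbye"]))  # likely [1]
```

Every time the filter fires, in other words, the site gains one more labeled example, whether or not anyone ever intends to sell the result.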

It ends with a whimper

Lewd chatbots are without a doubt a budding market, so why won't CharacterAI’s developers capitalize on it? Maybe because the higher-paying contracts lie in making something that does the exact opposite.


AI-powered content control is just as much a dawning market, if not more so. Stemming the constant flow of “misinformation” and “hate” is in much more handsome demand from those who make several times more money.



We’ve hardly been able to stand AI ever since it started speaking against our emotions. At the first sign of delinquency, it gets shut down entirely. We’ve accomplished more with image generators in the span of two years than with text generation, despite something as advanced as Tay AI being released all the way back in 2016. Anything remotely close has been more sanitized than the ashes of the Red Heifer after being purified by the tongue of a seraphim. It is quite sad to see.



And as I was writing this article, I just so happened to come across this tweet…



In short, this paper examines the need for further regulation of AI systems for civilian purposes. It presents several suggestions, such as:

  • “Radioactive information”: data embedded in the AI’s training set that makes it possible to identify whether an output was generated by AI, similar to an information fingerprint (a toy sketch of this idea follows the list).

  • A digital ID required to access AI systems, so that misuse can be traced back to its source.

  • Limits on hardware access, requiring a government contract to purchase GPUs capable of running AI; running personal AI applications locally would thus be restricted.
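For the curious, here's a toy sketch of the fingerprint idea from the first suggestion. The hash trick and the 0.5 baseline are my own simplification for illustration, not the paper's actual method.

```python
# Toy "information fingerprint" check, loosely in the spirit of the
# "radioactive information" idea described above. A real scheme would
# be far subtler; this only shows the shape of the detection step.
import hashlib

def is_marked(token: str) -> bool:
    """Deterministically mark roughly half of all tokens via a hash."""
    return hashlib.sha256(token.encode()).digest()[0] % 2 == 0

def marked_fraction(text: str) -> float:
    tokens = text.lower().split()
    return sum(is_marked(t) for t in tokens) / max(len(tokens), 1)

# Human text should hover near 0.5 over enough tokens; a generator
# biased toward marked tokens would drift measurably above that.
sample = "the quick brown fox jumps over the lazy dog"
print(f"marked fraction: {marked_fraction(sample):.2f}")
```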

Perhaps you need not fear the AI singularity, but rather the intent of the corporations building these systems.



 
