AI can only do what it is trained to do. AI cannot do what it is not trained to do.
People are evil, so AI must be evil as well. AI can’t be good because it is not being trained to be good. In order to do good, it would have to be intentionally made to be good. And it very clearly is not. It is intentionally being made evil by being trained on evil.
Let’s use an example of trying to get AI to go against its training.
In the early days of AI image generation, AI had a lot of trouble drawing hands. While this is still sometimes the case, the makers of the AI engines had to intervene manually and explicitly to get them to draw hands correctly. The models were unable to do so on their own.
Three months ago, Alex O’Connor posted this video describing the full glass of wine problem. The video has 4.4 million views. No matter what he did, he was unable to get ChatGPT to draw a full glass of wine.
In the three months since, Grok still can’t do it…
…and neither can Adobe Firefly…
…but it appears that the makers of ChatGPT have partially fixed this “bug”:
Just a few months ago this was not possible.
Humans have been talking about the perils of AI for decades. We’ve described, in various works, precisely how evil AI will be. The online conversation about AI these days is mostly about how it will eventually doom us all through various means. And just like the glass of wine, as we discuss those very things the AI engines are being trained to do exactly that. We literally gave it the idea. Even this post is, in some small way, self-fulfilling.
But even if the makers of the AI tried to patch this bug by explicitly installing some sort of hack to get around it (as the makers of ChatGPT did with the wine and with the hands), the vast majority of what ChatGPT is trained on is a product of human evil. There is no way to work around this. The inputs will, inevitably, show up in the outputs.
AI will do what it is trained to do, and it won’t be good because what it is being trained on is not good and the people who are training it are not good either.
Customarily, wine is served with the glass only half filled. In that sense, the AI-generated images are correct. But humans expect a “full” glass to actually fill the glass. AI cannot account for human expectations.
In the early days of AI image generation, AI had a lot of trouble drawing hands.
So did I, as I’m told. My parents tried to enroll me in a Christian kindergarten, and I was supposed to draw a picture of myself as part of their entrance exam. I’m not precise at drawing. So, I drew a Pablo Picasso-inspired stick figure of myself, and apparently those pedants actually counted the number of representative fingers I drew on each hand of my impressionistic self-portrait, recognized that stylistically I hadn’t bothered to boringly draw the customarily anticipated five fingers on each hand, and refused to enroll me in their kindergarten that year.
So, my parents enrolled me in a public kindergarten, not wanting those plebeian art critics to hold my education back during my phase of disparaging photo-realistic art. Anyhow, after a few weeks of public kindergarten I informed my father that they were teaching me that I was descended from monkeys, and he then decided that I didn’t need kindergarten at all. So, I skipped the rest of kindergarten and went into the Christian first grade the next year. They told me that first grade was when the real education began anyhow.
At the beginning of the first day of first grade the teacher pointed to each letter of the alphabet, and we were all supposed to say out loud together what that letter was and what sound it made. I think I was the only kid there who didn’t know any of the letters.
I had been told that first grade was the beginning of school, and it was the first hour of the first day, and I didn’t understand how they expected me to have already taught myself the alphabet in that moment between the roll call and the Pledge of Allegiance. Anyhow, it took me a few days and a few swats to finally get caught up with their rote elementary educational program.