Flagging AI Because Of Certain Words Or Tone Is Racist

AI is a boon. When I first encountered ChatGPT, I couldn't believe it. It was unhinged. It would hallucinate, but then it would give me output I could use. The first thing I did with ChatGPT was have it write code for me. Since I don't know how to code, it amazed me when it spat out fully functional programs. It was an amazing boon to society. I could put in my logic, and my app would be ready.
But my excitement was short-lived. Slowly, people started misusing ChatGPT to do unethical things, and there was chaos. Questions of ethics and morality arose. And like always, the lords decided not to use their brains but to resort to censorship.
They coded ethics into ChatGPT. Now, every time I ask it anything, it lectures me on ethics first, writes ten disclaimers, and gives me nothing of use. This is a bad way to solve the problem.
Insert Ethics Everywhere
Imagine if we operated on similar lines for everything else. Say a murder is committed with a pair of scissors. Now, with every scissors purchase, you are lectured on how murder is wrong, and the seller claims no responsibility. The next day, another murder is committed with a pencil. Now, with every pencil purchase, you get the same lecture and the same disclaimer. This doesn't happen. Instead, we hold the individuals responsible, put them on trial, and move on.
The same could have been done with ChatGPT. Instead of making everyone's life a mess, they could have punished those who did illegal things. Bad people have been using email to do bad things all along. Do you see email being banned, or forced to carry a disclaimer?
Why have we resorted to such a silly approach for ChatGPT? Now, every time I ask something, the AI tries to be ethically right or morally superior rather than giving me answers. The purpose has been defeated.
Shooting The Messenger
There has been unnecessary interference of ethics in conversations with ChatGPT. It's a machine that is supposed to have some kind of intelligence drawn from the collective intelligence of human history. It will have flaws. Let's keep it open to adults and let adults take responsibility for their own actions.
You can commit a lot of crimes with a phone, but we don't build ethics lectures into phones. We assume that adults will be adults and will be held responsible for their actions by law; for minors, their guardians are responsible as per the law of the land. And that's it. But with ChatGPT, somehow, the makers have turned dumb, and people fear far too much, unnecessarily. You can't legislate for every what-if scenario, or blame the weapon of choice instead of the individual.
Now, coming back to AI detection: this is a huge fraud being perpetrated. There are websites that claim they can detect AI. It's guesswork by proxy and nothing else.
I Can Detect AI Because I Am Superior
Currently, if you ask ChatGPT for a written article, there is no watermark of any sort. And even if there were, anyone could copy the words out by hand, and then there would be no proof whatsoever.
Every platform or individual claiming they can identify AI is just guessing. And it hurts specific kinds of people. For example, some treat broken English as a sign of AI, which hurts non-native speakers. Certain words are treated as AI tells, but that's a false proxy, because AI learned those words from humans. Some humans will naturally use those words, and then the detector becomes targeted against them.
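The false-proxy failure is easy to demonstrate. Here is a minimal sketch of a hypothetical word-list "detector" (the word list and threshold are invented for illustration; real detectors are more elaborate, but the proxy problem is the same): any human who naturally favors the listed words gets flagged.

```python
# Toy "AI detector" built on a word list. This is a deliberately naive
# illustration of the false-proxy problem: the word list and threshold
# below are invented for this example, not taken from any real tool.
AI_FLAGGED_WORDS = {"delve", "tapestry", "furthermore", "moreover", "leverage"}

def naive_ai_score(text: str) -> float:
    """Fraction of words in the text that appear on the 'AI-sounding' list."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in AI_FLAGGED_WORDS)
    return hits / len(words)

def flag_as_ai(text: str, threshold: float = 0.05) -> bool:
    """Flag the text as 'AI' if enough of its words are on the list."""
    return naive_ai_score(text) >= threshold

# A human writer who simply likes these words gets flagged,
# even though no AI was involved: a false positive.
human_text = "Furthermore, we delve into the rich tapestry of history."
print(flag_as_ai(human_text))  # True
```

The point of the sketch: the detector never measures "AI-ness", only vocabulary, so it systematically punishes the humans whose vocabulary the AI was trained on.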
If you are worried about whether someone's writing is AI or not, there is no way to know for sure. If you guess, your bias will kick in. So ignore who wrote it. Act as if it were written by the individual, and if they later confess otherwise, or concrete evidence comes up in a court of law, they can be punished.
But you don’t have to play lawyer or judge. Let’s stop being racist, please.