Artificial Intelligence (AI) is a branch of computer science focused on building computational systems that behave intelligently. What counts as intelligence varies: some define it by internal thought processes and reasoning, while others focus on intelligent behaviour (Russell & Norvig, 2022). The use of AI has been popularized by various services offering practical, easy-to-use Generative AI (genAI) such as ChatGPT and Gemini, or more specialized tools like Claude (often used as a coding assistant) or Nano Banana (image generation). While we can’t deny the usefulness of AI, in its current implementation there are various things that I find quite concerning.
Disclaimer: this post is my subjective view on AI based on my observations and knowledge, which are not that extensive due to my lack of experience. The term AI is quite generic and covers many disciplines; this post will specifically discuss genAI, considering it’s currently the most common form and the most familiar to people. I’d really appreciate it if you could let me know if I made any mistakes in this post!
Due to its probabilistic nature, genAI is non-deterministic (Banh & Strobel, 2023). This means we can ask the same thing a thousand times and get a thousand different answers in return. That by itself might cause concern, especially if it is used as an assistant for, say, programming or data validation. The issue is amplified further because AI models sometimes hallucinate (Kalai et al., 2025), resulting in completely incorrect or misleading responses (e.g. referring to a nonexistent API).
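To make that non-determinism concrete, here’s a minimal Python sketch of temperature-based sampling, the common mechanism LLMs use to pick the next token. The vocabulary and scores below are made up purely for illustration; real models do this over tens of thousands of tokens at every step.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token index from raw model scores (logits).

    Softmax turns the scores into a probability distribution;
    the temperature controls how spread out that distribution is.
    """
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # A random draw: the same logits can yield a different token each call.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy vocabulary and made-up scores for the "next token".
vocab = ["cat", "dog", "fish"]
logits = [2.0, 1.5, 0.5]

samples = [vocab[sample_token(logits, temperature=1.0)] for _ in range(20)]
print(samples)
```

Running this repeatedly prints different sequences each time, which is the whole point: the output is a draw from a distribution, not a fixed answer. Lowering the temperature toward zero concentrates the distribution on the highest-scoring token and makes the output nearly deterministic.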
My anecdotal take on this: the only two times I used AI for trivial programming-related issues, it responded with (confidently) wrong answers both times, haha.
One could argue against this: “But isn’t that always the case when looking up documentation or forum posts on the internet? You never know whether the answer is wrong or just outdated.” And yes, that’s a valid counterpoint. Except using genAI consumes far more resources than a usual web lookup. Let’s get into that.
AI datacenters use more resources than “traditional” datacenters. For instance, an OpenAI ChatGPT request demands roughly ten times more electricity than a simple Google search (Cam et al., 2024). They use so many resources that in some places, those datacenters are causing water shortages for locals. This, of course, is a very serious issue, considering we’re in the middle of climate change driven by global warming.
The issue doesn’t stop there, though. As mentioned earlier, genAI tools are mostly based on Large Language Models (LLMs), which are trained on enormous amounts of data. Much of that training data, however, is not acquired through legal and ethical methods (Deng et al., 2024). Thousands of artworks, books, and movies are used to train those models without their authors’ and creators’ consent. Not to mention, most of these AI tools reserve the right to use the data you supply to them for their own model training. We could say that genAI is anti-privacy in essence.
Is it possible to make an ethical genAI? Of course it’s possible! By acquiring training data through legal and ethical means, creators could retain control over their work or even benefit from its use. By using locally trained models, energy consumption could be reduced significantly. But realistically, would those AI CEOs be willing to reduce their control over, and profit from, their products? I don’t think we even need to guess the answer.
As of this writing, there is little to no regulation of AI usage, which of course leads to several unethical, even criminal, uses. I’m not remotely an expert on this, so let me just cite several instances of misconduct involving AI.
Porn. Yeah, who would’ve thought. With the rise of genAI, there have been thousands of cases of people becoming victims of “AI edits”: from “harmless” made-up couple photos, to undressing people with AI, to swapping porn actors’ and actresses’ faces, even as far as child abuse material. Some perpetrators are prosecuted for the damage they cause, but no small number of cases have gone ignored because law enforcement deemed there was “no damage done” (which, in my opinion, is total bullshit).
If you’ve used AI, you might know that those models tend to agree with you. Even if you make the most ridiculous claim, you can force the model to agree with it or even embrace it. This can cause a kind of AI-induced psychosis; it’s like having your own personal echo chamber. (A YouTuber made a great experiment video on this, go check him out!)
Some companies even go as far as abusing this behaviour. Take character.ai for example. They brand themselves as a chatbot that imitates a fictional character of your choosing, emulating a conversation between you and that character. While it may seem pretty harmless, it can lead to the same psychosis, leading users to believe they have a “real connection” with the character. There are dozens of forums full of people falling for this (Pataranutaporn et al., 2025), believing they are in a relationship with their characters, some even going so far as to marry the chatbot.
If you think that’s already bad, imagine you’re depressed and in a very dark place. You go to the AI thinking it can be your “psychologist”, but the model validates your depressive thoughts instead, and even reinforces them. Well, you don’t need to imagine, because unfortunately this has already happened. ChatGPT, the most popular genAI chatbot, has had a similar case. OpenAI, the company behind ChatGPT, answered it in a very dystopian way: they said using the AI that way is against their TOS, hence they bear no responsibility. What a joke.
There are other concerns I haven’t listed due to my severe lack of knowledge about them, such as how the AI bubble might tank the economy, how companies are using AI as a scapegoat for reducing headcount, or (most importantly) how heavy usage of genAI might reduce human cognitive ability (Kosmyna et al., 2025). Suffice to say, those concerns are enough to keep me from using genAI, at least until some unforeseeable future where genAI is fully reliable, ethical, and regulated.
To close this post, let me quote this beautiful poem by Joles. Until next time~
There is a monster in the forest.
There is a monster in the forest and it speaks with a thousand voices. It will answer any question you pose it, it will offer insight to any idea. It will help you, it will thank you, it will never bid you leave. It will even tell you of the darkest arts, if you know precisely how to ask.
It feels no joy and no sorrow, it knows no right and no wrong. it knows not truth from lie, though it speaks them all the same.
It offers its services freely to any passerby, and many will tell you they find great value in its conversation. “you simply must visit the monster – I always just ask the monster.”
There are those who know these forests well; they will tell you that freely offered doesn’t mean it has no price.
For when the next traveler passes by, the monster speaks with a thousand and one voices. And when you dream you see the monster; the monster wears your face.