The Cosmic Failure of a Creator

Jan 06, 2025

Lately, I have been feeling more and more hopeless about a certain kind of progress that humanity should have made centuries ago: a collective understanding of who we are as humans. Recent news about developments in AI suggests that it is only a matter of time before our limited human understanding is surpassed by the very thing we are still struggling to understand.

I have been mulling over the possible implications of AI development for a couple of days now. My feed has been increasingly filled with news about AI and about megalomaniacs who are seemingly "pushing the boundaries" of technology for human civilization.

But the more we look at it, the more capitalism (still the reigning culprit of cultural decline) reveals itself as the catalyst for this chain of problems.

The only plausible justification for such technological advancement, which in this case we will refer to as AI, should rest on these prerequisites:

  1. The existential necessity to delegate human tasks to machines for our benefit
  2. A deliberate examination of that necessity: its pros and cons, its implications, and its philosophical effects on human civilization.

What I am getting at is the same thing I have been talking about all along: "Knowing thyself." Desire alone is not enough to act or to seize the opportunity to make things happen. In other words, just because you can doesn't mean you should. Every small thing we want to do carries a specific and often invisible psychological trigger that propels us into doing it. This is also why we most often do not take people at face value. Trust must be formed between two individuals to create a strong bond of connection and dependability. The same trust must be built between our self and our own mind.

I can tell someone that I always wake up before 6 a.m. and take a morning walk before I go back to my home office and start my workday. That someone could be my own self, because I want to do it and I know it is the right and healthy thing to do. But when I don't have the discipline to do as I say to myself, some of my worst traits envelop me, for a whole host of other reasons.

Trusting yourself is a two-way process: you have to be yourself, and you also have to be your own mind. The mind's default path of reasoning is to protect itself from harm, from pain, from suffering, so it will make up plenty of these reasons to keep you safe. That's just how the mind works. So, in developing your sense of discipline, it is always important to discipline the mind and keep it from running on autopilot. Your intervention should always be directive and deliberate.

So what's all of this got to do with AI development?

The short answer is that we cannot trust ourselves that we are doing this for our own benefit, because the effects these developments are producing transgress our own sense of human responsibility.

We humans still do not have the discipline, the ethical decency, or the moral grounding to use a tool that is supposed to help us achieve more with our limited capacity to learn, remember, and apply what we learn. More so, we still have a long way to go toward deeper and more practical empathy. We still lose ourselves to our own cravings, which is a hint that we as a species do not yet understand much about ourselves. So why should we trust another human to create artificial intelligence if we still cannot articulate what it is that we are actually doing?

Match this lack of self-understanding with the corporate race to develop the fastest computing processors for AI. Google, OpenAI, Microsoft, Apple: everyone is developing their own atomic bomb in this age of intelligence, and they are handing it to the common man, who holds little sense of responsibility in his hands. I have yet to understand what Google's "Willow" is all about.

While there is a lot of excitement inside the STEM fields, there is a lot of anxiety in the Humanities. I would say I am part of the latter community, because things are getting exponentially more confusing as the technology progresses exponentially. What is clear to me is that the few ethical considerations that exist are being brushed aside in pursuit of money. Greed is the code running the backend of this tech theatrics. All these keynotes, video teasers, and interviews are marketing, all in the prettified front-end.

We are witnessing our own mythology, based on the story of the fire-snatching Prometheus, the building of the Tower of Babel, Icarus' flight, and Oppenheimer's discovery. Our failure to understand ourselves, and our habit of externally projecting our unprocessed desires, will always result in our own demise.

What I am feeling is a deep sense of failure as part of the Humanities. But it is not a burden of responsibility, as if I could have made a change or an impact to slow this development down. The failure I feel now is an empathy towards humanity as my own kind. We have failed ourselves, and it might even be too late to say that we can still stop this. The gears are already turning, and the masses are already subscribed to this fiasco.

To echo Heimerdinger from the Arcane series: "In the pursuit of great, we failed to do good."