AI

October 31, 2025

My Thoughts on AI

AI [1] is incredible, and it excites me that we are using computers to do complicated predictions for the betterment of the world. I am particularly convinced by what AI/ML has done and can do for the medical world, (cyber) security, agriculture, or materials science. I think there is a useful cultural shift underway, in which we understand that lots of classification problems can be solved quite well with a rigorous, data-driven approach.

However, I find the AI 'boom' concerning. Swathes of very ambitious people are eager to see how they can create something useful with AI. We are coming up with solutions like "Replacing your HR staff with generative AI", "Huge gambling machine", or "Massive army of AI drones". Huge tech companies and governments are pumping money into building infrastructure for ideas like this. So much, in fact, that in the US, estimates put data center construction at 92% of GDP growth. It seems many entrepreneurs would love to see generative AI take a sledgehammer to any and all industries, in the hopes that their rebuilding would result in a tidy profit.

It leaves me wondering: does anyone actually want any of this? It's unclear to me how many of these seemingly brilliant ideas will benefit any user. We should ask what makes us happy: feeling safe, skilled, able and healthy, having healthy relationships, being part of a community, and feeling purposeful. A normal life can already fulfill these things: I have a job; I am a shoemaker. I perform skilled work and am well educated. I am able to get to work using my bicycle. I can easily travel and don't feel unsafe doing so. I look after my parents, I chat with my customers and the butcher and the lady at the grocery store. I am well respected as a shoemaker, and my income helps pay taxes to keep the cycle lanes in order and to keep the shoes on people's feet stuck together.

And yet there are those who want nothing more than to convince us it could be made better. And we'll believe them, because technology is of course good; it brought us the polio vaccine and the Nintendo DS and so on. It would be better if my accountants were chatbots. It would be better if my communications with my letting agency were generated. It would be so much easier if my butcher trip were replaced by a 'meat subscription' that modelled what cuts of beef I would most like. It would be so efficient if the newspaper were replaced by a generated summary of the news that is most useful to me. It would save me so much money to get my logo designed by an image generator rather than an artist. I would like to see my search results aggregated into a couple of maybe-correct, maybe-incorrect, authorless paragraphs. The best way to spend my evenings is on the fun video-sharing app that repeatedly selects a perfect fifteen-second video for exactly me until I have spent hours there without even realising it. I want the stock market to be completely uncompetitive for people without petabytes of data to train huge models. I want gambling companies to have further edge against me without my knowing it. I don't want to read human prose. I don't want to write human prose. I don't want to see human photography or paintings. I don't want to hear human music.

This issue isn't specific to AI. It's more that the AI boom has made apparent to me a toxic startup culture: one where people are empowered to collect enormous sums of venture capital if they are willing to market a technology they themselves don't believe in. It is a performative culture, one of endless promises of profit but no promises of bettering our lives. It is a culture where the idea matters far more than the execution, because the actual impact of the product is irrelevant to its founder. We scramble to find a use for this technology and, in our desperation, don't seem to realise that we are estimating a product's profit by the number of artists it can put out of work? [2]

I am quite sure that this culture will continue to cause immense damage, but I am still confident that the improvement of AI/ML techniques will bring about positive change. I'd like to see people using ML to make more informed decisions about city planning, budgeting, or climate change policy. People are doing this, and we will see results; it's just that there seems to be an immense amount of effort put into using novel technologies to strengthen destructive, harmful oligopolies.

Footnotes

  1. A quick note on definitions: 'AI' to me is any program that seems 'intelligent'. 'AI' therefore encompasses purely rule-based chatbots that a human can write by hand. 'Stockfish', a program from 2008 that plays chess better than any human, also falls under the umbrella of 'AI'. Of course, ChatGPT is AI too. It's a bit loosey-goosey, but the point is that ChatGPT is just one branch of AI whose goal is to 'generate' novel content. ChatGPT, Sora, and the like are all called 'Gen-AI'.

  2. On another note, I find people's willingness to put artists out of work absolutely insane? When I consume art, I consume it as a commentary from the artist. There is no commentary from a machine that I care about. I think it is because I used to draw so much, but when you look at somebody's painting, everything means something. I draw mouths a certain way because I found Kim Jung Gi particularly inspiring and so copied him. My curved lines are really a few big straight lines, because smooth lines make me feel some type of way and straight lines a different type of way. Or maybe my smooth lines just don't come out right. The way an artist chooses to draw men or women tells you something about their views on gender, even if they don't mean to make a commentary. You can tell where someone learned to draw: online or at university, in China or in America. The journey for a normal person to go from not being able to draw to being a skilled artist is filled with inspirations and practice and careful observations about how they experience their world. That's what makes art interesting.