AI crap

August 29, 2023 on Drew DeVault's blog

There is a machine learning bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by machine learning. But it will probably be crappier, not better.

Contrary to the AI doomers' expectations, the world isn't going to go down in flames any faster thanks to AI. Contemporary advances in machine learning aren't really getting us any closer to AGI, and as Randall Munroe pointed out back in 2018:


[xkcd panel: a timeline from now into the distant future, divided at "AI becomes advanced enough to control unstoppable swarms of robots" and "AI becomes self-aware and rebels against human control". The period from self-awareness onward is labelled "the part lots of people seem to worry about"; Randall is instead worried about the part between these two milestones.]

What will happen to AI is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots. LLMs are a pretty good advance over Markov chains, and Stable Diffusion can generate images which are only somewhat uncanny with sufficient manipulation of the prompt. Mediocre programmers will use GitHub Copilot to write trivial code and boilerplate for them (trivial code is tautologically uninteresting), and ML will probably remain useful for writing cover letters for you. Self-driving cars might show up Any Day Now™, which is going to be great for sci-fi enthusiasts and technocrats, but much worse in every respect than, say, building more trains.
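For readers who haven't met the baseline being compared against: a Markov chain text generator is nothing more than a lookup table of which word was observed following which, sampled repeatedly. The Go sketch below is my own toy illustration of that idea (the corpus string and function names are made up for the example, not anything from this post or any vendor's code):

```go
package main

import (
	"fmt"
	"math/rand"
	"strings"
)

// buildChain maps each word in the corpus to the list of words observed to follow it.
func buildChain(text string) map[string][]string {
	words := strings.Fields(text)
	chain := make(map[string][]string)
	for i := 0; i+1 < len(words); i++ {
		chain[words[i]] = append(chain[words[i]], words[i+1])
	}
	return chain
}

// generate walks the chain from a starting word, sampling one successor at each step.
func generate(chain map[string][]string, start string, length int) string {
	out := []string{start}
	word := start
	for i := 0; i < length; i++ {
		successors := chain[word]
		if len(successors) == 0 {
			break // dead end: the word never appeared mid-corpus
		}
		word = successors[rand.Intn(len(successors))]
		out = append(out, word)
	}
	return strings.Join(out, " ")
}

func main() {
	corpus := "the quick brown fox jumps over the lazy dog and the quick red fox runs past the dog"
	chain := buildChain(corpus)
	fmt.Println(generate(chain, "the", 12))
}
```

The output is locally plausible word salad with no memory beyond the previous word; LLMs are that same "predict the next token" game scaled up by many orders of magnitude, which is why the results are better but not categorically different in kind.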

The biggest lasting changes from machine learning will be more like the following:

- AI companies will continue to generate waste and CO2 emissions at a huge scale as they aggressively scrape all internet content they can find, externalizing costs onto the world's digital infrastructure, and feed their hoard into GPU farms to generate their models. They might keep humans in the loop to help with tagging content, seeking out the cheapest markets with the weakest labor laws to build human sweatshops to feed the AI data monster.

- You will never trust another product review. You will never speak to a human being at your ISP again. Vapid, pithy media will fill the digital world around you. Technology built for engagement farms – those AI-edited videos with the grating machine voice you've seen on your feeds lately – will be white-labeled and used to push products and ideologies at massive scale and minimal cost, from social media accounts which are populated with AI content, cultivate an audience, and are sold in bulk, in good standing with the Algorithm.

All of these things are already happening and will continue to get worse. The future of media is a soulless, vapid regurgitation of all media that came before the AI epoch, and the fate of all new creative media is to be subsumed into the roiling pile of math.

This will be incredibly profitable for the AI barons, and to secure their investment they are deploying an immense, expensive, world-wide propaganda campaign. To the public, the present-day and potential future capabilities of the technology are played up in breathless promises of ridiculous possibility. In closed-room meetings, much more realistic promises are made of cutting payroll budgets in half.

The propaganda also leans into the mystical sci-fi AI canon: the threat of smart computers with world-ending power, the forbidden allure of a new Manhattan Project and all of its consequences, the long-prophesied singularity. The technology is nowhere near this level, a fact well-known by experts and the barons themselves, but the illusion is maintained in the interests of lobbying lawmakers to help the barons erect a moat around their new industry.

Of course, AI does present a threat of violence, but as Randall points out, it's not from the AI itself, but rather from the people that employ it. The US military is testing out AI-controlled drones, which aren't going to be self-aware but will scale up human errors (or human malice) until innocent people are killed. AI tools are already being used to set bail and parole conditions – they can put you in jail or keep you there. Police are using AI for facial recognition and "predictive policing". Of course, all of these models end up discriminating against minorities, depriving them of liberty and often getting them killed.

AI is defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of AI are going to make the world worse. The AI revolution is here, and I don’t really like it.

Flame bait

I had a much more inflammatory article drafted for this topic under the title "ChatGPT is the new techno-atheist's substitute for God". It makes some fairly pointed comparisons between the cryptocurrency cult and the machine learning cult, and the religious, unshakeable, and largely ignorant faith in both technologies as the harbingers of progress. It was fun to write, but this is probably the better article.

I found this Hacker News comment and quoted it in the original draft: “It’s probably worth talking to GPT4 before seeking professional help [to deal with depression].”

In case you need to hear it: do not (TW: suicide) seek out OpenAI’s services to help with your depression. Finding and setting up an appointment with a therapist can be difficult for a lot of people – it’s okay for it to feel hard. Talk to your friends and ask them to help you find the right care for your needs.
