many open books

Thinking machines have lurked in my imagination for as long as I can remember, seeping in through books and movies, even philosophy. I’ve written fiction about them. I probably always will. While I doubt that generative AI is a step on the path to artificial general intelligence (AGI), it remains an amazing achievement and I believe that, thoughtfully trained and used, it could represent a hugely powerful force for good. No technology grows up in isolation, however. Given the vast cultural, natural and labour resources that large language models require during their production, and the murky imperatives and objectives of the companies and governments that will deploy them, they can only be properly assessed in those wider contexts.

So what does that reality look like?

Work: robots and humans

In 2024 a San Francisco-based company named Artisan launched a now-notorious billboard campaign for its AI sales development representatives with the tagline “Stop Hiring Humans”. This, alongside messaging that included phrases like “Artisans Won’t Complain About Work-Life Balance”, was a deliberate provocation. Artisan insists that the campaign’s objective was merely to start a conversation and that, in fact, they love humans, work-life balance and all. “The real goal for us is to automate the work that humans don’t enjoy, and to make every job more human.”

Artisan might really believe this. And, given the right context, the dream might even be achievable. Certainly it’s one of the main assertions made by AI companies (and by automation advocates since at least the 1830s). Just try searching Google for “automation frees up workers” to pick from dozens of variations on the contemporary theme.

This TED-talkery aside, though, do any of us really believe that automation will simply be used to remove the drudgery from the lives of workers?

Given the economic incentives involved, it is more plausible that AI will be used to invert the dream. Instead of liberating sectors of the workforce from routine tasks, such tools already use surveillance and algorithmic management to turn humans into the servants of technology.

At the same time, as AI improves, employers will almost certainly use agents to replace workers rather than enhance their working lives. If business owners fail to cut costs and extend productivity when the option is available to them, how will they answer to their shareholders or compete with other companies who do make those savings?

If you think this sounds like an alarmist prediction, consider this very contemporary Bluesky post from novelist and copywriter Kameron Hurley:

I, too, was shocked at how fast LLM turn happened. They pushed us for 18 months. We’d gotten to, “Ok, here’s how we’re using it to save an hour on writing by generating some outlines and text blocks.” And then 6 months after that 70% of team was fired and we who remained became custom GPT editors.

https://bsky.app/profile/kameronhurley.com/post/3llgpxxlthk2d (must be logged in to read).

It is possible that, with their billboard campaign, Artisan were merely deploying irony to promote debate as they claim. It is, of course, entirely improbable that they were simultaneously winking at future customers.

Theft…

Meanwhile the AI companies wield bots to hoover up Web content for their models. This includes information generated by journalists and site users, but also copyrighted material never intended for free distribution. Recently, Meta was caught training its model on content held on the LibGen site, a pirate library containing millions of books and papers. A database called Books3 was also used to train LLMs. OpenAI have been accused of accessing paywalled books from O’Reilly without permission.

The companies are seeking, and may get, special legal status with regard to copyright. This would regularise their behaviour and, in future, force content creators to opt out of having their work consumed and regurgitated without compensation. Whatever happens, it is already the case that individuals and small companies are penalised for copyright violation while the AI companies are given, or take, a pass.

…And laundering

The fact that LLMs are, to some extent, black boxes makes them excellent laundries for the intellectual labour of human creators. The hard work of creatives, often underpaid or unpaid, goes into the hopper. Thanks to the mysterious workings of the magic box in the middle, the derived work that emerges at the other end cannot be shown to have been taken without attribution or remuneration.

Net citizenship

AI is hungry for content, and it isn’t fussy about where it finds it. Online, the AI bots are aggressive, sucking up the resources of site owners and often ignoring the rules for acceptable engagement left for them in special robots.txt files.

One of my primary contracts has involved me in much wrestling with badly behaved AI content bots over the last couple of years. These agents pop up and surf millions of pages, ignoring the directives we set in our machine-readable policy file. When their IPs are banned, they simply resume from a new server. This behaviour puts a strain on resources and locks out legitimate users. According to the New Scientist, for example, the costs involved in handling the demand represent an existential threat to the Wikipedia project.
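For anyone unfamiliar with the mechanism, a robots.txt file is just a plain-text list of per-crawler rules served from a site’s root. Here’s a minimal sketch of the kind of thing site owners publish; GPTBot and CCBot are real crawler user-agent names (OpenAI’s and Common Crawl’s, respectively), the example.com address and the /admin/ path are placeholders, and, as described above, compliance is entirely voluntary.

    # robots.txt, served at https://example.com/robots.txt
    # Ask known AI training crawlers to stay away entirely
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    # All other crawlers may index everything except the admin area
    User-agent: *
    Disallow: /admin/

Nothing enforces any of this. The file is a polite notice, which is precisely why a bot that chooses to ignore it can do so without technical consequence.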

The effect of Google on online publishing has always been contentious. Try looking up a recipe online (and then scrolling for ever and ever and ever and ever to get to the actual instructions) to see the material effects of SEO on the substance and structure of some types of article. Still, the relationship has been more or less symbiotic over the years: Google provides traffic to publishers, and publishers provide content for Google to index.

The advent of Google’s AI Overviews will further complicate this balance. A recent report suggests that while sites included in an AI Overview tend to benefit from inclusion, there are ‘significant harms’ for sites which do not feature. It is reasonable to conclude that this is because the summaries, containing information derived from those very sites, dissuade visitors from clicking through (see Laundering above).

Exploitation and bias

Meanwhile, since most of the AI firms are Silicon Valley start-ups, you might think that life as an employee is all shuttle buses and free lunches. That’s probably true for rockstar US-based scientists and engineers. On the other hand, if you’re processing data for the AI companies you might be employed as a contract worker in East Africa for less than two dollars an hour. As a content moderator you might be exposed to the kind of material that can leave you psychologically scarred.

At the same time, large language models, trained on vast swathes of public data, tend to reflect the biases already inherent in the culture. As Vox put it:

Facial recognition software has a long history of failing to recognize Black faces. Researchers and users have identified anti-Black biases in AI applications ranging from hiring to robots to loans.

AI companies largely recognise the fact of bias but do not necessarily agree about its nature. Although most commit to correcting structural racism, for example, others (notably xAI, Elon Musk’s reactionary vehicle) deny its existence. xAI describes racism as a ‘social phobia’. According to Business Insider:

Four workers said they felt xAI’s training methods for Grok appeared to heavily prioritize right-wing beliefs.

“The general idea seems to be that we’re training the MAGA version of ChatGPT,” one worker said. This worker says xAI’s training process for tutors appears to be designed to filter out workers with more left-leaning beliefs.

Ecological impact

According to the United Nations Environment Programme, the rush to embrace generative AI is likely to inflict a huge ongoing environmental toll.

The proliferating data centres that house AI servers produce electronic waste. They are large consumers of water, which is becoming scarce in many places. They rely on critical minerals and rare elements, which are often mined unsustainably. And they use massive amounts of electricity, spurring the emission of planet-warming greenhouse gases.

Given that ecological disaster is now a matter of degree rather than probability, and given that generative AI is often a solution in search of a problem (see also cryptocurrencies and the metaverse), it might be sensible to spend a little more effort on an analysis of costs and benefits.

It’s obvious by now that this is not going to happen, however. In fact, none of the concerns described in this article is likely to receive serious, actionable consideration. For that reason, I’m not sure how to place myself, and the work I do as a writer, coder and employer, in relation to generative AI.

The argument that “it’s coming anyway” seems morally suspect at best, although generative AI is already an unavoidable part of the workflow in some of the projects I work on. It will likely remain ubiquitous and grow more so over time. While it is not intrinsically good or bad, neither is it value free. We should not treat it as a magic fait accompli, as if it arrived fully formed beyond our control, and as if we had no responsibility to consider the ethics of its creation and use.

As part of this wrap-up, I searched for “ethical AI” and found a promising hit. Then I discovered that the company touting its ethical credentials provides AI services to both the US armed forces and the oil industry. Back to the drawing board there. Also, the problem with any definitive statement of policy in relation to AI is that it risks being overtaken by events in five minutes flat. That isn’t to say that amazing advances are inevitably on their way (possibly quite the opposite). All we can be sure of is that the state and effects of AI will continue to change, and to change fast.



