Humans, nature and most systems are too fragile to keep up with technology's rapid progress

March 7, 2024

In previous articles I explained why technology and full-throttle capitalism, in their current form, are both good and bad for us. Technological progress is awesome, and has led us to a golden age - but it sometimes does unintended (or intended) damage to humans and nature. Today we have a fast-moving, essentially self-driven innovation rocket ship, operating too quickly for humans and nature to adapt to its negative effects, all while reinvesting in itself. That's why, when making decisions in the work we do at tech companies, questions about human and environmental fragility should be a core part of our thinking. (And yes, this is all still quite philosophical - I am slowly zeroing in on practical examples of how we could be more human-oriented as workers in tech.)

Crucially, for any company serious about surviving in the long term, this should be part of strategic thinking. Eventually, the negative impact of our practices catches up with us. Think of Facebook losing users after Cambridge Analytica, quitting Instagram becoming the norm once the negative impacts of doomscrolling became well known, or factories facing large taxes for failing to control emissions. Being a first mover in acknowledging the fragility of humans or the environment is a long-term competitive advantage.

So let’s look at some short examples of fragility - in personal wellbeing, socioeconomics, the environment, and politics. In each case, we’ll focus on fragility caused by the tech world.

Human fragility: The negative mental health impact of social networks

Social media is the new smoking. We’re addicted, know that regular use is bad for us* but can’t stop.

Social media usage is linked to lower life satisfaction, and even structural changes in the brain. The effect differs from social network to social network.

* Obviously each example is a generalised statement and your mileage may vary. Social media is great for keeping in contact with friends, learning about what’s going on in your community, or killing time on the toilet. If used perfectly - exposed only to the right groups, or very intentionally for short periods of time - there’s no doubt that the positives can outweigh the negatives. Otherwise we wouldn’t have started using them in the first place. But we aren’t all that disciplined, and for many of us the negatives outweigh the positives.

The negatives here weren’t intentional. Companies chased user engagement for profits, and we can’t hold that against them. However, human fragility (probably) wasn’t considered in the design of these networks. As a result, many of us have been too slow to adapt to this new tech thrown at us, and have been left worse off.

Had fragility been considered, we could have been faster to market with digital wellbeing features (such as screen time tracking, app usage limits, and do-not-disturb modes), or developed preemptive warnings about the negatives of social media usage.

Socioeconomic fragility: Job losses due to AI

This is one of the most-covered topics in the news, so I’ll keep it brief. More than a third of business leaders say AI replaced workers in 2023, with estimates of up to 80 million jobs being replaced in the next 5 years. AI, with the surge of finally-useful Large Language Models (LLMs) like ChatGPT, has already caused job losses.

Let’s be clear - this is not a new phenomenon. Automation has always led to job losses, and that’s an inevitable price we pay for technological progress - good for humanity as a whole, but often very bad for some individuals.

Automation helps the average employed person do fewer menial tasks as we move one level up the technological pyramid. LLMs won’t be the only cause of this - self-driving trucks, for instance, would be another massive hit to job security, and fewer and fewer junior animators are needed to generate media when prompts can do it for them.

I won’t try to argue whether each technology will offer a net benefit to society - we just can’t answer that without perfect data and knowledge of the future. However, this is another case of fragility: while the whole is doing quite well, individuals and groups are becoming vulnerable at a more rapid rate than before.

I also won’t claim to be even remotely well versed in economics. The answer to this kind of fragility is probably some combination of increased grants for the unemployed, free retraining or, in the long term, an entire redefinition of the typical working life. The problem is that these adaptations come long after the technology that caused the issues - because fragility wasn’t considered, or wasn’t given enough weight. (Admittedly, there has been a LOT of worry about AI since ChatGPT’s release, but it hasn’t resulted in anything for the average worker, given the competitive advantage of implementing it in the workplace and laying off workers ASAP.)

Environmental fragility

Skipping climate change, the obvious example, we have e-waste. Do you remember what tablets are? Where is yours right now? Chances are, it’s waste.

New technologies arrive and replace old ones before we actually need them to. Sure, you don’t really need that new iPhone, but wouldn’t it be nice to have the better camera for your vacation photos?

E-waste is the fastest-growing form of domestic waste, and only 20% of it gets recycled. And it’s not just about the consumption: electronic products are of course not biodegradable, and are more likely than other waste to contain toxic chemicals.

Here, the environment can’t adapt fast enough to new technologies. But that wasn’t a consideration when tablets - the handy new device midway between cellphone and laptop! - were being advertised; profits came first. Now, perhaps too late, companies are slowly reacting with recycling programs.

Political fragility: Facebook, Cambridge Analytica and the Trump election

Social media arguably won the election for Trump. Cambridge Analytica was a British political consulting firm that became infamous for its role in the 2016 U.S. presidential election and the Brexit referendum. The company specialized in collecting vast amounts of data from social media platforms, notably Facebook, without the explicit consent of millions of users. That data was used to construct detailed psychological profiles of voters, who were then targeted with personalized political advertisements aimed at influencing their voting decisions. Cambridge Analytica’s methods raised significant concerns about privacy violations and the ethical use of personal information for political purposes. The firm claimed its techniques were pivotal in swaying public opinion during critical political events, although the effectiveness and extent of its influence remain subjects of debate and scrutiny.

Of course, this was more than just a natural impact of technology on our fragility - bad actors had to seize the opportunity and take advantage of voters. Still, it shows how a combination of technologies intended for (using the term loosely) good was used against us. The result changed the course of political history.

We’re still finding the answer to this, and it will probably take a while longer. We employ a mixture of user-led, manual and automated content moderation to flag fake or misleading content, but that content still proliferates.

Stopping technological progress is not the answer

All the above examples were, to some extent, inevitable results of new technology. In every case, we learnt our lessons only after the technology became mainstream, and only then attempted to combat the negative effects. Each example disrupted how things normally worked - a bit too well - and we’ve attempted to put a band-aid on the problem.

For none of them (except maybe tablets for mainstream use, RIP) should the answer necessarily have been to pause the technology, as was argued for AI last year. Obviously that’s not going to work - other companies and other governments will use that time to their own advantage, to catch up or overtake. Rather, the answer should always be some magical balance between technological progress, proactivity about the impact on fragile systems, and government legislation.

What does the future bring?

The future is still a big question mark. There are a lot of new technologies on the brink of mainstream adoption, or finally finding their business case. Think about each of them: what might be the unintended consequences?

And the impact of these technologies on the fragile is up to us.

Companies often claim to be in it for the long term, with “100 year strategies” or the like. Yet at the same time they operate in very small windows, appeasing shareholders at quarterly intervals, or private equity at even smaller ones. Let’s acknowledge the crucial tension there: worrying about fragility - the effect of your company on its users and environment - beyond what regulation demands is a luxury few can afford. However, being a first mover in acknowledging the fragility of humans or the environment is a competitive advantage - in the long term.

However, if we’re true to our long-term strategies, or our morals, the buck stops with us for building things right. Otherwise individual contributors could always pin the responsibility on their manager for pressure on the timeline. Managers on the CEO. CEOs on the shareholders. Shareholders on the government, which is ultimately responsible for regulation and protecting everyone’s interests. And governments, or companies, point at other governments, worrying about being left behind by environments with less regulation. So don’t make that mistake - be the voice for doing things properly, for acknowledging the impact of our tech on fragile systems.

© 2024 Ryan Anderson