An introduction to the Humanising Data Science series

February 13, 2024

Nobody knows what the future holds. The economy, climate change, the arrival of Artificial General Intelligence (AGI), when nuclear fusion will bring us clean energy, whether we’ll be laid off within a year, or even what’s for dinner.

Yet as data scientists, and often to a similar extent as other tech workers and managers, our job is to predict the future to make sure the companies we work for are profitable in the long term. Data science is a huge field, but on average, in big tech, we are there to add micro-improvements to the bottom line. Machine Learning data scientists automate learning in order to optimise marketing, advertising and sales. Product data scientists run experiments or analytics to improve a product, often a new stream of monetisation or growth. Analysts look at existing data to try to understand what has happened. But in the end, it's all about the bottom line.

Growth is the key here. We are chasing it. But somewhere along the way, it feels like we’ve lost the plot. We chase KPIs, we chase engagement, we chase profits, we chase more money - but we rarely chase user happiness or how we’re actually affecting the user. Especially in big tech, we’ve abstracted the human end users to such an extent that we’ve forgotten about them.

I’m working in my third company as a data professional now, in my fifth department, and I rarely hear anyone outside of designers ask how the users are doing, or take the user perspective.

And here’s the crux of it: while the core task of a data scientist is to predict the future, we’re simultaneously unleashing products on the world that are, to some extent, damaging the future. Some of us build things that automate away others’ jobs or make them less profitable (Uber, OpenAI). Others build or maintain things that have made a lot of people depressed (Meta). Some of us power huge corporate machines that are designed to use economies of scale to take profits away from small businesses (Amazon).

Humanising Data Science and Technology

I am not trying to shame us all into switching jobs, or careers. Rather, the point of Humanising Data Science is to bring us back to thinking, first, about the humans we are affecting.

Humanising Data Science is a loosely tied together set of thoughts, principles and ideas aimed at calling us to be better data scientists - to humanise data science - to bring our craft, and technology in general, back towards building awesome stuff that makes people happy, versus maximising short-term shareholder profits at the expense of humanity. (And yes, I’ve used some grandiose terms here: “expense of humanity”, “damaging the future”. Future articles will explain.)

This isn’t a course (yet…). It doesn’t always follow a perfect logical order. But I’d like to think that it’s a core set of thoughts and examples that we should be thinking about to be better data scientists, developers, managers, or just people who use technology. Most of all, I’d like to get people thinking. People agreeing. Hell, I’d love to get people disagreeing - because I believe we can do a lot better if we have arguments about the things that matter.

Change and improvement have to come from the ranks of lower-level workers in tech - the data scientists, mid-level managers and developers like us - as much as they need to come from the CEOs, the boards, the thought leaders. These articles are aimed at all of us. However, they’re best tailored for data scientists - because data science is now a core part of every tech company, and soon everyone’s life. In a world with so much fake news, with so many questions, with so much worry, good data science practice is paramount.


Although I don’t like a lot of what the controversial Steven Pinker has to say, he has a great definition of progress:

“Most people agree that life is better than death. Health is better than sickness. Sustenance is better than hunger. Wealth is better than poverty. Peace is better than war. Safety is better than danger. Freedom is better than tyranny. Equal rights are better than bigotry and discrimination. Literacy is better than illiteracy. Knowledge is better than ignorance. Intelligence is better than dull-wittedness. Happiness is better than misery. Opportunities to enjoy family, friends, culture, and nature are better than drudgery and monotony.”

“Humanising” data science and technology means striving for progress in all of the above. Of course, we could get very philosophical here. It’s easy to define what is good at this moment in time, just like it’s easy to write a report about your company’s current financial situation. But bring in questions about the long term, and things get complicated. Without perfect knowledge of the future, and without perfect measurement, we can’t measure or predict progress (by the above definition) perfectly, and that’s fine. But we have to try - and who else should be doing it, if not data scientists?

A primer for the rest of the articles

The next few articles I release will be rather political and philosophical, but should serve as a solid base for thinking about the decisions we make as data scientists, engineers and managers, in the grand scheme of things - how they affect the economy, politics, or the survival of our species. These are concepts that apply to entry-level tech workers as much as they do to CEOs and policy makers.

We’ll cover the topics of human fragility, golden ages, growth, degrowth, techno-optimism, companies’ relentless focus on profits, growth hacking, time windows, picking great metrics for your work, humanising experimentation and machine learning, AI, AGI and related policies, and more.

© 2024 Ryan Anderson