Brief thoughts - What will the AI bubble burst mean for rank-and-file tech employees?
At this point most of us believe the AI bubble is going to burst. There are more than enough articles on it (my favourite).
We live in a growth-greedy corporate world. When a new technology appears, the only thing we know how to do is host a multi-company race to implement it for corporate gains and throw investor money at it. CEOs are incentivised to sell an idealised vision of what may be, to attract more of that sweet investor money, while behind opaque curtains their employees hurriedly scramble to find uses for the new technology. We find a hammer first, then look for the nails, and make investors pay for more hammers. This isn’t necessarily a bad thing - capitalism incentivises innovation, which often makes the world a better place. But this particular time we’ve gone too far: we’ve propped up investor and industry confidence in this hammer beyond what it can deliver.
Reddit, the social media platform, has over 500 million accounts, the majority in the 18-29 year-old demographic. It’s the 7th most popular website in Canada. It’s a big deal, particularly in the USA and Canada - a one-stop shop filled with news, memes and public opinion, with even the power to influence the stock market. But something dark is up in its Canadian corner.
The age of growth hacking is coming to an end, and that has big repercussions for data scientists and our stakeholders. The attention economy is due a recession.
In previous articles I explained why technology and full-throttle capitalism, in their current form, are both good and bad for us. Technological progress is awesome and has led us to a golden age - but it sometimes does unintended (or intended) damage to humans and nature. Today we have a fast-moving, essentially self-driven, innovative rocket ship that reinvests in itself while operating too quickly for humans and nature to adapt to its negative effects. That’s why, when making decisions in the work we do at tech companies, questions about human and environmental fragility should be a core part of our thinking. (And yes, this is all still quite philosophical - I am slowly zeroing in on practical examples of how we, as workers in tech, could be more human-oriented.)
In my previous article, I encouraged pragmatism when building the technologies that power our future. While we are experiencing a golden age of technological advancement that has significantly improved quality of life, this progress can come with irreversible damage and challenges due to the frailty of humans and ecosystems. Now, let’s explore one of the reasons why technology doesn’t lead us down the perfect, sunny path to utopia.
Public opinion on big tech today is predominantly doom and gloom. We point at corporate greed and late-stage capitalism redistributing wealth to the 1%, with disregard for everything but shareholder value, at the expense of the populace. Since the arrival of GPT and other Large Language Models (LLMs), our worries have worsened. What’s more, it seems the first effects of climate change are being felt worldwide.
This piece has been published in The Mail and Guardian
Over a year ago, after several years of study, I took the path less traveled. I stepped “down” from a Product Manager role to a Data Scientist role. I say “down” because many people used that word when referring to my move. They wondered why I was leaving a career of “making decisions” and “management” for the “lesser” career of an “individual contributor.” My manager told me to my face that I was making the wrong move.
Many digital products, despite great intentions, easily devolve into a mess of seemingly impenetrable technical debt, impossible stakeholder requests and relentless stress for all involved. This deterioration is mostly due to the inherent difficulty any single person, team or organisation has in grasping the product’s complexity well enough to make good decisions. That is a fact of Big Tech life we can’t solve for. But can we make things a bit better? What can we do to design healthy, resilient complex systems that don’t crash and burn? Why, we copy the most complex system we know, of course! We copy the brain.
I ran 100 million COVID-related tweets through my own emotion-detecting neural network. I won’t try to explain the reasons behind the results, since this is an area rife with spurious correlation, Twitter’s US bias and, of course, a very complex underlying system. Still, some very interesting patterns emerge! It’s also clear the algorithm could be applied to other problems: a generalised approach to automatically detecting emotional shifts towards a particular topic on Twitter, which could have powerful business applications - a minimal sketch of that idea follows below.
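To make the “emotional shift” idea concrete, here is a minimal sketch: score each tweet about a topic with an emotion model, aggregate the scores by day, and flag days that depart sharply from a trailing baseline. The toy scores, window and threshold below are illustrative assumptions, not values from the actual analysis.

```python
# A minimal sketch of detecting emotional shifts towards a topic:
# aggregate per-tweet emotion scores by day, then flag days whose
# mean departs sharply from a trailing baseline.
import pandas as pd

# df would come from scoring each tweet with the emotion model:
# one row per tweet, with a timestamp and a fear score in [0, 1].
df = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2020-03-01", "2020-03-01", "2020-03-02", "2020-03-03"]),
    "fear": [0.2, 0.3, 0.8, 0.7],
})

# Mean fear score per day for the topic.
daily = df.set_index("timestamp")["fear"].resample("D").mean()

# Trailing baseline: rolling mean of previous days (window is assumed).
baseline = daily.rolling(window=2, min_periods=1).mean().shift(1)

# Flag a shift when a day departs from the baseline by more than
# an (assumed) threshold.
shifts = daily[(daily - baseline).abs() > 0.3]
print(shifts)  # here: flags the fear spike on 2020-03-02
```

The same loop works for any emotion and any topic filter; the interesting engineering is in choosing the window and threshold so that genuine shifts stand out from day-to-day noise.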
I came across a well-prepared dataset provided by Google: 58 000 ‘carefully curated’ Reddit comments, each labeled with one or more of 27 emotions, e.g. anger, confusion, love. Google had used this to train a BERT model, with varying success in emotion detection depending on the type of comment. I thought it would be a great example for learning how to adapt Convolutional Neural Networks (CNNs) and embeddings to a Natural Language Processing (NLP) problem, and I obtained decent accuracy for some emotions - though of course not as good as Google’s BERT.
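For a sense of what that looks like in practice, here is a minimal Keras sketch of an embedding-plus-CNN classifier for multi-label emotion detection. The layer sizes, vocabulary size and toy data are assumptions for illustration, not the exact architecture from my experiment.

```python
# A minimal embedding + 1D-CNN sketch for multi-label emotion
# classification, in the spirit of the GoEmotions setup above.
import tensorflow as tf

NUM_EMOTIONS = 27    # GoEmotions labels, e.g. anger, confusion, love
VOCAB_SIZE = 20_000  # assumed vocabulary size
SEQ_LEN = 50         # assumed max comment length, in tokens

# Toy stand-in data; in practice you'd load the labeled Reddit comments.
texts = tf.constant(["I love this!", "This is so confusing..."])
labels = tf.constant([[0.0] * NUM_EMOTIONS] * 2)  # multi-hot emotion vectors

# Turn raw strings into fixed-length sequences of token ids.
vectorize = tf.keras.layers.TextVectorization(
    max_tokens=VOCAB_SIZE, output_sequence_length=SEQ_LEN)
vectorize.adapt(texts)
token_ids = vectorize(texts)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),        # learn word vectors
    tf.keras.layers.Conv1D(64, 5, activation="relu"),  # n-gram-like filters
    tf.keras.layers.GlobalMaxPooling1D(),              # strongest signal per filter
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="sigmoid"),
])

# Sigmoid + binary cross-entropy because a comment can carry more
# than one emotion at once (multi-label, not multi-class).
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["binary_accuracy"])
model.fit(token_ids, labels, epochs=1, verbose=0)
```

The sigmoid output is the key design choice here: each of the 27 emotions gets an independent score, so a single comment can register as both angry and confused, matching the one-or-more labeling of the dataset.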
An ordered list of your biggest potential training gains, based on sports watch data from you and other athletes.
I have been disappointed with Netflix lately. After they delisted a few of my favourite shows (especially Doctor Who) and I struggled to find anything new and good to watch, I started wondering whether the Netflix Overlords had made a strategic decision: offer cheaper, lower-quality, but more varied content, and rely on their machine learning algorithms to satisfy each of us by finding just the right show, rather than focusing on making brilliant shows for all to enjoy. I was (somewhat) wrong!