It’s all AI all the time these days. A lot of the research coming out is incredibly exciting, and the new capabilities being launched run the gamut from the disappointingly banal to the downright eye-popping. Then there are the tectonic plates shifting in the energy sector.
Technology is evolving quickly at all levels, and I am loving every minute of it. The only thing more fascinating is watching how people are evolving in response -- and seeing how we can evolve further.
I, robot.
The prevailing rhetoric around AI has been that the more we give the robots to do, the more time humans will have to do what they love. The problem is, if the robots are given work that people need to do to make money, then people will change how they work to continue making that money. This means people are increasingly incentivized to become more like robots.
Each industrial revolution has changed how we think about work. So I don’t mean to suggest that people-as-robots is a big, new insight. It’s not. White-collar and blue-collar work has undergone multiple transformations over the last 100 years, thanks to technology. The change ushered in by artificial intelligence is different, though. If you don’t believe me, consider which of these messages resonates more now versus before:
- “Blue-collar disruption” or “white-collar and blue-collar disruption happening simultaneously”
- “Do what you love” or “Do what makes you money”
- “Bring your whole self to work” or “Leave your whole self at home”
Most people don’t love doing the things that reliably make a lot of money: science, mathematics, engineering, finance, and the like. Even if they do, they often don’t like doing it in cut-throat corporate environments where making money or maximizing shareholder value is the top priority. Take academics, for example. These are people who enjoy solving hard problems and answering questions that have never been answered before. Their work is incredibly valuable to society, but you wouldn’t know it based on what they get paid.
A recent study analyzing attrition in science found that just shy of 50 percent of scientists leave academia within a decade after publishing their first paper. There are a number of reasons academics leave the field, but one big reason is money. As one departed academic put it:
“Having completed my PhD, I realized that rather than working 70-hour weeks as a postdoc in a US lab for minimum wage and having to sleep on the lab floor, I could get a cushy job in consulting earning 3 times more, working only 60-hour weeks and sleeping in a nice hotel bed,” he said.
So, instead of publishing research for the benefit of the broader scientific community, many well-trained academics are doing that research behind a wall of non-disclosure agreements and patent filings. Who can blame them? They work hard – very hard – and they can get paid more in corporate than in academia for the same unit of time. A machine learning agent, when given this optimization challenge and these feedback signals, would make the same decision.
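To make that analogy concrete, here’s a minimal sketch in Python of an agent whose only reward signal is pay per hour worked. The 3x multiplier and the 70- versus 60-hour weeks come from the quote above; the base postdoc salary and the 50-week working year are hypothetical placeholders, not data.

```python
# A minimal sketch of the analogy: an agent that greedily maximizes a
# single reward signal (pay per hour) picks corporate work every time.
# The postdoc salary is a hypothetical placeholder; the 3x multiplier
# and the 70- vs. 60-hour weeks come from the quote above.

POSTDOC_SALARY = 50_000                   # hypothetical annual pay, USD
CONSULTING_SALARY = POSTDOC_SALARY * 3    # "earning 3 times more"

options = {
    "postdoc":    {"pay": POSTDOC_SALARY,    "hours_per_week": 70},
    "consulting": {"pay": CONSULTING_SALARY, "hours_per_week": 60},
}

def reward(option: dict) -> float:
    """Reward signal: effective pay per hour worked (assumes 50 weeks/year)."""
    return option["pay"] / (option["hours_per_week"] * 50)

# Greedy policy: take whichever action maximizes the reward signal.
choice = max(options, key=lambda name: reward(options[name]))

for name, option in options.items():
    print(f"{name:>10}: ${reward(option):.2f}/hour")
print(f"choice: {choice}")
```

Under those made-up numbers, consulting pays roughly 3.5 times more per hour, so a policy that greedily maximizes this one signal never chooses the postdoc. The point isn’t the arithmetic; it’s that a single-objective optimizer, human or machine, follows the money every time.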
Order of positive operations
Having worked in both academia and corporate settings, I know firsthand how different they are. The removal of a profit motive leads you to make very different decisions than you would in a corporate environment.
This isn’t to say academia is a utopia devoid of monetary incentives. Academia has its own weird relationship with money. There are big donors, tuition-paying students, and endowment managers. You have to navigate those stakeholders and their motives all the time. That can be mildly taxing when it’s not downright frustrating.
Nevertheless, academia lets you maintain a much sharper focus on a mission that sits well above simply making money. Personally, I found that working in academia inspired more creative thought, greater risk-taking, and deeper work – even though it paid peanuts.
Some people leverage their academic appointments to make a lot of money as authors or as consultants to the private sector. That can be difficult if your work doesn’t lend itself to the easily digestible sound bites craved by today’s media machine. For example, if you’re in a university doing deep technical research in quantum computing, you may make good money as a consultant, but intellectual property may get tricky. It will be harder to turn your work into a New York Times bestseller, a lucrative speaking tour, and a breakout podcast that serves as premium real estate for deep-pocketed advertisers. In short, when it comes to general audiences, quantum computing is super cool – but not Oprah’s book club cool.
When you work inside a company, there may be rhetoric that invokes a higher-order mission, but the real mission at all times is this: maximizing shareholder value or, if it’s a private company, delivering investor returns. Maximizing shareholder value is pretty simple: keep the profit line going up and to the right with the sharpest possible slope at all times. Ideally, companies do this while making the world radically better or ensuring positive social outcomes. In practice, though, those outcomes take a back seat to maximizing profits because, without profits, a company can’t afford to have any impact at all, let alone a positive one.
This is surprising to a lot of new hires in the corporate world, especially because the “good for the world” rhetoric can be emphasized to the point where it obscures the profit needed to bring that “good” about. New hires are also often sold on the mission when they decide to take a role – especially those coming out of academia or nonprofits.
The realization that the mission (“make the world better”) has an order-of-operations rider (“...but only after we’ve delivered investor returns and maximized shareholder value”) can be a really tough pill to swallow. That pill gets even bigger and more jagged when new hires realize that it’s best they go about achieving the mission and its rider without bringing the part of themselves to work that leads them to share their opinions or feelings about the situation with others. If you’re feeling some kind of way about that, check your pay stub to change the feeling and get back to work – or leave.
We are the robots we fear.
This is why I don’t believe the AI apocalypse is what we think it is. The predominant narrative is that the robots we build are going to take over the world and violently destroy all the humans, Terminator-style. It’s either that, or they’ll steal all of our jobs, leaving all but a handful of us with empty hands and empty pockets.
These narratives have always struck me as good fodder for movies and hot takes, but I've never found them compelling beyond that. In fact, I believe the outcome could be even worse. What if we don’t even need to build the robots to the point where we’re all broke and bored or where the robots determine we shouldn’t exist? What if, instead, we become the robots we fear?
In other words, what if we proactively reduce ourselves to the point where we continuously optimize our behaviors and every aspect of our lives for making more money? What if we erase our personalities and emotions so we don’t risk bringing any part of ourselves to work or anywhere else? In essence, what if we make ourselves artificially intelligent: no emotions we’re not paid to feel, no expressions we’re not paid to have, and no goal other than to optimize for the given outcome (maximum profit)?
If the slow, self-induced destruction of our very selves in the interest of maximizing profits isn’t an apocalypse, I don’t know what is.
Our most powerful weapon
This future isn't inevitable. We can choose a different path -- one where we elevate the health and wellbeing of ourselves and our communities, the thriving of our natural environment, and a culture of invention through open collaboration above profit. We decide whether money has value and what value it has relative to everyone and everything around us.
The problem with the AI-apocalypse and primacy-of-money narratives is that they assume there’s a point past which we lose the ability to make decisions of our own. These narratives also assume we are not accountable for the decisions we’ve made in the past or the decisions we make in the future.
Our agency makes stories of our great robot overlords and the power of the almighty dollar just that -- stories. We can choose different ones. Those new, more imaginative stories -- where value exists in more forms than money -- could make us wealthy beyond our wildest imaginings.
Copyright notice: No part of this content may be used to train any model of any kind in any way without the author's express permission.