AI's Impact on the Global Economy
- Gwyneth Muir Atkinson
- Jan 4
- 8 min read
“It’s always a nice feeling when your shopping cart greets you like an old friend, [...] Visual sensors embedded in the handlebar have already completed a scan of my face and matched it with a rich, AI-driven profile of my habits, [...] My refrigerator and cabinets have already detected which items we’re short on this week, and they automatically ordered the non-perishable staples” (Lee 119). AI is not usually as visible as it is in Dr. Kai-Fu Lee’s book, AI Superpowers: China, Silicon Valley, and the New World Order. But whether you see it or not, artificial intelligence is now integrated into our world. Advertisements, shopping suggestions, and our social media feeds are just a few examples of how AI algorithms curate highly personalized content for us. However, AI is by no means magic fairy dust. AI is replacing jobs such as assembly line work, Uber driving, farming, and even tutoring.

Ethan Mollick, the author of Co-Intelligence: Living and Working with AI, and Dr. Kai-Fu Lee, the author of AI Superpowers: China, Silicon Valley, and the New World Order, recognize both the benefits and the hazards of AI, how greatly it has already affected today’s society, and how it will only continue to affect us. Both authors then propose their own solutions to the crises of unemployment and purposelessness that AI brings. Mollick proposes training human beings to be the “human in the loop”: immersing an individual in a single discipline until they become an expert in that field, rather than replacing human beings with AI algorithms and robots. Hypothetically, this keeps AI in check by using an expert to catch it whenever it hallucinates (draws baseless conclusions because it doesn’t have enough data to give a reliable answer). Dr. Lee understands that AI will create a wealth of knowledge previously unknown to humans. Consequently, he believes humans will need to adapt by reducing their work hours, by constantly retraining for strictly human jobs, or by redistributing the monetary gains from AI to society’s members. Dr. Lee does, in fact, envision a world where AI performs most jobs in people’s stead, but to ease people’s feelings of unrest, he proposes a “social investment stipend” that would be given to people who use their time to create a kinder society: doing care work, community service, or work in education. The stipend would not come with healthcare or unemployment benefits, but it would be a respectable income and, most importantly, would give purpose, something humans may soon lack as fields of expertise disappear.

Recent evidence provided by the two authors, Wharton professor Ethan Mollick and AI specialist Dr. Kai-Fu Lee, leads me to conclude that there are three likely changes on the horizon in the global economy: the widespread development of super-companies, stricter government regulation and legal frameworks surrounding AI usage, and a higher demand for genuinely human work.

Image via SciTechDaily
AI’s capabilities are profound, and they will undoubtedly overturn myriad jobs that previously could only be completed by a human. However, in our age of AI, human input becomes rare when superintelligent algorithms are readily available to do the work instead. And as human output becomes rare, an actual person’s work will become a highly valued resource. Basic supply and demand dictates that when the supply of a good falls, its price rises. In other words, when humans work less and AI becomes the average worker, human work will be precious compared to the work AI produces. At the same time, when AI does our work for us, it will leave people with fewer working hours, freeing their schedules to spend time on whatever creative endeavors they value.

In this upcoming era of less time spent working, Dr. Lee proposes his vision: “As a venture-capital investor, I see a particularly strong role for a new kind of impact investing. I foresee a venture ecosystem emerging that views the creation of humanistic service-sector jobs as a good in and of itself. It will steer money into human-focused service projects that can scale up and hire large numbers of people: lactation consultants for postnatal care, trained coaches for youth sports, gatherers of family oral histories, nature guides at natural parks, or conversation partners for the elderly” (216). He sees an economic opportunity to profit from people’s work in community services. Again, human emotion and connection will become scarce, and therefore more valuable, and thus will earn people respectable livings. These opportunities would create a new series of positions in a newly profitable venture: community service. Soon, businesses will function fluidly and independently of human physical labor, leaving humans the opportunity to look for purpose elsewhere, in the activity that is uniquely human: social interaction. The economy may grow exponentially, but people will shift their focus and purpose to human relationships, and they will be rewarded for it.
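To make the scarcity argument above a little more concrete, here is a stylized supply-and-demand sketch of my own; the linear curves and constants are illustrative assumptions, not a model taken from either book.

```latex
\documentclass{article}
\begin{document}
% Stylized linear market for human labor (illustrative assumptions only).
In this sketch, demand for human work is $Q_d = a - bP$ and supply is
$Q_s = c + dP$, where $P$ is the price (wage) of human work and
$a, b, c, d > 0$ are arbitrary positive constants. Setting $Q_d = Q_s$
gives the equilibrium price
\[
  P^{*} = \frac{a - c}{b + d}.
\]
% If automation shrinks the amount of human labor on offer, $c$ falls,
% the numerator $a - c$ grows, and $P^{*}$ rises: scarcer human work
% commands a higher price.
\end{document}
```

This is the same scarcity logic behind Dr. Lee’s expectation that distinctly human services, such as care and community work, can come to command respectable pay.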
AI is trained on enormous amounts of data, but where can one find the vast quantities of data needed to train AI systems? That is the dilemma of many companies, especially when large corporations seem to dominate the collection and production of data. Since super-corporations own the majority of the available data, they will have the strongest datasets, and consequently the largest AI centers and systems. In our economy, whichever business can use AI to its utmost potential will have the upper hand in its operations and overall profitability. The near-necessity of building AI into a business creates issues for smaller businesses that don’t have the resources to create large-scale AI data collection and training systems. Today, it’s a small drawback for a business, but soon it will become an unbridgeable canyon between companies’ capabilities. The companies that remain standing at the end will become super-companies. Such corporations already exist, a perfect example being Amazon. When smaller businesses simply cannot compete with larger ones’ performance, super-companies will capture the vast majority of the consumer market, putting many smaller businesses out of business. This creates a smaller pool of job opportunities in our workforce; people can only work at corporations that actually exist, after all. Soon, people will have limited options for which companies they can join within their respective fields. This shift does not mean that fields of expertise will disappear, but rather that within each specialization there will be a select few companies to choose from, in lieu of the plethora of choices an employee has today.

Dr. Lee states, “We could see the rapid emergence of a new corporate oligarchy, a class of AI-powered industry champions whose data edge over the competition [grows] until they are entirely untouchable. American antitrust laws are often difficult to enforce in this situation, because of the requirement in U.S. law that plaintiffs prove the monopoly is actually harming consumers” (171). His idea of a corporate oligarchy means there would be just a few uber-powerful companies left. When the canyon of AI sorts the viable companies from the rest of the pile, a small group of formidable corporations will rule over the working world. Mollick comments on Amazon, a tech giant and billion-dollar company, and its effective use of AI: “Amazon integrated AI into forecasting demand, optimizing its warehouse layouts, and delivering its goods. It also intelligently organizes and rearranges shelves based on real-time demand data, ensuring that popular products are easily accessible for quick shipping” (7). Some companies are so far ahead in productivity from using AI algorithms that, for the companies that have not yet integrated AI into their systems, it will be increasingly strenuous to overcome the towering barriers to entry in today’s global economy.
AI is a tool and a service with unparalleled opportunities for growth, serving as a sort of Swiss Army knife for its skill across nearly every discipline. Within such a vast range of possibility, though, lies a frightening potential for misuse. How can people prevent AI from being used as a tool for harm or for misguided intentions? At some point, governments must create laws, boundaries, and punishments for AI’s misuse. Dr. Lee brings an excellent concern to light, stating that “Superintelligence would be the product of human creation, not natural evolution, and thus wouldn’t have the same instincts for survival, reproduction, or domination that motivate humans or animals. Instead, it would likely just seek to achieve the goals given to it in the most efficient way possible […] A superintelligent agent could easily, even accidentally, wipe us off of the face of the earth” (141). AI doesn’t have any of the elements that make a human a human, so it won’t have a moral conscience. That means an AI wouldn’t see any problem with killing the entire human population if it meant completing the goal of solving pollution. Mollick sheds light on what AI actually is by explaining, “AI is a tool. Alignment is what determines whether or not it’s harmful or helpful” (42). Integrating law into the alignment of AI would not only help prevent AI from being misused, but would also allow a fitting punishment for anyone who uses AI improperly. Mollick then makes a claim for how this issue should be tackled in the future: “The path forward requires a broad societal response, with coordination among companies, governments, researchers, and civil society. We need agreed-upon norms and standards for AI’s ethical development and use, shaped through an inclusive process representing diverse voices” (44). A broad societal response entails a series of government-regulated laws and guidelines regarding AI, as well as actual punishments for violations.
However, this would require governments around the world to create systems for tracing AI use back to the user. At the moment, AI detectors exist, but most have only about 80% accuracy, and they can only tell the analyst whether a piece of content was AI-generated, not who the user was. Diversity and bias in AI’s algorithms are also a pressing issue, since AI can target groups or hold skewed opinions based on its data pool. Society would need alignment laws targeting bias in AI. Largely, though, this means our society as a whole has to change its views on race and gender, because, after all, AI is just our mirror. Not everyone means well, so aligning AI to humanity’s benefit is a priority, and one that our legal system needs to uphold.
Whether AI leads you around a grocery store or automatically buys your groceries for you, the global economy is in constant motion and has always adapted to breakthroughs in technology and science. Undoubtedly, our economy will become more productive, reaching levels of wealth previously impossible. However, rising from these ashes will be a corporate oligarchy: companies so powerful that they eclipse the ones that started at a disadvantage or didn’t adapt quickly enough. Real human beings will be valued as AI begins to finish our work for us, because something people cannot live without is true connection with another human. Not only will our economy change, but whatever interacts with our modern economy will need to change as well, such as our laws and governments. AI is a tool with a flip side; it is the mirror of society. Not everyone has pure intentions, and therefore aligning AI to humanity’s benefit is a priority, one our legal systems need to uphold. AI won’t just change our economy; it will also change the way we live our daily lives, and you’d be surprised by the many ways it already does.
Works Cited
Lee, Kai-Fu. AI Superpowers: China, Silicon Valley, and the New World Order. 2018. First Mariner Books ed., HarperCollins Publishers, 2021.
Mollick, Ethan. Co-Intelligence: Living and Working with AI. Portfolio/Penguin, 2024.




