The Impact of AI on Society and Everyday Life (2024)

Artificial Intelligence is changing the world, and its impact will be massive: on the way we work, live, collaborate, decide, and act as a society.

But how can individuals and societies benefit from AI? What are the global problems that can now be addressed more effectively by leveraging AI? What are the risks of AI at the societal level? What is the ‘technological singularity’ and how could it affect us? How should individuals and societies prepare for AI?

Michael Wu PhD, Nell Watson, Anthony Mills, Alf Rehn, Dr. Marily Nika, Nazar Zaki, Angeliki Dedopoulou, Jon Skirnir Agustsson, and Boyka Simeonova share their insights.

Dr. Michael Wu — Chief AI Strategist — PROS Inc. • USA

Throughout the first three industrial revolutions, humans learned to leverage machines to automate various tasks. We use machines to augment our limited physical strength, endurance, memory, and computing capability. However, until recently, there were no machines to augment our decision-making capability. Hence, most of the high-paying jobs in the post-industrial era involve skilled labor that requires substantial decision-making. Only the most mundane and mindless tasks were automated by mechanical machines. Yet, many of these machines still require human operators to make decisions, whether it’s as complex as driving a truck, or simply deciding when to switch a machine on and off.

Perhaps, we’ll need a new economy that is driven by maximizing happiness rather than profit. - Dr. Michael Wu

Today, as AI-based technologies become more pervasive, machines can augment our cognitive capacity and automate our complex decision-making processes for the first time. This will dramatically change the way we work, leading to the Fourth Industrial Revolution[1]. Many tasks that were once reserved for humans because they require some level of human decision-making can now be automated, as long as we can collect enough data to train an AI to mimic those human decisions and actions.

One of the most significant benefits of AI is the huge efficiency it brings. Since many tasks can now be automated completely and without humans being the bottleneck, they can be executed much faster. Moreover, since AI does not need to eat or sleep, it can work 24/7, leading to further productivity increases. As with any machine automation, AI can eliminate careless human errors, and provide greater consistency in our complex decision-making processes.

Individuals can benefit tremendously from AI because it can eliminate the mundane and repetitive tasks that nobody likes to do. Whether it’s something as simple as deciding which movie to watch, adding items to our shopping list or having them delivered automatically, or getting home safely and quickly, AI can automate these tasks, allowing us to spend our valuable time on more important things. Not only do we gain convenience and save time, but we also get better and more personalized experiences.

Now let’s expand our scope and look at AI’s impact on businesses. Today, many enterprises, especially large ones, have many inefficient or simply broken business processes (e.g. in customer service). These inefficiencies have many negative side-effects on the business, as they often result in higher operating costs (e.g. hiring more staff). Furthermore, when these processes touch their customers, the poor customer experience can erode brand equity and customer loyalty.

Large-scale job displacement in the short term is a problem we must address. - Dr. Michael Wu

However, as with individuals, businesses can also realize a dramatic efficiency gain from AI. Fixing the inefficiencies in business can indirectly cut costs and improve customer experience. But beyond that, businesses can also improve their customer experience directly using personalization AI (e.g. recommender systems) and create more engaging brand interactions via conversational AI (e.g. chatbots and virtual assistants).

Unlike consumer AI tools that automate simple everyday decisions, business AI can be trained to automate decisions that are often highly technical, domain-specific, and far less tolerant of error. Business AI is much less known to consumers because it is typically used by highly specialized experts. It augments those human experts, not only automating but also optimizing high-stakes decisions that often have a direct impact on the company’s top line (e.g. real-time dynamic pricing). Hence AI can also help enterprises improve margins and revenues, and drive greater profitability.

Now let’s further expand our scope and examine AI’s impact on our society. As companies and individuals strive to realize greater efficiency from AI, our society as a whole will also function more efficiently. Since the first industrial revolution, we have spent less and less time at work. If this trend continues, maybe in the not-too-distant future, AI automation could allow our society to function so efficiently that it can support a Universal Basic Income (UBI)[2]. Perhaps we will no longer need to work for survival, but will instead work because we want to: for the passion, the experience, and the sense of fulfillment.

Clearly, we are not there yet! Today, our AI systems are only capable of learning from specific data sources and automating point decisions in a narrow domain (i.e. Artificial Narrow Intelligence, ANI[3]). However, since technological progress occurs at an exponential rate, it won’t be too long until AI matches human intelligence (i.e. Artificial General Intelligence, AGI[4]) or even surpasses it (i.e. Artificial Superintelligence, ASI[5]). When this happens, an ASI could potentially rewrite itself to become even more intelligent. This positive feedback loop of intelligence could grow indefinitely, leading to more and more world-changing innovations at an ever-increasing rate. Humans simply could not adapt to such rapid and dramatic changes, let alone the existential threat of an ASI. This uncontrollable technological explosion is often referred to as the technological singularity[6].

Although the looming singularity is frightening, it’s unfruitful to speculate about an admittedly unpredictable future that is still far away. Long before we reach the singularity, the mass adoption of AI will confront us with many societal challenges. As AI automates more human work in a market society driven by competition and profit maximization, it’s inevitable that companies will reduce their human workforce to cut costs. What will humans do then? Perhaps we’ll need a new economy in the future that is driven by maximizing happiness rather than profit.

Since AI advancements progress at an exponential rate, it will be challenging to retrain and upskill the human workforce fast enough for them to keep stable jobs. Although technological innovation always creates more jobs in the long term, large-scale job displacement in the short term is a problem we must address. Moreover, if the pace of change is fast enough, our current education policy, where we front-load education early in an individual’s life, may no longer be practical. So we may also need a new education system.

According to the renowned sociologist, Gerhard Lenski[7], as technology enables more efficient production, it will lead to a greater surplus. This not only supports a larger society but also allows members of a society to specialize more, thus creating greater inequality. Since the efficiency gained from AI is huge, the inequality it creates is also extreme. This is already very apparent in the income disparity between tech and non-tech workers across the globe. Despite the appeal of UBI, it will likely further increase inequality as it would go to everyone equally regardless of their income. Some inequality is good, as it not only motivates people but also enables large-scale projects that require huge investments. However, too much inequality is definitely bad, as it leads to more crime, reduces social mobility, and undermines fairness and trust in social institutions.

Since the efficiency gained from AI is huge, the inequality it creates is also extreme. - Dr. Michael Wu

What about the looming singularity and the existential threat? If you must squeeze a comment out of me on this matter, consider this: All AI systems learn from data. But these training data are created by humans, as they are digital records of our past actions and encapsulate our past decisions. So AI is really learning from us, humans, and AI will mimic our decision processes.

Therefore, if we do run into a situation where our interests are in conflict with AI, the best way to ensure that AI doesn’t destroy us is for us to be better role models for AI now. That means we, as a human race, must learn not to kill each other whenever we run into conflicts. In short, the best way to ensure our own survival is for us to be better humans. We must learn to be more compassionate, more empathic, more environmentally conscious, etc. So our decisions and actions can be used to train an AGI (or ASI) that mimics these ‘better-human’ qualities.

This may sound impossible in today’s society because we must compete and struggle for survival, which often brings out the worst of our human nature. However, in an AI-augmented future, we may not need to work for survival, and our economy may no longer be driven by competition. So with the help of AI, maybe we can be better humans before we reach the singularity.

Dr. Michael Wu is the Chief AI Strategist at PROS (NYSE: PRO). He’s been appointed as a Senior Research Fellow at the Ecole des Ponts Business School for his work in Data Science, and he serves as an advisor and lecturer for UC Berkeley Extension’s AI programs. Prior to PROS, Michael was the Chief Scientist at Lithium for a decade. His R&D won him recognition as an Influential Leader by CRM Magazine. Michael has served as a DOE fellow at the Los Alamos National Lab. Prior to industry, Michael received his triple major undergraduate degree in Applied Math, Physics, and Molecular & Cell Biology; and his Ph.D. from UC Berkeley’s Biophysics program.

Eleanor ‘Nell’ Watson — Tech Ethicist, Researcher, Reformer — IEEE Standards Association • Northern Ireland

It’s clear that AI is going to be 10–100x more influential in the 2020s than in the previous decade. Recent developments in ‘Transformers’, aka ‘Foundation Models’ or ‘Large Language Models’, are a tremendous step forward from earlier Deep Learning. These new models can ingest a very broad range of content (spreadsheets, poetry, romance novels, industrial process monitoring, chat logs) across various modalities, such as text, audio, and video. They also have the capacity to solve thousands of different problems with one model, in contrast to earlier deep-learning systems, which may be quite effective but only within a narrow range.

Our strange world is only going to get weirder. - Eleanor ‘Nell’ Watson

This new technology is also able to deal with abstract concepts in new ways. Simply by asking for something to be ‘more polite’ or ‘less formal’, these models can make an appropriate interpretation. This means that one can use everyday, natural language to specify generally what they want, and then refine it closer to perfection. For example, OpenAI’s Codex system is being used to turn natural language into a working video game, in just a few minutes, with all of the associated code immediately ready to be compiled and shared.

Many aspects of programming and development are about to be significantly deskilled, or perhaps bifurcated. People will be creating in simple ways, and a smaller group of experts will be debugging the things that the AI system cannot handle. This wave of creativity will be as powerfully disruptive in the 2020s as the Graphical User Interface and desktop publishing were in the 1990s.

In recent years we have moved towards a world of services that dematerialize many of our former objects, such as media collections. Many new ventures have emerged that leverage the power of mobile internet to make it easier to rent objects for a short time. The COVID crisis has obliged many people who might otherwise never have bothered to cross the digital divide to do so. While this is bringing the world closer together in some ways, we must spare some concern for those who still have not managed the transition to the online world, and who may be increasingly excluded as a result.

Many aspects of software development will be significantly deskilled, or perhaps bifurcated. - Eleanor ‘Nell’ Watson

The embrace of digital has consolidated even more power within the hands of Big Tech and the technocratic elite, whilst putting people at the mercy of our digital feudal lords who can exclude us on a whim. This has heightened the need for effective ethics for AI and other technologies that are increasingly entwined with our personal and professional lives.

Moreover, there are risks to consumers from the apparent convenience of our digital world. By no longer owning something, one becomes in essence a renter, and one can be cut off at any time, with very little reason given and no chance to challenge such exclusion. If you own things, it is very hard for them to be taken away from you simply because someone didn’t like what you happened to say. Over time, I think that a desire for ownership will come back into fashion, especially as a status symbol in and of itself. “I am a freeborn individual, not a peasant on someone else’s fief.”

We also live in a culture of financialization, where stock price becomes the metric to optimize for, instead of actually making things that work and provide value to customers, and by extension things that support civilization as a whole. It’s clear that our economic world is built primarily for efficiency, and not resilience. There is very little slack in a just-in-time economy, and so when something inevitably goes wrong, the entire system can get gridlocked.

As governments and corporations, we should do more to prepare for inevitable setbacks that could destroy industries and cause widespread suffering. We should hold back from becoming overleveraged, and ensure that we have reserves and contingencies in place to deal with a world that is increasingly fast, chaotic, and challenging to respond to. Our strange world is only going to get weirder.

Eleanor ‘Nell’ Watson is an interdisciplinary researcher in emerging technologies such as machine vision and A.I. ethics. Her work primarily focuses on protecting human rights and putting ethics, safety, and the values of the human spirit into technologies such as Artificial Intelligence.

Anthony Mills — Founder & CEO, Executive Director — Legacy Innovation Group & Global Innovation Institute • USA

In the years ahead Artificial Intelligence is poised to have profound impacts on society — in ways we are only just now starting to understand. These impacts will manifest in four areas: holistic interconnection, ubiquitous awareness, substitutionary automation, and knowledge creation.

1. Holistic interconnection means that everything in our lives will eventually be digitally enabled and thereafter interconnected in a true Internet of Everything (IoE) manner. This will permit AI systems to intelligently monitor all aspects of our lives (24/7), including us as individuals and all the infrastructure we use on a regular basis — our homes, appliances, entertainment devices, cars, laptops, mobile devices, health aids, and so on. Such holistic interconnection serves as the backbone for realizing truly ‘smart’ persons, smart homes, smart communities, smart cities, and ultimately smart nations. Eventually, everything will be able to communicate with everything else — and AI will ensure this is done in ways that benefit all.

2. Ubiquitous awareness means that AI systems — built atop holistic interconnection — will become fully aware of each person, of personal and societal infrastructure, and of how these all interact with each other, and will then make decisions and take actions on our behalf that benefit society in a range of ways. Imagine that, upon your approaching an office building or a shopping mall, the facility becomes fully ‘aware’ of your presence (including your identity, and that of everyone else there), of where you currently are in the environment, of where your assets (car, laptop, etc.) are, and of how it can best accommodate your needs by learning new insights about you, like your preferred office lighting and temperature, or the promotions being run at stores you frequent, all in a way that optimizes the whole, such as overall energy consumption. Eventually, everywhere we go, our environments will be completely aware of us and will, via AI, be optimized for us. In many ways, AI will come to know more about us and our patterns than we ourselves understand (in some places it already does). One important implication of holistic interconnection and ubiquitous awareness is that society’s notion of ‘privacy’ will have to change, to one that is far more comfortable with having individual data shared openly across systems. In due time, this societal norm will shift, and the conversations around privacy in future generations will look very different from those of the present generation.

3. Substitutionary automation means that AI will empower numerous automated systems to become fully autonomous in their operation, and consequently be able to deliver value without the need for human oversight or intervention. Clear examples of this are fully autonomous vehicles and transportation systems, fully automated business processes, and fully autonomous professional services (like legal and accounting services for example). Substitutionary automation means that many tasks that presently consume (waste) our time — like routine driving, routine data processing, and so on — can all be relinquished to automated systems and consequently free us up to focus our time, energy, and efforts on more creative and novel tasks — tasks for which the human mind is best suited.

4. Knowledge creation refers to something that AI has already started to do, namely synthesize new knowledge that did not exist previously (usually via adaptive pattern recognition) — interconnecting points of insight that were previously unconnected. It is this area of AI knowledge creation that is poised to grow exponentially over the coming decades. And not only will it accelerate, it will — via a self-reinforcing cycle — actually start to generate its own queries and learning loops, so that it is not just synthesizing new answers to pre-existing questions, but actually synthesizing new questions needing to be answered. This will permit AI to better address such looming human challenges as climate change, food security, economic stability, poverty eradication, disease eradication, and so on — areas in which next-generation AI holds incredible promise, especially when coupled with powerful new computing methods like Quantum computing.

Many AI scholars argue that, as this acceleration continues, there will come a point at which AI generates new knowledge faster than humans can absorb and apply it; past that point, AI will have surpassed human (natural) intelligence, and only AI will be able to use this new knowledge. This is the singularity, which some forecast to occur somewhere around the mid-twenty-first century. One key ramification of the singularity is that it creates a prediction wall, beyond which we can no longer forecast what the future will look like, because we have no idea what AI will end up doing past that point. The singularity thus presents us with a serious unknown ahead.

There are, of course, key risks with AI. While AI can certainly be used for good to optimize our lives, it can also be used for equally destructive purposes, such as in learning how to wage the most effective wars and cyberattacks against different groups. There is also the ultimate risk, which is that AI itself will become both sentient (fully self-aware) and malevolent (rather than benevolent) toward humanity, thus unleashing some form of ‘war’ against humanity — to either subdue it or eradicate it.

Society’s notion of ‘privacy’ will have to change. - Anthony Mills

The first risk — that of humans misusing AI — is a challenging one to address. World bodies like the United Nations for example are — with the assistance of AI Ethicists — already working to develop ethical guidelines for the appropriate uses of AI, and consequences for the systematic misuse of AI. The second risk — that of AI itself overriding human oversight and acting malevolently toward us — is one that can most likely be addressed through discrete control mechanisms in which power to AI systems is cut. Of course, one could imagine the dystopian situation in which such AI systems foresee those human interventions and devise means (including autonomous war machines under their control) to prevent humans from being able to employ such overrides. Many of these risk-mitigation practices will be worked out as we proceed, and will have to be approached very cautiously.

Anthony Mills is a globally sought-after thought leader on emerging markets, proactive growth strategies, corporate innovation, workplace experience, entrepreneurship, product design, and Design Thinking. His work has had a profound and lasting impact on businesses all over the world.

Alf Rehn — Professor of Innovation, Design, and Management — University of Southern Denmark • Denmark

As AI (and algorithmic logics in general) becomes omnipresent, the societal implications are getting more profound by the week. Whereas some still see AI as a specialist tool, e.g. as something for pharmaceutical researchers or document management experts, its larger impact will affect any and all human activities. We may not yet be in a world in which AIs decide on everything, from what innovations to invest in to what social programs to fund, but we are far closer to this than most people realize. Whereas AI-driven decisions were just a flight of fancy five years ago, today more decisions than you may be comfortable knowing about are, at least in part, driven by algorithmic logic.

It is of critical importance that we retain the human capacity to imagine and dream. - Alf Rehn

When discussing how individuals and societies will be affected by AI, it is important to balance the benefits against the potential risks. The short-term benefits are ample and easily understood: AIs can take over dreary, repetitive jobs and free people to realize their potential, while getting algorithms involved in decision-making can limit both the errors and the biases that humans are prone to introducing. By leaving decisions to an algorithm, we can make sure that innate human limitations, such as biases, insufficient information, and moods, do not overtly drive the big decisions that society needs to take.

That said, we often forget that AI has both short- and long-term impacts on society. Looking at things in the short term, it may well look like AI is nothing but a net positive for society. AI can help us sort out issues such as suboptimal urban planning, or deal with racial bias in sentencing decisions. It can help clarify the impact of credit scores, or ensure that the mood of a doctor doesn’t affect a medical diagnosis. What unites these cases is that it is very easy to spot bias or errors in the way the AI functions. An AI that does urban planning in a way that marginalizes certain ethnic groups will be found out, and an AI that misdiagnoses cancer will be caught. These are all cases of what I have called ‘short bias’: errors that algorithmic logic can get caught in through insufficient data or bad training.

But what about those cases where an AI influences decisions that have long trajectories, and where the impact might not be known for years or decades? Imagine that an AI is programmed to figure out which of four new research paths in energy production should be supported and financed. One is known and tested, two are cutting edge but with great potential, and the last one is highly speculative. Unless the AI has been programmed to take great risks, it is likely to suggest that the speculative program is cut. Yet, we know that many speculative ideas — antibiotics, the internet, and female suffrage come to mind — have turned out to be some of the best ideas we’ve ever had.
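The scenario above can be made concrete with a toy decision rule. This is purely illustrative (the options, payoffs, and risk-aversion constant are invented for the sketch), but it shows how any scoring function that penalizes uncertainty will mechanically rank the speculative path last:

```python
# Illustrative only: a toy portfolio-selection rule showing how a
# risk-averse scoring function systematically drops speculative options.
# All numbers below are invented for the sketch.
options = {
    "known and tested":   {"expected_payoff": 1.2, "uncertainty": 0.1},
    "cutting edge A":     {"expected_payoff": 2.5, "uncertainty": 1.0},
    "cutting edge B":     {"expected_payoff": 2.0, "uncertainty": 0.9},
    "highly speculative": {"expected_payoff": 5.0, "uncertainty": 4.0},
}

RISK_AVERSION = 1.5  # how strongly the rule penalizes uncertainty

def score(opt):
    # Higher is better: expected payoff discounted by uncertainty.
    return opt["expected_payoff"] - RISK_AVERSION * opt["uncertainty"]

# The option with the lowest score is the one the rule would cut.
cut = min(options, key=lambda name: score(options[name]))
print(cut)  # → highly speculative
```

The point is not the arithmetic but the structure: unless someone deliberately lowers `RISK_AVERSION`, the speculative program loses every time, regardless of its upside.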

What is at play here is something I have given the name ‘long bias’, i.e. the issue of potential long-term negative consequences from AI decisions that are difficult to discern in the here and now. AI is exceptionally good at handling issues where the parameters are known — whether a cat is a cat, or whether a tumor is a tumor. These are also issues where humans can quickly spot the errors of an AI. When it comes to more complex phenomena, such as ‘innovation’ or ‘progress’, the limitations of algorithmic logic can become quite consequential. Making the wrong bet on a speculative technology (and let’s be clear, there was a time when the car was just that) can affect society not just in the here and now, but for a very long time afterwards. Cutting off an innovation trajectory before it has had a chance to develop is not merely to say no in the here and now; it is to kill every innovation that might have been, and an AI would not care.

In this sense, AI is a double-edged sword. It can be used to make decisions at a speed that no human can match, with more information than any group of humans could process. This is all well and good. On the other hand, by sidelining the imagination and bravery at which humans excel, we may be salting the earth for technologies we have not even considered yet. AIs work with data, and all data is historical — as the investment banks say, “past performance is no guarantee of future results”.

With this in mind, it is far too early to be wishing for a technological singularity, a state of affairs where infinitely wise AIs can guide us in our technology exploration. On the contrary, when it comes to innovation it is of critical importance that we retain the human capacity to imagine and dream, and ensure that we are not letting data do all the driving. AI can help us solve massively complex problems, but the keyword here is ‘help’. The human capacity “to see a world in a grain of sand/ and a heaven in a wild flower”[8] needs to be protected, to ensure that AI only augments our capacity to innovate, rather than defining it.

Professor Alf Rehn is a globally recognized thought-leader in innovation and creativity, and is in addition a keynote speaker, author, and strategic advisor. See alfrehn.com

Dr. Marily Nika — AI Product Leader — Tech companies in the Bay Area • USA

I am very excited to watch the world embrace and discover Artificial Intelligence in more and more parts of everyday life. The benefits of AI to our society are tremendous and cannot be listed in a few paragraphs, but here are the three areas where I feel AI’s impact is greatest.

1. Enhancing our Throughput as professionals. Have you heard the term throughput before? Investopedia defines it as “the amount of a product or service that a company can produce and deliver to a client within a specified period of time”. According to Accenture, AI might increase productivity by 40% by 2035. This statement makes perfect sense to me, as Artificial Intelligence and Machine Learning empower us, both personally and professionally, to avoid tedious day-to-day tasks. Imagine a world where you could focus only on the most strategic, most creative, and most impactful tasks at work, instead of spending your time on, for example, troubleshooting a permission issue on your work laptop or crafting the right email to the right person with the right wording. Virtual Assistants are already here and, much as in the movie ‘Her’, they make life so much easier. We are headed towards a world where we will be able to funnel our brainpower to the tasks that matter to us the most. We may even get told by an AI what tasks should matter the most, according to our personal goals.

2. Enhancing Our Life. I worked for many years on Speech technologies for smart devices at home. Being able to use your voice to instruct your home devices to perform day-to-day tasks (e.g. playing music at home, setting a timer, retrieving an email, playing a podcast, or even turning lights on or off), instead of needing to use a keyboard or a phone, creates a sense of convenience and luxury that was previously unimaginable. AI can also help automatically monitor your home for intruders and reduce energy usage.

3. Healthcare. AI can improve, simplify, and even save lives. Machines never get tired and are thus less likely than humans to make mistakes. There are many studies of how AI can reduce error and diagnose health issues effectively and efficiently, for example by detecting cancer earlier than traditional methods. Moreover, key technologies such as Natural Language Processing (NLP) have numerous applications in healthcare, as they can classify and retrieve important documentation and provide actionable insights in a matter of seconds.
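To make the document-classification idea concrete, here is a minimal sketch of the kind of technique involved: a tiny naive Bayes text classifier in pure Python that routes short clinical notes to a specialty. It illustrates the principle only; the specialty labels and example notes are invented, and a real system would use far larger datasets and more capable models:

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

class NaiveBayesClassifier:
    """Minimal multinomial naive Bayes for routing short documents."""

    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.doc_counts = Counter()  # label -> number of training docs
        self.vocab = set()           # all words seen in training

    def train(self, text, label):
        words = tokenize(text)
        self.word_counts.setdefault(label, Counter()).update(words)
        self.doc_counts[label] += 1
        self.vocab.update(words)

    def classify(self, text):
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label, counts in self.word_counts.items():
            # Log prior plus log likelihoods with add-one (Laplace) smoothing.
            score = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(counts.values())
            for word in tokenize(text):
                score += math.log(
                    (counts[word] + 1) / (total_words + len(self.vocab))
                )
            scores[label] = score
        return max(scores, key=scores.get)

# Invented training notes for the sketch.
clf = NaiveBayesClassifier()
clf.train("patient reports chest pain and shortness of breath", "cardiology")
clf.train("irregular heartbeat detected on ecg", "cardiology")
clf.train("persistent rash and itching on the skin", "dermatology")
clf.train("mole changed shape and color", "dermatology")

print(clf.classify("patient has chest pain after exercise"))  # → cardiology
```

Even this toy version shows why such systems respond "in a matter of seconds": classification is just a handful of counts and logarithms per document.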

Marily Nika is an AI Product Leader based in San Francisco working for Google, previously for Meta (Facebook). She holds a Ph.D. in Computing Science from Imperial College London and is currently an Executive Fellow at Harvard Business School. Outside of her day role, Marily acts as an advisor to early-stage startups and also empowers the women in the tech community in various ways.

Nazar Zaki — Professor and Director — Big Data Analytics Center, United Arab Emirates University • UAE

The world of AI is rapidly expanding, with new innovations and breakthroughs happening every day. AI is changing the way we live our lives in many ways and has the potential to be a game-changer for many industries. Some of the benefits of AI are that it can take on repetitive tasks, it can handle tasks that are complex and require human intelligence, and it can help humans make better decisions. AI is already being used in many industries such as healthcare, education, finance, law enforcement, and transportation.

The next generation of AI is expected to do more than just provide insights and suggestions. It will be able to make decisions for us in a way that is more accurate than ever before. This might sound scary, but it could also have some very positive consequences, like when AI helps doctors diagnose patients who have rare diseases, or when an AI detects a person in trouble and alerts them or the authorities in time to prevent a crime from happening.

We need to ensure that AI does not invade our privacy or violate our rights as humans. - Nazar Zaki

AI has the potential to do a lot of good for society, but there are also many risks associated with it. It’s often seen as an object of fear, with people worried about what AI might do to the human race. Some people worry about what will happen if AI becomes too powerful and can make decisions on its own without human interference. Other people worry about how AI can be used as a weapon, such as in warfare.

Other challenges of AI include ethics: we need to make sure that AI is not being used for malicious purposes. AI is also not perfect; for example, there are many cases where AI has been found to be biased against certain races, genders, and other social identities. There is also the issue of privacy: we need to ensure that AI does not invade our privacy or violate our rights as humans.

The idea of AI taking over the world is not new, but the idea that humans will be able to control it is. However, it is not just about being able to control it; it is also about being able to understand how it works and what its limitations are. We need to trust AI if we want it to trust us back. First and foremost, we need to make sure that we are not only aware of the risks that AI poses but also able to address them. Here are some ways in which we can do so:

1. We can work toward a clear understanding of what AI is and what it is capable of, so that we know how it will affect us in the future.

2. We can create laws and regulations around AI, so that there is accountability for the actions of AI systems.

3. We can limit who has access to certain types of information, so as not to put anyone at risk.

Finally, AI will continue to make our lives better. It is becoming more integrated into our lives with every passing day. It is hard to predict what the future holds for AI, but it is safe to say that it will continue to grow and evolve. AI has the potential to help us solve many of the world’s most pressing problems, including pandemics, poverty, climate change, and hunger.

We must be careful not to create a future where we are all working for robots or AI beings. - Nazar Zaki

The future of AI is bright and we should not worry about it too much. AI will help us get better at everything we do and make our lives easier in many ways. What we should focus on is how to prepare for this change and how to handle it when it happens. There is no doubt that AI will pose ethical dilemmas in the future; we need to identify these risks as early as possible and work to mitigate them.

Nazar Zaki is a Professor of Computer Science and the founder and Director of the Big Data Analytics Center, whose mission is to create sustained impact through groundbreaking data analytics research and services. Nazar’s research focuses on data mining, machine learning, graph mining, and bioinformatics.

Public Policy Manager, AI & Fintech — Meta • Belgium

Just as electricity enables useful things such as light, television, and refrigeration, Artificial Intelligence is a ubiquitous technology that can improve our lives. AI could help society improve healthcare and education, facilitate access to information, and significantly improve the efficiency of our workplaces. It can take over dangerous or repetitive tasks and make working environments safer. Furthermore, AI can contribute to the creation of the new types of jobs demanded by a continuously changing digital labor market. Often, though, AI raises societal concerns around safety and security, privacy, inequality, discrimination and bias, fundamental rights, and democracy. Depending on the data it uses, AI could lead to biased decisions with respect to ethnicity, gender, or age in hiring processes, banking, or even the justice system. For these reasons, deployers of AI systems should ensure equality, diversity, inclusion, and responsible use of AI to avoid potential pitfalls for society.

Artificial Intelligence can solve societal challenges and reduce climate change. - Angeliki Dedopoulou

If AI is used responsibly, it can create significant benefits on the global stage. It can help solve societal challenges, mitigate climate change, and reinforce initiatives like the European Green Deal and the Paris Agreement. AI can also contribute to the realization of the United Nations’ Sustainable Development Goals, and it can play a crucial role in curbing global issues such as:

- Control of epidemics. During a global pandemic, governments’ initial objective is to minimize the spread of the disease. If AI is fed historical data, it can recognize patterns and trends and, via predictive analysis, inform the measures needed to limit the spread of the virus. AI can also be used to accelerate the development of vaccines.

- Management and control of pollution. Pollution is a global challenge that concerns all countries around the world. AI can be used for the protection of the environment and pollution control. More specifically, AI systems contribute to the detection of energy emission reductions, the removal of CO2, the monitoring and prediction of extreme weather conditions, and support the development of greener transportation networks.

- Global food crisis prevention. According to the United Nations, 840 million people might be affected by hunger by 2030[9]. Research has shown that with the combination of smart agriculture and machine learning, this number could be significantly reduced. AI-based solutions can warn governments of impending food shortages and help them prepare better food supply and management. AI solutions can also help farmers produce more food with less land. For example, AI-enabled operations are estimated to use roughly 90% less water and produce over 20 times more food per acre than traditional fields[10].

- Water pollution management. The UN’s Sustainable Development Goal 6 seeks to ensure that people worldwide have access to clean water and adequate sanitation services. AI can be used to reduce pollutants in water and to detect the amount and composition of toxic contaminants. AI can also increase the efficiency of waste management systems.

For businesses, AI can enable important sectors such as tourism, construction, agriculture, green and circular economy. It can also improve the quality of products, increase production levels, and contribute to energy savings. For example:

- When customer service representatives in a hotel are not available, AI bots may respond to questions and provide useful information.

- In the construction sector, AI can improve project efficiency and the safety of workers in construction sites.

- Smart farming is helping the agriculture sector to be more profitable. AI-powered mechanisms can monitor aspects such as grain mass flow, the quality of harvested grains, and moisture content.

- AI in combination with intelligent grid systems and deep predictive models can manage the demand and supply of renewable energy.

To conclude, AI has huge potential to benefit societies and boost many sectors of the economy. However, governments need to ensure that AI prioritizes humans, reinforces human trust, protects human rights, and promotes creativity and empathy. Policymakers should create a global dialogue; explain, educate, and boost the transparency of AI; adapt training and education curricula to the new Artificial Intelligence society; and encourage the public and private sectors to develop and adopt human-centered and trustworthy AI.

Angeliki Dedopoulou is Public Policy Manager for AI & Fintech at Meta (formerly known as Facebook). Before joining Meta’s EU Public Affairs team, she was a Senior Manager of EU Public Affairs at Huawei, responsible for the policy area of AI, Blockchain, Digital Skills, and Green-related policy topics. She was also an adviser for the European Commission for over 5 years on DG Employment, Social Affairs, and Inclusion.

VP Artificial Intelligence and Data Research — Nox Medical • Iceland

I feel that the state of the art in AI is shifting away from newer models and new ways of training them, towards how we better address bias and the social impacts of our AI models, and how we phrase our optimization questions. We seem to have reached a point where most progress in AI will come from focusing on data and on how we evaluate AI models’ performance, rather than from improving architectures or hardware. I am not saying that there is no room for improvement there, but I feel that the real breakthroughs will happen around how we improve our data and the questions we want AI to answer.

I am a big fan of the work Andrew Ng[11] does through deeplearning.ai[12], both in providing great educational material on AI and, more importantly, in highlighting the importance of data-driven development of AI models. My background is in instrumentation and measurement technology, so I am a firm believer that if you do not collect the right data, there is no way of getting a sensible output, no matter how much AI you throw at the problem. There are also other great references on how to get more valuable input from human labelers, such as Robert Monarch’s book Human-in-the-Loop Machine Learning[13], which describes data-driven methods of selecting better training and validation data. Finally, Stuart J. Russell proposes that we rethink how we optimize AI algorithms in his book Human Compatible[14].

AI and automation are already everywhere, and we see great societal benefits from AI, such as increased productivity and fewer human errors. But as AI becomes more ubiquitous, we also see how systematic errors and biases in AI models can start to cause real social and economic problems. Furthermore, we have started asking very deep ethical questions as we delegate decision-making to autonomous machines, and we have started running into barriers where our legal and societal frameworks cannot keep up with the state of the art in AI.

We currently see in our society that if AI algorithms are not optimized for the correct objectives, they can start spreading misinformation and polarizing people on social media. This may be caused by optimizing an AI model for the wrong thing, such as an engagement metric, without asking what that optimization might lead to. In an era where we are exposed to enormous amounts of information, it becomes difficult for individuals to keep track of what is correct, and there is room for AI to spread misinformation to large groups of people.
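The effect of choosing the wrong objective can be shown with a toy ranking sketch. The posts, scores, and the 0.5 penalty weight below are all invented for illustration; real feed-ranking systems are far more complex.

```python
# Toy illustration: ranking content purely by an engagement proxy
# can systematically surface the most divisive items.
# All posts and scores are fabricated for this sketch.

posts = [
    {"title": "local news update", "engagement": 0.40, "divisiveness": 0.10},
    {"title": "outrage headline",  "engagement": 0.95, "divisiveness": 0.90},
    {"title": "science explainer", "engagement": 0.55, "divisiveness": 0.05},
    {"title": "conspiracy claim",  "engagement": 0.85, "divisiveness": 0.95},
]

def rank(items, objective):
    # Sort posts by the given objective, highest score first.
    return sorted(items, key=objective, reverse=True)

# Objective 1: maximize engagement only.
by_engagement = rank(posts, lambda p: p["engagement"])

# Objective 2: engagement penalized by estimated divisiveness
# (hypothetical weight of 0.5, chosen only for illustration).
by_balanced = rank(posts, lambda p: p["engagement"] - 0.5 * p["divisiveness"])

print([p["title"] for p in by_engagement])
# ['outrage headline', 'conspiracy claim', 'science explainer', 'local news update']
print([p["title"] for p in by_balanced])
# ['science explainer', 'outrage headline', 'conspiracy claim', 'local news update']
```

Under the engagement-only objective the two most divisive posts occupy the top of the feed; adding even a crude penalty term reorders it. The point is not the specific weights but that the optimization question itself determines the societal outcome.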

We also have many examples where bias in training data results in racist or otherwise biased outcomes from AI models. This becomes especially troublesome when we use AI models to assist in, or even make, life-changing decisions for individuals. In these cases, an AI model may perform well on average or for most people, yet a few individuals can be severely and negatively affected by its outputs. There are famous examples of this in the financial and legal sectors. In my own sector, AI medical devices, we face the same problem: rare clinical conditions may be missed when AI is used inappropriately.

There are also interesting societal and legal questions starting to arise. We have begun to ask who is responsible when an AI system causes an accident or harm, and how we best react when AI systems take over important systems in our society and slip out of our control, for example by crashing the stock market.

With all of this in mind, one quickly realizes that we need to think about AI from many different angles. There are the technical and application aspects, but there are also ethical, societal, and legal ones that must be considered. Thus, we need a diverse group of people working on AI; diversity is our best chance of being successful. For people who are interested in AI development and adoption, there are many great opportunities to contribute, not only to model development but also to answering the challenges mentioned above.

Jon S. Agustsson is an experienced AI and research leader in the medical device and medical research industry, leading an interdisciplinary team spanning data science, physics, electrical engineering, and research. He is passionate about engineering, inventing, and building new things, with multiple patents and scientific publications.

Assistant Professor in Information Management — Loughborough University • UK

“The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which.”[15] As Stephen Hawking’s quote suggests, the effects of AI remain unclear and unknown. However, increasing digitalization and AI utilization could radically transform work and society, knowledge, learning, and the (re)distribution of power. Given the danger of losing individual knowledge through the increased use of AI and algorithmic decision-making, overreliance on AI and algorithms might hamper learning, decision-making, and innovation. For example, AI embeds assumptions about knowledge, particularly tacit knowledge, that are currently highly problematic and require considerable improvement before its predetermined, codified (and encoded) knowledge can be relied upon. At this point, it is helpful to differentiate between knowledge, information, and data.

AI could lead to emancipation through empowerment, autonomy, inclusion, participation, and collaboration. - Boyka Simeonova

Information and data are ingredients of knowledge but do not represent knowledge. Data are facts; information is processed data; and knowledge is interpreted, actionable information. Knowledge can be explicit or tacit. Explicit knowledge can easily be captured, codified, processed, stored, and distributed. Tacit knowledge cannot: it is accrued through experience and is best described as an ongoing accomplishment through practice and participation.

The problem with knowledge encoded in AI is that it is narrow and brittle: AI systems are reliable only within a narrow, predetermined topic and domain, and when that topic or domain is challenged or changed, they “fall off the knowledge cliff”[16]. Therefore, while AI might help with narrow, routine, predetermined tasks, it is (as yet) unreliable and inaccurate for complex problems and decision-making, where automation is currently impossible because tacit knowledge cannot easily be codified[17]. AI can analyse volumes of data; the knowledge aspect, however, needs further development.

Despite the danger of AI systems falling off the knowledge cliff, the use of AI and of automated and algorithmic decision-making has increased in organizations and societies. For example, the use of Decision Support Systems and ‘big data’ has limited the power of individuals in strategic decision-making and has (in some instances) replaced their tacit knowledge, experience, and expertise, on the assumption that calculated rationality leads to superior outcomes. Fernando Alonso lost the 2010 Formula 1 World Championship because a race-simulation algorithm produced a poor decision and the Chief Race Strategist did not have the power to participate in the decision-making or to overrule the algorithm; Alonso and Ferrari lost the championship, and the Chief Race Strategist lost their role[18].

The issue of tacit knowledge encoded in AI is also demonstrated by the challenges of developing decision-making algorithms for autonomous cars, where decisions need to be predetermined and the contextual interpretation of a situation is currently limited. Therefore, how AI is used needs considerable thought and consideration: in its current state it does not have sufficient knowledge capabilities, and its current use hampers knowledge, learning, decision-making, innovation, and society.

In the context of a digital economy, AI and automation could advance the power of the influential through control, surveillance, monitoring, discrimination, information asymmetries, manipulation, ‘algorithmification’, and ‘datafication’. Such uses of AI, examples of which currently dominate, lead to exploitation, exclusion, marginalization, discrimination, and manipulation.

For example, Cambridge Analytica, which ran digital campaigning in an American presidential election, arguably manipulated people’s opinions and their votes, voting intentions, or behaviors through the provision of filtered information. AI was used to exploit people’s practices and opinions, manipulating the vote as a result. Other examples of exploitation are the automation and ‘algorithmification’ of influential technology organizations that collect and exploit data, eliminate competition, and coerce other organizations to follow their algorithms[19]. These influential technology organizations may therefore further consolidate and increase their power, given their technology leadership and the opportunities for exploitative practices.

AI could lead to emancipation through empowerment, autonomy, inclusion, participation, and collaboration. However, such examples are scarce, and the emancipatory uses and outcomes of AI remain limited. Organizations, developers, governments, workers, and societies need to collaborate on determining how AI systems are developed and used so that they enable emancipation and empowerment.

Boyka Simeonova is an Assistant Professor in Information Management at Loughborough University. Boyka is the Director of the Knowledge and the Digital Economy Research Network at the university and the Deputy Director of the Centre for Information Management.

[1] Fourth Industrial Revolution — Wikiwand

[2] Universal basic income — Wikiwand

[3] Weak AI — Wikiwand

[4] Artificial general intelligence — Wikiwand

[5] Superintelligence — Wikiwand

[6] Technological singularity — Wikiwand

[7] Gerhard Lenski — Wikiwand

[8] Auguries of Innocence — Wikipedia

[9] Food | United Nations

[10] This is how AI could feed the world’s hungry while sustaining the planet (weforum.org)

[11] Andrew Ng — Wikipedia

[12] Home — DeepLearning.AI

[13] Human-in-the-Loop Machine Learning: Active Learning and Annotation for Human-Centered AI

[14] Human Compatible: AI and the Problem of Control

[15] Stephen Hawking warns of dangerous AI — BBC News

[16] Forsythe, D.E. (1993). The construction of work in AI. Science, Technology, & Values, 18(4), 460–480

[17] Simeonova, B., & Galliers, R.D. (2022). Power, knowledge and digitalisation: A qualitative research agenda. In Simeonova B. & Galliers R.D. (Eds.), Cambridge Handbook of Qualitative Digital Research. Cambridge University Press.

[18] Aversa, P., Cabantous, L., & Haefliger, S. (2018). When decision support systems fail: Insights for strategic information systems from Formula 1. The Journal of Strategic Information Systems, 27(3), 221–236

[19] Naidoo (2019). Surveillance giants. Amnesty International.

Excerpt from 60 Leaders on AI (2022) — the book that brings together unique insights on the topic of Artificial Intelligence — 230 pages presenting the latest technological advances along with business, societal, and ethical aspects of AI. Created and distributed on principles of open collaboration and knowledge sharing: Created by many, offered to all; at no cost.
