It’s easy to see why AI adoption is skyrocketing in the business world. Artificial intelligence empowers organizations to reach new levels of efficiency, performance, and profitability, and it has already proven to be a powerful tool for generating insights and driving innovation.
But integrating AI into business operations also raises some concerns. One such concern is the impact it has on environmental, social, and governance (ESG) practices.
Consumer concern about the impact businesses have on the environment and society has grown considerably over the past decade. Recent studies show that 83 percent of consumers believe companies should invest in ESG best practices. To meet those expectations, businesses have sought to adapt their practices to support environmental and social initiatives.
The growth of AI requires that businesses carefully consider how the new technology could impact their ESG efforts.
AI and environmentalism
“One of the key concerns surrounding generative AI tools is their environmental impact,” says Ed Watal, founder of Intellibus. “Their growth is not only raising new concerns about environmental fallout but also bringing to the fore concerns that have long existed about the intersection of tech and the environment. The power of AI tools and their widespread adoption dramatically magnify those concerns.”
Watal is an AI thought leader and technology investor whose key projects include BigParser, an ethical AI platform and data commons for the world. In addition to leading Intellibus, which helps organizations engineer intelligent business platforms, Watal is the lead faculty of AI Masterclass, a joint offering of NYU SPS and Intellibus.
AI relies on data centers to store the massive datasets used to train it and to provide the high-speed networking needed to move that data. Some experts predict that the data demands of generative AI will require 50 times more processing power within the next five years.
The energy needed to power data centers is an ongoing ESG concern. Experts anticipate that data centers could account for 7.5 percent of overall energy consumption by 2030, with generative AI expected to account for approximately 1 percent of that total.
“While energy consumption by data centers is definitely an environmental concern, it is not the only one AI triggers,” Watal says. “Cooling systems in data facilities are also a huge energy draw. In fact, 70 percent of all data center energy consumption goes to cooling and water. A landmark order involving the Google Data Center in The Dalles, Oregon, brought to light that 25 percent of that town’s water supply was being consumed by the data center.”
AI and social issues
AI’s impact on social issues primarily centers on the potential for bias in the decisions it drives. If trained on biased data, generative AI can perpetuate and amplify historical prejudices and structural inequalities. Recent reports show nearly 75 percent of businesses are not taking steps to reduce this type of bias.
Hiring bias is a key concern that has emerged as AI integration has grown.
“Use of generative AI tools to power recruiting processes is increasingly becoming the norm,” says Watal. “Companies are using AI algorithms for all parts of the hiring process, including resume parsing, candidate screening, and even final decision-making recommendations. Because explainability, which involves providing a logical explanation of the rationale behind AI-driven decisions, is still a challenge, many businesses are unable to determine the presence or extent of racial, social, gender, or economic bias in the generative AI-powered screening or recommendation algorithms used for hiring.”
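Even before explainability matures, simple output audits can surface warning signs. The Python sketch below illustrates one common check, comparing selection rates across groups against the informal “four-fifths rule”; the data, group labels, and helper functions are hypothetical illustrations, and real-world audits are considerably more involved.

    # A minimal sketch of one common bias audit for AI-driven resume
    # screening: compare selection rates across groups and flag large gaps.
    # The data and helpers here are hypothetical illustrations.
    from collections import defaultdict

    def selection_rates(outcomes):
        """outcomes: iterable of (group, passed_screen) pairs."""
        passed = defaultdict(int)
        total = defaultdict(int)
        for group, passed_screen in outcomes:
            total[group] += 1
            passed[group] += int(passed_screen)
        return {group: passed[group] / total[group] for group in total}

    def disparate_impact_ratio(rates):
        """Lowest group selection rate divided by the highest; values
        under 0.8 are often flagged under the 'four-fifths rule'."""
        return min(rates.values()) / max(rates.values())

    # Hypothetical screening results: (group label, passed the AI screen?)
    results = [("A", True), ("A", True), ("A", False),
               ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(results)
    print(rates)                          # roughly {'A': 0.67, 'B': 0.33}
    print(disparate_impact_ratio(rates))  # 0.5, below 0.8, worth investigating

A check like this does not explain why a model favors one group, but it gives governance teams a measurable signal that something in the pipeline deserves scrutiny.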
AI and governance needs
Corporate governance involves the policies, processes, and controls businesses implement to ensure their operations are responsible and ethical. The development of AI policies that ensure accountability, transparency, fairness, and security has emerged as a key responsibility of corporate governance.
“Sound corporate governance requires organizations to have a careful understanding of the datasets used to train AI,” Watal explains. “Many generative AI platforms were trained on a public repository of internet data known as the Common Crawl. A study of AI models trained on the Common Crawl indicates the presence of social bias and negative sentiments that could lead to representational harm of specific groups.”
Governance should also include policies for employee use of generative AI. Organizations like Apple, JP Morgan, Verizon, and Amazon have banned workplace use of tools like ChatGPT, while others have limited the amount of data that can be shared with generative AI tools.
“Employees using generative AI tools without proper authorization or controls can lead to data leakage,” Watal warns. “Operational errors by AI researchers at Microsoft led to 38TB of data being accidentally exposed. Samsung employees accidentally exposed confidential data to generative AI platforms on three separate occasions.”
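Short of an outright ban, a common guardrail is to strip obviously sensitive material from prompts before they ever reach an external model. The Python sketch below illustrates the idea; the redaction patterns and the send_to_llm() stub are hypothetical, and production data-loss-prevention controls are far more sophisticated.

    # A minimal sketch of a pre-submission guardrail: redact sensitive
    # patterns from a prompt before it leaves the company network.
    # The patterns and the send_to_llm() stub are hypothetical.
    import re

    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    }

    def redact(text):
        """Replace sensitive substrings with typed placeholders."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    def send_to_llm(prompt):
        # Stub standing in for a call to an external generative AI service.
        print("Sending:", prompt)

    prompt = "Summarize the ticket from jane.doe@example.com; key sk-abcdef1234567890."
    send_to_llm(redact(prompt))
    # Sending: Summarize the ticket from [EMAIL REDACTED]; key [API_KEY REDACTED].

Pattern-based filters will never catch everything, which is why they are typically paired with the kind of access policies and usage limits described above.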
Despite these concerns, AI use continues to grow at a rapid pace. The challenge now before businesses is to harness the power of AI without compromising their ESG responsibilities. Striking that balance starts with acknowledging the risks of generative AI and committing to a precautionary approach to its deployment.