Leveraging the B Corp Framework to Build Ethical AI

How Stakeholder Standards Can Help Businesses Navigate the Risks of Artificial Intelligence

Emily Charry Tissier
B The Change

--

For some, artificial intelligence (AI) is a way to save the world; for others, it’s a black box they can’t trust. At its core, though, AI simply refers to the branch of computer science that strives to understand and replicate human intelligence in machines.

Like any significant progress in science and technology, AI promises substantial benefits. AI algorithms can monitor animal populations, detect tumors in lungs, reduce companies’ operational costs, make cities smarter, help us make management decisions, model and plan for future scenarios, and perform many other tasks.

However, as with any technology, where there are benefits, there are risks. For example, intelligent machines could disrupt the job market with excessive job automation, influence politics by spreading misinformation, exacerbate social inequalities, and violate our privacy.

In the wrong hands, AI systems could very well diverge from the goals humans set for them in the first place, especially when the people implementing these systems don’t understand AI’s risks and limited scope.

For instance, feeding biased data into an AI algorithm, or failing to consider all of a system’s ethical implications, could make a well-intended model very destructive. Recent examples include Amazon’s hiring tool, which discriminated against women because it had been trained primarily on men’s resumes; Microsoft’s chatbot Tay, whose Twitter-trained statements soon turned racist and inflammatory; and predictive-policing software, built on racially biased facial recognition algorithms, that AI researchers publicly spoke out against.
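To make that data-bias failure mode concrete, here is a deliberately simplified, hypothetical Python sketch. It is not drawn from any of the systems above, and every name and data point in it is invented: a naive scoring model "trained" on a skewed set of past hires ends up ranking two equally qualified candidates differently, purely because of which keywords happened to dominate the historical data.

```python
# Illustrative toy sketch only: how skewed training data can push a naive
# model toward biased scores. All records and keywords are hypothetical.
from collections import Counter

# Historical "hired" records, dominated by one group.
hired_resumes = [
    {"keywords": {"executed", "captained", "men's chess club"}},
    {"keywords": {"executed", "led", "men's soccer team"}},
    {"keywords": {"executed", "captained", "debate club"}},
    {"keywords": {"led", "women's coding club"}},
]

# "Learn" keyword weights from how often each keyword appears among past hires.
keyword_counts = Counter(k for r in hired_resumes for k in r["keywords"])
total = sum(keyword_counts.values())
weights = {k: count / total for k, count in keyword_counts.items()}

def score(resume):
    """Naive score: sum of the learned keyword weights."""
    return sum(weights.get(k, 0.0) for k in resume["keywords"])

# Two equally qualified candidates differ only in one gendered keyword,
# yet inherit different scores from the skewed history.
candidate_a = {"keywords": {"executed", "captained", "men's chess club"}}
candidate_b = {"keywords": {"executed", "captained", "women's chess club"}}
print(score(candidate_a))  # higher: "men's chess club" appears among past hires
print(score(candidate_b))  # lower: "women's chess club" was absent from the training data
```

The point is not the specific arithmetic but the pattern: the model faithfully reproduces whatever imbalance is present in its training data, and no amount of good intent in the deployment stage corrects for it.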

Although the public and experts alike agree on the urgency of creating standards for responsible AI development, government regulation has lagged behind. Meanwhile, some companies have adopted voluntary initiatives, such as the Montréal Declaration for a Responsible Development of Artificial Intelligence, that spell out best practices. While such initiatives are promising first steps, they are not legally binding.

The B Corporation Certification process is ideally situated to help the world’s new and existing corporations navigate this juncture and ultimately create a world where powerful advancements in AI benefit all stakeholders, not just shareholders.

Certified B Corporations are held legally accountable to standards on a wide array of environmental, community, and internal governance issues, all of which could benefit from advancements in AI. The B Corp Certification process provides a well-defined ethical and environmental framework for AI development and use, ensuring that companies benefit society at large, rather than creators and shareholders alone.

A small but diverse group of AI-oriented companies has signed on to the B Corp standards to help make AI for good a reality. For example, Delft Imaging is a Dutch medical diagnostics company and B Corp helping battle tuberculosis in developing countries. In the hospitality industry, UK-based B Corp Winnow helps commercial kitchens reduce food waste. Other B Corps leveraging AI include SkyHive, a Canadian company helping employers hire and reskill more effectively, and OneSeventeen Media, an American B Corp using AI to improve kids’ mental health and social-emotional well-being. By committing to hold themselves legally responsible for ethical and sustainable business, these companies are paving the way for a more transparent and accountable AI future. It is a start, but AI companies currently make up less than 1% of B Corps; we need to work together to change that.

As a B Corp, Whale Seeker has chosen to pursue its mission of developing AI that benefits communities and the environment, within a sustainable and ethically sound company.

To us, this means only working with clients or partners whose projects we view as aligned with our values and mission. It also means measuring our impact and setting objectives that work toward specific Sustainable Development Goals, in addition to abiding by the Montréal Declaration.

With the global AI market expected to grow at a compound annual rate of 40.2% from 2021 to 2028, it is imperative to amplify the conversation about what the B Corp framework can offer the AI industry. At Whale Seeker, we know firsthand that it’s possible to be for-good and for-profit, and we believe the B Corp framework lays the foundation for growing sustainably while minimizing AI risks.

B The Change gathers and shares the voices from within the movement of people using business as a force for good and the community of Certified B Corporations. The opinions expressed do not necessarily reflect those of the nonprofit B Lab.
