The development of artificial intelligence (AI) has accelerated significantly in recent years, writes Robin Fisher, Senior Area Vice President of Salesforce Emerging Markets. Intended to make our daily activities more convenient and straightforward, the technology has gradually been integrated into business and industry, as well as everyday life, in South Africa and around the world.

Chatbots have grown in popularity as tools that help agents deliver better customer service, while AI in marketing helps determine, for example, the best time to reach a customer via email or social media.

The growing presence of AI compels us to consider its political, economic, social, and, most importantly, ethical implications.

According to a recent report by Microsoft and professional services firm EY, nearly half (46%) of South African companies are already actively piloting AI. Almost all (96%) local businesses anticipate significant financial benefits from implementing AI solutions to optimise their operations.

It is therefore crucial to lay the groundwork now to avoid situations in which machines make decisions that harm individuals, for example by encoding biases that single out or exclude people based on race or gender. The most prominent challenge organisations face when implementing AI-driven solutions is avoiding the use of consumer data in ways that amplify biases and cause harm in the real world.

Failure to adopt ethical frameworks for how personal data is collected and processed can damage a business's reputation and cause direct, possibly irreparable, harm to consumers.

Here are three ways companies can start to build an ethical culture around their use of AI:

  • Create a diverse team. A team whose members bring different backgrounds and experiences is better placed to spot bias before it reaches production, exposes employees to other ways of doing things, and boosts creativity. Additionally, it is important to involve stakeholders at every stage of the product development lifecycle to mitigate the impact of systemic social inequalities on AI data.
  • Be transparent. To be ethical, you must be honest with yourself, your customers, and the community. This means understanding your values, determining who benefits and who pays, giving users control over their data, and listening to the opinions of others. If protecting user privacy is a core company value, every employee (not just senior executives) should be aware of it. Additionally, customers and the general public should know how specific systems affect their privacy. Finally, it is essential to give users the ability to correct or delete data collected about them. We must understand that access to customer data is a privilege, not a right.
  • Eliminate exclusion. We can do this by being more inclusive and avoiding bias at all costs, which requires caution when making decisions based solely on demographics. Even when customisation or targeting based on stereotypes might be accurate on average (e.g. targeting makeup ads to women), potential customers (e.g. men, non-binary or transgender people interested in makeup) may be overlooked or unknowingly offended. Once we identify this bias in business processes or decision-making, we must eliminate it before using that data to train other AI systems (a minimal sketch of such a check follows this list). Businesses can accomplish this by focusing on three key areas: employee education, product development, and customer training.
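
To make the last point concrete, here is a minimal sketch, in Python, of the kind of pre-training bias check a team might run on its data. Everything in it is an illustrative assumption: the record layout, the `gender` and `shown_ad` fields, and the 20% tolerance are hypothetical, not a prescribed Salesforce or EY method.

```python
# Hypothetical pre-training bias check: flag large gaps in outcome
# rates between demographic groups before the data is used to train a model.
from collections import Counter

def demographic_parity_gap(records, group_key, outcome_key):
    """Return the largest gap in positive-outcome rates between any
    two groups, along with the per-group rates themselves."""
    totals, positives = Counter(), Counter()
    for record in records:
        group = record[group_key]
        totals[group] += 1
        positives[group] += record[outcome_key]
    rates = {group: positives[group] / totals[group] for group in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative data: who was shown a makeup ad (1) and who was not (0).
records = [
    {"gender": "female", "shown_ad": 1},
    {"gender": "female", "shown_ad": 1},
    {"gender": "male", "shown_ad": 0},
    {"gender": "non-binary", "shown_ad": 0},
]

gap, rates = demographic_parity_gap(records, "gender", "shown_ad")
print(rates)   # per-group rate of receiving the ad
if gap > 0.2:  # assumed tolerance; set per use case and jurisdiction
    print(f"Warning: {gap:.0%} parity gap - review data before training")
```

A check like this is deliberately crude. In practice, teams would pair it with domain review, since some rate differences are legitimate while others signal exactly the exclusion described above.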

For business success and society's good, we need AI to be accurate, which means eliminating as many biases as possible. Organisations are responsible for ensuring fair and accurate data sets; it is an ongoing effort that requires awareness, investment, and commitment, but it is undoubtedly necessary.
