Tech Matters: Challenges in the Ethical Governance of AI – Transparency and Accountability

20 Mar 2020

TECH MATTERS

 

CHALLENGES IN ETHICAL GOVERNANCE OF AI – TRANSPARENCY AND ACCOUNTABILITY

In the Tech Matters Series, Shaun Leong, Partner based in the Singapore office of global law practice Eversheds Sutherland, Eversheds Harry Elias, considers, in collaboration with leading corporate personalities from the global tech ecosystem, the latest in technology law and disputes, trending challenges faced by technology companies, and other technology-related issues.

In our first piece on the governance of Artificial Intelligence (“AI”) published in November 2019 titled “The Ethical Path to Singularity – Governance of Artificial Intelligence; and Eye on the Digital World”, we explored the governance frameworks proposed in various countries. 

For this piece, we are honoured to have collaborated with Ms Prerna Gandhi, Senior Corporate Counsel of Amazon Singapore, to examine the current trends in AI governance, and the challenges in the ethical governance of AI.

Note: The views and opinions expressed in this article are those of the authors and do not necessarily reflect the views of Amazon or Eversheds Harry Elias LLP.

What’s New in the Singapore Model AI Governance Framework (“Model Framework”)

In January 2020, Singapore released a second edition of its Model Framework. This second edition adds further considerations and refines the first edition so that it is easier for organisations to apply.

Alongside this second edition, an accompanying Compendium of Use Cases fleshes out real-world examples of how organisations across different disciplines have implemented or aligned their AI governance practices with the sections of the Model Framework, and how these organisations have benefitted from having accountable AI governance practices.

A self-assessment tool was also made available to help organisations gauge how closely their practices align with the Model Framework.

These updates improve the Model Framework and help it keep pace with developments in AI.

The abovementioned documents can be downloaded here.

Other Sources of AI Governance

‘Grassroots’ Level of Governance

In addition to government-led initiatives, organisations have taken charge of the governance process themselves.

Companies involved in AI technology have come together as primary actors and have launched initiatives that involve other stakeholders such as non-profit organisations, academia, research institutions and civil society.

One such initiative is the Partnership on AI, which brings together some of the biggest AI players worldwide, including Amazon. The founding mission of the Partnership on AI was to benefit people and society. It conducts research, organises discussions, shares insights, provides thought leadership, consults and communicates with relevant stakeholders, and creates educational material that advances the understanding of AI technologies.

Such initiatives recognise that it is unrealistic for blanket regulations to apply uniformly to companies in different sectors and of different sizes. Through these active collaborations, the governance of AI has seen remarkable progress.

International Alignment of Governance

While standardised regulations are generally frowned upon, there is an emerging realisation that self-governance may not be adequate and that international alignment may be needed. As global technology firms tend to operate in more than one jurisdiction, international consensus on AI governance would allow for reduced costs and fewer technical challenges.

An example of international common ground is the recognition and adoption of certain ethical principles to guide different jurisdictions in their eventual development and regulation of AI across diverse sectors. Singapore’s Model Framework provides a comprehensive collection of foundational AI ethical principles in its annexes, which it encourages organisations to incorporate into their own corporate principles. In February 2020, the US Department of Defense also officially adopted a set of ethical principles for the use of AI. There are remarkable similarities in the ethical principles emphasised by different jurisdictions, with accountability, fairness and transparency appearing as common threads.

However, for the reasons set out below, AI governance is still far from maturity, and more work needs to be done to establish a regulatory framework that does not impede technological advancement for the greater good.

Challenges Faced in Ethical AI Governance

The key challenges in AI governance that we have gathered can be organised into four levels:

  • First, the innate nature of AI technology makes ethical governance difficult to apply;
  • Second, companies’ accountability frameworks inherently lack teeth;
  • Third, collaboration between companies may break down; and
  • Fourth, the interests of those developing the technology may conflict with the interests of those to whom it is ultimately sold.

Nature of AI

The technological makeup of AI itself sometimes sits in direct tension with ethical principles.

One example is the contrast between Black Box AI and White Box AI. Black Box AI refers to opaque machine learning algorithms that offer few or no clues about how conclusions are reached. By contrast, a White Box AI’s learning algorithms are traceable, which provides transparency to its users.
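To give a concrete sense of the distinction, the short sketch below (illustrative only; it assumes the scikit-learn library and uses synthetic data rather than any real credit model) contrasts a “white box” linear model, whose learned weights can be read off and explained to a user, with a “black box” neural network that may perform just as well but offers no comparable explanation:

    # Illustrative sketch: an inspectable ("white box") model versus an opaque ("black box") one.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                   # synthetic features, e.g. income, tenure, utilisation
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "approve / decline" label

    # White box: a linear model whose coefficients can be read off and explained
    # ("income raised the score, utilisation lowered it").
    white_box = LogisticRegression().fit(X, y)
    print("feature weights:", white_box.coef_)

    # Black box: a multi-layer network that may fit the data just as well,
    # but whose internal weights give no comparable human-readable rationale.
    black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000).fit(X, y)
    print("accuracy:", black_box.score(X, y))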

In November 2019, Black Box AI drew attention through its use by a widely-used cashless payment provider: a high-profile user found that his credit limit on the platform was 20 times that of his wife, even though the couple filed joint tax returns and his wife had a better credit score than he did. The disparity could not be justified because the Black Box AI could not explain how the decision had been reached.

Investigations are underway into whether the algorithm breached the basic ethical AI principle of treating all users equally.

In addition, AI technology can carry unintentional human biases. These include racial and gender biases, and stem mainly from humans’ inherent biases being reflected in the training samples. For instance, natural language processing, an important component of voice recognition systems, has been found to exhibit gender bias.
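To illustrate how such bias can be surfaced, the sketch below shows one way an audit might probe word embeddings learned from text; the words and vectors here are hypothetical stand-ins for illustration, not taken from any real system:

    # Illustrative sketch: probing hypothetical word embeddings for gender associations.
    import numpy as np

    def cosine(a, b):
        # Cosine similarity: values near 1.0 mean the two vectors point the same way.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical 4-dimensional embeddings; real audits use vectors trained on large text corpora.
    emb = {
        "he":       np.array([0.9, 0.1, 0.2, 0.0]),
        "she":      np.array([0.1, 0.9, 0.2, 0.0]),
        "engineer": np.array([0.8, 0.2, 0.6, 0.1]),
        "nurse":    np.array([0.2, 0.8, 0.6, 0.1]),
    }

    # If occupation words sit closer to one gendered pronoun than the other, the training
    # text has encoded a gender association that downstream systems may reproduce.
    for job in ("engineer", "nurse"):
        print(job, "he:", round(cosine(emb[job], emb["he"]), 2),
                   "she:", round(cosine(emb[job], emb["she"]), 2))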

Corporate Accountability Framework

Corporate principles for ethical governance of AI also typically lack teeth, transparency, accountability and enforcement.

The evaluation process is rarely public, and companies typically adopt a “take my word for it” attitude. There is no insight into the decision-making, nor is there any obvious authority that can reverse such a decision.

A major contributing factor to this lack of teeth is the absence of any concrete regulations that would give companies and organisations an impetus to hold themselves accountable. While it is impossible to predict what the future will bring in terms of formalised government regulation of AI use, companies have to consider what such regulation might entail.

On the one hand, companies will have to allocate additional resources and introduce potentially complicated processes to comply with such regulations. On the other hand, companies that prioritise compliance and transparency stand to benefit significantly by enhancing their reputation and brand image in the eyes of the public.

In this era where companies’ assets are becoming increasingly intangible in nature, a company’s brand can be made or broken by public perception of its core ethical principles.

From Collaboration to Divergence and Competition

Companies have inherently different principles and operating values. These differences can often be traced back to cultural roots and can shape a company’s view on the ethical governance of AI. Companies based in China have collaborated with the Chinese government to develop facial recognition and predictive policing; in the West, however, such close ties with government agencies are often frowned upon for fear of human rights breaches.

The premise of capitalism can also fuel an eventual falling-out between companies. Companies compete in the commercial market and will inevitably aim to dominate the AI industry; as such, the forms of collaboration now proliferating in the industry could fall apart overnight.

Conflicting Interests – State Interests versus Company Interests versus Ethical Principles

The different actors in the AI industry have differing interests. The current frameworks focus strongly on transparency and accountability but do not address the underlying issue of conflicting interests.

Conclusion

Overall, ethical governance frameworks for AI are still relatively young and nothing is set in stone. Collaboration will maximise economies of scale and contribute to the long-term development of ethical governance frameworks.

 

For further information, contact:

Ms Prerna Gandhi

Senior Corporate Counsel, Amazon Singapore

Shaun Leong, FCIArb

Partner, Eversheds Harry Elias

Arbitrator, BAIAC

ShaunLeong@eversheds-harryelias.com

Selina Yap

Legal Associate, Eversheds Harry Elias

SelinaYap@eversheds-harryelias.com

 

For more information, please contact our Business Development Manager, Ricky Soetikno at rickysoetikno@eversheds-harryelias.com