Earlier this month, the Singapore Government announced the formation of an Advisory Council on the Ethical Use of Artificial Intelligence (AI) and Data as part of a wider push to support Singapore as a global hub for AI development and innovation. The council will be chaired by former Attorney-General VK Rajah, and will consist of representatives from technology companies and users of AI.
What is the role of the Advisory Council?
The Advisory Council will lead discussions and provide guidance to the Singapore Government on the responsible development and deployment of AI. It will work with key stakeholder groups on ethical issues arising from the use of AI. This will include working with industry to understand issues arising in the private sector; working with consumer advocates to understand consumer expectations in respect of AI; and working with the investment community to increase awareness of the need to incorporate ethical considerations in their AI investment decisions.
Why is Singapore forming such an Advisory Council?
AI is becoming an increasingly integral part of life in Singapore as the Government executes its “Smart Nation” initiative.
For example, local bank OCBC has developed an AI-based automated chat system called Emma that can communicate with customers and handle home loan enquiries; scientists at A*STAR’s Genome Institute of Singapore are using AI to pinpoint the roots of gastric cancer by scanning the entire genomes of a few hundred gastric cancer tumours; and researchers from the Saw Swee Hock School of Public Health and Singapore’s National Environment Agency have developed an AI agent to forecast dengue incidence up to four months ahead by learning the seasonal patterns of dengue cases over the last decade. These are just a few recent use cases as, with top-down support from the Government, Singapore embarks on an effort to position itself as a global centre of excellence in AI. Putting in place the Advisory Council, as part of a wider set of initiatives in the AI space, is the start of an effort to build a framework for trust in AI.
What does “ethics” mean in this context?
In making the announcement, the Infocomm Media Development Authority (IMDA) provided its own definition of “ethics” in the context of AI:
“Ethics encompasses issues surrounding fairness, transparency and the ability to explain an AI’s decision.”
This is a concept that will no doubt develop in the coming years, but by providing a definition and, in particular, emphasising the need for AI to be able to explain its decisions, the IMDA appears to be setting out in general terms what it considers to be “ethical” in the context of AI.
What else is Singapore doing to ensure the ethical use of AI?
The establishment of the Advisory Council is part of several new AI-related initiatives announced by the Government recently.
Singapore’s privacy regulator, the Personal Data Protection Commission (PDPC), issued a discussion paper that proposes an accountability-based framework in relation to the commercial deployment of AI. Amongst other things, the PDPC proposes that, in order for AI to benefit businesses and society at large, a set of principles needs to be in place to promote trust and understanding in the use of AI technologies. For example, according to the PDPC, decisions made by or with the assistance of AI should be explainable, transparent and fair so that affected individuals will have trust and confidence in these decisions, something that aligns with the IMDA’s definition of “ethics”. The idea is that the PDPC’s discussion paper will frame the Advisory Council’s deliberations.
In addition, the IMDA announced the establishment of a research programme on “the Governance of AI and Data Use” at Singapore Management University, with the goal of advancing and informing scholarly research on AI governance issues over the next five years.
Separately, Singapore’s financial services regulator, the Monetary Authority of Singapore (MAS), announced in April 2018 that it is working with stakeholders to develop guidance for the ethical use of AI in the financial industry. This guide will set out principles and best practices for the use of AI in that sector, with the aim of reducing the risks of data misuse.
Is this part of a broader global trend?
This announcement by the Singapore Government follows similar moves in other countries that have also started to develop guidance on AI usage. For example, Germany has drawn up ethical guidelines to govern the use of driverless cars, the UK has set up a new Centre for Data Ethics and Innovation to study and develop best practices for AI regulation, and the EU has recently appointed 52 experts to the new High-Level Expert Group on Artificial Intelligence to support the implementation of the EU Communication on Artificial Intelligence published in April 2018. By moving quickly and decisively, Singapore will be looking to give itself an edge in the trusted use of AI.
What does this mean for organisations doing business in Singapore?
As AI becomes increasingly ubiquitous, and as the “ethical” framework evolves, it seems likely that some level of regulation or guidance, whether binding or not, will be introduced in due course. Organisations that are active in the development or use of AI will be watching closely to ensure that any new frameworks introduced do not unduly stifle innovation in this rapidly evolving space. They will also be looking for opportunities to participate in discussions with the Advisory Council, the PDPC, the MAS, the IMDA and Singapore’s policy-makers in general to ensure that their voices are heard in the development of these new frameworks.