What Governments Need to Understand About Ethical AI
The growing application of artificial intelligence across the value chain shows that the technology offers enterprises real competitive advantages. Its meteoric rise, however, also opens the door to greater ethical risks – which means more effective governance must be put in place. Here are a few propositions governments can consider when assessing the scope of the problems associated with AI.
Primum non nocere. First, do no harm. So goes the modern version of the Hippocratic Oath, taken by doctors who know they will more than likely be involved in a patient’s death at some point. That involvement may stem from a mistaken diagnosis, exhaustion, or a variety of other factors, which naturally raises the question of how many such mistakes could be avoided. AI is taking up that challenge, and it shows promise. But just as with doctors, if you give AI the power of decision-making along with the power of analysis, it too will more than likely be involved in a patient’s death. When that happens, who bears responsibility? The doctor? The hospital? The engineer? The firm?
The answers to such questions depend on how governance is arranged – whether a doctor reviews each AI-provided analysis to check that it is correct, and whether the decision-making paths of each AI-driven diagnosis can be traced. It is paramount to remember that current attempts to automate and reproduce intelligence are