The development of artificial intelligence is expected to change our lives in many ways in the coming years. As with any new technology, it brings numerous challenges for those who will use it.
Here are some of the challenges that organizations, at home and around the world, will have to address for artificial intelligence to develop properly and be accepted by the public.
Operationalization of AI
In recent years, billions of dollars have been invested in the research and development of artificial intelligence solutions.
However, according to a study published in 2019, a large proportion of organizations that have invested considerable amounts of money are struggling to reap the expected benefits.
Although technologically impressive, many of the solutions developed struggle to bring added value to companies and meet a clear business need.
For Mehdi Merai, CEO and co-founder of the Montreal company Dataperformers, the operationalization of artificial intelligence is the industry’s main challenge for 2021 and the following years.
According to him, in recent years, a lot of effort has been put into researching and developing very advanced artificial intelligence solutions. However, these solutions are not always utilized in a business context.
For Mr. Merai, more than ever, it is time to move from the abstract to the concrete.
“Don’t aim to create the perfect solution before you market it. Instead, companies need short-term solutions to deal with specific issues. They can rarely afford to wait several years before obtaining them.”
According to him, anything to do with supply chain automation could benefit from artificial intelligence solutions in the short term.
AI Governance and Ethics
Artificial intelligence will affect a wide range of actors in society. As a result, the organizations that operate it will need a governance framework. This is a major challenge, as the issues to be addressed are broad.
This includes determining who should be responsible for artificial intelligence processes within an organization, what their specific roles will be, and what their responsibilities will be.
The question of who will be accountable if an AI solution causes significant damage is a sensitive topic that mixes ethical and legal issues.
And since massive amounts of data lie at the root of artificial intelligence, organizations must ask themselves several questions about how these data are managed.
For example, how and where will this data be stored? Data quality must also be ensured, in particular by minimizing the well-known biases in decision algorithms, a subject that has made headlines several times in recent years.
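As a minimal illustration of what measuring such bias can look like in practice, the sketch below computes the demographic parity difference, one common fairness metric: the gap in positive-decision rates between two groups. The metric choice, function names, and data here are illustrative assumptions, not drawn from the article.

```python
def positive_rate(decisions):
    """Fraction of decisions that are positive (1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive-decision rates between two groups.

    0.0 means both groups receive positive decisions at the same rate;
    larger values indicate a larger disparity.
    """
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Illustrative data: 1 = loan approved, 0 = loan denied.
group_a = [1, 1, 1, 0, 1]  # 80% approval rate
group_b = [1, 0, 0, 0, 1]  # 40% approval rate

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.2f}")
```

A governance plan could track a metric like this over time and flag algorithms whose disparity exceeds an agreed threshold.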
A governance plan must also include different performance measures to be able to assess its effectiveness and correct shortcomings.
Trust in Artificial Intelligence
In recent years, various topics related to artificial intelligence have captured the public imagination, for better and for worse.
For all these reasons, it is imperative that the public be able to trust solutions involving artificial intelligence.
The Model AI Governance Framework, published by the Personal Data Protection Commission (PDPC) of Singapore, states that any decision-making process involving artificial intelligence should be easily explainable, transparent and fair. This would be one of the keys to increasing citizens’ trust in these technologies.
To foster this sense of trust, organizations will have to implement a range of measures.
In particular, they will have to provide as much information as possible to their customers. For example, it is important for an Internet user to know that they are chatting with an automated conversational agent (chatbot) rather than with a human. They must also know how the information recorded during this exchange will be used.
Finally, all communications to the public must be conveyed in language that is comprehensible to all, even if the subject is sometimes complex and abstract.