The use of Artificial Intelligence (AI) brings many opportunities to business, but it also presents its share of challenges. In fact, the larger the company, the greater the benefits and challenges of AI, as its uses become more complex.

Among the many challenges that exist, we have selected a few to address under the following headings: technological, human and ethical issues.

Technological issues

Validity of results

AI impresses with its ability to process vast amounts of data and extract a result in a matter of seconds.

But the validity of the results depends on both the quality and precision of the data that constitute its raw material and the representativeness of the available sample. An insufficient quantity of data on a subject, such as transfer of learning, or a non-representative sample, for example one in which women are underrepresented, will distort the results, producing errors and biases that may go unnoticed by the user.

This highlights both the limits of artificial intelligence and the importance of human capital. Far from submitting to the indiscriminate use of AI, people are invited to analyze, validate and control the data and processes involved in order to reap its full benefits.

Rapid evolution

AI technology is evolving very rapidly, producing an explosion of new tools.  

Choosing among the array of existing AI tools to meet an organization’s needs is in itself a major challenge for leaders and managers, who must also ensure that these tools integrate well and complement each other. For example, an organization needing to offer part of its distance learning in asynchronous format might have to select and integrate AI tools capable of creating summaries, translating text, simulating the human voice or proposing new images in a defined graphic style, all with a view to developing courses. The very rapid evolution of the technology compounds the difficulty, forcing leaders and managers to constantly re-evaluate choices of AI tools that have suddenly become inadequate or obsolete.

Finally, AI tools need frequent updates to remain effective, and they are not immune to bugs that can hamper human operations.

Human issues

Faced with such high-performance AI systems and tools, employees and managers alike may grow distrustful of a technology perceived as an unfair competitor challenging the value of their tasks and their productivity. There’s no denying it: job redesign is a reality in just about every sector.

But like it or not, AI is here to stay. And even when roles and responsibilities remain essentially the same, AI disrupts the way tasks are performed and consequently leads to a redistribution of the effort allocated to various types of tasks. For example, a trainer will retain the same roles and responsibilities within the organization, but the use of AI could mean putting less effort into developing course material and more into supporting the people being trained. That’s a good thing, you may say, and it’s true. But it doesn’t take away from the complexity of mastering the AI tools needed to develop courses and, above all, of changing one’s approach to coaching people who also use AI independently to learn.

And it has to be said that not all organizations, or their workforces, are well prepared for the adjustments this implies.

Training needs

The speed and depth of the changes brought about by the use of AI, in the redefinition of roles and responsibilities and in the execution of tasks, are creating training needs for all types of positions. This is also likely the case for our trainer, who will need training first to master the tools, and then to better develop courses and support learners.

Ethical issues

The ethical issues raised by the use of AI merit thoughtful consideration, with a view to establishing a clear course of action even before AI is implemented in an organization.


A single question put to an AI tool can, with disconcerting ease, validate all kinds of human perceptions, which are never free from bias. For example, in performance appraisal, a manager could use AI as readily to help employees develop their skills as to justify their non-retention for incompetence.

So how can we detect and minimize the risks of potential unethical use of AI?

Consent and confidentiality

AI tools are also capable of collecting personal information and even inferring it from known data. Not only must this information be protected by law, but the employee must be able to consent to its collection and use.

Let’s assume that the interactions of an employee chatting with a chatbot tutor during a training course are collected and analyzed by the training team and the management team. Has the collection been consented to by all parties? Are the reasons for collecting the data known and agreed to by all parties? Are they justifiable, as in the case of a manager who measures progress against expected competencies?


For that matter, did the employee even know that said tutor is a chatbot and not a real person? AI can blur the boundary between perception and reality to such an extent that it’s easy to abuse user trust. 

Above and beyond compliance with existing and still-emerging laws on the use of AI, in all the scenarios presented above, the potential for unethical use of AI requires the implementation of safeguards, such as the establishment of transparency rules.

Our solution

We offer you personalized support to review with you the technological, human and ethical issues raised by the use of artificial intelligence, as well as a tailor-made technological solution. 

Contact us to find out more.
