By akademiotoelektronik, 16/03/2022

Artificial intelligence mirrors the world around us, which is a challenge

Developers and data scientists are human, of course, but the systems they create are not – they are merely coded reflections of the human reasoning that goes into them. Ensuring that artificial intelligence (AI) systems deliver fair and unbiased results while still supporting sound business decisions requires a holistic approach involving most of the business. IT staff and data scientists cannot – and should not – be expected to act alone when it comes to artificial intelligence.

There is a growing desire to extend artificial intelligence beyond the testbeds and confines of systems development and integrate it into the business world. For example, at a recent panel at the AI Summit, held in New York in December 2021, panelists agreed that business leaders and managers should not only question the quality of decisions made by AI, but also become more actively involved in formulating them.

So how do you address biases and inaccuracies? Clearly, this is a challenge that must be taken up by all of a company's leaders. IT, which until now has borne most of the burden of AI, cannot do it alone. Industry experts advocate opening up AI development to more human engagement. "Tossing the burden onto IT leaders and staff falsely reduces a set of significant company-wide ethical, legal and reputational issues to a technical issue," said Reid Blackman, CEO of Virtue and advisor to Bizconnect. "Bias in AI is not just a technical problem; it is nested in every department."

Fighting bias

To date, not enough has been done to combat AI bias, Reid Blackman continues. "Despite the attention paid to biased algorithms, efforts to address this issue have been quite minimal." And eliminating biases and inaccuracies in AI takes time. "Most organizations understand that the success of AI depends on building trust with the end users of these systems, which ultimately requires fair and unbiased AI algorithms," says Peter Oggel, CTO and Senior Vice President of Technology Operations at Irdeto.

More needs to be done beyond the boundaries of data centers and analysts' workstations. "Data scientists don't have the training, the experience or the business knowledge to determine which of the incompatible fairness metrics is appropriate," says Reid Blackman. "Furthermore, they often don't have the clout to raise their concerns with the relevant senior executives or subject matter experts."

It's time to do more "to look at those results not only when a product is live, but also during testing and after any major project," said Patrick Finn, president and general manager of the Americas at Blue Prism. "They also need to train technical and business staff on how to mitigate bias within AI and within their human teams, to empower them to participate in improving the use of AI in their organization. This is both a top-down and a bottom-up effort, fueled by human ingenuity: removing obvious biases so that the AI does not absorb them and, in turn, slow down the work or worsen someone's outcomes. Those who don't think about fairness in AI aren't using it the right way."

Defining the notion of fairness

To meet this challenge, "you have to go beyond validating AI systems against a few parameters," explains Peter Oggel. "If you think about it, how do you define the notion of fairness? A given issue may have multiple points of view, each with a different definition of what is considered fair. Technically, it is possible to calculate metrics for datasets and algorithms that say something about fairness, but against what should this be measured?"
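Oggel's point can be made concrete. Fairness metrics are easy to compute but do not settle which definition of fairness applies. Below is a minimal sketch in Python of one such metric, demographic parity difference; the variable names and example data are illustrative assumptions, not from the article.

import numpy as np

# A minimal sketch of one common fairness metric: demographic parity
# difference, the gap in positive-prediction rates between two groups.
# All names and data here are illustrative assumptions.
def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-outcome rates between groups 0 and 1."""
    rate_0 = y_pred[group == 0].mean()  # positive rate for group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_0 - rate_1)

# Hypothetical model predictions and a binary protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # prints 0.5

Even with such a number in hand, the question Oggel raises remains: whether a gap of 0.5 is unacceptable, tolerable or even expected depends on how the business defines fairness for that particular decision.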

More investment needs to be made "in researching biases and understanding how to remove them from AI systems. The results of this research should be incorporated into a framework of standards, policies, guidelines and best practices that organizations can follow. Without clear answers to these and many other questions, companies' efforts to eliminate bias will be in vain," concludes Peter Oggel.

AI-related biases are often "unintentional and subconscious," he adds. "Educating staff about the issue will go some way toward combating bias, but it is equally important to ensure the diversity of your data science and engineering teams, provide clear policies and ensure adequate oversight."

Shorter-term measures

While opening up projects and priorities to the business takes time, there are shorter-term steps that can be taken at the development and implementation level. Harish Doddi, CEO of Datatron, advises questioning the assumptions behind AI models as they are developed.

During development, "machine learning models are tied to certain assumptions, rules and expectations" that can yield different results once in production, says Harish Doddi. "This is where governance is essential. Part of this governance is a catalog that keeps track of all model versions. The catalog must be able to track and document the framework in which the models are developed, as well as their lineage."
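As a rough illustration of the kind of catalog Doddi describes, the sketch below records a model's version, the framework it was built with, and its lineage. The field names and example values are hypothetical, not Datatron's actual product.

from dataclasses import dataclass, field
from typing import Optional

# A minimal sketch of a model catalog entry: it tracks each model version,
# the framework it was built in, and its lineage. All fields and example
# values are illustrative assumptions.
@dataclass
class ModelRecord:
    name: str
    version: str
    framework: str                        # e.g. "scikit-learn 1.4"
    training_data: str                    # pointer to the dataset snapshot used
    parent_version: Optional[str] = None  # lineage: the version this derives from
    assumptions: list[str] = field(default_factory=list)

catalog: dict[tuple[str, str], ModelRecord] = {}

def register(record: ModelRecord) -> None:
    """File a model version in the catalog, keyed by (name, version)."""
    catalog[(record.name, record.version)] = record

register(ModelRecord(
    name="credit-risk",
    version="2.1",
    framework="scikit-learn 1.4",
    training_data="s3://datasets/loans-2021-q4",  # hypothetical path
    parent_version="2.0",
    assumptions=["applicants report income truthfully"],
))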

Companies "must do more to ensure that commercial considerations do not override ethical considerations. It's not an easy balancing act," explains Peter Oggel. "Some approaches automatically monitor how models behave over time on a fixed set of prototypical data points. This verifies that the models behave as expected and respect certain common-sense constraints and known risks of bias. In addition, performing regular manual checks on sample data, to see how a model's predictions align with what we expect or hope to obtain, can help spot emerging and unexpected issues."
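One way to picture the automatic monitoring Oggel describes: re-score a fixed set of prototypical data points after every model update and flag predictions that drift past a tolerance. The model interface, prototypes and tolerance below are illustrative assumptions.

from typing import Callable, Sequence

# A minimal sketch of monitoring a model on fixed prototypical inputs:
# each prototype pairs an input with the score we expect for it, and any
# prediction drifting beyond the tolerance is flagged for manual review.
def check_prototypes(
    predict: Callable[[Sequence[float]], float],
    prototypes: list[tuple[Sequence[float], float]],
    tolerance: float = 0.05,
) -> list[str]:
    warnings = []
    for i, (x, expected) in enumerate(prototypes):
        actual = predict(x)
        if abs(actual - expected) > tolerance:
            warnings.append(f"prototype {i}: expected ~{expected:.2f}, got {actual:.2f}")
    return warnings

# Stand-in model; in practice `predict` wraps the deployed model.
stand_in = lambda x: 0.5 * x[0] + 0.3 * x[1]
prototypes = [([0.2, 1.0], 0.40), ([0.9, 0.1], 0.75)]
for w in check_prototypes(stand_in, prototypes):
    print("WARNING:", w)  # flags prototype 1: expected ~0.75, got 0.48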

Source: ZDNet.com
