A humanoid robot with AI is demonstrated at CES in Las Vegas. Proponents of explainable AI say it has helped increase the effectiveness of its application in fields such as health care and sales. AFP

Here's how AI is explaining itself to people


Microsoft's LinkedIn boosted subscription revenue by 8 per cent after arming its sales team with artificial intelligence software that not only predicts clients at risk of cancelling, but also explains how it arrived at its conclusion.

The system, introduced last July and described in a LinkedIn blog post, marks a breakthrough in getting AI to “show its work” in a helpful way.

While AI scientists have little trouble designing systems that make accurate predictions about all sorts of business outcomes, they are discovering that, to make those tools more effective for human operators, the AI may need to explain itself through another algorithm.
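As a rough illustration of that pattern (not LinkedIn's actual system), a prediction model can be paired with a second, post-hoc step that ranks which inputs drove its output. The minimal Python sketch below trains a churn classifier on invented account features and uses scikit-learn's permutation importance as the "explaining" algorithm; all feature names and data are hypothetical:

```python
# Minimal sketch: a prediction model plus a second "explainer" algorithm.
# The feature names and data are hypothetical, not LinkedIn's.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # e.g. seat growth, logins, support tickets
# Synthetic "renewed" label driven mostly by the first and third features.
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# The second algorithm: shuffle each feature and measure the accuracy drop,
# which ranks how heavily the model relied on it.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["seat_growth", "logins", "support_tickets"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```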

The emerging field of “Explainable AI”, or XAI, has spurred big investment in Silicon Valley as start-ups and cloud companies compete to make opaque software more understandable. It has also stoked discussion in Washington and Brussels where regulators want to ensure automated decision-making is done fairly and transparently.

AI technology can perpetuate societal biases like those around race, gender and culture. Some AI scientists view explanations as a crucial part of mitigating those problematic outcomes.

US consumer protection regulators, including the Federal Trade Commission, have warned over the past two years that AI which is not explainable could be investigated. Next year, the EU could pass the Artificial Intelligence Act, a set of comprehensive requirements including that users be able to interpret automated predictions.

Proponents of explainable AI say it has helped increase the effectiveness of its application in fields such as health care and sales. Google Cloud sells explainable AI services that, for instance, tell clients trying to sharpen their systems which pixels (and, soon, which training examples) mattered most in predicting the subject of a photo.
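Google Cloud's attribution methods are proprietary, but the underlying idea of pixel importance can be sketched with a generic stand-in technique such as occlusion sensitivity: mask patches of an image and watch how the model's confidence in a class drops. The sketch below assumes only a `model_predict` function returning class probabilities; the toy model is invented for demonstration:

```python
import numpy as np

def occlusion_map(model_predict, image, target_class, patch=8, baseline=0.0):
    """Mask square patches and record how far the predicted probability of
    target_class falls; larger drops mean those pixels mattered more."""
    h, w = image.shape[:2]
    base = model_predict(image[None])[0, target_class]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = base - model_predict(masked[None])[0, target_class]
    return heat

# Toy stand-in model: scores an image by the brightness of its top-left corner.
def toy_predict(batch):
    score = batch[:, :16, :16].mean(axis=(1, 2))
    return np.stack([1 - score, score], axis=1)

heat = occlusion_map(toy_predict, np.random.rand(32, 32), target_class=1)
print(heat.round(2))  # the top-left cells dominate, as expected
```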

But critics say the explanations of why AI predicted what it did are too unreliable because the technology to interpret the machines is not good enough.

LinkedIn and others developing explainable AI acknowledge that each step in the process — analysing predictions, generating explanations, confirming their accuracy and making them actionable for users — still has room for improvement.

But after two years of trial and error in a relatively low-stakes application, LinkedIn says its technology has yielded practical value. Its proof is the 8 per cent increase in renewal bookings during the current fiscal year above normally expected growth. LinkedIn declined to specify the benefit in dollars, but described it as sizeable.

Before, LinkedIn salespeople relied on their own intuition and some spotty automated alerts about clients' adoption of services. Now, the AI quickly handles research and analysis. Called CrystalCandle by LinkedIn, it surfaces unnoticed trends, and its reasoning helps salespeople hone their tactics to keep at-risk customers on board and pitch others on upgrades.

LinkedIn says explanation-based recommendations have expanded to more than 5,000 of its sales employees spanning recruiting, advertising, marketing and education offerings.

LinkedIn boosted subscription revenue by 8 per cent after arming its sales team with artificial intelligence software that explains itself. AFP

“It has helped experienced salespeople by arming them with specific insights to navigate conversations with prospects. It’s also helped new salespeople dive in right away,” said Parvez Ahammad, LinkedIn's director of machine learning and head of data science applied research.

LinkedIn first provided predictions, without explanations, in 2020. A score that is accurate about 80 per cent of the time indicates the likelihood that a client soon due for renewal will upgrade, hold steady or cancel.

Salespeople were not fully won over. The team selling LinkedIn's Talent Solutions recruiting and hiring software were unclear on how to adapt their strategy, especially when the odds of a client not renewing were no better than a coin toss.

Last July, they started seeing a short, auto-generated paragraph that highlights the factors influencing the score.

For instance, the AI decided a customer was likely to upgrade because the company grew by 240 workers over the past year and candidates had become 146 per cent more responsive in the past month.

In addition, an index that measures a client's overall success with LinkedIn recruiting tools surged 25 per cent in the last three months.
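LinkedIn has not published how CrystalCandle composes those paragraphs, but one common design is to map the top-ranked factors behind a score to sentence templates. A hypothetical sketch, reusing the figures from the example above (all names, templates and attribution values are invented):

```python
# Template-based rendering of top feature attributions as an explanation.
# Feature names, attribution scores and templates are hypothetical.
def explain(attributions, facts):
    templates = {
        "headcount_growth": "the company grew by {v} workers over the past year",
        "candidate_response": "candidates became {v} per cent more responsive in the past month",
        "success_index": "its success index with recruiting tools rose {v} per cent in three months",
    }
    # Rank factors by attribution strength and keep the strongest three.
    top = sorted(attributions, key=attributions.get, reverse=True)[:3]
    return "Likely to upgrade because " + "; ".join(
        templates[f].format(v=facts[f]) for f in top) + "."

attributions = {"headcount_growth": 0.42, "candidate_response": 0.31, "success_index": 0.18}
facts = {"headcount_growth": 240, "candidate_response": 146, "success_index": 25}
print(explain(attributions, facts))
```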


Based on the explanations, sales representatives now direct clients to training, support and services that improve their experience and keep them spending, said Lekha Doshi, LinkedIn's vice president of global operations.

But some AI experts question whether explanations are necessary. They could even do harm, engendering a false sense of security in AI or prompting design sacrifices that make predictions less accurate, researchers say.

Fei-Fei Li, co-director of Stanford University's Institute for Human-Centered Artificial Intelligence, said people use products such as Tylenol and Google Maps whose inner workings are not neatly understood. In such cases, rigorous testing and monitoring have dispelled most doubts about their efficacy.

Similarly, AI systems overall could be deemed fair even if individual decisions are inscrutable, said Daniel Roy, an associate professor of statistics at the University of Toronto.

LinkedIn says an algorithm's integrity cannot be evaluated without understanding its thinking.

It also maintains that tools like its CrystalCandle could help AI users in other fields. Doctors could learn why AI predicts someone is more at risk of a disease or people could be told why AI recommended they be denied a credit card.

The hope is that explanations reveal whether a system aligns with concepts and values one wants to promote, said Been Kim, an AI researcher at Google.

“I view interpretability as ultimately enabling a conversation between machines and humans,” she said.

Updated: April 09, 2022, 4:30 AM