Fact-checking a finite, limited universe… should prioritise claims risking public interests: Logically CEO

"One of the biggest challenges in the misinformation space is the speed and scale with which content is distributed online," tells Lyric Jain, founder and chief executive officer of Logically.

Social media companies across the world are under intense public and regulatory scrutiny for coming up short on checking misinformation, fake news and hate speech on their platforms. At such a time, companies like Logically, which use AI (artificial intelligence) and ML (machine learning) along with human reviewers, become all the more important. That, however, is easier said than done, since the speed at which misinformation and fake news spread can be very challenging to tackle, Lyric Jain, founder and chief executive officer of Logically, told The Indian Express in an interview. Edited excerpts:

How does fact-checking using AI and ML work?

One of the biggest challenges in the misinformation space is the speed and scale with which content is distributed online. And that is where automated methods of detecting misinformation and fact-checking become really powerful.

So the way it works is we look at an underlying claim, break that down to understand what specific assertion is being made, and then gather evidence from the public record to establish what context could help determine whether something is true or misleading. That is the way automated fact-checking is used to amplify the work of humans during the first few minutes or hours.
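
Expressed as code, the sequence Jain outlines (pull out the checkable assertions, then gather public-record evidence around each one) might look roughly like the Python sketch below. It is only an illustrative skeleton; the Claim structure, the sentence-splitting heuristic and the toy public-record lookup are hypothetical stand-ins, not Logically's system.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One checkable assertion pulled out of a post or article."""
    text: str
    evidence: list[str] = field(default_factory=list)

def extract_claims(content: str) -> list[Claim]:
    """Naively split content into sentences and keep the checkable ones.
    A production system would use NLP models to identify assertions."""
    sentences = [s.strip() for s in content.split(".") if s.strip()]
    # Toy filter: opinions and predictions are not checkable claims.
    return [Claim(text=s) for s in sentences
            if not s.lower().startswith(("i think", "i believe"))]

def gather_evidence(claim: Claim, public_record: dict[str, str]) -> Claim:
    """Attach any public-record entries that share terms with the claim."""
    terms = set(claim.text.lower().split())
    claim.evidence = [doc for doc in public_record.values()
                      if terms & set(doc.lower().split())]
    return claim

if __name__ == "__main__":
    post = "The city reported 40 flood deaths last week. I think the mayor lied."
    record = {"gov-bulletin": "City bulletin: 12 flood deaths reported last week."}
    for claim in extract_claims(post):
        print(gather_evidence(claim, record))
```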

What are the objective parameters on how disinformation is judged?

One important classification here is between disinformation and misinformation. Misinformation is a gray area. But when it comes to disinformation, there is intent. Usually what we look at isn't the content, it is the tactics and techniques that the agent behind the disinformation campaign is using. Are they using bots or coordinated behavior across inauthentic accounts? Those are the kinds of methods that we actually look out for. That is where it becomes quite objective, because we are looking out for methods, not so much the content. You first just look at the methods and don't really focus on the message.

Fact-checking is a very finite, limited universe. We cannot fact-check future predictions or opinions. I think what we should be prioritising is where any of these claims might lead to risks to public interests such as health, safety, election integrity and national security.
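
The tactic-first approach Jain describes above can be illustrated with a toy heuristic: instead of reading the message, look for groups of accounts posting near-identical content within a short window, one common signal of coordinated inauthentic behaviour. The Python sketch below is an assumed simplification for illustration; real systems combine many behavioural features, and nothing here reflects Logically's actual detector.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def find_coordinated_accounts(posts, window_minutes=10, min_accounts=3):
    """Flag groups of accounts posting identical text within a short window.

    `posts` is a list of (account, text, timestamp) tuples. This is a crude,
    illustrative signal; real systems also weigh posting cadence, account age,
    follower graphs and other behavioural features.
    """
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text.strip().lower()].append((account, ts))

    flagged = []
    window = timedelta(minutes=window_minutes)
    for text, entries in by_text.items():
        entries.sort(key=lambda e: e[1])
        accounts = {a for a, _ in entries}
        if len(accounts) >= min_accounts and entries[-1][1] - entries[0][1] <= window:
            flagged.append((text, sorted(accounts)))
    return flagged

if __name__ == "__main__":
    now = datetime(2021, 1, 1, 12, 0)
    posts = [("bot_a", "Vote is rigged!", now),
             ("bot_b", "Vote is rigged!", now + timedelta(minutes=2)),
             ("bot_c", "Vote is rigged!", now + timedelta(minutes=5)),
             ("user_x", "Lovely weather today.", now)]
    print(find_coordinated_accounts(posts))
```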

Would it be fair to say that fact-check is still limited to checking the pattern rather than the content?

Not so much. What I was describing was around disinformation. For fact-checking, we try and assess content. We try and understand a claim, and based on that claim, we query various open search engines such as Google and Bing, as well as closed sources such as UN and government data sets.

Strings of queries are automatically generated, and evidence is then gathered. All pieces of evidence are compared to each other and to the original claim, to assess which arguments are the most credible. That automation works around 70 per cent of the time, which is a lot higher than, say, 2-3 years ago. But for the other 30 per cent, when there is a highly novel claim and not a lot of information available, you need an efficient and reliable expert-led, human fact-checking process.
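
In code, that query-and-compare step might look like the toy ranking below: generate query strings from a claim, score each retrieved snippet against it by word overlap, and surface the most relevant evidence first. This is a bare-bones sketch under the assumption that production systems use far richer retrieval and stance-detection models; the function names and the scoring rule are illustrative only.

```python
def generate_queries(claim: str) -> list[str]:
    """Build a few simple query strings from a claim (illustrative only)."""
    words = [w for w in claim.lower().split() if len(w) > 3]
    return [" ".join(words), " ".join(words[:4]) + " fact check"]

def score_evidence(claim: str, snippet: str) -> float:
    """Crude relevance score: fraction of claim words present in the snippet.
    A production system would use trained models for relevance and stance."""
    claim_words = {w for w in claim.lower().split() if len(w) > 3}
    snippet_words = set(snippet.lower().split())
    return len(claim_words & snippet_words) / max(len(claim_words), 1)

def rank_evidence(claim: str, snippets: list[str]) -> list[tuple[float, str]]:
    """Return evidence snippets ordered from most to least relevant."""
    return sorted(((score_evidence(claim, s), s) for s in snippets), reverse=True)

if __name__ == "__main__":
    claim = "Drinking hot water cures the virus"
    snippets = ["Health agency: no evidence hot water cures the virus",
                "Recipe: how to make the perfect cup of tea"]
    print(generate_queries(claim))
    for score, snippet in rank_evidence(claim, snippets):
        print(f"{score:.2f}  {snippet}")
```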

AI and ML are also being used to create fake news and disinformation. How do you fight such campaigns?

Not enough people realise that there is another side to the table. Those actors get a lot of investment from nation states and from the private sector as well. We try to predict what is coming based on the technologies they seem to be using.

At the moment, there seems to be a lot of hype around deepfakes and synthetic videos. We found that state-sponsored campaigns use synthetic texts. We replicated that technology and ended up building a defence to it. Now we can detect when those campaigns are being built using those specific techniques.

Training an AI to detect fake news is very hard, as it works only 70-80 per cent of the time.
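
As a rough illustration of what a defence against synthetic text could look like, the sketch below flags text with heavily repeated word trigrams, one crude artefact some text generators leave behind. This is a toy assumption for illustration only; real detectors rely on language-model statistics and trained classifiers, and this says nothing about the specific techniques Logically built.

```python
from collections import Counter

def repeated_trigram_ratio(text: str) -> float:
    """Fraction of word trigrams that occur more than once in the text.
    Heavy phrase repetition is one crude, illustrative signal that text
    may be machine-generated; it is by no means a reliable detector."""
    words = text.lower().split()
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    if not trigrams:
        return 0.0
    counts = Counter(trigrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(trigrams)

def looks_synthetic(text: str, threshold: float = 0.3) -> bool:
    """Flag text whose repeated-trigram ratio crosses a hypothetical threshold."""
    return repeated_trigram_ratio(text) >= threshold

if __name__ == "__main__":
    spammy = "the election was stolen the election was stolen the election was stolen"
    print(looks_synthetic(spammy),
          looks_synthetic("The council met on Tuesday to discuss the budget."))
```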

What are the other challenges you face on a day-to-day basis?

As a technology company, we have access to data. The challenge is, what do we do with all this data? What kinds of measures are going to be effective but also proportionate, because we do not want to suppress free speech? I think that is the area where the challenge is new and moving.

Figuring out when a particular takedown is effective and when it turns someone into a digital monster is also a challenge. Those are areas we are working on quite intensively, and it remains something of an open challenge.
