AI Tools and Disinformation
Original: 11/01/21
Revised: no
One of the most pressing issues with AI right now is its use to falsify things, compounded by the lack of AI tools that provide effective deterrents against disinformation. This concern has acquired significantly more weight with the propagation of disinformation and conspiracy theories centered on two things, both from 2020. First, the Big Lie that the 2020 election was not lawfully won by Joe Biden. Second, the disinformation about covid vaccines and potential therapeutic treatments. Both have been debunked repeatedly by the most authoritative organizations and by the professionals directly involved with the security of the elections or the safety of the vaccines. Both are dangerously corrosive to our collective well-being, and they show that the biggest threats facing the US right now are not external.
Although I do not agree with many of the opinions expressed in the following clip, I believe the two individuals in it are genuine and able to say important things in a simple, engaging manner. To me, the value of the clip is in its appeal to calmness and attempted objectivity; it also shows a gentler and more apolitical way of thinking about disinformation and raises the important issue of our individual attitudes toward the common good versus free will and personal liberty. That pull-push is hard. Like these two individuals, I have no medical expertise and I suggest no medical treatments. I do believe, though, that the vaccines are the way to go, and although I am hesitant about enforcement versus persuasive information, I do not believe that risking sickness in order to gain natural (and supposedly stronger) immunity is worth it. So, nothing medical here, just a way to highlight the pull-push of personal responsibility versus personal freedom. This pull-push will also be quite relevant to how we use AI, and in particular how we use it to combat disinformation.
What are the facts about ivermectin's approval by the FDA? First, the drug has indeed been approved by the FDA for use in humans, so the whole "horse dewormer" saga is humorous but untrue. However, and this is the essential however ... it has not been approved for treating covid! OK, so having said all that, let me also paste a clip, this time with a medical professional, adding a scientific component to the issue. At the same time, notice that Rogan is quite articulate and his argument, humorous at first take, is a serious (and I believe genuine) one. This offers us an example of the difference between misinformation and disinformation. Rogan's idea may be classified as misinformation, but certainly not disinformation, i.e., the willful distortion of the truth. Keep this example in mind; we'll spell out this difference later.
Any progress in this area of AI (combating disinformation), being central to our future well-being, is to be commended. Social media especially has been used for making claims that are tortured at best, misleading on average, and dangerous at worst. People often wonder whether posted claims can be verified expeditiously and effectively. Neural networks achieve impressive performance on fact verification, but they do it in a black-box fashion, giving Yes/No answers without explainability. Yes/No without explanations will not do; people need to see why, so explanations must be given, and they must be given in natural language.
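To make the black-box point concrete, here is a minimal sketch of my own, not any production fact checker: an off-the-shelf natural language inference model (roberta-large-mnli is just one convenient choice, and the mapping of its labels to verdicts is my illustration) scores a claim against a piece of evidence and returns a bare verdict, with no rationale attached.

```python
# Minimal "black-box" fact verification sketch: an NLI model gives a verdict
# but no explanation of why. Model choice and label-to-verdict mapping are
# illustrative assumptions, not taken from any specific fact-checking system.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def verify(evidence: str, claim: str) -> str:
    """Return SUPPORTED / NOT ENOUGH INFO / REFUTED -- with no rationale."""
    inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze()
    # roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment
    labels = ["REFUTED", "NOT ENOUGH INFO", "SUPPORTED"]
    return labels[int(probs.argmax())]

print(verify("The FDA has approved ivermectin for certain parasitic infections in humans.",
             "Ivermectin is approved by the FDA for treating covid."))
```

The verdict comes out of a softmax over three labels; nothing in the output tells the reader which words or facts drove the decision, which is exactly the gap the explanation-oriented systems below try to close.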
On the other hand, we do have formal (mathematical) proof assistants that can output the full array of deductions for a given factual sentence relative to a knowledge domain, but they are extremely slow. So we need systems that combine these two techniques. ProoFVer is a fact verification system based on natural logic, with explainability and good performance. It is joint work between the University of Cambridge and Facebook. As of now, such systems are only used to assist humans with fact verification, but the time will come when they do it all, without humans in the decision loop.
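To give a flavor of how a natural-logic proof can double as an explanation, here is a toy sketch; it is not ProoFVer's actual code. A proof is a sequence of (claim span, evidence span, natural-logic operator) steps, and a small finite-state automaton composes the operators into a verdict. The operator names, the transition table, and the sample proof below are simplified illustrations of mine; the paper's exact formulation may differ.

```python
# Toy sketch of the natural-logic idea behind explainable fact verification:
# the proof steps themselves are the human-readable explanation, and a small
# finite-state automaton composes them into a verdict. The transition table
# is a simplified illustration, not the one used by ProoFVer.

SUPPORTS, REFUTES, NOT_ENOUGH_INFO = "SUPPORTS", "REFUTES", "NOT ENOUGH INFO"

# Simplified composition rules: current verdict state x operator -> next state
TRANSITIONS = {
    SUPPORTS:        {"EQUIV": SUPPORTS, "FWD_ENTAIL": SUPPORTS,
                      "NEGATION": REFUTES, "ALTERNATION": REFUTES,
                      "INDEPENDENCE": NOT_ENOUGH_INFO},
    REFUTES:         {"EQUIV": REFUTES, "FWD_ENTAIL": REFUTES,
                      "NEGATION": SUPPORTS, "ALTERNATION": NOT_ENOUGH_INFO,
                      "INDEPENDENCE": NOT_ENOUGH_INFO},
    NOT_ENOUGH_INFO: {op: NOT_ENOUGH_INFO
                      for op in ("EQUIV", "FWD_ENTAIL", "NEGATION",
                                 "ALTERNATION", "INDEPENDENCE")},
}

def verdict(proof):
    """Compose (claim_span, evidence_span, operator) steps into a verdict,
    printing each step so the proof serves as the explanation."""
    state = SUPPORTS
    for claim_span, evidence_span, op in proof:
        state = TRANSITIONS[state][op]
        print(f"  '{claim_span}' vs '{evidence_span}'  [{op}] -> {state}")
    return state

# A hypothetical proof for the claim "Ivermectin is FDA-approved to treat covid"
proof = [
    ("Ivermectin is FDA-approved",
     "the FDA has approved ivermectin for human use", "FWD_ENTAIL"),
    ("to treat covid",
     "for parasitic infections, not for covid", "ALTERNATION"),
]
print(verdict(proof))  # prints each step, then REFUTES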
Here is an article about ProoFVer: Cambridge U & Facebook’s ProoFVer: High-Performance Natural Logic-Based Fact Verification With Explainability. And here is a much longer discussion about disinformation and some ways to combat it, keeping in mind that AI tools are becoming the preferred tools with which to sow disinformation: