A few months ago, I noted the belated efforts of social media networks to reduce the flow of medical misinformation, and just last month I wrote about how Google had apparently changed its search algorithm to deprioritize health-related misinformation in its search results, resulting in certain quacks complaining that their websites’ search traffic had taken massive hits. Also in June, all-purpose health scammer and conspiracy theory creator Mike Adams was banned from Facebook, with a predictably amusing reaction on his part to the interruption of his grift by social media blacklisting. So it was with great interest that I read this press release from Facebook that someone sent me:
In our ongoing efforts to improve the quality of information in News Feed, we consider ranking changes based on how they affect people, publishers and our community as a whole. We know that people don’t like posts that are sensational or spammy, and misleading health content is particularly bad for our community. So, last month we made two ranking updates to reduce (1) posts with exaggerated or sensational health claims and (2) posts attempting to sell products or services based on health-related claims.
- For the first update, we consider if a post about health exaggerates or misleads — for example, making a sensational claim about a miracle cure.
- For the second update, we consider if a post promotes a product or service based on a health-related claim — for example, promoting a medication or pill claiming to help you lose weight.
We handled this in a similar way to how we’ve previously reduced low-quality content like clickbait: by identifying phrases that were commonly used in these posts to predict which posts might include sensational health claims or promotion of products with health-related claims, and then showing these lower in News Feed.
Posts with sensational health claims or solicitation using health-related claims will have reduced distribution. Pages should avoid posts about health that exaggerate or mislead people and posts that try to sell products using health-related claims. If a Page stops posting this content, their posts will no longer be affected by this change.
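Facebook hasn’t published the details of its ranking change, but the mechanism it describes—matching posts against a list of telltale phrases and demoting matches in News Feed—can be sketched in a few lines. Everything here (the phrase list, the penalty factor, the function names) is invented for illustration:

```python
# A minimal sketch of phrase-based demotion, assuming a phrase list
# along the lines Facebook describes. Phrases and weights are hypothetical.

FLAGGED_PHRASES = [
    "miracle cure",
    "doctors don't want you to know",
    "lose weight fast",
]

def demotion_score(post_text: str) -> int:
    """Count how many flagged phrases appear in a post."""
    text = post_text.lower()
    return sum(phrase in text for phrase in FLAGGED_PHRASES)

def feed_rank(base_rank: float, post_text: str, penalty: float = 0.5) -> float:
    """Halve a post's rank for each flagged phrase it matches."""
    return base_rank * (penalty ** demotion_score(post_text))
```

Under this sketch, a post hawking a “miracle cure” to help you “lose weight fast” matches twice and ranks at a quarter of its base score, while a post with no flagged phrases ranks unchanged—demoted, note, not removed, which matches what Facebook actually announced.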
I couldn’t help but remember, as I read this, that when I noted Facebook’s attempts to crack down on antivaccine misinformation by deprioritizing it in search results, I had considered it a good start but suggested that limiting these measures to just antivaccine misinformation was not good enough. After all, Facebook also serves as the organizing nexus for many antivaccine groups, including their harassment of pro-science voices online. I also had a number of questions. First, however, how did we get here?
One of the greatest changes I’ve seen in my nearly 30 years online has been the rise of social media platforms. When I first started online, it was basically BBS, email, and Usenet. Later, by the mid-1990s there were websites (most of which had no commenting sections), and I didn’t get into blogs until the early 2000s. These days, various messaging apps and platforms appear to be supplanting email, and the vast majority of people too young to have been online 15 years or more ago have no clue what Usenet was. (Does anyone even still use it?) Basically, you can think of Usenet as Reddit-like social media before there was social media. It was (is) a massive worldwide mass of discussion forums. What was very different from what we have now is that Usenet was decentralized, without a dedicated central server and administrator, and pretty much uncontrolled by anyone, other than Internet service providers, who decided which subset of the 100,000+ newsgroups they’d allow their users to access and how much storage space they would devote to each newsgroup. Oh, sure, people could set up moderated newsgroups for which members had to be approved, but most of Usenet was the Wild West. In contrast, today social media is centralized and controlled by a few companies: Facebook, Twitter, Google (which owns YouTube and, of course, controls the vast majority of the search engine business all of us depend on to find information online), and a handful of other, lesser players, plus the comment sections of various websites and blogs (which, increasingly, are being run by software from Facebook or other players like Disqus, and whose main sites tend to be run by WordPress or a couple of other companies) and some specialized web-based discussion forums.
There have been several consequences of this centralization of social media. One consequence is that it’s become much easier for people to post content that can rack up thousands (or millions) of views. With Facebook and YouTube, for instance, you can post video, image, or sound files for free and don’t have to worry about hosting your own site or paying for your own bandwidth. Apple and other services let you post audio files for podcasts for free. Even better, YouTube and Facebook provide ways for you to monetize your content by running ads, with the company getting its cut, of course. Another consequence is that clicks mean everything, because monetization depends on getting people to read, listen to, or watch your online content. In addition, because these platforms make it far easier to share media than ever before, it’s very easy for information (and misinformation) to “go viral” and spread exponentially, as more and more people share and reshare it. Old-timers might remember how complicated it was to share binary files on Usenet. (Anyone remember uuencode?) The binary file had to be encoded into ASCII, and then you had to have a program to decode the ASCII back to binary to retrieve the file. (Most of these files were pictures or sound files; video formats were not well standardized yet, and video files were just too massive.) It was worse than that, though. Because of character limits, the ASCII-encoded binary file often had to be split into many Usenet posts and then reassembled. Fun times, indeed.
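For the curious, the round trip just described—binary file to printable ASCII, split into post-sized pieces, then reassembled and decoded—looks roughly like this. Base64 stands in for the historical uuencode here, since both work on the same principle of packing three bytes into four printable characters:

```python
import base64

def encode_and_split(data: bytes, chunk_size: int = 60) -> list[str]:
    """Encode binary data as printable ASCII and split it into chunks
    small enough to fit into separate Usenet posts."""
    text = base64.b64encode(data).decode("ascii")
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def reassemble_and_decode(chunks: list[str]) -> bytes:
    """Rejoin the chunks in order and decode back to the original bytes."""
    return base64.b64decode("".join(chunks))
```

Get the posts out of order, or lose one, and the decode fails—part of why sharing a single picture could be such an ordeal compared to tapping a “share” button today.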
Of course, the huge problem that’s arisen is that the ease with which media, be it written, images, sound, or video, can be shared and monetized has been a boon for quacks and antivaxers, who routinely use Facebook, Twitter, YouTube, and other social media to spread their health misinformation and hawk their quackery, while monetizing their content—not to mention harassing their opponents as well. (It’s not just health misinformation, as the rise of Alex Jones and his ilk demonstrated.) Add to that the way Google has worked: ranking websites by the number and reputation of incoming links. It was basically a popularity and usefulness contest, with the most popular content deemed useful by Google’s metrics showing up on the first page. As a result, a whole lot of quack and antivaccine websites showed up way too high in Google search results for a whole lot of health topics, including vaccines; that is, at least until a month ago, when Google tweaked its algorithm and started enforcing the quality guidelines that it had for its human reviewers.
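Google’s current ranking is proprietary, but its published PageRank algorithm captures the “number and reputation of incoming links” idea: each page repeatedly passes a share of its own rank to the pages it links to, so a link from a well-linked-to page counts for more than a link from an obscure one. The toy web graph below uses invented site names purely for illustration:

```python
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    """Iteratively redistribute rank: each page splits its current rank
    among its outgoing links, damped toward a uniform baseline."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Hypothetical sites: everyone links to the authority, nobody links to
# the quack site, so the authority ends up ranked far higher.
web = {
    "health-authority.example": ["news.example"],
    "news.example": ["health-authority.example"],
    "blog.example": ["health-authority.example", "news.example"],
    "quack-site.example": ["health-authority.example"],
}
ranks = pagerank(web)
```

The catch the author describes follows directly: the algorithm measures popularity, not accuracy, so a heavily shared quack site can accumulate incoming links and rank just as well as a legitimate one.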
Which brings me back to the Facebook announcement. All I could think about was: How is Facebook going to implement this? Its announcement says that its method will identify phrases that are commonly used in posts promoting health misinformation to predict which posts might include sensational health claims or promotion of products with health-related claims and use them to rank these stories lower in the newsfeed. It all sounds good, but how? For a system like this to work, you either have to know these phrases already, in which case I’d wonder who is telling Facebook engineers and coders what these phrases are, or you have to have a collection of quack and antivax websites that Facebook engineers can analyze to identify common phrases that are much more common in such websites. Either way, Facebook needs people to do this, and these people need to be experts in what is and isn’t reliable health information. Does it have these people? If so, who are they?
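Facebook hasn’t said how those phrases get identified, but the second possibility raised above—analyzing a labeled collection of quack and reliable sources—has a standard, simple form: count word n-grams in each corpus and keep the ones disproportionately common in the flagged one. A hypothetical sketch (a real system would use trained classifiers, and the threshold here is arbitrary):

```python
from collections import Counter

def distinctive_phrases(flagged_posts: list[str], reliable_posts: list[str],
                        n: int = 2, min_ratio: float = 3.0) -> list[str]:
    """Return n-word phrases at least min_ratio times more frequent in the
    flagged corpus than in the reliable one (add-one smoothed)."""
    def ngram_counts(posts: list[str]) -> Counter:
        counts: Counter = Counter()
        for post in posts:
            words = post.lower().split()
            counts.update(" ".join(words[i:i + n])
                          for i in range(len(words) - n + 1))
        return counts

    flagged = ngram_counts(flagged_posts)
    reliable = ngram_counts(reliable_posts)
    return sorted(phrase for phrase, count in flagged.items()
                  if count / (reliable[phrase] + 1) >= min_ratio)
```

The code is the easy part. Deciding which posts belong in the “flagged” corpus in the first place is exactly where the medical expertise comes in—which is the author’s point.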
The thing is, the number of health care professionals who are experts in identifying dubious health claims is a pretty small percentage of the population of health care professionals. The percentage of physicians, for instance, who are skeptics and able to identify quack websites is fairly low. Of course, it’s likely that Facebook is only going after the most egregious examples, the sort of content that pretty much any physician or nurse should be able to identify, which is helpful, but that would still leave a lot of less obvious health misinformation on its platform. Maybe that’s enough. Maybe it’s the best that can be done.
On the other hand, let’s look at a couple of examples, first: MTHFR gene mutations. You can find a whole lot of websites claiming that MTHFR mutations predispose to vaccine injury. That’s not obviously quacky, and I daresay that most physicians who haven’t looked into the issue won’t immediately recognize the claims made about these mutations as the pseudoscience that they are. Ditto claims about mitochondrial disorders as predisposing to “vaccine-induced autism.” How about cancer quackery? Clínica 0-19, for instance, uses interventional radiology, chemotherapy, and a dubious immunotherapy to treat brainstem tumors. If you don’t know anything about the treatment other than what you read online, you might think it sounds reasonable, even if you are a physician. Even Stanislaw Burzynski might be hard for a lot of physicians to identify as a quack, particularly given all of his clinical trials. Most doctors assume that, if the FDA sanctioned a clinical trial of a treatment, there must be something to it, an assumption that, by the way, quack stem cell clinics have exploited too. Basically, what I’m saying is that a certain skillset is needed.
Of course, being as algorithm-obsessed as it is, it wouldn’t surprise me if Facebook is trying to do this with AI alone, without much in the way of input from knowledgeable medical professionals. Or it could just be relying on users to flag pages, links, and websites, which would be unlikely to work very well. I guess time will tell.
I also know that it will be interesting to see the reaction of prominent antivaccine quacks. For instance, Mike Adams, being Mike Adams, published one of his typical rants. It doesn’t have to do with Facebook, but it’s amusing nonetheless, as he claims that by the end of 2020 Google Chrome will “block all anti-cancer, ‘anti-vax’ and anti-GMO websites at the browser level”:
By late 2020, Google’s Chrome browser will automatically block all so-called anti-cancer, “anti-vax” and anti-GMO websites as part of Google’s collapse into a Monsanto/Pharma criminal cartel. Users who want to visit websites that expose the scientifically-validated risks and potential harm of vaccines, chemotherapy, glyphosate or GMOs will have to switch to alternative browsers and search engines, since the Google.com search engine is already in the process of eliminating all such websites from its search results. Within a year or so, the Google Chrome browser won’t even allow a user to visit sites like NaturalNews.com without changing the browser’s default settings. The only websites accessible through Chrome will be those which are “approved” to promote mass medication, chemotherapy, pesticides, vaccines, fluoride, 5G cell towers and other poisons that enrich powerful, globalist corporations while dumbing down the population
Wait, what? Of course, he doesn’t link to any primary source for his claim, so I highly doubt that it’s true. But even if it is true, Adams just admitted that users will still be able to access all those sites if they change a default setting. So you’ll have to change the browser’s settings or use a different browser to see low-quality quack content? Oh, the humanity! I also really doubt this bit:
According to our source, Google’s Chrome browser will also report back to Google when a logged in users attempts to access one of these sites, adding a “social penalty score” to that user, mirroring communist China’s social credit scoring system. This social scoring system will be later used by Google to deny services to users who are considered “untrustworthy” by Google.
This doesn’t even make sense. I’m not naive enough to think that Google would never consider such a system, but to what end? Knowing what people are looking for and linking it with other identifying information is already what Google does, but that doesn’t stop Adams from making stuff up out of whole cloth, speculating that Google and Facebook will team up to blackball users based on this.
However cranks like Mike Adams are reacting, there’s little doubt that, after ignoring the problem for so long that it might be too late to fix it, social media platforms have finally been reluctantly goaded into action to clean up their platforms. Facebook’s decision to deprioritize low-quality information is just the latest example. Will these actions by Facebook, Google, and other tech giants succeed in cleaning up their platforms and blocking their use to facilitate the spread of antivaccine views and other dangerous health misinformation? That remains to be seen.
In the meantime, if only there were a dedicated group of people who pay attention to misinformation—be it in medicine, science, financial scams, claims of the paranormal, conspiracy theories, or any other topic—people who are skeptics and know how to identify misinformation, scams, and conspiracy theories. If only there were annual meetings—(cough! cough!) NECSS next week and CSICon in October—of such people. If only there were groups—(cough! cough!) Guerrilla Skeptics on Wikipedia—who have been combatting misinformation on important information sources on the web for years. If only there were actual physicians combatting quackery and misinformation to whom Facebook, Google, et al. could turn.