The revelation that Russia had used Facebook advertising to try to influence the outcome of the 2016 US presidential election was a call to action. Over the past three years, our research has helped independent researchers design stronger defenses for the public. But conducting this basic science has been difficult, because we have faced fierce opposition from our primary study site: Facebook. The company’s closure of our own accounts last month was only the latest sign of its disdain for independent investigation.
In August, Facebook’s vice president of integrity proclaimed that the social network was “the most transparent platform on the internet.” In reality, the company has erected practically insurmountable obstacles for academics seeking data that can be shared and independently verified. To be fair, Facebook does provide a searchable ad library, and it allows authorized researchers to obtain restricted political ad data for study purposes. Researchers have also used Facebook’s business analytics tools to gain some insight into the popularity of unpaid content. But the platform not only restricts access to these tools; it also aggressively shuts down independent data collection efforts.
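To give a concrete sense of what this sanctioned access looks like, here is a minimal sketch of querying the public ad library through the Graph API’s ads_archive endpoint. The API version, field names, and parameters shown are assumptions drawn from public documentation rather than a verified reference, and an approved access token is required.

```typescript
// Hypothetical sketch of querying Facebook's public Ad Library via the
// Graph API "ads_archive" endpoint. Version number, field names, and
// parameters are assumptions, not a verified reference.
const AD_ARCHIVE_URL = "https://graph.facebook.com/v18.0/ads_archive";

async function searchPoliticalAds(accessToken: string, searchTerms: string) {
  const params = new URLSearchParams({
    access_token: accessToken,
    search_terms: searchTerms,
    ad_type: "POLITICAL_AND_ISSUE_ADS",
    ad_reached_countries: '["US"]',
    // Fields of the kind the library exposes: who paid for the ad
    // and when it ran.
    fields:
      "page_name,funding_entity,ad_delivery_start_time,ad_delivery_stop_time,spend,impressions",
    limit: "25",
  });
  const resp = await fetch(`${AD_ARCHIVE_URL}?${params}`);
  if (!resp.ok) throw new Error(`Ad Library request failed: ${resp.status}`);
  const body = await resp.json();
  return body.data ?? []; // one page of matching ads
}
```

Even under this sanctioned route, note what is missing: the library does not reveal how ads were targeted, which is precisely the data independent researchers most need.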
This is about more than a dispute between social media platforms and the people who study them. The spread of online disinformation has been dubbed an “infodemic”: like a virus, it grows, replicates, and harms society. Online misinformation makes people afraid to wear face masks or get vaccinated, helping COVID-19 spread. It sows doubt about the integrity of our electoral process. These harms can be reduced only if researchers have access to social media usage data and to the algorithms that shape what users see. Yet Facebook’s restrictions are hindering exactly this research.
Even when Facebook grants academics access to its data, it requires them to sign agreements that severely restrict how they may use and share it. The Facebook Open Research and Transparency (FORT) program, which lets researchers examine ad-targeting data, illustrates the problem. Although the program has been widely promoted, Facebook limited the data to a three-month window leading up to the 2020 elections. Researchers must agree to work in a closed environment and may not download the data for others to use. As a result, their findings cannot be reproduced by others, a vital part of scientific research.
Many scientists balk at these restrictions. Disinformation researchers at Princeton University scrapped a planned project because of problems with FORT. They were particularly concerned by a clause giving Facebook the right to review research before publication, which they believed would prevent them from sharing what they learned about ad targeting in the 2020 elections.
Our own team’s experience shows how aggressively Facebook fights independent sources of data about its platform. Last year we launched Ad Observer, a citizen-science browser extension that lets Facebook users share limited, anonymous information about the ads they see. The extension sends details about each ad, such as who paid for it and how long it ran, to our Cybersecurity for Democracy project. With data like this, scholars and journalists have shown how ad targeting can be used to spread disinformation. For ethical reasons, we do not collect personal information about the people who share the ads they see. Nor is there any scientific need to: everything required to answer our research questions is in publicly available data. Even this minimal information about ads has proved highly useful to our research, which is why independent auditing of social media platforms is essential. Using the data our volunteers collected, we have uncovered ads promoting QAnon conspiracy theories and far-right militias, and we have shown that Facebook fails to identify roughly 10 percent of the political ads that appear on its platform. We also make our data available to other researchers so that they, too, can benefit from it.
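As an illustration only (this is not Ad Observer’s actual code), a citizen-science extension of this kind might report ad metadata along these lines. The selectors, field names, and collection endpoint below are hypothetical placeholders; the point is that only the ad itself, never the volunteer’s identity, is reported.

```typescript
// Illustrative sketch of a content script that reports limited,
// anonymous ad metadata. Selectors, fields, and the endpoint are
// hypothetical placeholders, not Ad Observer's real implementation.
interface AdReport {
  advertiser: string; // "Paid for by ..." disclosure string
  adText: string;     // visible creative text
  observedAt: string; // when the volunteer saw the ad
}

function extractAdReport(adElement: Element): AdReport {
  return {
    advertiser:
      adElement.querySelector("[data-ad-disclosure]")?.textContent ?? "unknown",
    adText: adElement.querySelector("[data-ad-body]")?.textContent ?? "",
    observedAt: new Date().toISOString(),
  };
}

async function submitReport(report: AdReport): Promise<void> {
  // No user identifiers or cookies are attached: only the ad is reported.
  await fetch("https://example.org/ad-observer/submit", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}
```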
Last month, in retaliation, Facebook shut down our personal accounts, cutting Cybersecurity for Democracy off from even the platform’s limited transparency tools. The company claimed its actions were required by its consent agreement with the Federal Trade Commission concerning user privacy. The FTC responded quickly that Facebook was wrong to ban our research under the guise of the consent decree: “The consent decree does not bar Facebook from providing exceptions for good-faith research in the public interest.” The commission added that it supports efforts to shed light on opaque business practices, particularly around surveillance-based advertising.
Facebook has not reversed its decision to suspend our accounts, and it has other researchers in its sights as well. The Germany-based initiative AlgorithmWatch said in a statement that it had shut down its project monitoring how Instagram (owned by Facebook) treats political posts and other content after Facebook raised privacy objections to the research.
Where do we go from here? Obviously, we believe Facebook should reinstate our accounts and stop harassing other good-faith researchers. But in the long run, scientists cannot rely on the platforms we study to volunteer transparency. Researchers and journalists who study social media in privacy-protecting ways need stronger legal protections. Proposals to strengthen such safeguards are already under discussion in both the United States and the European Union. It is time for Congress to act.