Do Your Own Research: Liber-net's Misinfo Grant Database
A conversation with Andrew Lowenthal about what the database shows and its limitations. Plus, one bizarre grant for barbers and hairstylists
Andrew Lowenthal is a familiar name to many Racket readers. He worked with Matt on the Twitter Files and played an instrumental role in developing Racket’s top 50 list of organizations that are part of the Censorship-Industrial Complex.
We’ve reconnected with Lowenthal in the last few weeks because of an impressive new searchable database developed by his nonprofit group, liber-net, on U.S. government grants awarded to groups to fight mis/disinformation. We first reported on the database here.
It breaks down government grants in the mis/disinformation space going back to 2010. It gives the amounts, of course, but also descriptions and commentary about the grants with links to official information on USAspending.gov, plus a ratings system of one to five flags for each grant. Roughly 100 of 867 grants since 2016 have at least four flags.
Lowenthal also writes about what he’s learning from the database on his Network Affects Substack. Last week, for example, he analyzed which organizations and countries received the most U.S. mis/disinfo grants.
For journalists, this database is a treasure trove of information. It doesn’t tell you the whole story about the grants, but it provides a thread for journalists and anyone else to do their own research. I asked Lowenthal about that:
I think there's actually several dozen stories in there for anyone. And this would be an invitation to anyone out there… what was this project? Who are these people? What were they doing during this time? Each one actually deserves a thousand words on it, or at least half of them do, because there's some really bizarre kind of thinking around a lot of the activities that people were undertaking.
One example of bizarre: a $637,000 NIH grant to the Icahn School of Medicine at Mount Sinai last year to address health misinformation in marginalized communities. A component of the grant is to develop an app “especially designed for hairstylists and barbers who are known influencers in communities of color. The app will give these community influencers easy access to understandable, reliable, and timely health infographics they can share with clients.”
Just what you want when you sit down for a haircut: your barber hitting you up with government-funded health care talking points. The grant was given a rating of five flags.
I reached out to the medical school’s project leader and the NIH representative listed on the grant, but did not hear back.
Here’s more of my conversation with Lowenthal about his database and about him, edited for clarity and brevity.
Greg Collard: How did you get involved in this type of work?
Andrew Lowenthal: I’ve been involved in it for a really long time because I essentially founded and ran an NGO that worked on these issues for almost two decades. But then there was this drift that I saw starting a little bit before [the first term of] Trump and Brexit, and then once Trump and Brexit happened, it really went into full swing. Essentially all the work drifted toward politics, speech repression, the safety culture, etc. A lot of eggshells got laid like landmines around the spaces that I was in, and once Covid hit, it really took off. And a lot of spaces that were meant to be defending free speech and expression, not only did they say nothing about the censorship during Covid, but it turns out they participated in the censorship.
GC: So you experienced censorship in the organization where you worked?
AL: Well, not in my organization, because I was the director. But I was self-censoring for sure because if I had been honest — or if I'd been direct about what I thought about the pandemic and the Covid censorship — probably half the staff would've left and probably half the funders would've stopped funding us.
GC: I worked in public radio for years, so I know about self-censoring for sure. So, this database, how did it come about and how long did it take to develop?
AL: Well, we first started with this policy project, which was about what a new administration could do to essentially dismantle the censorship industrial complex. That started in October last year, and in November we decided to build a database of all the federal grants. And then of course there was this huge two-week period where USAID and DOGE became kind of the talk of the town, but we weren’t actually ready for it. That's when it should have gone out or beforehand, but we were trying to be very careful about what we included and didn't include. Part of the later motivation was to counter what for us seemed like a little bit of [DOGE’s] carpet-bombing approach to defunding this stuff, which generates a huge amount of backlash. We want to provide something a little more precise.
GC: You recently wrote about the top 30 countries that received grants to address misinformation. One thing that surprised me is that Ukraine didn’t make the top 30 in either the total amount or the number of awards. Why is that?
AL: My theory is that a lot of it's hidden because of security considerations. Also, there are not a huge number of organizations that have the capacity to take money for this. The ones that do are really big and it's being included in much larger grants, where Ukraine [mis/disinfo] only represents a small percentage.
GC: This database is obviously a tool for journalists. Do you see it as a tool for the general public?
AL: Yes, to a degree, because I also think there are advocates and others out there that would find it useful to know who is actually getting the funding, and potentially more for journalists to kind of understand what was being funded. Because the other thing about this database is we're not saying this is all censorship work, which is why we have the ratings and the flags. If you look at some of the one-flag things, they're quite innocuous and you wouldn't kind of go, "Hey, that sounds like a deeply problematic project." We wanted to capture more broadly what was happening in the grants space. There are a lot of bad awards that are, I think you could say, advancing a kind of censorship system, but not all of it is. So you have to be quite careful to not throw the baby out with the bathwater.
But I think journalists, people who are free speech advocates, would definitely find it quite useful to get a clearer understanding of what actually happened. The narrative that this was all censorship work, that doesn't fit. And the narrative that huge amounts of this were going out to foreign initiatives: there's a certain amount, but it's not the majority of the money. And the narrative that it's all USAID I would say is also not correct. It's actually much more the Defense Department in terms of pure amounts of money, and it's the State Department in terms of number of grants.
And I should also say this is not a complete picture. For sure there's stuff that we missed. And there are a lot of grants that seem to us very banal, like combating misinformation around vaping. We didn't include those because we didn't think they seemed in any way controversial. There are 100 or more like that. If we had included them, it would skew the picture to suggest that maybe the majority of this stuff isn't deeply problematic, but there's a chunk of a third or more that is really a problem. I would hope that one of the reactions to this is, here's a more accurate tool that we can use to define problem areas and actors, rather than just let's destroy this entire department.
GC: The ratings liber-net gives to grants are subjective. Has there been pushback from any groups about how you rated them?
AL: Yes, they are definitely subjective and we're clear on what our methodology is. And essentially the more it's something that looks like censorship — so something that's flagging content for take-down, or something that's also involved in very high levels of surveillance of citizen speech — those go up to the top [and get rated four or five flags]. But it's subjective in the same way as someplace that rates restaurants. You're getting their version of what the best or worst restaurants are. We've basically gone and ranked what we think are the worst restaurants.
We did have a project reach out to us and argue that their rating was wrong. We listened to them and they made some good arguments, and we dropped them down a couple of flags from four to two. There are 27 with five flags and 77 with four flags. There are really bad ones, but I think it's more this whole kind of group-think that's happening about how to deal with a very messy internet. The overall response to a very turbulent information environment is to consult experts and impose a top-down cleanup operation as opposed to more grassroots and bottom-up solutions, which is a huge flip for that space. One of the things that attracted me to the digital rights space is that its ethos was very much about the people and grassroots. Early internet culture was actually generally a progressive, California-type culture that thought very highly of everyday people for the most part.
And this [new model] thinks extremely poorly of everyday people. It thinks everyday people are a really big problem. And so it's essentially extremely elitist in how it comes up with these solutions. At some point they went, “Oh, these kind of icky people have gotten on the internet and we don't like them.” Suddenly the “deplorables” got on the internet and they all changed their tune about how fantastic this was all going to be and how much this kind of large scale participation in the democratic process was going to be a good thing. And they went, “Oh, this is bad. Now we have to justify imposing order on the Internet.”