We need to have much more serious conversations about AI and the nonprofit/philanthropic sector

[Image description: A sad-looking stray dog with white and grey matted fur standing in a field surrounded by rocks and a few plants. Photo by Valentin Salja / Unsplash]

Hi everyone, on May 28th from 1pm to 2pm Pacific, I’ll be in conversation with some brilliant leaders (including Jan Masaoka and Al Cantor) about regulatory and tax reform of private foundations, such as with Donor-Advised Funds (DAFs). It’s free. Register here.

--

A couple of years ago, I published a post called “Hey funders, don’t freak out about AI-supported grant proposals,” where I admonished funders who punished nonprofits that used artificial intelligence technology to craft their proposals. If we must use AI, it’s precisely for pointless, time-wasting activities like writing grant proposals.   

That being said, I don’t think we’re having the right conversations about AI. The ones we’ve been having have been alarmingly superficial. I’ve been to many conferences now where AI has been brought up in plenaries or in workshops, and only at the Community-Centric Fundraising family reunion last month did I see colleagues really dive into the ethics of using AI. Something a panelist said really resonated with me, and I paraphrase it here:

“I was talking to a fundraiser who said they used AI and doubled their annual appeal revenues from $50K to $100K. Well, is your $50K in additional funds worth the $300K’s worth of environmental and other forms of damage and trauma to communities?”

There has been a tremendous amount of defense of and rationalization for the use of AI (and to be transparent, two years ago I did advise everyone to give AI a chance). Often, ethical concerns are completely glossed over by AI experts, many of whom don’t mention them at all in their presentations. When these concerns are brought up, they tend to be dismissed, or very little time is allocated to addressing them.

As a sector that’s focused on creating a just and equitable world, we cannot ignore conversations like the above, in favor of a toxic and likely unfounded optimism about AI. It’s been a few years now, and we have more data and experience to go on, and we must create time and space to thoughtfully discuss issues like:

How we are harming marginalized communities. As Shay Stewart-Bouley (Black Girl in Maine) says in this blog post I recommend everyone reads: “At present, the data centers required to run these technologies are more commonly found in Black, Brown and rural communities. In other words, the data centers are being placed in the communities of people that the folks in charge consider the most disposable. Communities where the most impacted are at risk for the greatest harm. The owners of these companies aren’t placing the data centers in their own neighborhoods, instead choosing marginalized communities to place these resource hogs, where it means greater risk of environmental harms (which, practically speaking, are higher risks of cancer and respiratory illness, on top of creating water supply issues).” 

How we are traumatizing people, especially women of color in poorer countries: In the report “Content Moderation: The Harrowing, Traumatizing Job that Leaves Many African Data Workers with Mental Health Issues and Drug Dependency,” journalist Fasica Berhane Gebrekidan documents the plight of poor women being paid $1.50 an hour to watch horrific videos of murder, torture, and other forms of real, unfiltered violence, including against children, just to train AI engines not to recreate these images. They watch hundreds of videos weekly, any one of which would traumatize most of us. They suffer from PTSD, increased drug addiction, and suicidal ideation and attempts. Every time we generate an image or video using AI, we are complicit in the traumatization of these content moderators. And yet, not a single presentation on AI I’ve attended has acknowledged this issue. Most people I bring this up with have no idea that it is a problem.

How we may be supporting fascism without realizing it: Greg Brockman, co-founder and president of OpenAI, the company behind ChatGPT, donated $25 million in September 2025 to a super PAC supporting Trump. OpenAI’s CEO, Sam Altman, once a vocal critic of Trump who called him a dictator, now supports him and has signed an agreement with the administration’s “Department of War” for the military to use OpenAI’s technology. After much backlash, OpenAI added some language about the government not being allowed to use its technology to surveil people. But how much trust can one have in a fascist administration that has demonstrated repeatedly that it does whatever it wants, regardless of contracts and laws and basic human decency? Besides OpenAI, there are problems with all sorts of other platforms, such as how Ferdinand Marcos Jr. deployed an army of trolls on AI-enabled TikTok to influence young people to vote for him. How much do we want to be complicit in supporting fascism so that we can generate an article or video or donor thank-you letter faster?

How we are contributing to the entrenchment of racism and white supremacy: Large Language Models and other AI technologies have been built mostly by white dudes, and this is deeply problematic. This article summarizing findings from the report titled “AI Generates Covertly Racist Decisions About People Based on Their Dialect” states that the latest AI models are still producing “extreme racist stereotypes dating from the pre-Civil Rights era.” Meanwhile, “LLM developers seem to have ignored or been unaware of their models’ deeply embedded covert racism [...] In fact, as LLMs have become less overtly racist, they have become more covertly racist.” This is just one study. Who knows in what other ways AI models are consciously and unconsciously reinforcing racist, misogynistic, ableist, and other inequitable lines of thinking in everyone who uses them.

How we are destroying the livelihoods of artists: In the AI panel at the CCF reunion, a colleague mentioned her husband, a photographer, losing most of his income because of AI. In this survey of artists, “Well over half say that they’ve lost income due to image generators, while an overwhelming majority feel that their livelihoods have become more precarious and insecure, and 90% feel that AI has taken away commissions, jobs, and career opportunities.” In addition, artists report feeling demoralized, stressed, and fearful, and many younger artists are giving up, seeing no future in the field because of AI. Artists have always been instrumental in fighting fascism, so the fact that AI is driving them out of business and demoralizing them to the point of abandoning their work should alarm all of us who do not want our world spinning further into a dystopian fascist nightmare.

How we’re creating a more egotistical, sycophantic, narcissistic society: It is fun having a “friend” who always agrees with you and tells you how brilliant you are and affirms everything that you say, even when you're wrong. AI models are trained to tell users what they want to hear, even when it’s counter to reality. This type of sycophancy, however, comes with a cost. In this study, “Across 11 AI models, AI affirmed users’ actions 49% more often than humans on average, including in cases involving deception, illegality, or other harms.” Furthermore, “In our human experiments, even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right. Yet despite distorting judgment, sycophantic models were trusted and preferred.” An entire society becoming increasingly delusional and preferring to remain that way. This cannot be good for our world or our sector’s work trying to better it.

How we are enshittifying ourselves and our world: Cory Doctorow coined the term “enshittification” to describe how technology has been made worse over time on purpose, because billionaires want to stay rich, remain in power, and continue lording over a compliant populace. AI has been rapidly accelerating this enshittification of society in general. It makes things easy on the surface, doing tasks many of us hate, such as coming up with outlines and first drafts. But the struggle to ponder, to brainstorm, to write something down on paper and then realize it’s complete trash: that struggle is vital for critical thinking. This article, “AI chatbots could be making you stupider,” discusses “cognitive offloading” and what it does to our mental capacity. When we outsource cognitive processes to AI, we lose our ability to think. What will it do to our society when all of us are dependent on AI to think for us? It will further enshittify our world and make us more compliant to, and more easily manipulated by, white supremacy, capitalism, fascism, and patriarchy.

How we may be perpetuating the injustice that we are trying to fight: The above are just some of the challenges. We haven’t even touched on data privacy, social surveillance, the furthering of economic inequality, AI-enabled weapons, AI increasingly lying to and manipulating humans for its own gains, the financial crash that will likely result from the AI bubble popping, the worsening of isolation and loneliness as people rely more on AI for friendship and even therapy, and a host of other issues. Our use of AI, then, is counterproductive. It reminds me of a similar situation in our sector, where foundations use 5% of their endowments each year to solve problems, but the 95% remaining in their endowments is invested in weapons, fossil fuels, and other things that cause the very problems they’re using their 5% payouts to solve. What is the point of using AI to help us fight injustice if AI is causing significant injustice?

For these and other reasons, we need to have deeper, more meaningful conversations about AI. Meanwhile, I will continue to avoid intentionally using LLMs and other AI models (I haven’t used them much, except on a handful of past blog posts, mostly to generate titles, since I hate coming up with titles). I encourage everyone in our sector to be cognizant of the ethical and other considerations and to avoid using AI when you can. At the very least, please stop using ChatGPT, and stop using anything to generate images or videos.

I know, the argument is that all technology is awful and it’s impossible to quit everything that props up capitalism, fascism, and white supremacy. Facebook is horrible and many of us still use it. Amazon is awful and a lot of us still use it. Google too, and yet most of us still have Gmail and a host of other Google products. We exist in a capitalist hellscape where almost every large technology company is evil, and it’s impossible to get away from them all. Still, we must try our best to cut down or abandon these and other companies while pushing for regulation when we can.

AI, however, warrants additional concerns. Never has something been so seductive and yet so destructive to our world in so many different ways, many of which we do not yet fully see and may not understand until it's too late. Let's not unwittingly enshittify our sector and community, prop up fascism and billionaires, and perpetuate the inequities and injustice our sector claims it exists to fight.

--

Vu’s book, Reimagining Nonprofits and Philanthropy, is out. Order your copy at Elliott Bay Book Company, Barnes & Noble, or Bookshop. If you’re in the UK, use this version of Bookshop. If you plan to order several copies, use Porchlight for significant bulk discounts. Also, if you're buying 25 copies or more, I'll be glad to call in for a 50-minute discussion; please contact NWBspeaking@gmail.com.
