At our September Integrity Body meeting (aka our board), we focused on the use of AI in the next phase of Creating the Future’s mission: documenting what we’ve learned from the 30+ demonstration projects we’ve done, applying Catalytic Thinking in a variety of settings.
You can watch the whole meeting at this link. The following is a summary of the questions and comments in that discussion.
Background:
In 2023, Creating the Future charted a course for what’s next for our mission. That path was clear (and still is).
- Our mission: to make the powerful questions of Catalytic Thinking ubiquitous
- To accomplish that within the 10 years of our mission (year-end 2026), we would create a library of Catalytic Thinking resources, leaving that library as our legacy.
- We would also engage deeply with folks who are already teaching about social change, so that people can learn Catalytic Thinking where they are already learning.
- As part of that library, we will document the learnings from the 30+ demonstration projects we have done in our 10 years, creating case studies that folks can consider in their own work.
- The first step in that process is therefore to fundraise: gathering the resources needed for a team to do that documentation work – interviewing folks, writing up the case studies, and posting them to the library.
That resourcing piece has been a big part of our work this year, researching and talking with foundation leaders as we seek the right fit. Because Creating the Future’s work does not easily fit into most funding categories, this has not been simple even in normal times.
As the year has progressed, though, it has become clear we are no longer in normal times. In addition to the unsettling political environment, AI has burst onto the scene in a huge way. Our plans therefore need to adapt.
Those changes will not shift the intention of the project, nor the values and approaches we take for engaging our community members in this effort. And we will still build the project through the lens of Collective Enoughness.
The changes are all about the context within which the project will happen – and in part, the way the project will be executed.
All of that is what our Integrity Body discussed at our September meeting.
- What will sharing what we’ve learned make possible for all the individuals who will be affected?
- What can AI make possible for determining what we have learned, and documenting those learnings? (to use AI, or not to use AI, that is the question…)
- What conditions must be in place to achieve those results?
What could sharing what we’ve learned make possible for those individuals who may be affected?
- People do not know what they do not know. Often the systems we have been involved in (and oppressed by), and the resources we have been denied, have limited our awareness and expectations of what is possible. We may have been discouraged from exploring what we do not know. We may want to see this as a growth opportunity.
- One of the funders we interviewed suggested that even in the investigation and interviews, the interviewers themselves would benefit greatly from what they learn as they interview others.
- People could get a stronger sense of agency from the competencies they see and develop by being a part of this process.
- We discussed the difference between “developing agency” and “being empowered” as an example of the best practices that can develop when we use the Catalytic Thinking tools as a framework for discussing the work we do.
- The value of case studies – being able to share stories about how a particular organization like a food bank was able to use Catalytic Thinking to serve their community. It helps people to believe they can do the work also. The questions do the “heavy lifting” of building equity into the decision-making process!
- The library will contain 1) the theory of Catalytic Thinking, 2) the case studies that demonstrate the application of the theory, and 3) the pedagogy of ways to share the work. These “stories” serve as the core and can be adapted so that people can see how they have the “agency” to do this themselves.
- These stories open the door to greater possibilities.
- We may be limited in our thinking by previous images of how things should be. We may need to help people by not focusing on existing systems and by giving people the opportunity to be more imaginative and creative as they look for new ways to move forward.
- How each story is told needs careful attention, so that the stories are not dogmatic.
- This may make the work more challenging, because there may be resistance based on people’s existing expectations. The case studies should be sure to include the friction this creates, and the humility that may be needed to work through the discomfort of these stages of the work.
How might AI fit into making all of this possible? What can AI make possible for documenting what we have learned and sharing it?
- AI is like having infinite interns. How far can you trust them? They may be helpful in some ways, like thematic analysis with 99% accuracy, but there may be errors embedded in the work with no way to identify what they are or where they are.
- AI is not forthcoming about letting you know it may be wrong.
- Despite the terminology, “Artificial Intelligence” is really based on pattern recognition; we need to recognize that beyond that, it may be limited.
- Hildy and Dimitri spent time in CA with two users of AI for nonprofits (Beth Kanter and Gayle Roberts). Both use it in diverse ways. One use that may be valuable to future work is the ability to reframe a story and give someone a different point of view to work from.
- The importance of asking AI “Is there anything I missed?” after asking it a question – using the tool for exploring versus answering questions.
- People may be using AI in lazy ways that do not help them learn properly. Like many tools, it needs to be used properly.
- AI may allow us to synthesize vast amounts of work that we do not currently have the resources to synthesize. In doing so, though, we would be giving our data to others and supporting their endeavors.
- Might we be able to use a locally hosted AI model to keep our project data segregated?
“What’s the worst that can happen if we use AI to synthesize and share our work?” and “What’s the worst that can happen if we do NOT use AI for that work?”
- On the “worst” side, we have previously talked about data centers, and we have talked about giving our data to someone who may misuse it. Would this conflict with our values?
- Round Robin: Angie singled out the data center concern; John raised the error factor; Karl reminded us that we anthropomorphize these tools, treating them as if they involve relationships. Justin pointed out that we may be making assumptions that could challenge our conclusions about the value of Catalytic Thinking itself!
What is the worst thing that can happen if we do NOT use AI?
- The project might not get done at all.
- It would take much longer to get the work done.
- The cost in the budget would be much higher.
- Who will teach us how to use these tools? Will we use them properly? What are we missing, and what are the patterns that we do not see?
- This is not just about using AI. It may be much more about seeing the stages of the project and then asking which stages may benefit from AI. Also, what are we going to learn as we go along that may affect us and the work?
- An important question for us to define is: what parts of the work absolutely need humans?
- Another unknown for us is the assumptions that may be embedded in AI, and how they may impact how and where we can use this tool.
- What will it look like when we incorporate principles from Catalytic Thinking in this process? Our involvement may help us better understand the value of human contribution in this process.
- The need to look at the case studies and determine whether to weigh certain ones over others, based on representation and robustness. AI will add information based on the patterns it sees, but will not necessarily sum up what that information means. One of the challenges we have had using AI to synthesize interviews has been how to leave the interviewer’s voice out of the synthesis, if that is desired.
- Does the question itself become part of the summary? Or might the interviewer make a comment that then becomes part of the data just like the response does?
- The need to break the work down into parts, and to ask whether there is a role for AI that would make getting what we want from a particular part more effective.
Reflections: What is standing out from our conversation?
- John: There are no simple questions, and none of the AI tools are ubiquitous, so we may need different tools for the distinct parts we identify. The more clearly we can articulate the processes, the more effectively we will be able to evaluate them.
- Justin: It is important to think about where the human analysis (judgment) comes in. AI is an analysis tool, not an evaluation tool; evaluation requires human discretion.
- Angie: Still thinking about what we might lose with AI versus what we might gain.
- Karl: There are immediate benefits to the project that may be substantial, but there are worries about the social relationships and political economy of those gains. We may be benefiting a few rich actors at the expense of other members of our community, who may see shrinking opportunities (musicians, for example).
- Vu: Be careful not to overthink things. We may need to use AI to better understand it, and to resist and oppose those who may be using these tools in ways that are weaponized against us.
- Jessica: It matters who is using these tools and how they are using them. Artists can use these tools to create wonderful things, but the tools can also be used in ways that hurt artists. AI also misses context, which it may fake; the human element makes a difference for the work we are documenting. The misuses of AI are accelerating, and it may make sense to counter some of that by finding out how we can use these tools to accelerate our work as well.
- Dimitri: Regarding the idea of being replaced by AI, there is something irreplaceable about having real conversations with real people.
- Hildy: It looks like this is really another demonstration project. While it may be a little scary, it appears we agree that there are both benefits and risks to using AI, and that it is important that we understand and know how to use these tools. This will not be our last conversation on this topic.