Positions, Season 2, Episode 1

A note on our cover image for this episode: “Erica” and “Derek” (named by ChatGPT) are DALL-E-generated images depicting what ChatGPT offered as a response to the following prompt: “How would you describe a profile for the ‘undecided voter’ going into the 2024 presidential election in the United States? Provide two: one leaning toward the democratic candidate and one leaning toward the republican candidate.” Image generated on May 13, 2025, using GPT-4o and DALL·E 3.
AI Literacy Won’t Save Democracy—But Democratic Imagination Might
By Stefania Milan
With AI-generated content flooding political discourse, calls for “AI literacy” are growing louder. Citizens, we are told, need to learn how to spot deepfakes, verify sources, and think critically about the information they consume. This is true—but also deeply insufficient.
As this episode of Positions makes clear, the “perma-crisis” we face is not just about disinformation or algorithmic manipulation. It is about something broader: a breakdown in trust, participation, and community, and in our collective ability to imagine a shared future. AI literacy, in the conventional sense, won’t fix that. To meet the moment, we must go deeper—reimagining how we learn, how we govern, and how we create knowledge together.
How do we build societal resilience in a time of political polarization, misinformation, AI-generated fakes, and algorithmic manipulation? I would like to offer two reflections that speak to a more systemic perspective, complementary to the overarching concern raised in this podcast, namely the need to rebuild the democratic imagination in the data-driven society. I first consider how people learn about technology and society, and how we might do so more effectively. Second, I reflect on the defensive attitude displayed by democratic institutions when it comes to the moral panics associated with the advance of AI in society. I conclude by calling for the integration of critical technological education with institutional reinvention—which might result in a renewed commitment to civic imagination and democratic renewal.
Let’s Stop “Teaching”; Let’s Start Co-Creating
Efforts to educate the public about AI—and how it shapes what we know and how we act—too often rely on a familiar but flawed model: experts teaching the uninformed. Yet research in behavioral economics and psychology suggests this approach rarely works and may even backfire, although the robustness of the evidence of this “backfire effect” remains contested.1 People generally don’t like being told they’re wrong, biased, or uninformed—especially when it comes to politics. They tend to seek out information that confirms their beliefs while dismissing contradictory evidence.2 As a result, traditional literacy interventions, including debunking and fact-checking, can entrench views rather than shift them.
We need a fresh approach: rather than fantasizing about teaching AI literacy in traditional ways, we should co-create it with the people it’s meant to serve. Echoing the spirit of Paulo Freire’s Pedagogy of the Oppressed, which framed education as a dialogical, liberating process fostering critical consciousness,3 and Maria Montessori’s emphasis on learning through experience and trusting the learner,4 AI literacy must be participatory and dynamic. It should treat people not as passive recipients of knowledge, but as active collaborators in making sense of the technologies shaping their lives. Education on technology must be lifelong, grounded in lived experience, and responsive to real-world concerns. It should be designed to empower—not to belittle or overwhelm.
Following Freire and Montessori, people learn not just through instruction, but by doing, discussing, and imagining together. If we want them to navigate the age of AI with confidence and a sense of ownership over our collective future, we must create spaces where they can do so on their own terms—spaces that empower rather than intimidate, include rather than alienate, and inspire rather than frustrate. While difficult conversations and moments of discomfort are necessary for growth, as Losh and Raley rightly note in this episode, we cannot afford to lose people along the way. Disagreement should be an invitation to deeper engagement, not a trigger for alienation.
This is why, at a very fundamental level, AI literacy must also be an exercise in hope. People need more than information—they need to believe they can shape the world, not just endure it. They need to feel that their voice matters. Citizen science can play a vital role, especially when paired with hands-on engagement and playfulness. Through participatory labs, role-playing, storytelling, scenario-building, and prototyping, citizens and policymakers can jointly explore risks and challenges—but also envision and test alternative futures. These are not just exercises in education—they are exercises in democratic imagination. They equip people to say not only “this is broken,” but “here’s what could be.” More importantly, they reaffirm that everyone still has a stake—and a say—in shaping what comes next.
Build Institutions That Learn
The problem is not just individual—it is institutional. Most public institutions were not built for an era in which opaque AI systems shape everything from public opinion to welfare allocation. And AI technologies, in turn, were not designed with democracy in mind. What’s required is something more fundamental: to reimagine and rebuild the very infrastructure of democratic governance. If, as the guests and co-hosts of this episode insightfully noted, the ship of platform and AI regulation has already sailed, it is not too late to recast democracy itself. After all, democracy is not just a system of checks and balances—it is a system of exchange, learning, and iteration. What it needs now is not only accountability, but also humility and imagination. We should demand institutions that embody both.
How can institutions learn? By way of example, let’s consider the rapidly growing gap in our understanding of the social costs of AI. We still know too little about how AI systems—and the messages they amplify and impose on unwitting users, such as AI-generated content embedded in tools like Google Search—actually land in people’s everyday lives. How do individuals interpret these outputs? What meanings do they make? What fears or hopes do they evoke?
Too often, the knowledge we rely on—whether in public debate or policymaking—is anecdotal, speculative, or driven by moral panic. To move forward, we need new mechanisms that embed data-informed, real-world learning directly into policymaking. These might include citizen panels or living labs, where diverse groups engage with emerging technologies in structured settings. But they also include lab experiments or quasi-experimental setups that capture how AI technologies shape meaning, behavior, and trust. Such approaches can generate evidence that goes far beyond metrics. Only then can institutions adapt in ways that are both informed and democratically legitimate—aligned with the lived experiences of the people they serve.
In short, we need to experiment more to gather data—with real people and real communities.
This resonates with a key point raised in this episode of Positions: done right, experimentation can open up democratic futures rather than foreclose them—if people and institutions are willing to engage in it in open-ended ways. As the podcast emphasizes, politics is ultimately an exercise in imagining the future. It needs new data points—not just predictions rooted in past preferences and behaviors like those served by platforms.
In a lecture I delivered with behavioral economist Joël van der Weele at the University of Amsterdam, we argued for a commitment to experimentation in the public sector—not just as a technical method, but as a vital democratic practice able to “upcycle democracy”: that is, to creatively repurpose and strengthen existing democratic institutions and practices, making them more resilient, inclusive, and responsive to contemporary challenges—without discarding their core values.5
Yet three key challenges stand in the way. First, policymakers are often reluctant to experiment, because real experimentation is slow, politically risky, and demands a willingness to learn from failure, even when outcomes challenge political agendas or campaign promises. Second, as political scientist Virginia Eubanks has shown, experimentation with algorithmic systems often comes at the expense of marginalized and impoverished individuals and communities.6 Third, AI technologies carry embedded ideologies that reflect the industrial cultures in which they are developed—as well as the concentration of power surrounding them. And yet, as the swift political recalibrations by tech executives following Trump’s 2024 election victory make clear, state power remains central—in other words, policymaking (still) matters.
Nevertheless, with proper safeguards and full transparency, public experimentation could yield invaluable knowledge to inform better regulation, more effective literacy initiatives, and more equitable design of digital infrastructures. It can empower policymakers to shape the infrastructural and institutional design of AI systems—and counter the tendency to universalize insights drawn from a privileged few to the broader public.7 This is why we proposed the creation of national institutes for policy experimentation and evaluation—to centralize expertise, improve standards of evaluation, and embed a culture of democratic experimentation across all levels of government.8
It may be costly and slow. And yet, if democracy is to evolve alongside AI, we must take that risk. We need institutions capable of asking hard questions, tolerating uncertainty, and adapting based on real evidence from diverse communities—not just click-through rates or engagement metrics that platforms offer. Institutions that learn do not just monitor; they evolve—with and through the people they serve.
Democracy Needs Imagination
There is a fundamental problem with these proposals: both AI literacy and democratic renewal require a different kind of public—one made up of curious, proactive, and self-driven individuals attuned to the common good. But such engagement does not emerge in a vacuum. The platform society, with its focus on consumption, hyper-personalization, and passive interaction, actively undermines it.
To nurture a more critical and engaged public, we need environments that foster curiosity and critical thinking, encourage participation, and reward engagement. This means rethinking not only civic education, but also redesigning the digital, institutional, and cultural infrastructures that shape how people encounter, interpret, and respond to technological change.
In the post-rational, post-deliberative turn in politics, AI literacy cannot just be about spotting deepfakes or understanding how algorithms work. It is about fostering a sense of agency, and a society where people believe they still have the power to shape the future. It is about believing we, as individuals and communities, still have a voice in shaping what comes next. That is the real literacy crisis we face—not merely a lack of information and knowledge about how complex technologies work and affect our daily lives, but a deficit of imagination and collective power.
We won’t fix the democracy crisis with more fact-checking or digital hygiene alone. Likewise, we cannot simply “lecture” our way out of this critical moment. While these tools have value, they are unlikely to reach everyone or spark the systemic change we need. What we can do is create new spaces for learning, governing, and experimenting—together. What we need is a full-on democratic reset: one that empowers people to reflect, to challenge, and above all, to imagine the future on their own terms. That is how we begin to rebuild democracy for the age of AI.
Audio Transcript
[00:00:00] Delores Phillips: Welcome to Positions, the podcast of the Cultural Studies Association, sponsored and published through the open source journal Lateral. Positions aims to provide critical reflection and examination on topics in cultural studies for scholars, students, and a general audience. Make sure to follow CSA and Lateral journal on socials and subscribe to our podcast to keep up with new episodes.
In today’s episode, I’m joined by the CSA New Media and Digital Cultures working group co-chair Reed Van Schenck in a conversation with Elizabeth Losh, author of Selfie Democracy, published in 2022 by MIT Press, and Rita Raley, author of Tactical Media and co-editor of the recently published special issue of the journal American Literature entitled “Critical AI: A Field in Formation.”
Today we discuss AI literacy, the 2024 election, and finding a way forward in a changing digital political landscape. Enjoy the discussion.
[00:01:01] Reed Van Schenck: My name is Reed Van Schenck. I am an Assistant Professor of Marketing and Communication at IE University in Madrid, and I’m also co-chair of the New Media and Digital Cultures working group at the Cultural Studies Association, and my research focuses on reactionary digital networks, publics, and infrastructures in the United States.
[00:01:23] Elizabeth Losh: I’m Liz Losh. I’m the Dittman Professor of English and American Studies at William and Mary with a specialization in new media ecologies. I’m currently co-chairing the MLA/CCCC Joint Task Force on Writing and AI. My last book was Selfie Democracy from MIT Press. It focused on the use of digital technologies in the White House. So it was a book about both Barack Obama and Donald Trump, looking at them together.
[00:01:49] Rita Raley: My name’s Rita Raley. I’m joining this conversation from Southern California. I’m a Professor of English at the University of California, Santa Barbara. I also have [00:02:00] appointments in Film and Media Studies, some other departments, but primarily my work is in computational culture. I’m especially interested in the politics and aesthetics of these new tools and techniques.
[00:02:12] Delores: And I’m your host. I’m Delores Phillips. I am the Director of the African American and Diaspora Studies Center and Associate Professor at James Madison University. Before we get started, let me note that we are having this conversation on November 8th, 2024, just three days after the election. I think we’d like to begin with what AI literacy means and what role AI literacy may have played in the most recent election.
[00:02:38] Rita: The literacy question—it’s broader, but this is something I’ve been thinking about for some time alongside of and with Liz, for probably decades now. Of course, people want to say that literacy is the wrong concept or paradigm to be thinking in these terms. We’re obviously not dealing with book culture or print culture. So then what does one mean? Education, [00:03:00] familiarity, technical savvy, know-how, expertise: there are all these cognate concepts that come into play. In some ways it reinforces this expert/amateur distinction that has for some time been fraught, and it’s fraying. So it just seems strange to reinstate this notion of expertise. But I want to do it in the sense of an everyday or ordinary hands-on sense of basic familiarity, like operative awareness and knowledge.
[00:03:31] Elizabeth: I might argue that there are certain ways that people are asking AI literacy questions, or at least digital literacy questions, when they’re figuring out how to respond to a Facebook post that has got them agitated, or something on X that’s got them irritated, or a TikTok that is stuck in their craw in some way. I think that we are in a time of these sorts of digital irritants, where people do need to at least [00:04:00] figure out how to not alienate their friend networks. I think that it’s interesting to think about this election in the context of the fact that generative AI tools were integrated into Google Search just a few weeks before the election.
[00:04:21] Elizabeth: And so, for a lot of people, when they were looking up information about the candidates, the first results that they would be seeing would be a generative AI summary of material on the internet. And one of the things I noticed about those summaries is that they were often inaccurate. So, for example, I looked up what the highest military rank of J. D. Vance was, and it actually gave me the answer for Tim Walz rather than for J. D. Vance.
[00:04:51] Reed: I think about the way that people are interfacing with a lot of AI content—to take the trope of AI slop, for [00:05:00] example, the sheer amount of space on a screen that Google Gemini takes up when somebody’s looking for political content. To what extent does AI literacy need to engage with that sort of feeling of irritation—that this is in my way, that what I’m looking for is being blocked by a form of content that nobody seems to be asking for, and yet we just keep on receiving?
[00:05:27] Elizabeth: Yeah. There’s an idea in traditional rhetoric about exigence, right? This idea that it’s not that we engage rhetorically with the world because we have great thoughts that we formulate as leaders, but rather that we are responding to something that is a call to speech, right? So a lot of the time when we’re performing rhetorically, it’s really reactive rather than active.
[00:05:58] Delores: A good bit of what I’ve [00:06:00] been seeing has been arguments over whether or not the image is even real, arguments about whether or not the post is even real. So there seems to be this distraction away from content and more toward modality: Is this actually a real image? And so we’re arguing about this instead of actually engaging content anyway. Is that something that AI literacy might be able to address or to think about?
[00:06:27] Rita: That is really interesting. It is true: We’re really bound up in these ontological questions, and of course you can’t stabilize that distinction; real/false fell away with the simulacrum. So generative AI is just an intensification and acceleration. And it does hold out this fantasy—the notion of a deepfake, the promise that you can actually differentiate and articulate a difference with empirical evidence, that you would say: This is false, this is real.
[00:06:58] Rita: But it is true, Delores, your point [00:07:00] about the fact that we seem not to be thinking at the level of content any longer. In some ways I think it’s less—if one imagines it in geometric terms—about where one cleaves the difference between the two, and more about the seemingly radical expansion of the space of the real, such that the area around the bell curve has expanded to the point that you now think this is a realist image—the Pope in a white puffer jacket is a realist image, right? So it’s less: this is true/false, was he or was he not? And it’s more the expansion of possibility and the sense in which now we see differently, precisely because that space of the realist image has expanded.
[00:07:51] Delores: I actually really like that reading, because what ends up happening is, when we’re having these conversations about these images, about these posts, [00:08:00] about these tweets—whenever we’re having these conversations and we’re having a discussion about whether or not this is a real image or generated—we’re less interested in whether it’s a real image than in what the discussion does. What does it mean if this is a real image? What does it mean if this is a fake? How are we supposed to have the conversation about this if this is a created artifact?
[00:08:21] Elizabeth: Yeah. One of the things I found really interesting is to see how Donald Trump and folks in the kind of Trump strategic orbit have embraced these AI images in which he’s shown in these kinds of heroic poses—it’s Trump as a muscly superman. And so there’s this kind of idea about heroism as it’s defined in comics or movies or other kinds of popular media—his fans know that it’s AI-generated. But there’s this [00:09:00] embrace of a kind of extrapolated heroism, an imagined fantasy of this empowered figure that’s going to somehow do battle for them, that has this enormous currency—and sometimes literal currency in the form of NFTs.
[00:09:20] Reed: There’s also a spectrum to it. It’s not just the production of AI content that’s obviously generated and gets some of its currency from being obviously generated. There are also images that are less obviously generated but are still serving that purpose. I think about one of the most heavily circulated images after the attempted assassination of Trump, which was in fact AI-generated—you can see that he has six fingers in it—and still people latched onto it. They used it as the best picture that was taken at the rally. And maybe that will get us to think in creative ways about the sort of post-rational or post-deliberative turn in [00:10:00] politics that people have been talking about pretty much since the original Trump campaign.
[00:10:04] Reed: It seems less so that we’ve completely abandoned rationality. But as Rita was discussing with the expansion of the real, it also seems as if we’ve been given a little bit of precarious room to have expanded ideas of what constitutes rationality. Not to mention quite a bit of disagreement on where they should start and where they should stop.
[00:10:29] Rita: There’s this volumetric projection now. We’re aware that we’re still looking at caricature. We’re looking at types and figures, and tropes and templates, and generic formulas, but they’ve been volumetrically expanded and they’ve almost been animated, and it’s not simply through animating technologies. I think it’s because that’s our life-world. If in some way that one-dimensional hashtag presentation of a Photoshopped image in [00:11:00] 2015, 2016 was somehow still distant from us, now our life-world is just populated with synthetic imagery and stories and tropes.
[00:11:13] Delores: So what impact would that have on our understanding of democratic process and deliberative democracy?
[00:11:22] Elizabeth: I don’t think we know yet, but I think there will be some interesting work to be done. So, for example, there’s been a lot of work on AI and political polarization, and AI and political bias, and the ways that our very bifurcated society might be becoming more so thanks to AI-generated content. I’m working on a new project about thinking about campus activism, and one of the questions I’m interested in is: How is generative AI going to [00:12:00] change campus activism? So that’s like a research question I can look at at my own campus.
[00:12:05] Elizabeth: So I’m interested in the question of campus activists who are now using these technologies to write speeches, to write press releases, to write mass emails, that they’re using these technologies to research their positions. They might even be using these technologies to design protest signs and websites and other materials. So I’m trying to think about like a research question I can actually look at in my own sphere because I feel like so much of the time, the kind of work that we’re doing as scholars of politics on the internet is we’re taking this like 30,000 foot in the air view rather than looking at our own communities and thinking about how these technologies are changing the people that we see every day [00:13:00] and the way that they understand their everyday politics.
[00:13:03] Rita: We are approaching near complete universal—and put that in quotation marks—implementation and application such that these tools are unavoidable if you’re actually working in a computational environment in any way whatsoever. Which is to say like I do think we’re looking at an epistemic shift, right? Underlying all of this, I think are fundamental technological transformations. Negotiating this new environment that is medial, social, technical, cultural, political, etc., does in some ways require leveraging the tools and concepts and institutions that have served the world well and also served it ill, but served it well for some time.
[00:13:45] Rita: AI literacy—to bring this back—the educational apparatus has developed tools that are still worth preserving, it’s true. But I think if we don’t develop new tools that are adequate to, responsive to, and in fact [00:14:00] emerging from this new technological landscape, then we are not going to be able to navigate this uncharted—so far—territory.
[00:14:08] Rita: So yes, I would like to think: Bring back deliberative, rational democratic processes, bring back regulation. But I think that ship has sailed. The project of regulation, the project of stabilization, and returning to that reverse trajectory is itself a nostalgic fantasy of restoration and repair. So sometimes I feel—and there are a lot of reasons to feel despair—but I feel despair and dismay, especially this week, in the 72-odd hours after the election, and I feel like people still do not understand what happened. And of course, it’s a permacrisis, it’s polycrises, but there are ways in which you can diagnose the situation, and I have yet to see people [00:15:00] understand the really profound role that media and technological systems had in this election. We’re still speaking in such antiquated terms, as if we’re fighting about issues again in some sort of antiquated way. Like we’re in a public sphere and we’re all watching Cronkite. How could we not have understood still what role these social media platforms had in changing hearts and minds?
[00:15:30] Reed: And which social media platforms at that, too. I feel like a major shift between the 2012 and 2016 election cycles and this current one is where different audiences are plugging into social media. When we think about alternative platforms, for example, they tend to be more likely to highly integrate AI—for example, Elon Musk’s Twitter/X behemoth with Grok. There is a white supremacist platform called Gab that has [00:16:00] its own artificial intelligence system that actually mimics characters, under this idea of an AI philosophy in which AI cannot be said to represent individuals but instead has to have specific characteristics, such as Jesus AI or Hitler AI.
[00:16:17] Reed: It seems like an area that media studies hasn’t really talked about is these competing ideologies behind AI, which technically go into the parameters that shape large language models but also shape discourses of how AI should be implemented and how it should be informing the polis, or the lack thereof.
[00:16:38] Delores: It leads me to a project that I’m currently working on, where I’m looking at how to generate a flagging model for biases in different AI tools—applying these flagging models to LLMs, for example, to flag bias as an instructional mechanism for students to learn how to actually [00:17:00] engage with and how to actually identify these points of entry.
[00:17:02] Delores: How do we see prompt injection as introducing a peculiar, human element that ends up being untouchable by some of the strategies that are obviously needed to rein in some of AI’s worst features in terms of how it’s being used, but that also taps into the kind of promise that I think I’m hearing lurking beneath Rita’s avowed statement of despair, when you’re talking about how these things are productive of a moment that we cannot tame?
[00:17:37] Rita: So a colleague of mine actually has a really sharp paper on the use of prompt injection for image generation. If you were to use DALL-E 3, for instance, at this point, and you were to ask for a picture of scientists working in the lab, etc., you would get a picture with women, you would get people of [00:18:00] color. And of course you recognize that historically the probability of seeing the woman in the lab is less than of seeing a white man, and so forth. And it’s prompt injection, of course, right? So that you’re able then to get a certain representation that the data, just by pure statistical calculation, would not generate, right?
[00:18:21] Rita: Because the norm, the default, the baseline is going to be a certain type of caricature. So there’s an interesting way in which prompt injection—technically, again, it’s post-processing—can be used or leveraged for the correction of biases, or for the kind of assurance of something like equity, fantastic as it may be. But of course it can be used for nefarious purposes as well.
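To make the mechanism Raley describes concrete, here is a minimal, hypothetical sketch of how a provider-side “prompt injection” step might rewrite a user’s request before it reaches the image model. No vendor’s actual pipeline is public; the attribute list and function name below are invented for illustration.

```python
import random

# Hypothetical attribute pool; a real system (whose internals are not
# public) would use far more sophisticated, context-aware rewrite rules.
ATTRIBUTES = ["female", "Black", "Latino", "elderly", "South Asian"]

def inject_diversity(user_prompt: str) -> str:
    """Silently rewrite a person-depicting prompt before it is sent to the
    image model. The rewrite operates on the prompt, not on the model's
    weights, which is why Raley calls it post-processing: it yields
    depictions that the training-data statistics alone would rarely produce.
    """
    if "scientist" in user_prompt:
        return user_prompt.replace(
            "scientist", f"{random.choice(ATTRIBUTES)} scientist")
    return user_prompt

# The user types the first string; the model receives the second.
print(inject_diversity("a scientist working in a lab"))
# e.g. "a Latino scientist working in a lab"
```

As the conversation notes, the same interception point can serve corrective or nefarious ends alike; the literacy payoff is simply knowing that such a layer exists.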
[00:18:50] Elizabeth: Maybe one thing to say about how these statistical models work is that they extrapolate [00:19:00] based on past data and you are going to get material, if there isn’t the human moderator, that will replicate the biases and patterns of the past so that it’s very difficult to get something that might be imagining a future that isn’t predicated on the past unless you have that kind of human intervention.
[00:19:41] Elizabeth: Politics is a future-focused activity in a lot of ways, even if it’s often connected to past imaginaries or present miseries, or that there is this idea that we’re imagining this America of the future for the next four years, every time we vote for president, and that it is a kind of a statistical projection that we’re doing in our own heads about what the [00:20:00] future is going to be like if we make a given political choice, and that models can give us some simulations of what that future might look like. But, as Rita and I know, you know, this is a pretty fraught process.
[00:20:16] Elizabeth: I think the other thing I’d mention is guardrails and human moderation. So I’m interested in situations where the prompt won’t give you any output—where, if you ask ChatGPT to give you the formula for ricin, it will say no, although you can get around that. What I’m actually seeing, when I’m testing prompts, is fewer guardrails, and so I’m thinking that a lot of these AI companies really want to give the customer what the customer wants, even if what the customer wants is something that’s [00:21:00] toxic or unpleasant or harmful. I don’t know what you’re seeing, Rita, but it’s a lot easier to get around guardrails. I feel like when I test out the prompts that used to get me nothing, now I’m able to get a recipe for a bad-tasting cookie, or a job description for a pirate, or all the things that before ChatGPT would say: No, I can’t give you that because it’s unethical and harmful.
[00:21:28] Rita: This conversation about prompting and prompt injection is bringing me back to a place we started with, the discussion of AI literacy. Okay, so in some ways, I think the question about literacy is better framed, at least for me, as: What is the one thing you wish people understood about AI machine learning? Like, concretely, what basic point do I wish we could all understand? Could we be on the same page with one idea? And the one idea is a really simple one. It’s not an [00:22:00] idea, it’s just an operative principle, and that is that whatever observation you might make about a model—how it’s behaving, what it’s doing—does not hold if you scale up or scale down, or if you look at a different application or implementation.
[00:22:17] Rita: Prompt injection is a technique. It’s been decided upon in a corporate setting. There are people, stakeholders, decision makers, determining how they want consumers to read their images or receive their images or use their images. Not all image generators use prompt injection or allow for prompt injection. So too, with language models, there are all sorts of techniques, post-processing techniques, that will determine whether you’re getting a personalized—the appearance of a personalized—chatbot, or in a university setting, one that’s expressive of a sort of brand identity, educational, [00:23:00] institutional brand identity.
[00:23:02] Rita: I wish, in other words, that we were able to step away from the general statements about AI and machine learning—what language models do, what image generators do—and say: Okay, what is it about this model in this application? What were the decisions? What is the training data? What were the decisions made about post-processing? And then we can think about what we want to see differently.
[00:23:27] Delores: So you’re imagining a conversation that has a great deal more context with the specific tools and the specific models that we’re actually using, and that until we get to that particular level of granularity, the conversation about AI is going to be misleading?
[00:23:44] Rita: I do think so. In some ways, just showing students different behaviors that result from the same prompt and then getting them to ask: Okay, why is it that I’m getting a different answer to this question? And then: What [00:24:00] training data might be informing this model? What post-processing techniques are being used in this other context? That can be an extremely illuminating discussion. It’s basic A/B testing as well, right? The compare, contrast.
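The compare-and-contrast exercise Raley describes can be staged in a few lines of code. The sketch below is illustrative only: query_model is a stand-in for whatever chat endpoint a classroom actually has access to, canned answers are used so the example runs as written, and the model names are hypothetical.

```python
# Canned answers stand in for two real models so this sketch runs as
# written; in class, replace query_model with calls to actual endpoints.
CANNED = {
    "model-a": "The capital of France is Paris.",
    "model-b": "Paris. Note that my training data ends in 2021.",
}

def query_model(model_name: str, prompt: str) -> str:
    """Placeholder for a real chat-completion call (hypothetical)."""
    return CANNED[model_name]

def ab_compare(prompt: str, *models: str) -> None:
    """Send one prompt to several models and print the answers side by
    side, so students can ask: why do the outputs differ? What training
    data or post-processing might explain the gap?"""
    for model in models:
        print(f"--- {model} ---")
        print(query_model(model, prompt))

ab_compare("What is the capital of France?", "model-a", "model-b")
```

The point of the exercise is not the answers themselves but the discussion of why they diverge.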
[00:24:15] Elizabeth: That spirit of experimentation and play is really important when it comes to helping students understand these technologies, and it’s one of the things that, speaking on behalf of the Task Force, we’ve gotten a lot of pushback from faculty on: they feel like, in the college setting, they don’t want to engage with these platforms that have these environmental policies or labor policies or privacy policies that they don’t like, and so they really don’t want to do the kinds of exercises that you’re talking about, which are the kind of [00:25:00] exercises I do in my classrooms as well.
[00:25:03] Reed: I think that this ethos of experimentation, or just the desire to try—mess around and find out—with generative AI is very optimistic. It’s a hopeful way of looking at it, because if this is on our plates, then we have to work with the conditions that we’re given in order to educate our students, but also, outside of a university context, to engage media politics on the terrain where media politics is happening in the world.
[00:25:33] Reed: I just got back from the Association of Internet Researchers Conference in Sheffield, and there the DISCO Network, which is a very cool digital media working group in the States, was talking about their new concept of technoskepticism, which I found really illuminating. And the way they define it is: learning from the way that marginalized communities in particular will conditionally accept and refuse certain uses of [00:26:00] technology in order to further their survivance and to further resistance.
[00:26:04] Elizabeth: One of the things that’s interesting to think about is: What can we do imagining different kinds of familial, friend, and community contexts where people can have these kinds of experiences? We’re always encountering these highly personalized media experiences, and the more that we have this kind of opportunity to have collective media experiences, particularly around generative AI, I think that it really is illuminating in a way that nothing else is, because we have this sense of intimacy with our devices. And I think that we need to experiment with sharing these experiences of intimacy in more collective community contexts.
[00:27:00] Elizabeth: This is one of the things I’m trying to think about: What can we do to bring in community members, churches, K-12, veteran groups. I think there are a lot of things that could be done to think about broader communities rather than just academia.
[00:27:21] Rita: I want to take this conversation about experimentation back to the early days of Kamala’s run. There were two things that I thought at that point the campaign really understood, and that the Democrats themselves understood, and that is that Trump was an influencer, and the only way to combat influencer capital is with celebrity capital.
[00:27:42] Rita: The second thing I thought they understood is that experimentation had to be leveraged in the open-ended sense. Like, rather than thinking about predefined, prescribed uses of TikTok or any other platform, you had to let people loose and [00:28:00] just try to see what worked. And there was a moment when it seemed like the Kamala HQ accounts were being run by teenagers, kids that had just been let loose and told to experiment, and there was a sense of exuberance, a little bit of irrationality and play. And therefore you felt hopeful, like maybe something’s going to come of this. Maybe they’re going to figure out how to exploit and use, for different purposes, these tools and techniques. Eventually, someone came in and gave them a script, right? And that moment of experimentation and whatever possibilities it might have opened up was foreclosed, and they were back to doing talking points. The moment of experimentation disappeared.
[00:28:51] Delores: The tenor of those first heady days of the campaign—especially when they were taking advantage of virality, when they were taking advantage [00:29:00] of the kinds of discourses that proliferate among the different kinds of subcultures that animate the internet—characterizing that as experimentation that quickly lapsed into political campaigning orthodoxy was, I think, a very accurate way to describe what seemed like the collapse of a certain kind of tenor that at first made the campaign seem an incredibly exciting opportunity. What level of trust do you think this signified in younger content creators and what they see as campaigning, versus what the new generation was experimenting with?
[00:29:39] Elizabeth: In Selfie Democracy, I interviewed a lot of people who were involved in Obama’s campaign, folks who were part of this group called Blue State Digital. The people running that company were recent college grads, and a lot of the people involved in that company [00:30:00] were teenagers, essentially. They were college interns.
[00:30:04] Elizabeth: And that carried all the way on into the White House. I interviewed a guy who taught Valerie Jarrett how to tweet, who was 19 at the time, right? So he’s a 19-year-old talking to Obama’s Chief of Staff about how to use Twitter. And I think that we’re just not at that moment anymore. There’s not this trust of young people that we saw at the beginning of the Obama era, and I think that Obama was a flawed candidate and a flawed president in many ways. But I do think that trust in young people is something that allowed Obama to be a successful candidate for president. I think it’s true that the Harris campaign just didn’t trust them.
[00:30:56] Rita: One thing that still strikes me is, [00:31:00] overwhelmingly, I think, the power of that television ad that the Trump campaign ran—the anti-trans television ad—that they ran over and over again. In some ways you think: Wow, we are right back with the “Daisy” ad. Who knew that television could still be so powerful?
[00:31:18] Reed: That ad was so powerful that Kamala Harris stopped defending trans people. I can never forget her line—I believe it was in an NBC interview. Will you support trans people’s right to transition? “We’re going to follow the law.” Well, the law in Florida . . . . That broadcast-style advertisement truly seemed so powerful that the Democratic campaign no longer trusted its base instincts, let alone its values, and it backpedaled toward attempting to appeal to an audience that, at the end of the day, is probably never going to vote for a candidate like Kamala Harris.
[00:31:55] Rita: One thing I also don’t understand—another thing I don’t understand—about the way that the Democrats [00:32:00] ran this campaign is how they did not attend to Spanish-language media. Who was attending to Spanish-language media? Where was Kamala HQ on Spanish-language TikTok? Nowhere. And you just think: There is no counter to these narratives.
[00:32:15] Elizabeth: People roasted Trump’s appearances on Spanish-language television, and yet he was there.
[00:32:20] Rita: He was there. Yeah.
[00:32:22] Elizabeth: He was part of that media conversation.
[00:32:25] Rita: Exactly. And so, as to appearances—and this is an obvious thing to say as well—it doesn’t matter. It’s back to the real/false distinctions that we were thinking about. It’s all appearance. All the footage from the Trump rallies—there was this period where people felt gleeful, or some felt gleeful, thinking: Oh, everyone’s leaving the rallies. They seem not to care. Look at all the empty seats. Then you realize: The metrics you’re using to evaluate success in this instance, those metrics don’t pertain anymore.
[00:32:55] Elizabeth: This is one of the arguments I make in Selfie Democracy: for people [00:33:00] who are going to these public events, it’s not about the way it looks to the television camera. People might go to the event, shoot some selfies of themselves at the Trump rally. And yeah, they’ve got to get home, they’ve got to make dinner, they’re going to go other places. Maybe they’ve got other things to do.
[00:33:18] Elizabeth: But that individual narrowcasting of a kind of solidarity with a particular kind of political identity isn’t necessarily visible to a broadcast media camera. People shoot a selfie in a way that looks good, right? You have this ability to control and curate the experience so that it looks great from the position you are shooting from. And so the fact that there is this actual reality of empty seats doesn’t matter, right? [00:34:00] I’m going to make wherever I’m at look great or look horrible, depending on my mood, but I can make it look a particular way because I control the camera angles. And I think you have to understand that a Trump rally is thousands of people who can control the camera angles.
[00:34:19] Reed: Speaking of selfie democracy, or maybe in this case selfie authoritarianism, I can’t help but think about January 6th, which, as we now know, was not very important to the American electorate, truthfully. It certainly did not stop the majority of Trump voters from showing back up, and it certainly did not mobilize the Democratic Party to go to the polls, seeing as how the Democrats lost—it’s looking like it’s going to be by about 12 million voters, if not more, for a litany of reasons. But in any case, on January 6th, of course, we know that a lot of the evidence for the prosecution, as [00:35:00] well as propaganda for the right after the fact, comes from people who recorded themselves and took selfies in the Capitol building as this kind of triumphal moment.
[00:35:14] Elizabeth: One of the things I thought was interesting about January 6th is that it was really an attack on representative government that, in the minds of a lot of Trump supporters, wasn’t an attack—that it’s been reconfigured as not being
[00:35:28] an attack on individual legislators, which it very much was. People were yelling, “Hang Mike Pence,” right? People were calling for the death of Nancy Pelosi. It was political violence that was aimed at individuals; it was murderous incitement to commit political violence against individuals.
[00:35:52] Rita: But at the same time, it was still a media event—hence the mock guillotine, as many people have remarked. It was performance, it was [00:36:00] spectacle, and it was all staged for re-presentation and reproduction and dissemination throughout media environments. So of course, there is a brutal materiality to that day, but at the same time, media spectacle in some ways is the takeaway.
[00:36:21] Delores: I was struck by how you were speaking about the material, the sort of brutally material aspects of this, which included the literal danger to bodies, literal death, the materiality of excrement on a desk, the materiality of taken objects that were then disseminated elsewhere, outside of the Capitol, as almost these trophies for sale. But the way that you’re speaking too, about how spectacular the event was, and how the spectacle of it was circulated through media environments and seemed almost tailor-made for it, does that lead us to a sensibility of a dematerialized citizenship? One [00:37:00] where our involvement in the body politic is less about materiality and more about spectacular images?
[00:37:08] Rita: The revolution must be live-streamed, right? Like, we had already seen mass murders that were live-streamed. So the participation as an audience member in these things is bodily. People are affectively, corporeally, deeply invested in the horrific events that they’re watching. It’s hard to sustain the material/immaterial distinction when you’re dealing with people’s profound affective investments in what they’re watching unfold on their screens.
[00:37:41] Elizabeth: And I’m mindful of another moment of political violence that we’ve had recently, which was the assassination attempt on Donald Trump’s life, and the ways that people wore those bandages—this bodily imitation of the sort of [00:38:00] wound of the savior as a kind of ritual act. The need to reconstitute the bandage is fascinating to me. And I think people on the Left who were mocking it were not understanding its semiotic power, and that kind of machismo that was associated with standing up and raising your fist after an act of political violence.
[00:38:29] Reed: It seems like there might be three different relationships between materiality and spectacle at play here. On the one hand, we have folks pooping on desks, performatively wearing these bandages—people that are opting into the materiality in order to co-perform the spectacle, in the case of people like Trump. On the other hand, we have, let’s say, your Democratic operative folks that are trying their best to disavow the spectacle but are just not willing to admit how [00:39:00] much material power it truly has.
[00:39:00] Reed: And then I think what cannot be lost is the category of folks for whom this is always, already material, and they have no choice in whether they want to participate in it or not. And this calls to mind for me the Haitian population of Springfield, Ohio. Folks are literally being recorded because Project Veritas-style journalists are asking people to find evidence of Haitians eating cats and dogs. Those people have absolutely no choice in whether or not somebody is going to hit them with a camera in their face while they’re trying to enjoy their lunch.
[00:39:38] Reed: So in what ways, if we think about the semiotic power as well as the material power of this media, can we think through materiality as being an imposition, or on the other hand, spectacle as being a comfortable place where the privileged can enjoy it all?
[00:39:57] Elizabeth: Kat Tiidenberg has this [00:40:00] intersecting set of Venn diagrams. She’s got three circles: one is power, one is privilege, and one is visibility. And I think about this question of how people with power and privilege can make themselves visible but can also choose to be invisible, while a lot of people really are not allowed to have control over their own visibility, right? They are hypervisible, and they don’t have that capacity to control how they’re depicted.
[00:40:40] Rita: And another way to think about the example of the Haitian immigrants is to go to QAnon—it’s surprising we haven’t brought this up yet—but what is at play in all of these situations is this insistent need [00:41:00] to actualize or materialize the virtual, right? So, you have the narrative. There’s a sex ring being run in the basement of a pizzeria. Immigrants are eating animals. And then one needs to find the truth of this, but it’s not simply to discover, it’s actually to instantiate. There’s this weird desire to actualize the narrative. It’s not about revelation, it’s about manifestation and production.
[00:41:31] Delores: Let’s bring this back to—I think it was 2017—when a fact was whatever fits or manifests the narrative itself, but you have a roster of alternative facts. So I don’t have the facts, I have the alternative facts. There’s this whole other dimension of experience that is informing these decisions and perspectives, that is housed in conspiracy thinking, that’s housed in disinformation, that’s housed in—getting back [00:42:00] to the very beginning of our conversation—a certain lack of media literacy.
[00:42:04] Rita: The larger picture here, of course, must be the fraying of expert culture or the belief in expert culture. Doctors are not to be trusted. Think about Fauci and so forth. Certainly not educators and political figures. No one, in fact, who has an informed opinion is somehow to be trusted: I’m going to do my own research, right? So the bigger story here is about the collapse of expertise or the reformulation or reconfiguration of expertise such that we all have it because we’re all able to do our own research, supposedly.
[00:42:42] Elizabeth: Yeah. And most of us here are old enough to remember the nineties and all of this celebration of the internet for its leveling effects, right? This is going to be so great for democracy because everyone was going to be finally equal. With [00:43:00] these media, you’d be equal to a journalist, you’d be equal to your elected representative. You would have all of this power to access the media. This would be so fantastic! And of course, Rita and I and Mark and a bunch of other people were like, I don’t know . . . . It might not work out that way.
[00:43:20] Delores: So what difference are we perceiving between our moment right now in 2024 versus 2020 or 2016?
[00:43:30] Elizabeth: With the advent of generally available generative AI tools, there is this incredible computational complexity in the media environment that people are encountering. And I was thinking about the right-wing belief that the Democratic Party controls Siri. That’s a lot [00:44:00] simpler an explanation than what actually controls Siri. Trying to explain to a student what controls Siri is actually pretty complex, because there are so many different things layered. The stack you’re talking about there is pretty complex, and this simpler explanation for these really complicated environments—I can understand why it’s appealing.
[00:44:34] Rita: When we were thinking in 2016 about the manipulation of our media environments, we were always anchoring it—or the conversation seemed to anchor it—in figures or institutions or agencies. The IRA, what is happening in Manila, and so forth. QAnon? Obviously. Like, who’s the mystery? Who’s behind it, who spun the tale? I think the realization now is that the [00:45:00] originator—even though we’re deeply interested in who’s Satoshi, etc., who’s the originator of these things—all that matters is the seed, the fact of the seed, and then things proliferate. It’s less important to trace it back to a source, right? Because you don’t need to trace it back to a source. You have this kind of free-floating circulation of stories, of mythos, of ideas, and I don’t know that the one-directionality that was promised by the troll farm—it originates here and then it comes to us—holds any longer. Now I think it’s just pure circulation, in an interesting way.
[00:45:38] Reed: It’s important to discuss the media ecology that allows the content that troll farms produce to be circulated, disseminated, and received as, if not real content, certainly trustworthy enough for large swaths of the population to be making their decisions on. I really do think that there are [00:46:00] distinct media ecosystems from which people are getting their content. For example, WhatsApp groups have not been discussed nearly enough in the context of the American electorate, in no small part because it’s largely non-white populations that are using WhatsApp groups—people with large groups of friends and family that exist outside of the United States. I only started using WhatsApp when I moved to Madrid, and only now am I getting a sense of the impossibility of tracing where something came from and by whom it has already been received.
[00:46:37] Elizabeth: It’s funny to look at the founding discourses around Twitter and how it was originally imagined more as a kind of group-text application than as something that would be part of the mainstream media cycle and be important for [00:47:00] journalism. And so it’s interesting to think about how WhatsApp sort of scratches the itch for that kind of narrower notion of publics that allows for media sharing in an even more walled garden than a Facebook group.
[00:47:22] Rita: In a sense this is another cause for despair—sorry—just the way in which the tools that had been used for compelling, even, you might say, liberatory political purposes have just been turned against us, if I think in terms of us/them. If I think twenty years ago about alternate reality games, and the very powerful use of these things—World Without Oil was one, right?—in order to, again, change hearts and minds, get people to imagine a future without fossil fuels. The premise is that you cannot [00:48:00] imagine a better future unless you try to enact it yourself. That promise—and it was a kind of utopian promise—has just dissipated, and these very same tools and techniques have been mobilized for destructive, you could say catastrophic, political purposes.
[00:48:22] Elizabeth: I was actually playing World Without Oil when I was at a conference in Europe. I think I was in the Netherlands, and it had just reached the point in the narrative where the planes had stopped flying because of the world without oil. And so I was like posting stuff like videos on YouTube: Help, I’m stranded in Europe at this conference and I can’t get home. And all of these sort of random European strangers were like: Oh, we can take you in, Liz! You can stay with us, and you’ll be fine. Here in the Netherlands, we have bicycles and we have a lot of [00:49:00] wind power and stuff. So, I do remember those days of this kind of imagined hospitality around the kind of world of LARPing.
[00:49:11] Rita: The beauty of the social at that point. It’s now the mob.
[00:49:17] Reed: Actually, another thing we talked about at the internet studies conference—this was a point made by Dr. Catherine Knight Steele—is that it really is so easy at this point in time to wallow in despair and to criticize, but it is becoming so much harder to look for directions forward, and that’s exactly what we’re here for; it’s what we need to be doing. Again, this question of sociality—the vision that early ARGs, LARPing, and the maybe misguided ideas of techno-publicity were drawing towards, the horizon of sociality . . . . It was certainly misguided in placing trust in platform oligarchs, but what can we do to find that trust again in each other, in order to work forward, again, using the digital media environment that we have, but reframing the terms of order that we’re called to use them in?
[00:50:12] Rita: I wouldn’t discount the role of the aspirational or the speculative narrative—the end of Children of Men, the film version, where they’re going to the ship labeled Tomorrow; the end of Margaret Atwood’s Oryx and Crake: “Zero hour. Time to go.” I do think that kind of open-ended aspiration to different futures that might be better, which speculative fiction still trades in—I think that has to be a tool in the arsenal.
[00:50:47] Rita: Many people are talking about bioregionalism, thinking about a local, situated politics that’s grounded in environment and community, and the actual sense of people that you meet and deal with on a daily basis. That seems to move toward a kind of disconnectivity or a fantasy of opting out or [00:51:00] removing oneself from the grid, but it doesn’t have to. I think the bioregional movement is really powerful, especially because it deals with history, like the history of the places, and it also works toward repair or restoration, but without these grotesque fantasies of an antiquated, idealized past that is obviously sanitized, right? You could still trade on or think about a history of a place without trying to imagine stripping away all that was really gruesome and awful about it, and thinking about what one wants to repair and recover going forward. Those are just two paths forward. I think there’s not one path forward, nor is there just one tool. There can’t be. It’s a permacrisis, which means you need multiple solutions.
[00:51:52] Elizabeth: I also think, going back to our discussion of digital irritants, it’s important to also think about digital comforts [00:52:00] and the fact that so many people now go to personalized media because they want comfort from political anxieties—about the climate, economic anxieties—and I think about situations that can facilitate discomfort rather than irritants. I’ve been thinking a lot about how the right wing has demonized DEI, because this is a situation in which people have to be uncomfortable in the workplace when they’re called out for weaponizing their privilege in particular contexts. And so I think about how we think about difference, and not necessarily privileging interactions with [00:53:00] like-minded people. I worry that people on the Left—their feelings are hurt. And so there’s a lot of sort of collective care work about the kind of misery of the loss of the election, rather than thinking about engaging in difficult conversations.
[00:53:25] Elizabeth: And I think those difficult conversations have to take place on the Left as well. So it’s not just about sitting down with your Trump-voting siblings at Thanksgiving dinner. I think it’s really also about thinking about how the divisions within the Left were weaponized as well, and the differences that can go on within progressive movements, and the kind of solidarity that is not sameness.
[00:54:00] Rita: I was just thinking, can we come up with some operatic, summative statement that brings us forward into a bright new day?
[00:54:05] Elizabeth: I got nothing, man.
[00:54:12] Rita: Yeah, like: the fight continues, right? We can’t just give up. Everyone has to think about how to fight in their own way.
[00:54:21] Delores: Thank you again, Reed, Liz, and Rita for sharing your thoughts on AI literacy and politics. We’d also like to thank the entire Positions editorial and production team, along with our co-producers, whose work makes these episodes possible. And we also thank you, our audience, for your time and attention today.
[00:54:36] If you haven’t already, make sure to subscribe to our podcast and join us for the next episode of Positions where we’ll tackle other essential and engaging topics in cultural studies.
Credits
Produced by Mark Nunes and Elaine Venter
Season Two
Hosted by Delores B. Phillips
Production by Elaine Venter, Lucy March, Mark Nunes, and Kathalene Razzano
Editorial by Mark Nunes, Theodora Danylevich, Anthony Grajeda, Howard Hastings, Reed Van Schenck, Kathalene Razzano, Jennifer Scuro, and Elaine Venter
Music by Matt Nunes
Notes
1. Briony Swire-Thompson, Joseph DeGutis, and David Lazer, “Searching for the Backfire Effect: Measurement and Design Considerations,” Journal of Applied Research in Memory and Cognition 9, no. 3 (September 1, 2020): 286–99, https://doi.org/10.1016/j.jarmac.2020.06.006.
2. Peter Schwardmann and Joël van der Weele, “Deception and Self-Deception,” Nature Human Behaviour 3, no. 10 (October 1, 2019): 1055–61, https://doi.org/10.1038/s41562-019-0666-7.
3. Paulo Freire, Pedagogy of the Oppressed (Continuum, 1968).
4. Maria Montessori, The Absorbent Mind (Theosophical Publishing House, 1949).
5. Stefania Milan and Joël van der Weele, “Upcycling Democracy” (lecture, University of Amsterdam, Amsterdam, May 24, 2024), https://pure.uva.nl/ws/files/191141273/Text_inaugural_lecture.pdf.
6. Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin’s Press, 2018).
7. Stefania Milan and Emiliano Treré, “Big Data from the South(s): Beyond Data Universalism,” Television & New Media 20, no. 4 (2019): 319–35, https://doi.org/10.1177/1527476419837739.
8. Milan and van der Weele, “Upcycling Democracy.”