Daniel Mwesigwa

When is a “persona”? The relationality of AI personas in AI-mediated worlds

April 19, 2026

When President Trump projects himself as the new Pope on his Truth Social network, what kind of persona(s) is AI-generated art helping him express? Where is the line between creative public relations and parody? When xAI’s Grok is weaponized by blue checkmarks to undress unsuspecting women on Twitter/X, what logics are embedded in the production and distribution of those non-consensual images? When user experience (UX) researchers use large language models (LLMs) to simulate user behavior through synthetic personas, what is left of the relational understanding of lived experience? It’s no longer entirely surprising that generative AI is reconfiguring how personas work across settings such as political expression, platform governance, and design research. What do these applications of personas tell us about the changing definitions of “persona,” and what is at stake here?

Personas have been central to human-computer interaction and allied fields for over two decades, where they’ve been understood as data-driven abstractions of key user groups that serve as coordination and empathy devices for designers and other stakeholders in design and interaction research (Pruitt & Grudin, 2003). Scholarship across media studies and critical data studies has likewise shown how social actors perform particular personas (and personalities) in hybrid media environments (Bazarova et al., 2013; boyd, 2011; Pinch, 2010). The recent uptake of LLMs in creative expression, however, has altered how personas order communication and relations, obscuring what qualifies as parody, caricature, or even social reality itself. The rapid growth of LLM-assisted persona construction has also radically changed what personas are, including the epistemic foundations upon which they are built: since personas can now be created by anyone with average AI prompting skills, the authority of the empirical data (including data from usability studies and digital ethnography) that informs persona construction has been destabilized.

I want to argue – and this is my hot-take-in-the-works – that personas have been only partially understood. Personas are applicable across domains, and there is more to learn from expanding how we define them. Traditionally, personas have been treated as artifacts: bounded objects that represent user groups (in design) or the public presentation of self (in media performances). I contend, however, that personas should be understood primarily relationally. Following Susan Leigh Star’s (1999) conceit – reframing infrastructure not as a thing but as a relation – I similarly ask not what a persona is but when something becomes a persona: in relation to what practices, for whom, and under what conditions. Star showed that infrastructure is embedded in organized practices and becomes visible primarily upon breakdown. Generative AI is now producing such breakdowns, rendering previously invisible persona-construction practices newly visible and contestable. This reframing also moves beyond what Leif Weatherby (2025) calls “remainder humanism,” the persistent insistence that machines cannot capture some essential human remainder. Rather than asking whether synthetic personas truly represent human experience, I ask “when is a persona?”: how do personas operate ecologically in social relations and everyday practice? Now this is where the work starts (or ironically breaks down!).

References

Bazarova, N. N., Taft, J. G., Choi, Y. H., & Cosley, D. (2013). Managing impressions and relationships on Facebook: Self-presentational and relational concerns revealed through the analysis of language style. Journal of Language and Social Psychology, 32(2), 121–141.

boyd, d. (2011). White flight in networked publics? How race and class shaped American teen engagement with MySpace and Facebook. In L. Nakamura & P. A. Chow-White (Eds.), Race after the internet (pp. 203–222). Routledge. https://doi.org/10.4324/9780203875063-13

Pinch, T. (2010). The invisible technologies of Goffman’s sociology: From the merry-go-round to the internet. Technology and Culture, 51(2), 409–424.

Pruitt, J., & Grudin, J. (2003). Personas: Practice and theory. In Proceedings of the 2003 Conference on Designing for User Experiences (DUX ’03) (pp. 1–15). Association for Computing Machinery. https://doi.org/10.1145/997078.997089

Star, S. L. (1999). The ethnography of infrastructure. American Behavioral Scientist, 43(3), 377–391.

Weatherby, L. (2025). Language machines: Cultural AI and the end of remainder humanism. University of Minnesota Press.