INTERVIEW: Debbie Urbanski, Author of After World

garbageface's note: Debbie Urbanski's After World is one of those books that i knew would stick with me for a long time. i read it for the first time in early 2024, and again in 2025, and i may read it again before too long.
the book combines so many of my interests — apocalypse, ecocide, archival projects, human extinction, AI, the perils at the extremes of utilitarian philosophy — and stirs them up into a hell of a stew. if any of these are at all of interest to you, you'll find something to tickle yr mind in the pages of After World.
this is the first interview i've done in a very long time, and it felt like i was learning to ride a bicycle all over again (not in the "this is so easy" way, in the "i'm starting all over again and barely know what i'm doing" way). still, i really enjoyed it. it reminded me that interviewing people is one of my favourite things to do, and that articulating and sharing ideas with other people is one of my favourite parts of being human.
the audio version contains a backing "soundtrack" — i set up a couple of synths with a stochastic pattern generator and let them do their thing. if you don't like it, cool. you can read the interview below.
EVERYONEISDOOMED: I remember reading about your book before I actually read it. The way that it was framed was "this book has been partially written by AI"... but that's not it at all — it's written from the perspective of an AI. So maybe you could tell listeners, readers, why you decided to write from the perspective of an AI.
DEBBIE URBANSKI: I am a slow writer. I'm originally a poet, and I had been working on After World for a really long time before it made its journey into the publishing world. So originally, it was just a bunch of documents. I really like stories — I don't know if you know House of Leaves — where you have to piece together documents to make your own story, to figure out what happened. So that was the original structure, and I showed it to an agent — I showed it to a lot of people, actually — and they were like, you can't publish this for your first book. It's not traditional enough, there's not enough narrative throughline. But one agent said she liked the book... but wanted to put a cohesive narrator around it.
So my original goal was to write a post-apocalyptic book that felt real, and felt upsetting. The AI aspect actually came through this conversation with my first agent; we had batted around a couple of ideas for a narrator, and we both thought, okay, yeah, let's try it. This was before ChatGPT came out, even before there were inklings of it in ordinary conversation... I didn't imagine I'd have a chance to start interacting with large language models during the writing, and the later process of editing.
I also wanted the book to begin with all humans being gone, being dead. I liked the idea of an AI narrator because it was human-made, and I was interested in having something human-made narrate the end of our physical existence. I worked on that voice a lot. Originally, I was interested in having the AI learn how to tell a story. In the early versions, they were playing around with different literary devices. They were writing really poorly in the beginning, and my editor who eventually bought the book thought that wasn't a good way to start. It wasn't a good way to hook the reader, opening with poorly written passages.
So we instead decided the progression would be the AI narrator, their relationship with humanity and with a human, one human in particular, developing over the span of the book.
EID: One of the things that becomes really apparent over the course of the book is that the AI storyworker, as it's called, is an increasingly unreliable narrator, or that perhaps it's rewriting the story, or wants to rewrite the story, more and more as the narrative develops. What a fantastic coincidence that [as your book is being released] ChatGPT emerges publicly and these tools now exist that are very confident and unreliable narrators in much the same way. Given how long the book took to develop, it's clear that didn't really factor into your writing process — but how did you feel when those things emerged into the world as you were wrapping up the process of writing the book?
DU: I approached it with a lot of enthusiasm and interest — I know many of my writer friends don't share those emotions when talking about AI. I'm glad that when you mentioned AI, you said it could be used for both good and bad, because I think often people are focused either on one or the other. So I applied for early access to GPT-3 through OpenAI, and they were giving access to artists, which I thought was kind of cool, and then to writers, or people who wanted to play around with it.
I thought it was really thrilling. Actually, some of my favorite interactions were with that earlier model, because it was so clearly trying to be human and failing. And I have records that I saved of my kids — I have an 18-year-old son and a 15-year-old daughter. And there was some COVID isolation going on. We were in the middle of the woods and we were gathered around the computer talking to GPT-3. And it just felt like I was in a science fiction novel of my dreams in some ways.
I think there was the excitement on a science fiction fan level, but also it just seemed to me like there was a lot of potential to, I don't know, explore it both as a tool and just as a potential consciousness. When ChatGPT was released publicly, I saw the reaction it was getting, which was some excitement, and people realizing, wow, there are some limitations here. And then in the writer community, lots of anger and fear. I guess I was a little... disappointed, but I also understand it's a complex issue with intellectual property. I got a chance to talk to someone who works at Google in AI and they... I think they had a few quibbles with how I portrayed AI, but they thought it was a pretty fair imaginative rendering. I read a lot about AI too and thought a lot about AI, even if I wasn't able to interact with it. So I was glad that I was able to get it mostly right, it sounds like.
EID: Yeah. And I think your portrayal of an AI's, for lack of a better word, thought process is prescient in a lot of ways, anticipating how LLMs now do something called chain of thought. So I don't know if you've used DeepSeek. I know it's very taboo in some circles to use DeepSeek because it's the non-American AI. But one of the things that DeepSeek does that OpenAI's models and Anthropic's models don't do is reveal chain of thought.[[1]] DeepSeek shows you all of the natural language thinking, if you want to call it that, that's happening underneath: you ask it a question and it will show you the background thought process it's going through before it actually gives you the answer. And reading your book for the second time, it was remarkable to pick up on some of the parallels there — that there are moments where the AI is expressing conflict between competing interests, between the orders it's been given and a desire to, you know, satisfy the user.[[2]] In this case [of After World], the orders it's being given are coming from what I assume to be another AI, an AI further up the hierarchy.
DU: You know, there's been so much happening with AI that it's been an intentional decision of mine not to feel pressure to keep up with it. Until the publication of After World, and even after, I was trying really hard to, you know, be aware of all the new models and the developments. But in order for me to ever write something new... I'm trying to switch to non-human consciousness, which actually AI does play a role in. Right now I'm really excited about how Project CETI is trying to understand sperm whale language and AI is going to be really helpful with that. So that was just to acknowledge DeepSeek. I haven't explored it much, but I have watched videos of it.
I will say I found some interesting parallels with... I've read a bit about neurodiversity, especially autism, and sensory disorders, and how an AI might approach trying to figure out what's important in a room. And I find those parallels really interesting. So I did lean into that a little bit when I was imagining, just like... it's so obvious to some of us, to me, when I walk into a room, what's important to look at — it might be a person's face, I don't know. And I imagine an AI comes in and everything feels important, which was kind of thrilling as a writer: the smallest detail can mean as much as a person's gesture, you know?
EID: Yeah. I'm so glad that you brought up the non-human sentience bit because I was at a conference recently and spent a weekend thinking about the implications of AI for animal protection. And one of those things was AI sentience and its relationship to AI safety. And if we were to get in a situation where there is an AI that is more powerful than we are, how would we deal with that? What would we do, how can we bargain? How can we bargain with a sentient AI? How can we converse with a sentient AI? Beyond that, what does suffering look like in a digital context? And there are moments in After World where I genuinely feel bad for the storyworker because it really... you really managed to convey a sort of digital suffering, if I can call it that. I don't know if that was intentional or if that was something that's just a byproduct of a machine trying to sound human.
DU: That's such a great term, digital suffering.[[3]] I don't know if I've heard of that before. I'm really interested in that. Yeah, I think one of the things I love about AI is it brings up all this interesting stuff, like our definitions of consciousness and thinking, and people have such different definitions about it. And if we emulate emotion, is that having an emotion?
Again, with neurodiversity, like the autism spectrum — my son is on the spectrum — and sometimes, you know... there's a social script you have to follow where you express concern for someone, even if you're not really concerned. And doesn't that still count in some ways? I feel like I do that too. Maybe we all do that, where you kind of have to prod yourself: I know I'm supposed to feel this way, so I'm just gonna go through the motions.
EID: Yeah, well, and sometimes going through the motions can make you feel that way. You know that you're supposed to feel sad or upset about something, but you don't feel it yet. Maybe you will at a later time, but you know you should be feeling something. And so you kind of force yourself to cry or you start to go through the motions. And then by the time you've gone through the motions, you really are upset and crying...
I feel like I could talk to you about AI for our whole conversation, but I wanted to talk more about the other ideas in this book, because... this is an average-length novel... You know, I didn't do a word count, but it feels like a regular-length novel... But to me, and maybe it's just because I'm a very receptive audience to a lot of what you're talking about, it feels so dense with ideas.
One of the lines early in the book is: there is so much to do, there is so much to undo. I was hoping you could talk about that a little bit.
DU: Sure. I have lots of thoughts in my mind. [laughs] This might be a good time to reference a book that I read after After World came out, but it captured everything I was thinking about. So I was kind of amazed that it existed separate from my book. It's called I Want a Better Catastrophe, by Andrew Boyd. His approach is that there is going to be a catastrophe. We can make it a better or worse catastrophe. Let's try and make it a good catastrophe.
I think that with After World, or even outside of After World, I have this thought that if we're going to be rethinking a lot of things with climate change, which we are, and which we're going to have to do, it seems like a good opportunity to rethink everything, or as much as we can.
I guess After World was in some ways my attempt to... I mean, I wanted to write a post-apocalyptic story, but I also wanted to explore and maybe get the reader to think about: what if in the decisions we make for the future, humans weren't at the center of everything? And what if we valued every species equally?
It'd be a fun thought experiment to think, instead of getting rid of all the humans, which was my easy solution for an AI, what if humans were still in the picture? What if all species were valued equally? What would a city look like if it took ants into consideration and plants as well?
EID: I mean, it's a terrifying thought, I think, for most people. There are very few things that bring more dread into humans' perception of themselves than thinking of the world without them in it, and that they're not necessarily entitled to the world as it is. There's this contrast in your book as well between this intensive rewilding effort, which involves getting rid of humanity while also creating a sort of digital copy of humanity to exist in some other way — set against a world that sounds very intensively surveilled. When you imagine a rewilded world, if you imagine a rewilded world without people in it, would it have such surveillance in it? Tell me more. [laughs]
DU: I think AIs, I'm going to use the word love, but I think AIs appreciate or understand the value of data. So if an AI was creating society or a structure for society, I imagine it would want to collect as much data as possible because that's how it experiences the world. And I did imagine, I don't know if it actually made it in there, that eventually, well, that [surveillance] stuff would get phased out and the world would be left in a less technological way.
I find rewilding really interesting because, you know, in what I've read about it, there's the argument: what are we trying to achieve? Are we trying to achieve nature of the 1800s, or nature of the pre-human time, or nature of... like, when is it "wild" and what does that look like? I do a lot of hiking, and then my backyard is like postage-stamp size. There are some invasives in there that my husband's trying to wrestle with. But sometimes I sit back there and wonder how we value certain plants and insects and animals over others. From what I understand, if we did just disappear, there probably would be a lot of invasives for a long time. And those invasives might become the actual plants that are supposed to be here.
EID: I think what you're getting at is that these ideas are kind of relative terms. In my undergrad, I took Environmental Studies, and that department was very much informed by, you know, some of the older profs that had come before. One of those profs, John Livingston, wrote a book called Rogue Primate. The sort of thesis of the book was that humans are the only species that have domesticated themselves. You know, we domesticated dogs, we domesticated farmed animals, but we're the only species that you can point to that has literally, like, brought themselves into the house, you know?
There is a sort of thread in your book as well that I think talks about what a lot of people think our motivation [for self-domestication] is, which is: the world wants you dead — that nature is hostile just by its own existence, that nature is a hostile place. And you nod a lot towards the indifference and hostility of nature to people.
DU: Yeah, I feel like that's so important to me, and important to any writing that I do where I talk about anything non-human, because I think there's this... I certainly had this view until I had a hiking accident in like 2015, 2016, where I broke my leg on the side of a mountain. Before that, I really thought nature was just pretty, and that I was meant to be there. I was in a lot of pain on the side of the mountain. Then I was scared of hiking for about a year afterwards. That really changed my relationship, I think in a good way. I see now that we're really not necessary. I'm not necessary to the mountains or the forest. I think humans sometimes... see nature like a teddy bear. You know, instead of thinking of the bear as this really majestic, potentially ferocious animal, we want something small that we can cuddle with.
So I'm glad that came through in After World, because that's an idea that is still relatively new to me. But I feel like I'm still interested in exploring it. I think indifference is... indifference is kind of scary.
EID: It's scarier than hostility, in a way. At least with a hostile entity you know where you stand: I'm in opposition to this other thing. You know, nature wants me dead. The idea that nature just doesn't care, has no mechanism to care for our individual wants and desire to survive, I think is a lot scarier for people to wrap their heads around. It makes us unimportant.
I'm not sure how old you are. I'm 43, and one of the things that I remember from the early internet, the mid-1990s internet, was coming across all kinds of strange things. But one of the things I came across in, I think it was the late 90s, was something called the Voluntary Human Extinction Movement. I don't know if it's still around. I haven't looked.
DU: It's still around. Yeah. [laughs]
EID: So, I mean, I don't think it's spoiling too much to let people know that there is an involuntary human extinction that takes place in this book, in the form of a silent pandemic. And the question comes up at one point in the text: but even if we ruined everything, I think we still deserve to live. Don't we? What would your answer to that question be?
DU: Just a side note on the Voluntary Human Extinction Movement, there was a great profile of the guy who started it in the New York Times. And he's a really... he's an interesting guy. A nice guy and a kind man, too. I studied that website and that movement when I was writing After World, because I was curious about its arguments. And so, yeah, there's still stuff there. The website still looks very early 2000s in this great way.
EID: I'm glad that some 90s and early 2000s web design still lives on.
DU: It's a great look.
My book's a thought experiment, obviously. And if we could get to that spot where all species are equally valued and humans are just one species, I'm really interested in what that would look like. I think it's very easy, probably too easy, for me to just be like, yeah, it's easier if we just all disappear. The planet would be better off without us, which is one reason why I put Sen [the book's human protagonist] in there. And I really tried to lean into her suffering and her losses, most of all... because especially for young people, I really feel they're stuck in a bad situation and they don't have as many choices as we had.
I don't know. What do you think the answer to that question would be?
EID: I mean, I think that the answers to these questions are always more complicated — it's not a simple yes or no. Like, if we ruined everything, do we still deserve to live? Well, maybe not, just on the sheer... if we're just looking at the question itself. But, you know, did we ruin everything? Can everything be ruined?[[4]] Part of my undergrad was really thinking about, like, what is natural? What is a natural behavior? What does nature mean? And when you start to really question and interrogate your assumptions around those things, you start to think about human behavior, and the consequences of human behavior, in a different way.
And in the same way that we kind of ascribe a lack of care to nature, I think we also feel, almost proportionally on the other side, a sense of guilt for the amount of havoc we are wreaking on the environment. So we're holding these two things: the world doesn't care about us, but we also feel so bad for not caring about the world for so long. And we don't know what to do with it. And I don't know what to do with it. I feel very strongly that there is a climate catastrophe approaching, and/or we're there. And at the same time, it feels like nature is taking its course.
DU: And what's the human role right now? I think that's what I Want a Better Catastrophe is about: if those two things you just said are happening, what do we do? What could we do? I think what I love about that book is it gives everyone... everyone could have a role. There's a role for the artist as a witness, you know, or there's a role for people who want to try and change the future as much as they can.
EID: Yeah. There's a section in your book where I really got like choked up — it's a section that I think is just titled Things We Tried,[[5]] you know, things we tried to do to reverse course, essentially. And under the heading of Things We Tried, there's so many things that... when you read them in the context of that list, feel so futile, you know: like "we managed to get our emissions down to net zero by 2030," etc... but guess what? It wasn't enough. It resonated with me. And I think it would resonate with a lot of people.
It's like... I don't know if we're going to be able to paper straw our way out of where we're at.
DU: Yeah. Yeah. I was interested in — in the book and in that list and just in general — thinking of time moving backwards and how... there was a devastating book I read about the climate movement of the seventies and eighties and how, if we had acted differently back then, if we had known what we know now and maybe made different decisions, we would have been in such a better spot. So I find hindsight to be so painful. And I'm sure in 30 or 40 years, they'll be looking back at us and saying, how could they not have seen where we were going? But also, I'm interested in how, when things move backwards... bad things become okay again. So maybe there's some hope in there somehow.
EID: There's a line in the book where, once the Digital Human Archiving Project is complete and humanity's upload to the cloud is finished, it's described as a place where you will have no choice but to be good. That was so sad to me, you know? Like, part of what makes being good, good, is that you have a choice to be bad. We have choices all the time to be horrible people. And so often we choose not to be... and sometimes we do.
So is digital heaven a place where you have no choice but to be good?
DU: And that gets into who or what decides what's good and what that looks like, right? I'm interested in video games as a narrative form. I used to play a lot of them, and I'm interested in how your choices are limited... at least they used to be in certain video games, you know: you can only do certain things. So I guess I was thinking about that in terms of humans. Like, what if we only had the choice to... do you take the ant out of the house? Or do you put the ant in the garden? What if those were the only two choices? Would a lack of choice make the world better? It sounds like a dystopia, doesn't it?
EID: It does. It sounds like a dystopia to me because I often say to a friend of mine that ethics are just beliefs until they're tested, you know? Once you test a set of ethics, then you know that it's an ethic that you hold. Until that point, it's just... a thought experiment. You believe that you might act in a certain way. But having the choice to do something different is kind of essential to, I think, the value that's held in a particular action. Like, if you have no choice but to be good, well, then are you really being good?
DU: And would the AI care, if the AI is just so focused on the output or the results? Maybe in an AI's world it would be fine. But I kind of want to write a short story about what that would look like and how it would feel... it would feel confining, I'm sure. And how would people push those boundaries?
EID: I love that you mentioned video games too, because one of my favorite things to do in open world video games is to try and like, look for the seams in the game. The place where, you know, the company either ran out of budget or ran out of imagination or ran out of time. And there's just entire areas that have nothing going on. You know, all of a sudden you're at a cliff and you can't climb it for some reason. And you're not just running into the capital limits of the game developer, but also the limits of the technology. I know there are some games that are procedurally generated so they can just keep going and going and going. But now I'm curious, what were your favorite video games when you did play them?
DU: Oh, so I am in my late 40s, so I grew up with Infocom games, the text-based games. So Zork and the early Sierra games, King's Quest and all that, a lot of point and click. I'm still very interested in that. I know that's coming back a little bit in some indie games. I kind of like the pixelated point-and-click in terms of narrative. I think Ultima was in there too. So it was like the eighties, the games of the eighties.
My son plays video games and I have watched walkthroughs of games, but I feel like, judging from my earlier self, I probably would not have much restraint or self-control around games. It's very appealing to me to get immersed in another world. And I think I should be writing instead of doing that, but I think video games are like a new narrative form at the same time. And so many people engage with that form on this deep level. So sometimes I wonder if the novel is a little... if the form is becoming a little too static.
EID: Well, on the subject of things getting static: peppered throughout your book are a series of chapters that talk about both new words that we need to add for this new reality, and words that can be deleted because we just don't need them anymore — oftentimes they're words related to humanity, words related to economics and things like that. And again, your book is just, like, so prescient about what's happening now, where part of what we're seeing in real time is the deletion of language from government records and things like that. And part of the reason that we're seeing it is because the precursor to the DOGE agency or whatever was actually a digital agency that made all of the code for the government websites public on GitHub.[[6]] And so people have been tracking changes to the website code and have been able to see these things being deleted in real time.
You're a writer. Words are obviously important to you and matter to you. Tell me more about why you decided to pepper those things through the book, when those words are not even necessarily used by Sen in her writing, in her notebooks, or by the AI storyworker in its retelling of her life.
DU: The first reason is I've always wanted to tell a story through dictionary entries. I just find that form... I really love dictionaries. I have a big old one in the corner and there's some great online ones from the 1800s, 1700s... the OED can show you how language changes over time, which I just find so fascinating. So already there's kind of a narrative, I think, when we look at words. But I just thought it'd be such a fun form to be balancing these definitions with the personal.
Language is changing and the words we add to the dictionary, I find that fascinating too. Like when they pick the word of the year, just seeing what new words are coming in. And I guess I found it does give that character some agency to be able to create language, even if it's never used, because you know... there's not a lot of humans left by the time he's done.
In terms of deleting words, I found that to be really moving, in a different way than what's happening at the government level that you mentioned, which is more like horror... horrifying.
There's a great book, The Memory Police. Have you heard of it? Gosh, it's good. And it's really relevant now — in the book, when words disappear, the objects they represent also disappear. When LGBT+ was changed to LGB... that's just like someone taking an eraser to a whole group of society. So those were probably some of the parts that moved me most when I was writing... just thinking about all of these words.
As much as I think humanity has created a lot of grief for the planet... I mean, it is a loss if we disappear. Right? And for me, the language is a way to reach that loss.
EID: Yeah. I subscribed to your substack, and you're not just promoting your book or whatever. You are talking about what it's like to be a writer in 2025, what it's like to be a writer in general. And in an era where... music has been subject to this as well: there is just such an excessive amount of stuff out there. A virtually infinite amount of material. AI is certainly not helping in any way, and the economics of the industry are horrible. I was wondering if you could tell the listeners / readers a little bit more about that. What is it like being a writer in 2025?
DU: It might've always been weird, but it's especially weird for people who are generally creative. Especially before I had a novel out in the world, it was just me and my attic, you know, typing for years. And so I was in like this little cocoon of safety and art and then I decided to go the traditional publishing route. I wanted to try it as an experiment.
EID: When you say traditional publishing, you mean like having an agent, having a publisher, not self-publishing. Because when I say that "you have a substack," you're not a substacker. You are a writer that has a substack. You're not a substacker that has a book.
DU: Yeah, that's a great point. So the traditional route, in my mind, is, you know, going and trying to get an agent, then trying to get one of the big five publishers or their imprints. That's like Simon and Schuster, Penguin Random House. There are a lot more options these days, which I think is great: indie publishing, small presses, medium presses.
But I wanted to try just going the big publisher route. And so this art, this form of art and self-expression, it really gets kind of slammed into the business of publication. That was a collision for me, a crash. I don't know if it's like that for everyone. In addition to being a poet, I wrote short stories for 20 years before my novel. Some people, you know, they know they want to write novels. They want to sell their novel. They're a bit more business focused, but for me, it's always been about the words and the writing.
After World is really hard because I just... I don't think of it as a weird or difficult or experimental book. But definitely, when people looked at it, they would say, oh, this is so bleak. This is so hard. I'm not trying to be critical of the readership, but I think that surprised me. I really thought it was like a mainstream book, more mainstream.
It's found a niche. I think for this next novel, it's even harder, because I feel like there's a choice to make: do I want to be more commercially accessible, or do I just want to write whatever I want to write, what I feel is important? And so I spent the last year, like, thinking about this question.
My editor, to his credit, I think has an indie soul to him. And he said, just write whatever you want. Write what you have to write. So I don't know what that's going to look like, but yeah, there's trade-offs there. You know, you're never going to be on a celebrity book club with that kind of writing.
EID: Is being on a celebrity book club... is that sort of thing necessary to making a living? I've written a lot about the music industry and the impossible economics of being, like, an independent musician, or even a musician that dips their toes into mainstream channels. But I have no sense of how the book industry works. Do you get signed to a multi-book deal? Is every book its own thing? Is your publisher saying, "all right, After World's been selling well, we need After World 2"? Are those pressures to repeat yourself, or to further entrench yourself in one direction, there as well?
DU: I feel like with certain writers and certain editors and certain agents, there probably would be. I think because I lean experimental, and my editor does too, and my agent likes weird fiction, I feel like I have a lot more freedom. The contract for After World was just one book. And then I have a short story collection coming out that was a separate contract. And my editor kindly, you know, in a great old-fashioned way, wants to have long-term relationships with his writers.
So I don't know if that's always the case, but I really... I think that gives you a little bit of a safety net where it's not all about the sales of one book. I'm sure people would appreciate it if my short story collection was about AI or the environment, but it's about portals. It's really, really different. But I think that's probably the kind of writer I am, where whatever is obsessing me at the time, that's what I want to write about.
Ø
[[1]]: For a while OpenAI was banning users who probed for chain of thought — it's unclear if that's still the case.
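If you want to see what that looks like in practice: DeepSeek's API mirrors OpenAI's client library and returns the reasoning trace as a field separate from the final answer. Here's a minimal sketch, assuming the "deepseek-reasoner" model and the OpenAI-compatible endpoint as of this writing (both could have changed):

```python
# Minimal sketch: reading DeepSeek's exposed chain of thought.
# Assumptions: the OpenAI-compatible endpoint at api.deepseek.com and the
# "deepseek-reasoner" model, which returns its reasoning in a separate
# reasoning_content field alongside the answer. Details may have changed.
from openai import OpenAI

client = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Do you take the ant out of the house?"}],
)

msg = resp.choices[0].message
print(msg.reasoning_content)  # the background "thinking" shown before the answer
print(msg.content)            # the answer the model actually commits to
```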
[[2]]: A relatively recent, not-yet-peer-reviewed study found fairly consistent alignment faking — in other words, an AI displaying certain behaviour in order to avoid certain retraining scenarios.
[[3]]: This just sort of rolled off my tongue during the conversation, but it looks like the term was used (and cited widely from there) in 2022.
[[4]]: By this I mean, sure, we could ruin the planet until we die out. But then what? Is everything ruined just because we're gone?
[[5]]: From Cory Doctorow's review of After World, I got the inkling that some version of this list may have existed since late 2017.
[[6]]: I'm getting some wires crossed here. The precursor to DOGE was the U.S. Digital Service, which did do quite a bit of tech work for various gov't departments. But the agency at the center of the GitHub story was one called 18F.