Operation Bounce House and the Spectator Galaxy Problem

(A review / thinking-out-loud essay about Matt Dinniman being Matt Dinniman again.)


So I’m detecting a theme in Dinniman’s work now that I’ve got Dungeon Crawler Carl sitting in the back of my brain like a permanently running process.

That theme isn’t just the absurdity or the violence-as-entertainment angle, but the lazy disengagement of the general public when something morally monstrous is happening in plain sight. In Carl, the galaxy at large either (a) enjoys the spectacle, or (b) thinks it’s horrid but does nothing about it. In Operation Bounce House, the scale is different, but the vibe is eerily familiar: people living in a system where the suffering is “over there,” managed by a bureaucracy, justified by labels, and conveniently packaged so that the average person can just keep scrolling. As our own world has gotten smaller and smaller thanks to social media, isn’t that often how it feels today?

Dinniman keeps tugging at this scab, and I think he’s right to do it, both as entertainment and as a statement about our times. And the best thing about it is that it doesn’t seem to be overtly ideological. He’s not saying “left > right” or vice versa, just “hello, are we forgetting something important?” We are FAR too entrenched in partisanship these days, and voters are increasingly fed up with both sides.

That said, it’s just a theme. While I love that he’s using it, I don’t think it fully describes his work. I think Matt Dinniman is one of the best SF&F authors working right now (not just LitRPG!).

The setup: reconnected humanity, weaponized narratives

Operation Bounce House is set on a colony world that’s been rejoined with Earth and other colony worlds through a new FTL communications and jump system. Humanity is “back together,” technologically. But socially and politically? Not so much. Something goes sideways, and Earth decides this colony is a hotbed of terrorism that needs to be eradicated.

One of the most believable parts of the premise is how quickly a distant population can be reduced to a label. “Terrorists.” “Insurgents.” “Threat.” Pick your word. Once you do that, you’ve given everyone permission to stop thinking. And of course, if you want to avoid thinking, you outsource the messy part.

Fair enough, that’s a theme we’ve seen before, from Avatar to Carl, but it does include one new element.

The hook I didn’t know I wanted: hiring gamers to run the war

One of Dinniman’s best ideas here is also one of the most modern-feeling: Hiring gamers to operate remote mechs to go after these “terrorists.” It’s such a clean fit for the world we’re already living in that it almost feels like it should be real. Not the mech part. The distance element. The abstraction of it. The way violence is pushed through interfaces until it looks like a job, or a sport, or content.

So the colonists have to fight that somehow, against forces that have better gear, better access, and a whole political machine behind them.

Is this LitRPG, or LitRPG-adjacent?

I keep tripping over this question because the “system” is absolutely present, but it’s not quite the standard vibe of “player grinds levels, numbers go up, dopamine machine goes brrr.” Instead, some of the most interesting moments describe the gaming system the antagonists are equipped with: their toolset, their “rules,” their loadout logic, ammunition warnings, the restrictions on mech use by a player back on Earth, and so on. But as with all good LitRPG, those elements aren’t just flavor. They matter to the story. They’re often clues to what’s really going on, which makes them part of the fun.

In this case, in gaming terms, our heroes have good intel but no bank. The Earth-bound players, on the other hand, have plenty of bank but little intel. This imbalance creates a different flavor of tension than a typical LitRPG. It’s less “how do I optimize my build,” and more “how do we survive inside a rigged economy of power.”

So yeah: if somebody asked me what shelf this belongs on, I’d probably say LitRPG-adjacent and then immediately regret saying anything definitive, because genre labels are a trap and Dinniman doesn’t care what shelf you want to put him on anyway.

Slow start, then the “Dinniman Ramp-Up”

This book starts quite slow, and I almost gave up on it. But once it gets going, it REALLY GOES. That third act is one of the best Dinniman reads since The Butcher’s Masquerade. An absolute cacophony of overlapping revelations, plot twists, and pure action insanity. It’s really, REALLY good.

This is one of the things Dinniman does that I’ve come to see as his signature move: the way he embeds plot development into action sequences so it doesn’t feel like “action, action, action, pause for exposition.” The plot moves because the action moves.

Information is revealed while somebody is dodging bullets (or whatever the local equivalent is). Character bonds form while the situation is getting worse. The story doesn’t stop to explain itself politely. It actually feels like an evolution of SF&F as a whole, and it’s exciting to watch.

Roger: a robot running headlong into human mess

“Roger” is a fun robotic AI personality, and I mean that in the specific way that Dinniman characters are “fun”:

  • He’s competent.
  • He’s weird.
  • He’s got limitations that become comedic pressure points.

His inability to talk about human sexual practices becomes a running joke that lands more often than it should. There’s a line that got an actual laugh out of me because it’s delivered so casually:

“…we’d shared that mattress many times but Roger never mentioned it.”

That’s the kind of humor that doesn’t come from a “joke,” exactly. It comes from a character constraint colliding with normal human life.

And Dinniman is really good at that.

Quick nerd note: “Hive Queen” and an Ender shadow?

I can’t prove this, but I wonder if the product name “hive queen” (used for Roger’s container) is a little nod toward Orson Scott Card’s Speaker for the Dead. That book does share some common themes, and it’s not hard to see Roger as being in a similar situation to Card’s hive queen.


Complaints? Just one...

So I do have one complaint, and I’m not sure it’s entirely valid: I think it’s too short. I get it, not everyone likes a story to stretch out across 10 or 15 thousand-page novels! But there’s a reason why that increasingly popular format makes sense, and why movies are slowly being replaced by streaming shows as the premier storytelling medium. We CRAVE complex character development and plots, and it takes time to deliver that when your readers are so experienced. (The old Hollywood three-act structure just doesn’t cut it anymore, right?)

But I think for the time being we’re basically stuck with this author pattern: You can have your long series, but for marketplace reasons you’re going to get some “standalone” books too. Fair enough, and as long as they’re good stories I won’t complain. I still watch movies too, after all.

Some meta thoughts: Is this opposition or advocacy for AI?

Something I found interesting is that the book doesn’t feel like a simple “AI good” or “AI bad” sermon. I’m avoiding spoilers here, but basically Roger is positioned as a protagonist presence. He’s likable, useful, and emotionally “there” in the way Dinniman wants him to be. But the story overall feels less like advocacy and more like an argument for balance. Not “stop AI,” but also not “AI will save us.” More like: humans are going to use tools to avoid moral responsibility, and here is why we need to be careful.

Intriguingly, I think that circles right back to the theme I started with: Public disengagement. The spectator galaxy problem. The desire to outsource guilt.

The disclaimer at the end — should it be removed?

The book is followed in the same recording by a short disclaimer saying it may not be used to train AI. That’s fascinating to me because, if you’re thinking purely in terms of “what books should an AI be trained on to understand humans,” this one checks the boxes: it’s modern, it’s ethically charged, and it’s about systems, incentives, abstraction, propaganda, and the way people behave when they’re safely distant from consequences.

So wouldn’t this be exactly the one you would WANT to be used to train AI?

But perhaps the disclaimer is less about the content and more about the moment we’re in. Authors are watching their work get vacuumed up into datasets, and even if you’re not monetizing your writing, someone else might be monetizing what they take from it. (Or maybe it’s more of a publisher requirement now that Dinniman is writing for Penguin Random House?)

I don’t know the answer, and I support the author either way, but I feel like there’s more to this story. Maybe we will ultimately find such disclaimers to have been only a temporary necessity, once we straighten out all the rights issues and pitfalls.


Final take

If you like Dinniman for the “systems + chaos + accelerating stakes + weird heart” combo, this delivers, especially after the ramp.

And if you’re the kind of reader who keeps circling back to “what does this say about us,” there’s a lot here about how easily a society can be taught to watch something terrible happening and call it normal.

Which is… not exactly relaxing.

But it is Dinniman. Oh my yes.


Cover image used for review purposes. Additional illustrations in this article were created by the author using AI tools. This review is not monetized.
