Book 1 of 2026: Thin Air by Dean Wesley Smith and Kristine Kathryn Rusch: ⭐️⭐️⭐️.
Another decent book in the series, with another crisis for the Enterprise to solve and the colonists to endure. I’m starting to wonder if they’ll actually be able to wrap up all the dangling threads in just one more book.
Every year, I set myself a goal of reading at least 52 books over the course of the year — an average of one a week. This year I made it to 67 books. Here’s a quick (not really) overview…
And again, the trend of the last few years holds true, with another year of primarily escapist fluff. Surprised? I’m not. Have you seen…everything? Still?
Non-genre-fiction (where “genre” is shorthand — though not that short, if you include this parenthetical — for science fiction, fantasy, and horror): None, or perhaps almost none; the Nick Mamatas-edited anthology 120 Murders: Dark Fiction Inspired by the Alternative Era has a few speculative fiction entries, but as a whole probably wouldn’t be classified that way, so it can count for this category.
As usual, I read all of the books nominated for this year’s Philip K. Dick Award. However, I’m no longer posting my thoughts or reviews of the nominees, as I am the coordinator for the Philip K. Dick Award ceremony at Norwescon. While I have no input into selecting any of the nominees or the eventual winner, I don’t want to give any appearance of impropriety. So, I’ll just read and enjoy each year’s nominees, and you all will have to make your own judgements as to your favorites.
I only added four books to my Hugo reading project, and one was somewhat accidental. Only getting through four was due to a few factors, including deciding to read the entirety of Martha Wells’s excellent The Murderbot Diaries series before attending this year’s Worldcon here in Seattle, where she was a guest of honor (this also accounted for my “accidental” read, as I’d forgotten that Network Effect was a Hugo Best Novel winner), and slogging my way through Susanna Clarke’s Jonathan Strange & Mr Norrell, which took slightly over a month. Of the four I read this year, Network Effect was my favorite.
Outside of those two categories and in addition to reading all of the aforementioned Murderbot series, I also finished Lois McMaster Bujold’s Vorkosigan Saga, read two of the books in Bujold’s World of the Five Gods series (one of which, Paladin of Souls, is another Hugo Best Novel winner), and read both books of Catherynne Valente’s Space Opera series. All of those were good. Apparently it was a series binge sort of year.
Fluff genre fiction: Unsurprisingly, this once again ended up being the strong majority of this year’s reading. Lots of Star Trek novels, with a few detours here and there. And given everything that was going on in 2020 2021 2022 2023 2024 2025, it was very nice to have a bookshelf full of options that wouldn’t take a whole lot of brain power for me to disappear into. As I’ve now read most of the TOS books that have been released, most of this year’s reads were TNG-era, but I’m closing out the year with the TOS-era six-book “New Earth” series (again with “series” being something of a theme this year). That series will also start my 2026 reading, as I only got through four of the six before the end of 2025.
I’m still subscribing to two SF/F magazines (Uncanny and Clarkesworld), though I’ve slacked off on actually reading them for the past few months — not due to anything with the magazines themselves, but just because I’ve been more in the mood to work on emptying out my physical “to read” shelves. I’m not sure how much magazine reading I’ll do this coming year, but I will be continuing to subscribe, as I want both of these magazines to continue publishing, and it’ll be nice to have some back stock to dive into when I’m ready (or when I’m traveling, since these are both electronic distributions).
Almost a four-star, due to a particularly imaginative doomsday weapon that really had me lost as to how they were going to technobabble their way out of it. Settled on three, though, as it is a “middle book” that doesn’t stand on its own. Still, a more engaging entry than many middle books end up being.
While I’m not an SFWA member or even an author (beyond this little blog), as a lifelong reader of science fiction, I figured it was worth a few minutes to add my response to their current Survey on LLM Use in Industry. For the sake of posterity, here are my responses:
I am a…
SFWA member in good standing (“Active”)
lapsed SFWA member
writer considering SFWA for future membership
✅ member of the general public
I create SFF in the following forms (choose as many as apply)
short fiction
longform fiction
poetry
comics / graphic novels
video games
analog games
film, theatre, and/or TV
nonfiction
✅ I’m a reader/player of SFF.
Has any of your writing been identified as part of stolen data sets in AI-related industry crises?
Yes, and I am part of a certified class action.
✅ Yes, and I am not eligible for most/any class actions.
No, but people in my circles have been directly impacted.
No, but this issue remains a pressing concern.
I don’t know.
(Note: Some time ago there was an online tool to look up sources that were used in one of the earlier revisions of one of the larger LLMs; I don’t currently remember who offered the tool or the specifics of the training database being reviewed. I do remember that both this blog and the Norwescon website, where I’ve been both author and editor of much of the content for the past 15 years, were included in the training database.)
How has your writing practice changed since the emergence of Generative AI and related LLM integrations?
✅ I proactively turn off every new AI feature I can.
✅ I switch away from writing tools that promote AI integrations wherever possible.
✅ I avoid search engines and other summary features that rely on AI.
✅ I accept AI features selectively, avoiding or switching off all the Generative AI tools I can identify, while leaving translation, spelling and grammar, and/or research assistants mostly intact.
I engage with AI chat features to brainstorm story elements, and/or for research questions of relevance to my writing.
I have used Generative AI for the development of story plots, characters, and/or scene construction.
I’m not a writer or editor.
(Note: The first and fourth of these options seem to contradict each other. For clarity, I either disable or, if it can’t be disabled, actively avoid using generative AI; as noted in the sidebar of this blog, I do use machine-learning/LLM-based tools such as speech-to-text transcription, but when I do, I check and edit the output for accuracy.)
Which of the following most closely resembles your position on the use of Large Language Models (LLMs) in the writing process?
✅ There is no ethical use-case for writers, because this technology was developed through piracy and/or continues to negatively impact environmental systems and marginalized human beings.
Putting aside the historical and environmental issues, Generative AI needs to be approached differently from other LLMs, because other forms of LLM sometimes show up in tools (e.g., spell-check, grammar-check, translation software) that are normal parts of a writer’s workflow.
The use of any AI system for any part of the writer’s workflow that is not the writing itself (so, including brainstorming and research phases) is perfectly fine. It is only the words on the page that matter.
There are cases where the use of Generative AI for active storytelling might be a critical part of the story we want to tell, so it’s really a case-by-case determination.
Some writers are working for companies that make choices about AI without their involvement in the decision-making process, and this matters when deciding how we respond to the presence of AI in their work as individual creators.
I am not opposed to the use of LLMs in any capacity in the creative process.
Tell us more about where you agree with or deviate from the statement you chose above.
My actual answer is probably somewhere between the first (no ethical use-case) and second (recognizing LLM use in some tools) options.
One of the biggest problems with the current discussions (including this survey and in the File770 threads started off of Erin Underwood’s open letter) is the grouping of several related but distinct technologies under the banner term of “AI”.
Machine learning and LLM-backed analysis models are one thing. These are the technologies that have been used for years in many different contexts, including (some) spelling and grammar checkers, speech-to-text transcription, simple text-to-speech generation (suitable for screen readers and similar applications, not for audiobook creation), medical analysis, and so on. These are analytical or simply transformative processes, not creative ones. In all cases, though, the output should be reviewed and checked by humans, not accepted as-is by default.
Generative AI (genAI) is the severely problematic aspect, for all the reasons cited by the many, many people advocating for its avoidance (the unethical practices in the creation and ongoing use of the technology, its social and environmental costs, its high error rates, and more).
It’s unfortunate that all of these aspects are now grouped together as “AI”, as it makes it nearly impossible to approach the subject with any amount of nuance. I suspect that was what Ms. Underwood was attempting to do; however, since she also falls victim to the same confusion, she sorely missed the mark (and has continued to do so in her responses).
As a reader, I would be very disappointed to see the Nebulas (and any other award) accepting the use of genAI in the creation of nominated (let alone awarded) works.
What forms of guidance do you think would most benefit writers trying to navigate the growing presence of LLMs in our industry?
✅ Informational pages on SFWA.org explaining key terms and use-cases.
✅ Articles on how to recognize and navigate forms of LLM in writing tools, and where to look for alternatives.
✅ Full bans on any and all AI use in submissions and nominations processes, with consequences for failure to disclose.
✅ Bans on Generative AI in submissions and nominations processes, with clear and severe consequences for failure to disclose.
✅ Market reports that explicitly set a rigid bar for inclusion based on the publication’s commitment to not working with AI.
Other (please elaborate below).
[Continued] What forms of guidance do you think would most benefit writers trying to navigate the growing presence of LLMs in our industry?
Clarity in defining the differences among the technologies, and in determining which may be acceptable depending on the technology and its use (such as speech-to-text transcription, spell/grammar checkers, etc.) and which are unacceptable (genAI for text or art creation).
Colonization adventures continue on Belle Terre, as Sulu, Uhura, and Chekov deal with troublesome splinter groups and environmental aftereffects of the events of the prior book in the series. A solid mid-series entry, with a good focus on this secondary trio while the Enterprise is busy elsewhere.
Really, this kind of needs two ratings: Five stars for the content, and two stars for the editing. Thom is incredibly knowledgeable about photography and Nikon cameras, and this book is a remarkably deep dive into the Z5II: how it works, why it works the way it does, what all those settings mean, and suggestions on how to get the best results out of the camera. However, as he has obviously (and quite transparently; he mentions some of this on his website) adapted large swaths of this book from very similar books on other cameras in Nikon’s Z series, there are a lot of instances where the shift from one camera to another wasn’t caught, leading to everything from the wrong camera model being mentioned to other slight errors (I’ve not come across anything major or anything that would cause a problem, though). Still very worthwhile, and I’ll be sure to keep this easily available on my iPad so I can reference it whenever I need to.
Having made it through saboteurs and alien conflicts, the colonists now need the Enterprise’s help dealing with a moon set to explode in a week. The setup sounds far-fetched, but works to keep the overall tension going, plus a few new mysteries are tossed in, sure to be addressed again later in the series.
An interesting start to this six-part series. Shortly post-V’ger, Kirk and the Enterprise guide a 70-ship convoy of 60,000 settlers to a new home six months away. Of course, things do not go well. Most interesting so far for its treatment of Kirk, somewhere on his road from the (perhaps overly) brash self-assurance of TMP to the depression of the start of TWoK, questioning his place and the effects of his career. The new alien races are interesting as well. However, the primary antagonist is a little too one-note, and while “the Orions” are involved, I’m very confused by them, as they’re described in ways that don’t match the green-skinned humanoids we know as Orions (descriptive bits include: “…slimy muscular arm…”, “…arrowlike orange eyes…”, “…his many-fingered limb…”, “…his claw still tightened around [their] jaw…”, “…purple skin…”, “…turned burgundy with both fury and fear…”, “[his] excuse for eyes…those milky orbs…”). At some point in the editing process, those descriptions should have been corrected, or the aliens should have been given a name other than “Orions”.
So much of this made me laugh, and it all wrapped up in an extremely satisfying way. Valente is a hilarious writer (the “Douglas Adams on a sugar high” quote on the book cover is spot on); my only disappointment (and it’s not with the book) is that I was busy enough at last spring’s Norwescon, where she was a guest of honor, that I barely crossed paths with her and didn’t get to say how much I enjoy her work.