Book 31 of 2025: ⭐️⭐️⭐️
My favorite this month was “In the Shells of Broken Things” by A. T. Greenblatt.
And that’s it — this morning, I finished off Lois McMaster Bujold’s Vorkosigan Saga. A little under two years (non-consecutively; many other books have been read in between these) of the adventures of Miles Vorkosigan and his family, friends, and enemies.
This was neither planned nor expected. I started because I’m reading my way through all the Hugo Best Novel winners, and several of the Vorkosigan books are on that list; I continued because even though neither romance nor military sci-fi is a genre I usually go for, the books are just so good that they pulled me in. (And, well, I don’t like coming into a series midway through if I can help it, so I had to read the books that were earlier or in between the Hugo winners.)
Bujold’s characters are wonderfully realized. Not always heroes or even particularly heroic (and sometimes rather disturbing), but always very real. The books are funny — I wouldn’t really call any of them comedies, but they are frequently comedic. And the world building (over multiple worlds scattered across a wormhole-connected galaxy) is great, with cultures that are obviously related while also being very separate and distinct.
All in all, while I certainly didn’t expect this when I started, this has become a favorite series, and I’m very much looking forward to seeing what Bujold does in a fantasy world, as I head into her World of the Five Gods series (of which the second, Paladin of Souls, is the next Hugo Best Novel book on my list). (It’s a good sign that both the Vorkosigan Saga and the World of the Five Gods series have won Best Series Hugos….)
Book 30 of 2025: ⭐️⭐️⭐️⭐️
In a fitting bookend for a series that began with a sci-fi romance between a military officer and a planetary surveyor, it ends with something essentially the same, even down to one of the same characters. Much less adventuresome or military than the other books, this is more of a pleasant, comfortable wrap-up for the series, bringing it back to where it started while checking in with many of the remaining characters. While the stories could certainly go on if Bujold chose to write more, this is also a very worthy ending to the series itself.
The School of Rock kids performing; I think here they were doing Poison’s “Every Rose Has Its Thorn”.
Finished Andor. That was really good. Easily among the best of the modern Star Wars shows and films (that I’ve seen, at least, not having seen them all).
Eric Wilkinson at King 5: AI stepping up as backup for short-staffed PenCom dispatchers (which was headlined “AI now takes some calls for help on Olympic Peninsula” when I bookmarked this): “AI listens for keywords that may indicate crime or violence. It even picks up inflections in the caller’s voice to sense trouble. If any of those criteria are met, the call goes directly to a real person.” Yeah, I can see no way in which this could go wrong…
Helen Smith at King 5: The Cascadia Subduction Zone looks a little different than researchers thought. Here’s what that means for ‘The Big One’ (which was headlined “New research reshapes ‘The Big One’ tsunami risk” when I bookmarked it…what’s with King 5 renaming headlines?): “New findings show that tsunami risk may be different, though not less, in places along the subduction zone. This is due to the absence of a ‘megasplay fault,’ which was previously believed to run from Vancouver Island down to the Oregon-California border.”
David Friedman at Ironic Sans: Proof that Patrick Stewart exists in the Star Trek universe: Fun interview with Star Trek fan and researcher Jörg Hillebrand.
Technology Connections on YouTube: Closed captions on DVDs are getting left behind. Half an hour, but a fascinating look at how closed captions are encoded into analog video, how it works with the digital video of DVDs, and why modern players and Blu-ray disks are falling over with their closed caption support. Some of the basics here I knew from my subtitle projects, but a lot of the technical details were new to me and neat to learn about.
Ben Cohen in The Wall Street Journal: They Were Every Student’s Worst Nightmare. Now Blue Books Are Back. (archive.is link): “Students outsourcing their assignments to AI and cheating their way through college has become so rampant, so quickly, that it has created a market for a product that helps professors ChatGPT-proof school. As it turns out, that product already exists. In fact, you’ve probably used it. You might even dread it. ¶ It’s called a blue book.”
Nadira Goffe in Slate: The Controversy Surrounding Disney’s Remake of Lilo & Stitch, Explained: I don’t have any interest in watching the remake (big fan of the original, though), but as a non-Hawaiian white guy, reading about the political undertones in the original that have been stripped out of the remake was really interesting, as it was a lot of stuff that I didn’t know.
Anil Dash: The Internet of Consent: “The growing frustration around “enshittification” is, in no small part, grounded in a huge frustration around having a constant feeling of being forced to use features and tools that don’t respect our choices. We’re constantly wrestling with platforms that don’t respect our boundaries. And we have an uncanny sense that the giant tech companies are going behind our backs and into our lives in ways that we don’t know about and certainly wouldn’t agree to if we did.”
Kelly Hayes: From Aspiration to Action: Organizing Through Exhaustion, Grief, and Uncertainty: “As an organizer, I’ve been thinking a lot lately about the gulf between what many people believed they would do in moments of extremity, and what they are actually doing now, as fascism rises, the genocide in Palestine continues, and climate chaos threatens the survival of living beings around the world.”
Chelsey Coombs at The Intercept: “Andor” Has a Message for the Left: Act Now: “‘Andor,’ the new series set in the universe, doubles down on its anti-authoritarian roots, focusing on the creation of the revolutionary Rebel Alliance. In the process, it gives us a glimpse into the messiness and conflict that often accompanies building a movement on the left, as activists fight over which political philosophies and strategies work best.”
Yona T. Sperling-Milner at The Harvard Crimson: Come At Me, Bro: “I propose an alternate strategy: I shall fight Secretary of Education Linda E. McMahon in a televised cage match, the winner of which gets $2.7 billion in federal grants and the power to uphold or destroy America’s continued technological and economic success.”
Suyi Davies Okungbowa: I Call Bullshit: Writing lessons from my toddler in the age of generative AI: “Software is as limited as the individuals, systems and institutions that define and prompt it, and as of today, mimicry is its highest form. But as you can see above, mimicry is not a significant endeavour. A human baby can mimic. A chameleon can mimic. Mimicry is basic.”
Book 29 of 2025: ⭐️⭐️⭐️⭐️
We haven’t gotten an Ortegas-centric episode of the show yet, so it’s fun to get a bit more of a peek into her and her background through this adventure. A mysterious alien artifact, a dangerous planet, ornery Klingons…all in all, quite the fun Trek adventure.
(Very mild spoilers: The only flaw for me wasn’t actually a flaw with the book, but a happenstance of my reading: I’d just read the TOS ebook novella Miasma, so this made for two Trek stories in a row with a landing party trapped on a rainy, muddy planet being chased by swarms of hungry giant bugs while cut off from all communication with the Enterprise. I had to keep reminding myself it wasn’t quite as derivative as it seemed.)
I didn’t get to this last weekend, and this week was too busy to sneak it in and backdate it, so I’m just going for a two-week catch-up this time. Good enough!
This past week, in addition to the usual work duties, had several evening events that were fun to do, but definitely threw our weekly routine off.
Three shots from the Underworld show. Plus a bonus shot…
This cameraperson was a real MVP of the evening, having to keep the camera trained on the stage…and keep it steady. There is no way I could do that job; the camera would be bouncing all over the place in time with the music. I was really impressed!
Live music under the Space Needle on a gorgeous early summer day.
A question on a work mailing list got me rambling about my frustrations with the popular confusion of machine learning with “artificial intelligence”.
I’ve added a few albums over the past two weeks that I’m enjoying:
Particularly interesting reads from across the web.
Book 28 of 2025: ⭐️⭐️⭐️⭐️
More good Vorkosigan adventure as Miles heads off to investigate cryogenic companies and ends up in more trouble than he expected. Toss in some cute kids, a menagerie, questionably competent criminals, and diplomats getting their first taste of Miles’s approach to problem solving, and it’s a lot of fun. Though the end took a turn I definitely wasn’t expecting….
The following is a (lightly edited) response I gave to a recent accessibility mailing list question from Jane Jarrow, prompted by concerns about the use of various AI or AI-like tools for accessibility in higher education:
Folks responded by noting that they didn’t consider things like spell check, screen readers, voice-to-text, text-to-voice, or grammar checkers to be AI – at least, not the AI that is raising eyebrows on campus. That may be true… but do we have a clean way of sorting that out? Here is my “identity crisis”:
What is the difference between “assistive technology” and “artificial intelligence” (AI)?
This is me speaking personally, not officially, and also as a long-time geek, but not an AI specialist.
I think a big issue here is the genericization of the term “AI” and how it’s now being applied to all sorts of technologies that may share some similarities, but also have some distinct differences.
Broadly, I see two very different technologies at play: “traditional”/“iterative” AI (in the past, and more accurately, termed “machine learning” or “ML”), and “generative” AI (what we’re seeing now with ChatGPT, Claude, etc.).
Spell check, grammar check, text-to-speech, and even speech-to-text (including automated captioning systems) are all great examples of the traditional iterative ML systems: they use sophisticated pattern matching to identify common patterns and translate them into another form. For simpler things like spelling and grammar, I’d question whether that’s really even ML (though modern systems may well be). Text-to-speech is kind of an “in between” state, where the computer is simply converting text strings into audio, though these days, the use of generative AI to produce more natural-sounding voices (even to the point of mimicking real people) is blurring the line a little bit.
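To make the “sophisticated pattern matching” point concrete, here’s a deliberately tiny, hypothetical sketch of how a simple spell checker can work (not how any real product is implemented): it just measures how far a typo is from a list of known words and suggests the closest match. There’s no understanding involved, only comparison.

```python
# Toy spell check: suggest the dictionary word closest to the typo.
# Pure pattern matching -- no "intelligence" required.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,              # delete a character
                cur[j - 1] + 1,           # insert a character
                prev[j - 1] + (ca != cb)  # substitute (free if chars match)
            ))
        prev = cur
    return prev[-1]

# A stand-in dictionary; a real checker would have hundreds of thousands of words.
dictionary = ["spelling", "grammar", "caption", "pattern"]

def suggest(word: str) -> str:
    """Return the dictionary word with the smallest edit distance to `word`."""
    return min(dictionary, key=lambda w: edit_distance(word, w))

print(suggest("speling"))  # suggests "spelling"
```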
Speech-to-text (and automated captioning) is more advanced and is certainly benefitting from the use of large language models (LLMs) on the backend, but it still falls more on the side of iterative ML, in much the same way that scientific systems are using these technologies to scan through things like medical or deep-space imagery to identify cancers and exoplanets far faster than human review can manage. They’re using the models to analyze data, identify patterns that match existing patterns in their data set, and then producing output. For scientific fields, that output is then reviewed by researchers to verify it; for speech-to-text systems, the output is the text or captions, which are presented without human review…hence the errors that creep in. (Manually reviewing and correcting auto-generated captions before posting a video to a sharing site is the equivalent of scientists reviewing the output of their systems before making decisions based on it.)
Where we’re struggling (both within education and far more broadly) is with the newer, generative “AI”. These systems are essentially souped-up, very fancy statistical modeling — there’s no actual “intelligence” behind it at all, just (though I’ll admit the word “just” is doing a lot of heavy lifting here) a very complex set of algorithms deciding that given this input, when producing output, these words are more likely to go together. Because there’s no real intelligence behind it, there’s no way for these systems to know, judge, or understand when the statistically generated output is nonsensical (or, worse, makes sense but is simply wrong). Unfortunately, they’re just so good at producing output that sounds right, especially when output as very professional/academic-sounding writing (easy to do, as so many of the LLMs have been unethically and (possibly arguably, but I agree with this) illegally trained on professional and academic writing), that they immediately satisfy our need for “truthiness”. If it sounds true, and I got it from a computer, well then, it must be true, right?
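To make the “these words are more likely to go together” idea concrete, here’s a deliberately tiny sketch (a toy bigram model, nothing remotely like a real LLM’s scale or sophistication, and the sample text is invented for illustration): it counts which word follows which in some sample text, then generates output purely from those statistics. Notice that nothing in the code knows or cares whether the result is true or even sensible.

```python
import random
from collections import defaultdict

# A toy "language model": count which word follows which in a tiny corpus,
# then generate text by repeatedly picking a statistically plausible next word.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# follows["the"] ends up holding every word that ever appeared after "the".
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start: str, length: int = 8, seed: int = 0) -> str:
    """Generate up to `length` words, each chosen from words that
    statistically followed the previous one in the training text."""
    rng = random.Random(seed)  # seeded so the output is repeatable
    words = [start]
    for _ in range(length - 1):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Every pair of adjacent words in the output did occur somewhere in the training text, which is exactly why the result *sounds* plausible while meaning nothing in particular.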
(The best and most amusing summary I’ve seen of modern “AI” systems is from Christine Lemmer-Webber by way of Andrew Feeney, who described it as “Mansplaining as a Service: A service that instantly generates vaguely plausible sounding yet totally fabricated and baseless lectures in an instant with unflagging confidence in its own correctness on any topic, without concern, regard or even awareness of the level of expertise of its audience.”)
Getting students (and, really, everyone, including faculty, staff, the public at large, etc.) to understand the distinction between the types of “AI”, when they work well, and when they prove problematic, is proving to be an incredibly difficult thing, of course.
For myself, I’m fine with using traditional/iterative ML systems. I’m generally pretty good with my spelling and grammar, but don’t mind the hints (though I do sometimes ignore them when I find it appropriate to do so), and I find auto-captioning to be incredibly useful, both in situations like Zoom sessions and to quickly create a first pass at captioning a video (though I always do manual corrections before finalizing the captions on a video to be shared). But I draw the line at generative AI systems and steadfastly refuse to use ChatGPT, AI image generators, or other such tools. I have decades of experience in creating artisanally hand-crafted typos and errors and have no interest in statistically generating my mistakes!
I’m afraid I don’t have good suggestions on how to solve the issues. But there’s one (rather long-winded) response to the question you posed about the difference between assistive technology and “artificial intelligence”.
Book 27 of 2025: ⭐️⭐️⭐️
A simple, quick adventure, as a mysterious signal diverts the Enterprise from ferrying diplomats around so they can investigate. Not terribly surprisingly, the landing party has difficulties and great peril. A perfectly serviceable quick Trek novella.
Really, this is one of those weeks that just boils down to being another week, without any noteworthy points.
🚀 Norwescon has just about wound down, with just this coming weekend’s post-con meeting to wrap things up until we spin up in the fall for next year. Of course, that means a little less for me, as the website needs to be archived and reworked; hopefully I’ll be able to arrange time with my team to start that work soon. The Worldcon situation has dropped down to a light simmer rather than a full boil, which is progress. Mostly, I keep watching what people write and constantly have to fight the temptation to jump in and correct mistaken assumptions or assertions. As satisfying as it might be in the moment, it wouldn’t actually help. Sometimes knowing that I’m better off keeping my mouth shut really sucks, though.
🏡 We spent part of the weekend cleaning up our little back yard for the summer and refreshing the herb and flower planters. (By which I mean, my wife did the planting, and I did the manual labor of moving planters around and hauling the old stuff out to the trash.) Hoping we have more chances to relax back there than we have for the past couple summers.
Pansies at the garden center.
Sitting for a moment between moving things around.
One corner of our yard.
Another corner of the yard.
Lately it’s been a fair amount of old Hell’s Kitchen, because it can be entertaining to watch Gordon Ramsay yell at people.
VNV Nation’s “Construct” came out this week, and new VNV Nation is always good. I did see one friend describe it as “the new VNV Music Factory”, which is funny, but also not wrong, but y’know, I’m good with that. It’s like a review I once saw comparing KMFDM to a Big Mac: You always know that what you get is going to be maybe not great, but big, cheezy, and acceptably satisfying when that’s what you’re in the mood for. VNV Nation isn’t the same sound, of course, but it’s kind of the same idea: You know what you’re getting, and it’s good comfort food (and occasionally really, really good, though I haven’t identified any tracks off this album that are particular standouts yet).