Self-Hosted Image Gallery Recommendations?

A lazyweb question: Is there decently modern web image gallery software anywhere?

I’d like to move away from Flickr in favor of self-hosting my photo galleries. But so far all the packages I’ve found are…well, they tend to look and feel (both on the backend admin side and the frontend public gallery side) like they haven’t been updated in the past decade or more.

Admittedly, sometimes this is because that’s exactly the case…which also doesn’t make me want to download them. But sometimes they’re still apparently under active development, but still look and feel like early-2000s projects.

Software I’ve installed, poked at, thought “mmm…well…maybe…”, and looked on to see what else I could find:

  • Piwigo is under active development (last release three weeks ago) but has rather sparse documentation if you’re not a developer building plugins, and needs config file editing just to display more than the most basic image metadata.
  • Zenphoto is also under active development (last release a month ago), but appears to be gearing up for a more major update…which could be good, but there’s no indication of when that will happen, and much of the current installation (like every one of the default themes) has a “this has been deprecated” warning. So it doesn’t seem worth investing time into getting it up and running and populated if the current version is soon to be end-of-lifed, with who knows what sort of compatibility with the next version.

Things I’ve looked at but not downloaded:

  • 4Images may or may not be under active development; the last update was in November of ’21.
  • Coppermine’s last update was in 2018…but the two before that were in 2013 and 2010, so who knows if it’s still active or not.
  • Gallery at least admits it’s dead; it points to Gallery Revival, which hasn’t been updated since November of ’21.
  • Pixelpost: “tldr: This project is abandoned, and has known security issues, use at your own risk.”
  • TinyWebGallery: I can’t quickly figure out when it was last updated, but the header graphic advertises “Flash uploaders”, and there are too many ads for online casinos on that page for me to bother digging around any further.

I’d like to stop giving Flickr money (I have nothing particularly against them, but at this point, I have nothing particularly for them either; their website doesn’t “give me joy”, and when embedding photos, the alt text is just the image title, not even the image comments, let alone any option to add true alt text), and I simply don’t trust Google enough to drop all my images into their systems. I’ve played with SmugMug as well, but again, I’d like to be able to self-host, not pay.

I’m a little surprised that this is such a sparse field, but I suppose that Flickr and Google Photos are “good enough” for most people these days, so there’s not a big market for people like me: a tech-savvy hobbyist photographer who’s not particularly interested in relentlessly pursuing monetization.

Recommendations would be appreciated if I’ve missed something worth investigating. As it is right now, though, I’m guessing my best bet will be to see what I can manage with either Piwigo or Zenphoto.

Bring Back Blogging

Monique Judge at The Verge, in “Bring Back Personal Blogging”:

In the beginning, there were blogs, and they were the original social web. We built community. We found our people. We wrote personally. We wrote frequently. We self-policed, and we linked to each other so that newbies could discover new and good blogs.

I want to go back there.

Hard agree. This blog got its start in the mid-’90s — the earliest “post” I can still verify was on December 29, 1995, and though it now lives in this blog, it was originally a hand-coded entry on a static “Announcements” page — back before “blogging” was even a term. In fact, it wasn’t until February 8, 2001 that I first discovered the word “blog”.

So there’s a lot of what Monique writes about that I remember very clearly. And I miss a lot of it. Which seems kind of funny to say, because in a lot of ways, it really hasn’t ever completely died, but the shift to social media definitely impacted the blogging world.

I’m hopeful (if not optimistic) that just maybe the issues at Twitter, the rise of Mastodon, and the general upheaval in online spaces will actually lead to something of a resurgence of people writing for themselves and in their own spaces.

Buy that domain name. Carve your space out on the web. Tell your stories, build your community, and talk to your people. It doesn’t have to be big. It doesn’t have to be fancy. You don’t have to reinvent the wheel. It doesn’t need to duplicate any space that already exists on the web — in fact, it shouldn’t. This is your creation. It’s your expression. It should reflect you.

Bring back personal blogging in 2023. We, as a web community, will be all that much better for it.

Blogging CMS Wishlist

High on my reasons why I wish I had the knowledge (or the time and energy to gain the knowledge) to code my own software: As far as I can tell, nobody has yet written the CMS I want to use for blogging.

Basically, what I want is early-2000s MovableType, only with some modern updates. I’ve long missed many of the tweaks and customizations that I could manage with MovableType that I can’t do on WordPress.

Pie-in-the-sky featureset:

  • Self-hostable or installable on a hosted server (Dreamhost, etc.)
  • Micropub compatible so I can use MarsEdit or other such third-party editors
  • ActivityPub/IndieWeb compatible for federation (at least outbound, ideally bidirectional so that federated replies could be appended as “comments”)
  • Generates a static website instead of building every page when it’s called
  • Only regenerates necessary pages when updates are published, full-site rebuilds available on demand
  • Some sort of templating “building blocks” system for assembling different pages, posts, or sections thereof
  • Basic templates that are fully standards-compliant and accessible (HTML5, ARIA when/if necessary (since static pages shouldn’t have much dynamic content), etc.)
  • Templates should also be microblogging compatible
    • Example: Titles are optional, and shouldn’t be the only item used for permalinks to any given post, something that bugs me about my current blog template but I haven’t figured out how to fix yet
  • Markdown for writing and storing posts
    • update: Less concerned about this, actually, as I’m using less and less Markdown in my posts, since it is a purely presentational language and not semantic. For example, underscores are rendered as <em>, but I often want <cite> or even <i>; italics are used for a lot more than just emphasis. I do still use Markdown for basic structural formatting (lists, blockquotes, etc.), but there’s a lot more plain HTML getting mixed into my posts now, and I could pretty easily go back to straight HTML if necessary.
  • The ability to generate multiple versions of posts/pages on rebuild
    • Example: Output both .html and .md versions of a blog post, so a “view source” link could be included in the post template; readers could then easily click through to view the Markdown version (a rough sketch of this idea follows the list)
  • Import posts exported from existing common blogging or microblogging systems (WordPress and Twitter, in my particular case)
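
(As promised above, here’s a rough sketch of that “output both versions” idea. This is purely illustrative PHP; the Parsedown library and the file paths are stand-ins for whatever a real CMS would actually use.)

<?php
// Illustrative only: render a post to HTML and also publish its Markdown
// source next to it, so a "view source" link can point at the .md file.
require 'vendor/autoload.php'; // e.g. after: composer require erusev/parsedown

$slug     = 'example-post';                          // hypothetical post slug
$markdown = file_get_contents( "posts/{$slug}.md" ); // hypothetical source location

// Write the rendered HTML version...
file_put_contents( "public/{$slug}.html", ( new Parsedown() )->text( $markdown ) );

// ...and copy the Markdown source alongside it.
copy( "posts/{$slug}.md", "public/{$slug}.md" );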

Things I don’t want or care about:

  • Fancy drag-and-drop “block” editors like WordPress’s Gutenberg
  • Comments (beyond pingbacks/trackbacks/federated responses)
  • Having to do everything on one machine (edit locally and upload)

I’m sure there are plenty of other things that I could put in the wishlist or the “no thanks” list, but those are the first ones to come to mind. Every time I’ve done a survey of static site generators, they consistently fail one or more of the above.

Honestly, I think I could live without much of the above, if I could find a static site generator that would allow me to blog and manage posts and pages from anywhere (my desktop, my laptop, my iPhone, my iPad, etc.) through the Micropub API; logging into a web interface of some sort should be possible if necessary but not required for general day-to-day post publishing.

Oh, and it needs to be installable and manageable by someone who has a higher-than-average knowledge of computing and tech geekery, but doesn’t do this stuff for a living. Someone who gets annoyed when they call tech support and have to start with the “is it plugged in?” level of questioning, but who also gets annoyed when software assumes that you’ve been immersed in this kind of stuff for decades. There doesn’t seem to be much out there other than WordPress that does a good job of bridging between “it just works” and “I eat, drink, and breathe code in all my waking and sleeping hours” levels of capability. I don’t mind, and even enjoy, poking at the guts of things when I have the time and energy, but I don’t want to be required to do a week of research to figure out what the terms in the “how to install” documentation mean.

So — I don’t suppose that anyone knows of my magical unicorn blogging software actually existing anywhere?

Cross-posting from WordPress to Mastodon

I’ve finally got WordPress to Mastodon cross-posting working the way I want: automatically, whether I’m posting through the WordPress web interface or through a desktop or mobile client like MarsEdit or the WordPress mobile app, and with the format that I want:

Title: Excerpt (#tags)

Full post on Eclecticism: URL

I’d been using the Autopost to Mastodon plugin, which works great, and I can recommend it — as long as you only or primarily post using the WordPress web interface.

However, the plugin is only triggered when publishing a post through the WordPress web interface. Any time I posted through a client, nothing went to Mastodon. So I either had to go into the web interface and manually trigger an update to the post with the “Send to Mastodon” option checked, or just give up on using anything but the web interface, which I’m not a fan of (especially on mobile).

I asked the plugin author, and they said that this is just the way it is.

So I put out a call for help on Mastodon, and got some kind tips from Elephantidae, who pointed out the Share on Mastodon plugin. This looked promising, as its documentation specifically mentions being able to configure it to work with externally created posts. However, looking through the docs made it clear that most of this plugin’s configuration, including changing the format of the text it sends to Mastodon, is done by adding and tweaking PHP functions…and as with most of my coding knowledge, my PHP knowledge is roughly in “I can usually get a vague idea of what it’s doing when I read the code, but actually creating something is a whole different ballgame” territory. Plus, dumping PHP code into my theme’s files risks losing those changes the next time the theme files are updated.

Retaining the code through theme updates can be managed by creating a site-specific plugin, however — a handy trick which, somewhat amusingly, I’d never had exactly the right combination of “I want to do this” and “how do I do it” to discover until now.
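
For anyone who hasn’t set one up before: a site-specific plugin is really just a single PHP file dropped into wp-content/plugins/ with a standard plugin header at the top, then activated from the Plugins screen like any other plugin. Something along these lines (the name and description here are just placeholders):

<?php
/*
Plugin Name: Eclecticism Site Tweaks
Description: Site-specific customizations that should survive theme updates.
Version: 1.0
*/

// Custom filters and functions go below, like the Share on Mastodon tweaks that follow.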

So, after a bit of fumbling around with the Share on Mastodon plugin documentation and figuring out the right PHP and WordPress function calls, here’s what I’ve ended up adding to my site-specific plugin:

/* Tweaks for the Share on Mastodon plugin */

/* Customize sharing text */

add_filter( 'share_on_mastodon_status', function( $status, $post ) {
  $tags = get_the_tags( $post->ID );

  // Start with "Title: Excerpt"
  $status = get_the_title( $post ) . ': ' . get_the_excerpt( $post );

  // Append the post's tags as "(#tagone #tagtwo)", stripping any whitespace
  // from the tag names so they work as hashtags
  if ( $tags ) {
    $status .= ' (';

    foreach ( $tags as $tag ) {
      $status .= '#' . preg_replace( '/\s/', '', $tag->name ) . ' ';
    }

    $status = trim( $status ) . ')';
  }

  // End with the permalink on its own line
  $status .= "\n\nFull post on Eclecticism: " . get_permalink( $post );

  // Decode any HTML entities (curly quotes, ampersands, etc.) before sending
  return html_entity_decode( $status );
}, 10, 2 );

/* Share if sent through XML-RPC */

add_filter( 'share_on_mastodon_enabled', '__return_true' );

/* End Share on Mastodon tweaks */

And after a few tests to fine-tune everything, it all seems to work just the way I wanted. Success!

(Also, re-reading through this, I’ve realized that since I like to give the background of why and how I stumble my way through things, I end up writing posts that are basically a slightly geekier version of the “stop telling me about your childhood vacations to Europe and just post the damn recipe!” posts that are commonly mocked. And I don’t even have ad blocks all over my site! At least I’m not making you click through several slideshow pages of inane chatter before I get to the good stuff. My inane chatter is easy to scroll through.)

Don’t ever stop talking to each other

This is a long rant by Cat Valente – and it’s really, really good. Though I’m quoting a particularly good bit from the end, it’s worth reading the whole thing.

Don’t ever stop talking to each other. It’s what the internet is really and truly for. Talk to each other and listen to each other. But don’t ever stop connecting. Be a prodigy of the new world. Stand up for the truth no matter how often they take our voices away and try to replace the idea of reality with fucking insane Lovecraftian shit. Don’t give up, don’t let them have this world. Love things. Love people. Love the small and the weird and the new.

Because that’s what fascists can’t do. They don’t love white people or straight people or silent women or binary enforced gender or forced birth or even really money. They want those things to be the only acceptable or even visible choices, but they don’t love them. They don’t even want to think about them. They want them to be automatically considered superior and universally mandated so they don’t have to think about them—or else what do you think the fury over other people wearing masks was ever about? The need to be right without thinking about it, and never have to see anything that wakens a spark of doubt in their own choices.

Obey, do not imagine, do not differ.

That’s nothing to do with love. Love is gentle, love is kind, remember? They need the attention being terrible brings them, but they don’t love it any more than a car loves gas. Sometimes I don’t even think they love themselves. Sometimes I’m pretty sure of it. They certainly never seem happy, even when they win. Musk doesn’t seem happy at all.

Geeks, though. Us weird geeks making communities in the ether? We love. We love so stupidly hard. We try to be happy. We get enthusiastic and devote ourselves to saving whales and trees and cancelled science fiction shows and each other. The energy we make in these spaces, the energy we make when we support and uplift and encourage and excite each other is something people like Musk can never understand or experience, which is why they keep smashing the windows in to try and get it, only to find the light they hungered for is already gone. Moved on, always a little beyond their reach.

Mastodon RSS Tips

  1. Get an RSS feed for any user by appending .rss to the end of their profile URL. For example, my profile is tenforward.social/@djwudi, so the RSS feed of my posts is tenforward.social/@djwudi.rss.

  2. This also works for hashtag searches; handy for keeping an eye on hashtags (without worrying you’ll miss them in your feed). In my case, as Norwescon’s social media manager, I watch for mentions of the convention. Since that hashtag search URL is <your server>/tags/norwescon, the RSS feed is <your server>/tags/norwescon.rss. (I’ve also subscribed to feeds for norwescon45, nwc45, and philipkdickaward.)

The feeds-for-users tip I’ve seen going around, but I’d not seen this applied to hashtag searches, so I gave it a shot, and was happy to see it worked. Figured I’d put both in one post for those who might not have known either.
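
And if you want to do anything with these feeds beyond dropping them into a feed reader, they’re ordinary RSS, so any RSS library should cope. Here’s a minimal PHP sketch using the server and hashtag from my example above (Mastodon posts generally don’t have titles, so the date and link are the most useful fields to pull out):

<?php
// Minimal example: list recent items from a Mastodon hashtag feed.
// (simplexml_load_file needs allow_url_fopen; otherwise fetch the URL with curl first.)
$feed = simplexml_load_file( 'https://tenforward.social/tags/norwescon.rss' );

if ( false === $feed ) {
    exit( "Couldn't load or parse the feed.\n" );
}

foreach ( $feed->channel->item as $item ) {
    echo $item->pubDate . '  ' . $item->link . "\n";
}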

In Search of a MarsEdit Equivalent for iOS

A question for macOS WordPress bloggers who use Red Sweater Software’s excellent MarsEdit: What’s your go-to mobile iOS blogging tool?

MarsEdit is a great example of a “do one thing and do it really well” piece of software, and I’ve yet to find anything equivalent for mobile blogging. I just want exactly what MarsEdit gives me: A list of my most recent posts and pages, a solid plain-text Markdown editor, and access to all the standard WordPress fields and features.

Every other editor I’ve tried either doesn’t do one or more of those things or is otherwise not quite right in some way. Ulysses was the closest, and I tried it for a while, but while it’s a great editor, it doesn’t pull a list of posts and pages from the blog; it just works with whatever’s local or in its own cloud sync or Dropbox or whatever. And the last time I used it, it had a bug where alt text wasn’t getting applied to images correctly.

(The WordPress native app drives me up the wall. I don’t want block editing. I want text and Markdown.)

Really, what I want is an iOS version of MarsEdit. But failing that: any recommendations?

AI Art, Ethics, and Where I Stand

While nobody specifically asked, since I have some friends who are all about the AI art and some who believe it’s something that should be avoided because of all the ethical issues, and since I’m obviously having fun playing with it with my “AImoji” project, I figured I’d at least make a nod to the elephant in the room.

An AI generated image of an African elephant standing in what appears to be a Victorian sitting room.

There are absolutely some quite serious ethical questions around AI generated artwork. To my mind the three most serious are (not in any particular order):

  1. Much of the material used to train the AI engines was scraped off the internet, often without any consideration of copyright, and certainly without any attempt to get permission from the original creators/artists/photographers/subjects/etc. Some people have even found medical images that were only approved for private use by their doctor, but somehow ended up in the training sets. That situations like this are likely (hopefully) in the minority doesn’t absolve the companies that acquired and used these images to create their AI engines of responsibility for doing so.

  2. As the AI engines continue to improve, it is getting more and more difficult to distinguish an AI generated image from one created by an artist. There are also a number of people and organizations who have flat-out stated that they are looking at AI generated imagery as a way to save money, because it means they now don’t have to pay actual artists to create work. Obviously, this is not a particularly good approach to take.

  3. Because some of the engines are able to create images in the style of a particular artist, and the output quality continues to improve, there have already been instances where a living artist is being credited for creating work that was generated by an AI bot. And, of course, if you can create an image that looks like your favorite artist’s work for low or no cost…well, a lot of people will happily settle for an AI-generated “close enough” rather than an actual commissioned piece. Obviously, this is also not a particularly good approach to take.

I’m enjoying playing with the AI art generation tools. I’m also watching the discussions around the ethical questions around how they can and should be used.

The issues above are all very real and very serious. It’s also true that AI art can be just another tool in an artist’s toolbox. I’ve seen artists who use AI art generators to play with ideas until they find inspiration, or who use parts of the generated output in their own work. I’ve seen reports of people who want to commission art first using a generator to get a rough idea of what they’re looking for, then giving that to an artist as an example or proof of concept. So there are ways to use AI art generators in, well, more-ethical ways (it’s hard to argue they’d be entirely ethical when the generators have unethical underpinnings).

So, where I stand in my use at this point:

  1. I don’t use living artists’ names to influence the style one way or another, and have only occasionally used dead artists’ names as keywords (I’ll admit, H.R. Giger has been a favorite to play with).

  2. I don’t feed images in, try to generate images of actual people, or use images of actual people (including myself) as source material.

    One caveat: if a tool does all of its processing locally on my device, I may use my own images, including some of myself. But nothing that feeds images into the systems.

  3. And, of course, anything I do is just for fun, and to make me, and maybe a few other people, laugh (or occasionally recoil in horror).

For a few months this past year, I used an AI-generated image of a dragon flying over a city skyline for the Norwescon website and social media banner image. This was always intended as a temporary measure to fill the gap between last year’s convention and getting art from this year’s Artist Guest of Honor, and as soon as we had confirmed art from our GOH, the AI-generated art came down. It was also chosen much earlier in the “isn’t AI art neat” period, before I’d read as much about the issues involved. As such, I won’t be using AI art for Norwescon again, and will go back to sourcing copyright-free images from NASA or other such avenues when we are in the interregnum period.

So: I understand those who see AI art as something that should be avoided. I also understand those who see it as another tool. And, honestly, I also understand those who just see a shiny new toy that they want to play with. I’m somewhere in the midst of all those points of view, and while I don’t personally see the need to avoid AI art bots entirely, I am consciously considering how I use them and what I use them for.

Travel and CO2

A day of travel, as “seen” by an Aranet4 portable CO2 monitor.

Reading this: basically, CO2 levels are a measure of how well a space is ventilated, and can therefore be a handy proxy for how likely it is that infectious particles (flu, COVID, etc.) are in the air. Lower CO2 = better ventilation and less chance of any bugs in the air; higher CO2 = worse ventilation, stale air, and a higher chance of bugs in the air. It’s not a one-to-one connection, obviously, as there are other variables, such as the number of people in the area, but it can be a good way to get a rough measure of the ventilation.

So here’s how my day went (all times shifted one hour from what’s shown on the graph due to the time change).

A graph of CO2 levels over the course of a day. Marks on the graph separate it into sections: at the hotel (in the low range), at the airport (medium range), on the airplane (high range), and in a car home (low range).
Being able to see this change over the course of the day was fascinating.

Until about 8am, I was at the hotel. Levels stayed in the green and slowly decreased through the night, then increased into the yellow as I woke up and was active and moving around, showering, packing, etc.

8-9am, outside and on the light rail to the airport. Nice and green.

9-noon, in the airport, often in the midst of lots of people as I went through the TSA lines. Even in the large, high-ceilinged airport areas, with lots of room for air to move, levels were generally in the yellow. This is part of why crowded situations, even in large or outdoor areas, are still good places to be masked.

Noon-2pm, on the airplane. Lots of people in a fairly small, confined space. Airplanes might have “good” ventilation, but there’s only so much that can be done, and it was solidly in the red the entire time. I was okay with my KN95 through the airport, but switched to an N95 from just before boarding until after disembarking in Seattle, didn’t eat on the plane, and used a straw when drinking to minimize intake of unfiltered air.

2-3pm: Getting my baggage and taking a Lyft home. Right back into the green.

This was a handy little gadget to have with me this week. That, plus masking, plus vaccination and boosters, and I’m feeling pretty confident in my safety measures.

Accessing Higher Ground 2022

On this last day of the 2022 Accessing Higher Ground accessibility in higher education conference, I put together a thread about the week. Originally posted on Mastodon, this is a lightly edited version for this blog. Be warned, this isn’t short. :)

Me standing beside an AHG poster in the hotel lobby. I'm wearing a black shirt with green alien heads and a grey KN95 mask.
Me on my way to the first day of panels.

High-level thoughts from a first-time attendee: This is a really good conference. I haven’t seen much in the way of glitches or issues (discounting the occasional technical electronic weirdness that happens anywhere). Panel content has been well selected and planned; I’ve been able to put together a full schedule with few “this or that” conflicts. Some panelists are better than others, as always, but I haven’t seen any trainwrecks or other disasters.

I do wish the conference had more of a social media presence. The @AHGround Twitter account linked from the AHG website hasn’t posted since 2017, and I only found the #ahg22 hashtag on their Facebook page, where it wasn’t mentioned until 10 days before the conference. Unsurprisingly, this means there was very little hashtag use (at first I seemed to be one of the very few users other than AHG itself using the tag consistently or at all; a few more people started using it as the conference went on).

The hotel is a Hilton. My primary other hotel experience is the DoubleTree by Hilton Seattle Airport (where Norwescon is held), and I was amused that in most respects, I prefer the DoubleTree to the Hilton Denver City Center. The room is a little smaller here, and I was welcomed at check-in with a room temperature bottle of water instead of a fresh-baked chocolate chip cookie. But these are small and kind of picky distinctions; really, it’s exactly what you’d expect from a Hilton.

A standard Hilton hotel room with two queen size beds and a view of downtown Denver office buildings out the window.
Looks about like any other hotel room out there.

That said: This particular hotel has excellent ventilation. I’ve been carrying around an Aranet4 air quality monitor, and it has stayed comfortably in the green nearly the entire time; it has only gone into the low end of the yellow during one standing-room-only session in a smaller room. It did also get into the yellow in my room overnight as I slept, but opening the window would bring it back into the green in just a few minutes (though at 20°-40° F outside, I didn’t do this much).

A graph of the CO2 levels for the past few days as measured by my Aranet 4 monitor. Most measurements are in the green "good" zone, with occasional spikes into the yellow zone. Handwritten notes emphasize that the yellow spikes are mostly when I was sleeping.
Being able to keep an eye on CO2 levels was nice, and helped make me feel comfortable with COVID-era conference travel.

As noted in an earlier Mastodon post, the weirdest thing for me has been a side effect of switching from fan convention to professional conference: the lack of anything after about 5 p.m. I’m used to fan-run SF/F cons like Norwescon, with panels running until 9 p.m. or later, evening concerts or dances, 24-hour game spaces, and a general “we’ll sleep when this is done” schedule. Having nothing left for the day after about 5 p.m. is odd, and it feels weird not to know that I could wander out and find things going on.

For people who come with groups and/or have been doing this for a long time and have a lot of connections, I’m sure it’s easy to find colleagues to have dinner or hang out in bars or restaurants (at or outside the hotel) and chat with. But for a new solo attendee, it meant I spent a lot of evenings watching movies on my iPad in my room. (I did find a small group of other Washington-based attendees to hang with one evening, which was very appreciated.)

Impressions of Denver: Hard to say, really. It’s been pretty cold this week (20s to 30s most days), and since a lot of panels caught my eye, I didn’t take time to go exploring beyond going to the 16th street mall to find food. The little I did see in the immediate area is nice enough; maybe I’ll see more if I get to come back to AHG in the future.

Looking down a section of Denver's 16th street mall, a pedestrian commercial area with shopping, restaurants, and bars.
Though I haven’t taken German or been to Germany in years, my brain kept labeling this a “Fußgängerzone”.

Colorado itself, I have to say, didn’t give me the greatest first impression. The trip from the airport to downtown Denver is a 40-minute light rail ride through flat, brown, high desert with lots of scrub brush, punctuated by aesthetically unpleasing industrial and commercial areas. Maybe it’s nicer in the summer, but in the winter? The SeaTac-to-Seattle light rail ride is much prettier. (My apologies to Coloradans for snarking on their state.)

The Colorado landscape between the airport and Denver. The ground is flat, sparse, and very brown, the sky has lots of high, wispy clouds.
Denver has mountains in the distance; they were just out the other side of the train. All I saw was flat.

My least favorite part has been the humidity, or lack thereof. Coming from the Pacific Northwest’s pretty regular 50%+ humidity, having Colorado’s humidity hovering around the 20% level has been horrible on my skin. Even with lotion, I’m itching like crazy, to the point where it’s been difficult to sleep, and my hands are so dry that the skin of my knuckles is cracking and I look like I’ve been punching walls. Whimper, whine, yes, whatever, it’s unpleasant.

But anyway! And now, brief (500-character or fewer) overviews of the sessions I attended while I’ve been here:

InDesign Accessibility (full-day pre-conference session): For a long time, I’ve had a basic impression that PDFs are crap for accessibility. Turns out that PDFs can be made quite accessible, but it takes a bit of work and the right tools, and InDesign is a powerful tool for this sort of thing. While I don’t use InDesign much, I learned a lot about PDF accessibility and how to effectively prepare documents, and many of the concepts will be translatable to other programs. Very useful.

Addendum: I’d also like to take some time to see how many of these techniques and accessibility features are also available in Affinity Publisher, since I’m a fan of Affinity’s alternatives to Adobe’s big three tools (Photoshop, Illustrator, and InDesign). I have no idea how much of a priority Affinity puts on accessibility (either within their tools or the final documents), but it could be an interesting thing to poke around with.

Using NVDA to check PDFs for Accessibility (full-day pre-conference session): Another really useful day. While I’ve known about screen readers as a concept for some time, I’ve just started experimenting with NVDA over the past year, and as a sighted user who doesn’t depend on it, it can be an overwhelming experience. This day gave me a ton of tips for using NVDA (including the all-important “shut up for a moment” command), and I’m going to be much more comfortable with it now.

Keynote: Oh, also: The keynote speaker, Elsa Sjunneson, was excellent, speaking about her experiences as a Deafblind person, student, parent, and author. Her statement that “disability is a multiverse” resonated with a lot of people. Plus, it was a treat to see her speak here, as I know of her from her paneling at Norwescon and her Hugo nominations and wins.

Elsa Sjunneson on stage at the conference keynote. An ASL interpreter stands beside her. Both are also shown on a large video screen to one side of the stage.
Elsa and her interpreter during her keynote speech.

Publishing and EPUB 101: An introduction to EPUBs and an overview of some of the better creation tools. I’ve experimented a bit with creating EPUBs here and there in the past, and am familiar enough with the basics that this one was slightly below my knowledge level, but it still gave me some good tips on methods and tools for preparing documents to be output as accessible EPUB files for distribution.

Math and STEM: Since I’m going to be training STEAM faculty on what they need to know to make their courses accessible, which can have some extra considerations to be aware of, this seemed like an obvious choice. It ended up being basically a demonstration of TextHelp’s Equatio equation editing product, which isn’t necessarily a bad thing, as Equatio does do a lot of neat stuff and our campus already has access to it, so I did learn a lot from the session, even with the single-product focus.

Retire the PDF: An intentionally hyperbolic title, this was a call to consider EPUBs as an alternative to PDFs when distributing documents. As long as you’re not absolutely wedded to the visual layout and presentation of a document, EPUBs do have a lot of advantages over PDFs by giving the end user more control over the display (fonts, sizes, reflow to varying screen sizes, etc.) and better screen reader compatibility (especially when compared to poorly constructed PDFs).

Educational Alt Text: A particularly good session on how to think about writing alt text for images, with an emphasis on doing so for an educational context. Thinking about not simply describing the contents of an image, but creating alt text that conveys the meaning and what information the reader needs to get from the image separate from how the image appears, and how to craft effective alt text and (when technologically possible) long descriptions with more information about the image.

Going Further with EPUB: This session got deeper into the innards of EPUBs, looking at how they’re constructed (essentially self-contained XHTML websites), examining a few different tools for creating, editing, checking, and validating EPUBs for full accessibility. Again, much of the basic info I knew, but the collection of tools and verification options will be very handy to have.

Accessible Math Roadmap: Presenting an in-progress reference document on the state of accessible math and the various tools out there for creating and interacting with equations in accessible formats. As noted above, this is an area I’m trying to learn the basics of as quickly as possible, so I’ll be digging into the reference document itself in more detail in the coming days as I continue preparing to help train faculty on how they can do all this for their classes.

Trending Tech Tools: This is apparently the latest in a recurring series of presentations at this conference, going over major developments in accessible technology over the past year, recent updates to a number of widely used tools, and a peek at things coming down the line in the coming months. Particularly for someone new to the field, this was a nice way to get a snapshot of where things stand and what to be aware of.

Advanced VPAT Techniques: Voluntary Product Accessibility Templates (VPATs) are a way for vendors to declare how accessible their products are (or aren’t); this session discussed how best to approach talking with vendors about their VPATs, particular things to look for, and ways to guide discussions with vendors to get more precise information about issues that may be noted when reviewing the VPATs during the pre-selection product investigation and review phase.

Accessible Videos: Covered what needs to be done to make videos accessible, for both the videos themselves (using high-contrast text within the videos, including correct captions, transcripts, and audio description tracks) and the video players, which need to be accessible and allow full access to all features for all users (something most players, including YouTube’s, aren’t very good at). Got some good pointers on automated-caption correction workflows and tools as well.

Integrating Tech in Communication: Through no fault of the presenters, this ended up being the least directly useful to me, as while it was about ways to use tech to communicate with students, it was presented by people on a Microsoft-focused campus, and was essentially a rundown of many of the features built into Microsoft’s applications and how they’re using them on their campus. Not bad info at all, just not as useful for me as it obviously was for others in attendance.

So that wraps up my week at Accessing Higher Ground! It was well worth coming, and I’m very glad I was able to come. If I only get to go to one conference next year, it will probably be the big AHEAD conference (along with ATHEN, one of the two parent organizations for AHG), as they’ll be in Portland, but if we have the resources to send me to two conferences, I definitely hope to come back to AHG again. Thanks to the organizers and all the presenters and attendees for such a good conference week!