Answering the SFWA’s Survey on LLM Use in Industry

While I’m not an SFWA member or even an author (beyond this little blog), as a lifelong reader of science fiction, I figured it was worth a few minutes to add my response to their current Survey on LLM Use in Industry. For the sake of posterity, here are my responses:

  1. I am a…
    • SFWA member in good standing (“Active”)
    • lapsed SFWA member
    • writer considering SFWA for future membership
    • ✅ member of the general public
  2. I create SFF in the following forms (choose as many as apply)
    • short fiction
    • longform fiction
    • poetry
    • comics / graphic novels
    • video games
    • analog games
    • film, theatre, and/or TV
    • nonfiction
    • ✅ I’m a reader/player of SFF.
  3. Has any of your writing been identified as part of stolen data sets in AI-related industry crises?
    • Yes, and I am part of a certified class action.
    • ✅ Yes, and I am not eligible for most/any class actions.
    • No, but people in my circles have been directly impacted.
    • No, but this issue remains a pressing concern.
    • I don’t know.

    (Note: Some time ago there was an online tool to look up sources that were used in one of the earlier revisions of one of the larger LLMs; I don’t currently remember who offered the tool or the specifics of the training database being reviewed. I do remember that both this blog and the Norwescon website, for which I’ve authored and edited much of the content over the past 15 years, were included in the training database.)

  4. How has your writing practice changed since the emergence of Generative AI and related LLM integrations?

    • ✅ I proactively turn off every new AI feature I can.
    • ✅ I switch away from writing tools that promote AI integrations wherever possible.
    • ✅ I avoid search engines and other summary features that rely on AI.
    • ✅ I accept AI features selectively, avoiding or switching off all the Generative AI tools I can identify, while leaving translation, spelling and grammar, and/or research assistants mostly intact.
    • I engage with AI chat features to brainstorm story elements, and/or for research questions of relevance to my writing.
    • I have used Generative AI for the development of story plots, characters, and/or scene construction.
    • I’m not a writer or editor.

    (Note: The first and fourth of these options seem to contradict each other. For clarity, I either disable or, if it can’t be disabled, actively avoid using generative AI; as noted in the sidebar of this blog, I do use machine-learning/LLM-based tools such as speech-to-text transcription, but when I do, I check and edit the output for accuracy.)

  5. Which of the following most closely resembles your position on the use of Large Language Models (LLMs) in the writing process?

    • ✅ There is no ethical use-case for writers, because this technology was developed through piracy and/or continues to negatively impact environmental systems and marginalized human beings.
    • Putting aside the historical and environmental issues, Generative AI needs to be approached differently from other LLMs, because other forms of LLM sometimes show up in tools (e.g., spell-check, grammar-check, translation software) that are normal parts of a writer’s workflow.
    • The use of any AI system for any part of the writer’s workflow that is not the writing itself (so, including brainstorming and research phases) is perfectly fine. It is only the words on the page that matter.
    • There are cases where the use of Generative AI for active storytelling might be a critical part of the story we want to tell, so it’s really a case-by-case determination.
    • Some writers are working for companies that make choices about AI without their involvement in the decision-making process, and this matters when deciding how we respond to the presence of AI in their work as individual creators.
    • I am not opposed to the use of LLMs in any capacity in the creative process.
  6. Tell us more about where you agree with or deviate from the statement you chose above.

    My actual answer is probably somewhere between the first (no ethical use-case) and second (recognizing LLM use in some tools) options.

    One of the biggest problems with the current discussions (including this survey and the File770 threads sparked by Erin Underwood’s open letter) is the grouping of several related but distinct technologies under the banner term of “AI”.

    Machine learning and LLM-backed analysis models are one thing. These are technologies that have been used for years in many different contexts, including (some) spelling and grammar checkers, speech-to-text transcription, simple text-to-speech generation (suitable for screen readers and similar applications, not for audiobook creation), medical analysis, and so on. These are analytical or simply transformative processes, not creative ones. In all cases, though, the output should be reviewed and checked by humans, not accepted as-is by default.

    Generative AI (genAI) is the severely problematic aspect, for all the reasons mentioned by many, many people advocating for its avoidance (the many unethical practices in the creation and ongoing use of the technology, social and environmental costs, high error rates, and many more).

    It’s unfortunate that all of these aspects are now grouped together as “AI”, as it makes it nearly impossible to approach the subject with any amount of nuance. I suspect that nuance is what Ms. Underwood was attempting; however, since she also falls victim to the same confusion, she sorely missed the mark (and has continued to do so in her responses).

    As a reader, I would be very disappointed to see the Nebulas (and any other award) accepting the use of genAI in the creation of nominated (let alone awarded) works.

    (Note: I wrote about the machine learning vs. genAI confusion on this blog earlier this year.)

  7. What forms of guidance do you think would most benefit writers trying to navigate the growing presence of LLMs in our industry?

    • ✅ Informational pages on SFWA.org explaining key terms and use-cases.
    • ✅ Articles on how to recognize and navigate forms of LLM in writing tools, and where to look for alternatives.
    • ✅ Full bans on any and all AI use in submissions and nominations processes, with consequences for failure to disclose.
    • ✅ Bans on Generative AI in submissions and nominations processes, with clear and severe consequences for failure to disclose.
    • ✅ Market reports that explicitly set a rigid bar for inclusion based on the publication’s commitment to not working with AI.
    • Other (please elaborate below).
  8. [Continued] What forms of guidance do you think would most benefit writers trying to navigate the growing presence of LLMs in our industry?

    Clarity in defining the differences among the technologies, and in determining which may be acceptable (such as speech-to-text transcription, spell/grammar checkers, etc.) depending on the technology and its use, and which are unacceptable (genAI for text or art creation).