People ask whether AI can create an index. "Do people still do that?"
At the American Society for Indexing (ASI) and the Indexing Society of Canada (ISC), members have spent the past few years testing Claude, ChatGPT and Adobe AI Assistant, and have found them unusable: they do not understand context, or how humans read for information. For instance, if you are a researcher, it's impossible to read everything that pertains to your subject. A usable index means you can check out the literature without reading everything: flip to the back of the book (or enter keywords) and find out whether there is anything that might flesh out your current position in the research. You can quickly learn what other researchers are thinking about.
In short, a good index is trustworthy. With a good index, a reader can see at a glance that it covers most of what is in the book. Footnotes, endnotes and bibliographical information all serve to support the text. The researcher who has compiled the information for the book can provide feedback to a human indexer and discuss what might have been missed.
A large language model (LLM) might be able to create an "index-like object", but you, as a human reader/researcher unacquainted with the author, cannot have any certainty that the AI has "read" the material in a way that doesn't leave things out. You can't know what's missing — or search for something the AI program hasn't recognized as pertinent to a search, or contextualized within the larger themes of the book. And AI programs miss a lot. They can sound "true" and look like they know stuff. But you can't trust their findings.
If a calculator gave you the wrong answer to an equation, even once, you'd throw it out. Yet because AI gets some of it right, some of the time, we want to believe it. The person who sounds like they know stuff can be so attractive — until you find out they're lying. AI is sort of like that person. (At least it might apologize.)
For more reading on this subject, see the ASI White Paper on AI and Indexing: https://asindexing.org/ai-news/supplement-to-white-paper-ai-and-indexing/
Elizabeth Bartmess: Can the Current Generation of LLMs Produce an Adequate Index?
Tanya Izzard: Book Indexing and Generative AI: https://journals.cilip.org.uk/catalogue-and-index/article/view/746
For the legalities of copyright for writers with regard to AI, see Jane Friedman: https://janefriedman.com/ai-and-publishing-faq-for-writers/