CILIPS Chartered Institute of Library and Information Professionals in Scotland

New Voices RGU Student Series 2025 – Liam Irwin

Category: Blog, RGU Student Series 2025


In the 2025 New Voices Student Series, the CILIPS Students & New Professionals Community will be sharing the views of Robert Gordon University students from the MSc in Information and Library Studies.

With special thanks to Dr Konstantina Martzoukou, Teaching Excellence Fellow and Associate Professor, for organising these thought-provoking contributions.

Today’s blog post author is Liam Irwin. Originally from Canada, Liam recently moved to Scotland with his wife, and a duffel bag full of hopes and dreams. He is currently pursuing his MSc in Information and Library Studies at RGU, where he is thrilled to be exploring the intersection of traditional librarianship and emerging technologies such as AI.

From Keywords to Conversations: The Need for Information Professionals

AI-generated image: a watercolour wash with an abstract figure at its centre.

The illusion of easy answers (ChatGPT, 2025).

The ways in which we locate information are changing – dramatically – in libraries, on campuses, and beyond.

For decades, traditional keyword searches have dominated information retrieval. These unidirectional queries often rely on exhaustive lists of technical terminology – jargon that must be correctly compiled by the information seeker using precise search syntax, nested logic structures, and a labyrinth of field codes, operators, and delimiters (Hersh, 2023). Success requires a wealth of domain-specific knowledge, plus access to the necessary indexes, databases, and published works (Hersh, 2023).

But these traditional approaches are now being augmented by AI systems capable of conversation-like interactions. These newer technologies comprehend context and respond to user intent. This lessens the need for precise search terms, allowing users to describe their information needs using natural language (Kulkarni et al., 2023).

Because this emergent technology can generate text and media in response to user inputs, it’s called Generative AI (GenAI). The inputs created by users are called prompts. In response to these prompts, GenAI composes novel multimedia outputs – sometimes appearing convincingly human in its ability to generate natural language and answer complex queries (Kulkarni et al., 2023).

The art of composing effective prompts is known as prompt engineering (Ekin, 2023). While the barrier to entry for GenAI is relatively low, care must still be taken when designing prompts to ensure that the system’s outputs are as intended. For example, the clearer, more detailed, and more context-rich a prompt, the better the output quality tends to be (Ekin, 2023). However, prompt engineering isn’t always so intuitive, which helps explain the continued need for information professionals able to teach modern information seekers how best to harness the full potential of these emerging technologies.
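To illustrate that first point with a hypothetical of my own (not an example drawn from Ekin’s paper), compare a terse prompt with a more context-rich version of the same request:

“Fix my essay.”

“You are a copy editor. The text below is a 1,500-word undergraduate history essay. Review it for spelling, grammar, and referencing, and list each suggested change with a one-sentence explanation.”

The second prompt gives the system a role, a description of the material, and a clear output format, so its response is far more likely to match what the writer actually wanted. The less intuitive conventions, though, are harder to guess.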

For instance, should I have questions about a lengthy document, I might instinctively organise my prompt like this:

1) “Here is a research grant proposal. Review the proposal, locating any grammatical errors.”
2) [Document]

However, it may surprise some readers to learn that output quality often improves significantly when this order is reversed, and that GenAI tends to generate better outputs when programming-like tags and additional delineation are added to prompts (Anthropic, no date). Here’s how we might apply these recommendations to the example above:

<document> … </document>
<context> This is a research grant proposal. </context>
<instructions> Review the proposal, locating any grammatical errors. </instructions>

Users can also instruct GenAI systems to implement similar tags in their output, which often helps those systems organise their ‘thoughts.’ This approach is often used in chain-of-thought (CoT) prompt engineering techniques, where users explicitly request a step-by-step output (Haugsbaken and Hagelia, 2024).
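As a rough sketch of what such a request might look like (my own illustrative wording extending the earlier example, not a prescribed formula), the instructions could read:

<instructions> Review the proposal, locating any grammatical errors. Work through the proposal section by section inside <thinking> tags, then list any errors you find inside <answer> tags. </instructions>

Prompted this way, the system is encouraged to set out its intermediate reasoning before committing to a final answer, which also makes that answer easier for the reader to check.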

While effective prompts can increase output quality, they do not eliminate the risk that those outputs will contain factual inaccuracies. Such erroneous outputs are termed hallucinations, and they help explain why AI experts and information professionals alike frequently recommend that those using GenAI corroborate the information it provides using conventional information sources (Ekin, 2023). Erroneous outputs further underline the continued need for information professionals. GenAI hasn’t replaced keyword searches, because those keyword searches are still needed to fact-check GenAI outputs. Instead, GenAI has merely added to the already long list of competencies that information professionals must help us acquire (Halvorson, 2024).

Hallucinations aren’t the only problems associated with GenAI. By making it easier for creators to quickly generate content, GenAI has also compounded the ‘information overwhelm’ many of us experience on a daily basis (Hirvonen et al., 2023). Anyone can publish a blog (ahem!) – and the increasing availability of GenAI means that the volume of online drivel is now growing at breakneck speed. This again highlights the need for information professionals because, more than ever before, information seekers must be taught to rapidly evaluate information sources, sifting the wheat from the copious quantities of chaff (Banh and Strobel, 2023).

GenAI is also forcing society at large to re-examine certain long-held beliefs about the very definitions of originality and authorship (Stokel-Walker and Van Noorden, 2023; Bockting et al., 2023). For example:

Is “authorship” ever black-and-white, when for decades almost every written word has undergone layers of invisible audits by ever more intelligent systems that automatically review spelling and grammar, autocorrect mistakes, and increasingly integrate context-aware text prediction algorithms?

Is the hallowed doctrine of “academic integrity” a convenient (and perhaps disingenuous) oversimplification, when all outputs – human and otherwise – are necessarily built atop someone else’s hard work, imagination, and questionable authorship?

But at the very least, if society is still unable to answer such basic questions, I would argue that information professionals – who are well situated to help us grapple with these ethical quandaries – remain very much needed.

Reference List:

Anthropic (no date) Long context prompting tips. Available at: https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/long-context-tips (Accessed: 20 November 2024).

Banh, L. and Strobel, G. (2023) ‘Generative artificial intelligence,’ Electronic Markets, 33(1). Available at: https://doi.org/10.1007/s12525-023-00680-1.

Bockting, C.L. et al. (2023) ‘Living guidelines for generative AI — why scientists must oversee its use,’ Nature, 622(7984), pp. 693–696. Available at: https://doi.org/10.1038/d41586-023-03266-1.

ChatGPT (2025) ChatGPT response to Liam Irwin, 16 February.

Ekin, S. (2023) ‘Prompt engineering for ChatGPT: a quick guide to techniques, tips, and best practices,’ TechRxiv [Preprint]. Available at: https://doi.org/10.36227/techrxiv.22683919.v2.

Halvorson, O.H. (2024) ‘Innovation and Responsibility: Librarians in an era of generative AI, inequality, and information overload,’ SJSU ScholarWorks [Preprint]. Available at: https://scholarworks.sjsu.edu/ischoolsrj/vol13/iss2/4 (Accessed: 18 November 2024).

Haugsbaken, H. and Hagelia, M. (2024) ‘A new AI literacy for the algorithmic age: prompt engineering or educational promptization?’, pp. 1–8. Available at: https://doi.org/10.1109/icapai61893.2024.10541229.

Hersh, W.R. (2023) ‘Search still matters: information retrieval in the era of generative AI,’ arXiv (Cornell University) [Preprint]. Available at: https://doi.org/10.48550/arxiv.2311.18550.

Hirvonen, N. et al. (2023) ‘Artificial intelligence in the information ecosystem: Affordances for everyday information seeking,’ Journal of the Association for Information Science and Technology, 75(10), pp. 1152–1165. Available at: https://doi.org/10.1002/asi.24860.

Kulkarni, A. et al. (2023) Applied Generative AI for Beginners. Apress. Available at: https://doi.org/10.1007/978-1-4842-9994-4.

Stokel-Walker, C. and Van Noorden, R. (2023) ‘What ChatGPT and generative AI mean for science,’ Nature. Available at: www.nature.com/articles/d41586-023-00340-6 (Accessed: 11 November 2024).
