


How AI Literacy Shapes GenAI Use

Maria Rosala

February 6, 2026


Summary:
Using generative AI often doesn’t mean using it well. AI literacy requires both prompt fluency and the ability to assess outputs.

In my career as a UX researcher (particularly in civic tech and digital inclusion), I’ve spent years observing how people use the internet. As AI changes how users search for, create, and communicate information in both personal and professional contexts, AI literacy is becoming an essential new dimension of digital literacy — one that introduces new competencies.

Despite the hype, not everyone is using generative AI (genAI) tools, such as ChatGPT and Gemini. And those who are using them aren’t all using them in the same way. People working in technology often take for granted a level of comfort or sophistication with AI that many users simply don’t have. (Remember, you are not your user!) Understanding how users with low AI literacy interact with genAI is critical to designing inclusive and supportive AI experiences.

In This Article:

AI Literacy and Digital Literacy

Our Study

Prompt Fluency: From Keywords to Conversations

Output Literacy: Evaluating and Verifying Outputs

How Designers Can Help Users on Both Dimensions

AI Literacy and Digital Literacy

As AI increasingly impacts how people search, create, and communicate information online, AI literacy is emerging as a critical new component of digital literacy.

Digital literacy is broadly defined as the ability to find, evaluate, create, and communicate information using digital technologies.

AI introduces new interaction paradigms that require new mental models — ones that don’t always align with users' past experiences with search engines or traditional software.

In our research, two distinct capabilities shaped people’s success when using genAI for information seeking.

  • Prompt fluency: The ability to communicate intent, constraints, and context so that genAI can produce useful outputs.
  • Output literacy: The ability to evaluate genAI outputs (for example, spotting gaps or misunderstandings, noticing potential hallucinations, seeking sources, and cross-checking when accuracy matters).

These capabilities don’t always grow together. Prompt fluency often increases quickly with exposure, but output literacy may lag: some people learn to get polished answers that they believe to be helpful, without becoming better at judging accuracy or knowing when to verify outputs. That’s why AI literacy can’t be measured along a single continuum. It’s a multidimensional skill set shaped by how people prompt, how they evaluate, and how they choose to engage with AI.

The matrix below highlights the types of AI literacy that we’ve observed in our research. Users can fall into one of four quadrants and can move over time into another quadrant based on their attitudes towards genAI, their exposure to it, or newly acquired AI knowledge.

AI literacy involves both prompt fluency and output literacy. While AI experts are strong in both, some users are fluent in prompting but lack critical discernment of AI outputs, while others are highly skeptical and evaluative but are reluctant to engage.

These four quadrants aren’t just theoretical: we encountered examples of each user type in our recent study on information seeking with AI.

  • The AI novice: New or inexperienced users who neither understand AI systems well nor use them confidently. They don’t know how to prompt effectively or how to evaluate outputs.
  • The naive power user: Users who interact fluently with genAI and appear skilled in prompting but tend to accept outputs at face value and miss errors or gaps.
  • The skeptical abstainer: Users who know to treat AI outputs critically but choose not to use genAI often or at all — due to distrust, ethical concerns, or personal preference. Because they don’t engage often, they may not develop much prompt fluency.
  • The AI expert: Users who use genAI strategically and selectively, write effective prompts, and demonstrate healthy skepticism, verifying outputs when stakes are high.

In this article, we show how prompt fluency and output literacy shape how people engage with genAI.


Our Study


The best way to learn about digital literacy is not to ask, but to watch.

In a recent study, we watched participants aged 23 to 65 conduct research using both the traditional web and genAI on tasks of their own choice. They were free to use any sites or tools and were unaware that the study focused on AI. If they didn’t use AI initially, we encouraged them to try it later in the session. Participants researched a variety of topics, including vacation destinations, DIY projects, and major purchases. Study participants had varying AI experience: one was new to genAI chatbots, three used AI only for specific work tasks, and five used genAI regularly in both personal and professional settings.

Prompt Fluency: From Keywords to Conversations

To recap, prompt fluency refers to the user’s ability to communicate intent, constraints, and context so that genAI can produce useful outputs.

When writing prompts, prompt-fluent participants explained what they were trying to accomplish and what mattered to them. They included context for their problem and constraints for the expected solution. As a result, their prompts were, on average, longer. As one participant said, while composing a longer prompt, “I’m just trying to provide some context here. It seems like it often improves the results a little bit.”

Example prompt: I want to go Montauk, Long Island in New York. I think the Kirk Park Beach is one of the best spots around, given that it is near the downtown area. Do you have a better recommendation? And which cafe/restaurants are the best in the area?

Participants with low prompt fluency were less likely to include context or constraints. Their prompts often resembled search queries, possibly because they conflated genAI with a search engine.

Example keyword-style prompt: smart refrigerator with bottom freezer

In addition, prompt-fluent participants often made more complex requests, asking multiple questions at once and requesting formats that supported decision making (such as tables or ranked lists).

To illustrate this difference, the following are two prompts from participants with very different levels of prompt fluency. Both used genAI to narrow down which car model to buy.

[...]

