
As generative AI continues to evolve, it is transforming how users create content, interact with technology, and navigate the digital world. Despite these advancements, usability challenges remain a significant barrier to widespread adoption. Users often struggle with controlling AI outputs, managing unpredictability, and fine-tuning results to match their expectations. The lack of transparency in AI decision-making and the cognitive load associated with prompt engineering further complicate the experience.
A recent study titled “On the Usability of Generative AI: Human-Generative AI Interaction” by Anna Ravera and Cristina Gena from the University of Turin, published in the Joint Proceedings of the ACM IUI Workshops 2025, examines the core usability factors that influence the effectiveness of generative AI. The study evaluates user experience, transparency, control, and cognitive load while exploring best practices for enhancing usability through improved interfaces and interpretability.
The challenge of control: Finding the balance between automation and human input
One of the fundamental challenges in generative AI usability is the tension between automation and user control. Unlike traditional software, where users work through explicit step-by-step interactions, generative AI operates through intent-based interaction: users describe what they want in a prompt, and the AI determines how to execute it – often with unpredictable results.
The study highlights how users may feel a loss of control when AI-generated content does not align with their expectations. The issue is compounded by the black-box nature of many AI systems, where users cannot see how decisions are made or how different factors influence the output. This unpredictability can lead to frustration, especially when users struggle to refine their prompts effectively.
A key recommendation from the study is the implementation of hybrid intelligence models, where AI functions as both an assistive tool and a collaborative partner. By integrating feedback loops, interactive refinements, and more transparent AI models, users can achieve greater alignment between their expectations and AI-generated outputs.
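As a rough illustration of that feedback-loop idea, the sketch below wires a user's corrections back into the next generation request. The `generate()` function and the prompt template are placeholder assumptions for this article, not anything prescribed by the study:

```python
# Minimal sketch of an interactive refinement loop. generate() is a
# stand-in for a real model call so the example runs end to end.

def generate(prompt: str) -> str:
    # Placeholder: a real implementation would call a generative model.
    return f"[draft generated for: {prompt[:60]}]"

def refine_interactively(task: str, max_rounds: int = 3) -> str:
    prompt = task
    draft = generate(prompt)
    for _ in range(max_rounds):
        print(draft)
        feedback = input("Feedback (press Enter to accept): ").strip()
        if not feedback:
            break  # the user accepts the current draft
        # Fold the feedback into the next prompt so the regeneration
        # stays anchored to the original task.
        prompt = (f"{task}\n\nPrevious draft:\n{draft}\n\n"
                  f"Revise it so that: {feedback}")
        draft = generate(prompt)
    return draft

if __name__ == "__main__":
    refine_interactively("Summarize the benefits of hybrid intelligence.")
```

The point of the pattern is that the user never has to restart from a blank prompt: each round of feedback is accumulated, which is one way to give users the sense of control the study finds lacking.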
Transparency and trust: The key to AI adoption
For AI to be widely accepted, trust is essential. The study underscores that generative AI systems must be more transparent about their processes, limitations, and potential biases. Many users remain unaware of how AI models work, why certain outputs are generated, or what data influences the results.
Transparency challenges arise due to three main factors:
- The opacity of AI models, which prevents users from understanding how results are derived.
- Lack of explainability, making it difficult to justify AI-generated decisions.
- Concerns over bias and misinformation, particularly in AI systems trained on vast, unregulated datasets.
To address these issues, the researchers propose user-centered design strategies that incorporate explainability features, such as highlighting sources of AI-generated content, displaying confidence levels for different outputs, and offering real-time feedback mechanisms. By making AI-generated content more interpretable, users can develop a deeper sense of trust and engagement with the system.
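To make that proposal concrete, here is a small sketch of what attaching explainability metadata to an output might look like. The structure and field names are illustrative assumptions for this article, not the authors' design:

```python
from dataclasses import dataclass, field

# Illustrative container pairing AI-generated text with the
# explainability signals the study recommends surfacing:
# source attribution and a confidence estimate.

@dataclass
class ExplainedOutput:
    text: str
    sources: list[str] = field(default_factory=list)  # where the content draws from
    confidence: float = 0.0                           # 0.0 (low) to 1.0 (high)

    def render(self) -> str:
        cites = ", ".join(self.sources) or "none listed"
        return (f"{self.text}\n"
                f"Confidence: {self.confidence:.0%}\n"
                f"Sources: {cites}")

answer = ExplainedOutput(
    text="The Eiffel Tower is 330 m tall.",
    sources=["https://en.wikipedia.org/wiki/Eiffel_Tower"],
    confidence=0.92,
)
print(answer.render())
```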
Cognitive load and prompt engineering: The need for more intuitive interfaces
Another critical barrier to AI usability is the cognitive load associated with prompt engineering. Unlike graphical user interfaces (GUIs), where users interact with buttons and visual elements, generative AI relies heavily on text-based prompts. This creates an additional learning curve, as users must understand how to structure prompts, fine-tune responses, and iterate effectively.
The study finds that many users struggle with:
- Creating precise and effective prompts
- Interpreting AI-generated outputs
- Refining results without extensive trial and error
The authors suggest that AI interfaces should incorporate guided prompt assistance, interactive refinement tools, and better contextual awareness to reduce user frustration. AI-powered auto-suggestions and response previews can help users understand what kind of input will yield optimal results.
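A toy version of such guided prompt assistance might check a draft prompt against a few heuristics before it is sent. Production systems would likely use a model for this step; the hard-coded rules below are placeholders:

```python
# Sketch of "guided prompt assistance": flag common weaknesses in a
# draft prompt and suggest refinements before the user submits it.

def suggest_refinements(prompt: str) -> list[str]:
    suggestions = []
    if len(prompt.split()) < 5:
        suggestions.append("Add more detail: short prompts often yield generic output.")
    if not any(w in prompt.lower() for w in ("format", "list", "table", "paragraph")):
        suggestions.append("Specify the desired output format (list, table, paragraph...).")
    if "audience" not in prompt.lower():
        suggestions.append("State the intended audience or tone.")
    return suggestions

for tip in suggest_refinements("Write about climate change"):
    print("-", tip)
```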
Additionally, integrating multimodal interaction options – such as voice commands, interactive sliders, and real-time content adjustments – could make generative AI systems more accessible to non-experts.
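For instance, a single user-facing "creativity" slider could be mapped onto sampling parameters behind the scenes, sparing non-experts from tuning raw model settings. The mapping below is a made-up example; the parameter names follow common sampling conventions rather than any specific API:

```python
# Sketch of the slider pattern: translate a 0-100 slider position
# into sampling parameters a generative model would consume.

def slider_to_params(creativity: float) -> dict:
    """Map a 0-100 slider position to illustrative sampling parameters."""
    t = max(0.0, min(creativity, 100.0)) / 100.0
    return {
        "temperature": 0.2 + 1.0 * t,  # 0.2 (focused) up to 1.2 (exploratory)
        "top_p": 0.5 + 0.5 * t,        # widen the sampling nucleus as creativity rises
    }

print(slider_to_params(25))   # conservative settings
print(slider_to_params(90))   # more exploratory settings
```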
The road ahead: Making generative AI more human-centric
The study concludes that generative AI will only reach its full potential if usability issues are addressed through a human-centered AI approach. Future advancements should focus on:
- Improving interpretability and feedback mechanisms to enhance transparency.
- Balancing automation with human control to reduce unpredictability.
- Reducing cognitive load through smarter interfaces and guided interactions.
By prioritizing user experience, trust, and intuitive design, generative AI systems can become more accessible, effective, and widely adopted across industries. As the field progresses, the integration of adaptive learning models, personalized user settings, and collaborative AI interactions will be key to creating a seamless, user-friendly AI ecosystem.
Generative AI has the power to transform industries, from content creation and education to healthcare and business automation. However, its success will ultimately depend on whether users can fully trust, control, and interact with AI in a way that enhances their creative and decision-making processes.