A team of European researchers has developed trustSense, a new web-based tool designed to measure how mature organizations are in overseeing artificial intelligence systems. The system promises to close a critical gap in AI governance, evaluating not just algorithms but the humans responsible for ensuring that technology operates ethically and transparently.
Titled “trustSense: Measuring Human Oversight Maturity for Trustworthy AI” and published in the journal Computers, the study introduces a novel approach to AI assurance, shifting the focus from machine compliance to human competence, ethics, and responsibility in AI deployment.
Human oversight: The missing link in AI governance
As artificial intelligence systems become embedded in critical operations, from healthcare and finance to national security, policymakers worldwide are tightening rules on AI accountability. Yet, while technical audits and algorithmic checks abound, there remains a blind spot in assessing the people and processes guiding AI decisions.
The authors argue that trustworthiness in AI cannot exist without strong human oversight. Even the most advanced machine-learning models depend on human teams capable of understanding, monitoring, and intervening when automated systems malfunction or make unethical choices.
trustSense was developed to address this overlooked dimension. The tool quantifies how “mature” an organization’s human oversight capacity is by assessing ethics, situational awareness, resilience, and readiness to handle AI-driven risks. It specifically targets four key professional groups engaged in AI ecosystems:
- AI Technical Teams who develop and maintain models,
- Domain Users who apply AI outputs in practice,
- Cybersecurity Defenders protecting AI systems from attacks, and
- Investigators or Analysts monitoring adversarial threats.
Through customized questionnaires and privacy-preserving analytics, trustSense provides a maturity score that reflects how effectively each team fulfills its oversight responsibilities.
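The article does not reproduce the scoring formula, but the basic idea of rolling role-specific questionnaire answers into a maturity score can be sketched roughly as follows. The 1-5 answer scale, the equal weighting, and the sample responses are illustrative assumptions, not the published trustSense instrument.

```python
# Minimal sketch of questionnaire-based scoring, assuming a 1-5 Likert
# scale and equal weighting; the real trustSense items and weights are
# not reproduced in this article.
from statistics import mean

def maturity_score(responses: list[int]) -> float:
    """Average 1-5 answers and rescale to a 0-100 maturity score."""
    return round((mean(responses) - 1) / 4 * 100, 1)

# Hypothetical answers from each of the four professional groups.
answers = {
    "AI Technical Teams": [4, 3, 5, 4],
    "Domain Users": [3, 3, 4, 2],
    "Cybersecurity Defenders": [5, 4, 4, 5],
    "Investigators or Analysts": [2, 3, 3, 2],
}
for group, responses in answers.items():
    print(f"{group}: {maturity_score(responses)}")
# AI Technical Teams: 75.0, Domain Users: 50.0,
# Cybersecurity Defenders: 87.5, Investigators or Analysts: 37.5
```

In the tool itself, each group answers a questionnaire tailored to its role, and the resulting scores feed the maturity profile described in the next section.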
How trustSense works: Measuring what machines cannot
Unlike traditional AI auditing frameworks, which evaluate algorithms or data governance, trustSense evaluates human and organizational behavior. It combines ethical, psychological, and operational metrics to determine whether staff can respond adequately to AI-driven challenges.
Each assessment measures dimensions such as:
- Ethical judgment and accountability – evaluating whether teams understand the implications of AI decisions.
- Resilience and adaptability – determining how organizations handle disruption and unexpected outcomes.
- Threat awareness and proactivity – assessing how well teams anticipate risks from biased data, model drift, or cyber manipulation.
- Collaboration and communication – gauging interdepartmental coordination and feedback mechanisms.
- Policy adherence – testing compliance with AI regulations, including GDPR and the EU Artificial Intelligence Act.
The platform is browser-based and does not require user registration. Responses are processed locally, ensuring complete anonymity—no data are stored or transmitted to external servers. The privacy-first design aligns with European data protection standards and serves as a model for ethical technology governance.
After completing the assessment, users receive an instant trust maturity profile, highlighting strengths and weaknesses. The system also generates tailored recommendations to help organizations improve their oversight capability over time.
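How such a profile might be assembled can be illustrated with a short, hypothetical sketch: per-dimension scores (using the dimension names listed above) are split into strengths and weaknesses, and weak dimensions receive a recommendation. The 0-100 scale, the cut-off value, and the advice text are assumptions for illustration and do not come from the study.

```python
# Illustrative maturity profile: dimensions scoring below an assumed
# threshold are flagged and paired with a tailored recommendation.
WEAK_THRESHOLD = 60  # assumed cut-off on a 0-100 scale

RECOMMENDATIONS = {
    "Ethical judgment and accountability": "Hold regular ethics reviews of high-impact AI decisions.",
    "Resilience and adaptability": "Rehearse incident-response drills for AI failures.",
    "Threat awareness and proactivity": "Add drift and bias monitoring to routine operations.",
    "Collaboration and communication": "Set up cross-team feedback channels for AI incidents.",
    "Policy adherence": "Map current practice against GDPR and EU AI Act obligations.",
}

def maturity_profile(scores: dict[str, float]) -> dict:
    """Split dimensions into strengths and weaknesses, with advice for the latter."""
    weak = {dim: score for dim, score in scores.items() if score < WEAK_THRESHOLD}
    return {
        "strengths": [dim for dim in scores if dim not in weak],
        "improve": {dim: RECOMMENDATIONS[dim] for dim in weak},
    }

profile = maturity_profile({
    "Ethical judgment and accountability": 72,
    "Resilience and adaptability": 55,
    "Threat awareness and proactivity": 48,
    "Collaboration and communication": 81,
    "Policy adherence": 66,
})
print(profile["improve"])  # recommendations for the two weakest dimensions
```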
In its validation phase, trustSense underwent rigorous testing across public and private institutions, including a pilot within a European healthcare organization. Teams that implemented its feedback improved their human oversight maturity scores by up to 40 percent, particularly in resilience, ethical reflection, and inter-team communication.
Integrating trustSense into AI risk management
The research positions trustSense within the AI-TAF (Artificial Intelligence Trust Assurance Framework), a broader risk governance model that aligns with standards set by NIST (U.S. National Institute of Standards and Technology) and ENISA (European Union Agency for Cybersecurity).
This integration allows organizations to embed human oversight evaluation directly into their AI risk management cycle, combining maturity scores with existing technical and security metrics. The approach ensures that decision-making processes consider not just how robust an algorithm is, but how well humans are prepared to manage it.
AI risk management, as the study stresses, is no longer purely technical. Adversaries exploiting weaknesses in AI systems, whether through data poisoning, model manipulation, or adversarial attacks, take advantage of human blind spots. By quantifying both human readiness and adversary sophistication, trustSense provides organizations with a more realistic picture of vulnerability and resilience.
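The article does not spell out how these two quantities are combined within AI-TAF, but the underlying logic, that stronger adversaries raise risk while mature oversight and robust technology reduce it, can be sketched with a simple, assumed indicator:

```python
# Hypothetical residual-risk indicator combining human oversight maturity
# (0-100) with estimated adversary sophistication (0-100) and technical
# robustness (0-100). The combination rule is an illustrative assumption,
# not the AI-TAF formula.
def residual_risk(human_maturity: float, adversary_sophistication: float,
                  technical_robustness: float) -> float:
    """Higher adversary capability raises risk; maturity and robustness reduce it."""
    exposure = adversary_sophistication / 100
    mitigation = (human_maturity + technical_robustness) / 200
    return round(exposure * (1 - mitigation), 2)  # 0 = negligible, 1 = critical

# Example: capable attackers, solid technical controls, weak human oversight.
print(residual_risk(human_maturity=40, adversary_sophistication=80,
                    technical_robustness=70))  # -> 0.36
```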
In practice, this means companies can refine mitigation strategies based on human and technical indicators, ensuring balanced risk coverage. The framework offers particular value to small and medium-sized enterprises (SMEs), which often lack the resources for costly third-party audits but still face high AI compliance expectations under emerging EU regulations.
A step toward ethical, accountable AI
Trustworthy AI depends as much on human values as it does on engineering precision. As regulatory frameworks like the EU AI Act begin to take effect, businesses will need to demonstrate not only technical compliance but also human accountability in AI operations.
The authors argue that the lack of consistent standards for human oversight has hindered the progress of ethical AI governance. trustSense bridges this gap by offering a measurable, repeatable, and privacy-conscious method for assessing human readiness.
Its creators see the platform as both a diagnostic and cultural tool, helping organizations internalize ethical reflection and risk awareness as part of daily AI operations. Over time, they hope to see maturity scores evolve into an industry-wide benchmark for assessing responsible AI adoption.
The system also offers a strategic advantage: organizations that understand their human oversight maturity can better anticipate failure points, adapt to new regulations, and foster public confidence in AI-driven services.
The research further emphasizes that ethical oversight is a dynamic process, not a one-time audit. As AI evolves, so must the humans who manage it. Continuous self-assessment, supported by frameworks like trustSense, can keep oversight aligned with the rapid pace of technological change.
