Five Questions Every Public Administrator Must Ask

Artificial intelligence is already transforming government service delivery, policy and public expectations, and it is central to most discussions about the tension between improving services and saving money. It’s essentially the same mantra that has characterized public administration for the past hundred years: how to make government both more effective and more efficient.

With so much at stake, administrators must strike a careful balance among innovation, risk and public trust. Unlike any previous technological innovation, AI is everywhere, appearing in every digital device from smartphones to security cameras to municipal utility systems. Anyone who has worked with AI is struck by the astounding speed of its outputs and the uncanny sense of intelligence in its responses.

But this speed and scale introduce heightened risks. The margin for error is slimmer, and small mistakes can escalate quickly. This raises five core questions that every public administrator should continually ask as new ways to employ AI in government emerge:

1. What problem are we actually trying to solve with AI?

AI should never be adopted simply because it is the “latest thing” or because other governments are experimenting with it. Public administrators must tie AI initiatives directly to specific pain points — reducing service backlogs, improving citizen experience, enhancing data-driven decision-making or optimizing scarce resources. For example, an AI chatbot in a call center should aim to reduce wait times and improve accessibility, rather than merely serving as a flashy pilot project.

By starting with a problem-first approach, leaders ensure that projects are aligned with strategic priorities, have measurable outcomes and are justified within the budget. This also makes it easier to build the case for funding and earn public trust, since citizens are more likely to support AI if they see it solving problems that directly affect their daily lives.

Asking this question forces leaders to distinguish between “nice-to-have” experiments and mission-critical applications.

2. Is our data ready, secure and well governed?

AI is only as good as the data it consumes. Without high-quality, well-governed data, AI can do more harm than good. Leaders must assess not only the accuracy, completeness and timeliness of their data, but also its potential biases. For example, historical policing data may reinforce discriminatory practices if used uncritically in predictive algorithms. Similarly, incomplete housing data may skew eligibility determinations for critical benefits.

This question also compels leaders to confront the reality of legacy data silos and inconsistent recordkeeping practices that plague many state and local governments. Implementing robust data governance policies, ensuring compliance with privacy laws and adhering to cybersecurity standards are nonnegotiable. Investing in data cleansing, integration and metadata management may not be glamorous, but it is foundational. Without trustworthy data, AI risks magnifying inequities, undermining public trust and exposing agencies to costly litigation.

3. How do we ensure transparency, accountability and ethics?

Public administrators must ask: Who is responsible when AI gets it wrong? If an algorithm denies benefits to an eligible family or misidentifies a citizen in a public safety database, the consequences can be severe. Establishing clear accountability structures is therefore essential.

This includes adopting explainability standards. Citizens deserve to know why a decision was made, especially in sensitive areas like policing, benefits eligibility, public housing assignments and hiring. Transparency is not only a technical requirement but also a democratic one, as it strengthens the social contract between the government and its citizens.

Ethics must also be at the forefront. Governments should establish multidisciplinary ethics review boards, consult with community stakeholders and adopt guiding principles to ensure that AI deployments reflect public values. Without this, the risk of losing legitimacy and credibility is high.

4. What are the risks, and how will we mitigate them?

AI carries a wide spectrum of risks: bias, security breaches, disinformation, overreliance on automation and workforce displacement, to name a few. Administrators must commit to ongoing risk management, not a one-time checklist. This involves conducting impact assessments prior to deployment, beginning with pilot programs, and mandating “human-in-the-loop” oversight for sensitive applications. A useful starting point is the National Institute of Standards and Technology’s AI Risk Management Framework.

Mitigation also means preparing for unintended consequences. What happens if an AI-driven permit approval system experiences downtime? Is there a backup plan? How will staff intervene if a citizen receives an incorrect automated response? Risk planning must be dynamic, tested and adaptable to new threats. Administrators should treat AI governance much like cybersecurity — an evolving challenge that requires continuous vigilance and adaptation.

5. Do we have the capacity — people, budget and skills — to succeed?

Even the most promising AI initiative will fail without adequate capacity. Success requires more than technology; it depends on people, skills and organizational readiness. Public administrators must ask whether staff are trained to understand and manage AI tools, whether there is political and budgetary support to sustain projects, and whether timelines are realistic.

This often requires change management. Staff may fear AI as a threat to their jobs, and citizens may worry about faceless automation replacing human judgment. Leaders must therefore invest in workforce development, build cross-departmental collaboration and clearly communicate both the benefits and limitations of AI. Doing so fosters trust and ensures smoother adoption.

Capacity also extends to partnerships. Governments may need to collaborate with universities, nonprofits and private-sector vendors to augment expertise and stretch limited resources. Building an AI-ready workforce is a long-term commitment, not a one-time training session.

These five questions provide a road map for administrators to navigate complexity, strike a balance between innovation and caution, and place citizens’ needs at the center of technological change. Start by asking these questions in every AI conversation. Build systems that reflect public values, protect rights and deliver tangible benefits. Ask early, ask often, and lead with integrity and vision.

Alan R. Shark, a senior fellow at the Center for Digital Government, is an associate professor at the Schar School of Policy and Government at George Mason University, where he also serves as a faculty member in the Center for Human AI Innovation in Society. He is also a senior fellow and former executive director of the Public Technology Institute, a fellow of the National Academy of Public Administration and founder and co-chair of its Standing Panel on Technology Leadership. He is the host of the podcast series Sharkbytes.net. The Center for Digital Government and Governing are both divisions of e.Republic.

Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.
