The Future Is Human, Not Technological

Collection of photos taken at the 2025 GovAI Coalition Summit
Published On: November 28, 2025

By Richard Oppenheim, Deputy Executive Director, Regional Government Services (RGS)

Over the last year, conversations about artificial intelligence (AI) have moved from speculation and curiosity to concrete questions about readiness, governance, and integration across local government. In 2023, AI felt like a distant horizon; by the end of 2024, it became a daily conversation in staff meetings, leadership teams, and agency councils.

RGS has continued to explore AI for our own internal operations as well as for our client agencies. RGS has also joined the GovAI Coalition, a nationwide collaborative of more than 600 local government agencies developing AI resources and guidance for the public sector. I have co-chaired the Coalition’s Readiness and Adoption Committee, focusing on AI policy, AI education, and workforce and change management.

After a year of national conversations, cross-agency collaboration, and hands-on pilots, one message has rung out loudest:

AI is not just a technology issue. It is a people issue.

This is one of the most important insights we can bring to public-sector agencies. Local governments will succeed with AI only if we invest first in people, culture, systems, and responsible decision-making.

Purpose First, Not Technology First

At the 2025 GovAI Coalition Summit, more than 600 professionals convened to discuss progress and hurdles in public-sector AI adoption. One key takeaway was:

Local government is not in the tech business. We are in the people business.

This emphasis is crucial. Too often, agencies feel pressure to “do something with AI” because everyone else seems to be. But the real question isn’t “How do we accelerate our use of AI?” but “What fundamental problem are we trying to solve?”

This aligns strongly with national guidance. The ICMA Local Government AI Strategy Workbook reminds agencies to begin not with tools but with strategic intent—understanding their community’s needs, defining the problem space, and aligning AI with existing plans and values.

Similarly, the National League of Cities’ (NLC) AI Toolkit emphasizes that cities should begin AI adoption with a deep understanding of their organizational context, risks, and values, not with a tool comparison.

At RGS, this “purpose first” mindset underlies our 7-Step AI Usage Framework. We do not begin with “Can AI help?” but with “What problem are we solving, for whom, and why?”

Why Local Governments Cannot Ignore AI

Across counties, cities, joint powers authorities, and special districts, several forces are pushing AI from curiosity to strategic priority.

  1. Work Is Already Changing

AI is fundamentally reshaping tasks, roles, and expectations in both the public and private sectors. At the Summit, multiple panelists noted that new roles (AI coordinators, prompt engineers, data stewards) are emerging. But the even larger shift is that every role will soon require some level of AI literacy. This shift has implications for job descriptions, performance management, workforce planning, and organizational culture.

The National Association of Counties’ AI County Compass highlights the same dynamic, noting that county governments must prepare for upskilling and reconfiguration of the future workforce as AI becomes integrated into operations.

  2. Residents Expect Modern, Responsive Services

AI-powered experiences are now standard in daily life, from real-time translation to natural-language search and automated summarization. The public expects government to meet a similar standard of accessibility and clarity.

  3. Misinformation and Deepfakes Are Rising

AI is reshaping the information landscape. Local governments now face new risks from deliberate misinformation campaigns that use a variety of media, including deepfakes. These campaigns threaten local government credibility, clarity, and public trust, and agencies must develop strategies to counter them.

  4. AI Can Strengthen Capacity, Not Replace People

AI is best used to expand human capacity, not replace human expertise.

We want employees focused on high-value thinking, relationship-building, and community service rather than repetitive administrative burden. AI can help with that, but only when implemented thoughtfully.

AI Adoption Is Change Management

Since AI is a people issue, we must look at the systems people work and live in. AI doesn’t operate in isolation. When we introduce AI into any part of local government, it interacts with everything already in place: how work gets done, who is involved, how decisions are made, and how people experience services.

In his article “From Principles to Pavement: How to Inscribe Responsible AI Into Your Org,” Charley Johnson highlights an important but often-overlooked reality: adopting AI means intervening in an existing sociotechnical system, not simply installing new software. AI inevitably touches the underlying elements of that system, such as relationships, incentives, norms, assumptions, and authority structures.

In practice, this means AI adoption, like the adoption of any new technology, should be approached as an organizational change management effort, not solely an IT upgrade. Governments must examine their workflows, culture, values, and historical patterns before introducing AI. Without this reflection, AI will reinforce whatever already exists in the system, including any inefficiencies, inequities, biases, or bottlenecks.

The Coalition’s Readiness and Adoption Committee has recently kicked off a Workforce and Change Management working group to address these issues.

Workforce Readiness: The Heart of Responsible AI

A key component of people-focused organizational change is workforce development. At the GovAI Coalition’s Annual Summit, AI literacy and education were mentioned in every session, regardless of the topic. It is foundational because AI is permeating every aspect of how we work and the systems in which we work.

NACo, ICMA, and NLC all independently reached the same conclusion:

People, not tools, determine whether AI is responsible and effective.

Agencies must therefore invest in:

  • foundational AI literacy for every employee
  • role-specific training for HR, finance, planning, public safety, and administration
  • leadership readiness for evaluating and governing AI
  • supervisor skills for reviewing performance
  • safe-to-fail cultures that encourage experimentation
  • organizational change strategies, not tool rollouts

Training is no longer just professional development; it is a critical part of modern government infrastructure.

Responsible AI Requires Shared Language and Shared Practices

Part of looking at our systems and workforces involves paying attention to how departments and functions across an agency share information. Another important takeaway from the GovAI Coalition Summit was the importance of building a shared narrative across departments. IT speaks in security. HR speaks in roles and labor. Operations speaks in capacity. Legal speaks in risk. Leadership speaks in strategy.

A shared framework creates a common vocabulary. It gives:

  • employees a safe structure for experimentation,
  • supervisors a clear lens for evaluation,
  • leaders a pathway for governance,
  • communities confidence in transparency.

This shared understanding is what turns AI adoption from reactive to intentional.

The RGS 7-Step AI Usage Framework: A Practical Guide for Employees

To help staff and leaders navigate AI responsibly, we developed the RGS 7-Step AI Usage Framework, informed by national thought leadership and our real-world work with agencies. This framework helps our staff with problem identification and determining if AI should be part of the solution.

  1. Clarify the Purpose & System Context

Define the problem, purpose, and expected value. Understand the workflow and system AI will enter. Ask how AI might reshape incentives, behaviors, and community impact.

  2. Understand the Tool (and Its Limits)

Understand key AI concepts. Explain how the tool works in plain language. Identify limitations, risks, and appropriate uses.

  3. Data, Privacy, Security & Public Records

Apply CPRA, privacy, cybersecurity, and records retention requirements. Assess how data use interacts with system norms and equity considerations.

  4. Appropriateness, Equity & Community Meaning

Evaluate fairness, transparency, and accessibility. Consider how outputs may influence trust, interpretation, and perceived legitimacy.

  5. Verification, Oversight, Quality Assurance & ROI

Plan for human review. Validate accuracy, appropriateness, and impact. Examine systemic risks. Assess actual benefits (time saved, workload shifts, service improvements).

  6. Communication, Documentation & System Transparency

Document decisions, inform supervisors, clarify responsibilities, and ensure public transparency.

  7. Ongoing Learning, System Adaptation & Reflection

Reflect on outcomes. Identify what the AI revealed about the system. Share lessons. Adapt workflows, policies, and practices based on learning.

Leadership’s Role in Shaping AI’s Future

As we enter 2026, the question for public agencies is not whether AI will reshape work. It already has. The question is whether leaders will guide that change intentionally.

To do this well, agencies must invest in:

  • people
  • training
  • communication
  • system redesign
  • governance
  • culture

AI will not replace human judgment. But people who understand AI, including its risks, system interactions, limitations, and opportunities, will shape the future of government.

AI is not a technological challenge. It is a leadership challenge.

And we are ready to meet it—together.
