Top 10 AI Threats for Local Government – and How To Address Them (Starting Now)


State and local leaders can take a number of steps to mitigate some of the common concerns surrounding AI, including: establishing clear standards and guidelines for the use of AI; providing oversight and accountability; engaging with experts and stakeholders; investing in research and development; and, providing education and training.

ChatGPT, in response to the question "How can state and local leaders mitigate some of the common AI concerns?"

Artificial intelligence (AI) is being used today to streamline and improve public sector services in many ways. 

But AI also presents new risks and challenges for local government agencies, which often struggle to attract the skills needed to keep up.

In this article, we discuss the top 10 risks we believe local government agencies and elected officers should be focusing on right now.

We also offer tips on how to mitigate each risk, working hand-in-hand with AI technology providers like us.

No Turning Back: AI Is Here To Stay

 


Artificial intelligence (AI) – especially generative AI like OpenAI’s ChatGPT and DALL-E models – is rapidly entering offices and enterprise systems that power many industries, from finance and healthcare to education and transportation. 

Local government agencies are no exception.

Every day, local agency staff around the country are playing with ChatGPT on their lunch breaks – like 100 million other people do. 

By doing this, however, they may be copying and pasting sensitive public data into those services to test ideas. With the best intentions, of course.

The problem is that when you paste sensitive public data into ChatGPT, that data may be retained and, under the default consumer settings, used to train the next generation of the model.

Which means you may have just leaked sensitive information to a public, shared resource.

Whoops! 

This is just one example of how hard it is to mitigate risk in a world being overtaken by AI.
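One simple guardrail an agency can put in place today is to scrub obvious personal identifiers from any text before it leaves the network. Here is a minimal Python sketch of the idea; the patterns shown are illustrative only, and a production setup would rely on a vetted PII-detection library:

```python
import re

# Illustrative patterns only; a production gateway would use a vetted
# PII-detection library and cover many more identifier types.
PII_PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace common PII patterns with placeholder tokens before the
    text ever leaves the agency's network."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft a letter for john.doe@example.gov, SSN 123-45-6789."
print(redact(prompt))
# -> Draft a letter for [EMAIL REDACTED], SSN [SSN REDACTED].
```

This doesn't make pasting data into public AI tools safe, but it removes the most obviously sensitive fields from whatever staff send out.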

That said, the potential benefits of applying AI to improve local government – especially to augment and empower overworked staff to do more with less – are enormous.  

For example, in a recent study Deloitte estimated that applying AI to government operations could unlock more than $4 billion in labor savings alone.

Other analysts predict far greater numbers.

But what about the risks if we do this wrong?

No one is suggesting that local governments avoid the coming AI tsunami. They should take advantage of it, of course.

But to accomplish this, local government officers need to become aware of the unique challenges of applying AI to government operations.  

The good news is that many of these issues are easy to grasp and to control.  Only a few are truly difficult and will take time to sort out.

At CogAbility, we provide responsible AI solutions for local government agencies including Tax Collectors, Clerks of Court, Property Appraisers and more.  Most of our solutions generate 2X to 10X ROI for our clients without entailing much, if any, risk.  

But part of that has to do with the use cases we’ve chosen to automate first for our customers, which are decidedly low-risk: AI chatbots, AI process automation, and AI analytics for the most part. 

CogAbility’s mission is to help public servant organizations empower their staff and the people they serve with AI – safely and responsibly.

Top 10 Risks of AI In Local Government

 

The rest of this article discusses the top 10 risks of AI in local government, with practical tips that, in our opinion, local government officers can use to keep their agencies safe.

The risks of AI in local government fall broadly into three categories: security concerns, regulatory concerns, and public safety concerns.

We’ll address each risk one at a time and provide practical tips on how to mitigate the risks using methods available today.

So let’s dive in…

Risk #1: Cyberattacks and Data Breaches 

 


 

Most AI systems today rely on large amounts of data to learn, predict, and improve themselves over time. 

This data can also be a lucrative target for cyber attackers who seek to steal or manipulate sensitive information. 

This is especially true when it comes to sensitive personal data that may be captured during an AI chatbot conversation or during AI processing of sensitive criminal justice documents.

As they do with all forms of technology, local government agencies must ensure that their AI solutions, AI-enabled enterprise systems and AI-enabled third party applications are secured by design, regularly tested for vulnerabilities, and backed up in case of a breach. 

Sensitive AI training data should be kept in regulatory-compliant storage, whether on-premise or in the cloud. Options like AWS GovCloud (a CogAbility partner) exist specifically for this purpose.
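For example, here is a minimal sketch (in Python, using the standard boto3 library) of writing a batch of training data to encrypted storage in a GovCloud region. The bucket and file names are hypothetical, and using GovCloud requires a dedicated GovCloud account and credentials:

```python
import boto3

# Hypothetical bucket and key names, for illustration only.
# "us-gov-west-1" is a real AWS GovCloud (US) region; using it
# requires a separate GovCloud account and credentials.
session = boto3.session.Session(region_name="us-gov-west-1")
s3 = session.client("s3")

with open("chatbot_training_batch_001.jsonl", "rb") as f:
    s3.put_object(
        Bucket="agency-ai-training-data",  # hypothetical bucket
        Key="training/chatbot_training_batch_001.jsonl",
        Body=f,
        ServerSideEncryption="aws:kms",    # encrypt at rest with KMS
    )
```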

Most regulatory bodies and government agencies have not mastered this subject yet, so it is important that every government officer take the time to ensure their AI solution providers address this risk when procuring such systems.

In addition, procurement and IT officers must ask new AI-specific questions when issuing RFPs.  

TIPS

 

  1. If you use AI to process or analyze sensitive personal (PII) data, ask your providers to explain their SSAE-18/SOC 2 compliance. Many AI startups are only now getting to this requirement. That said, many public-facing chatbots do not accept or process customer data, so they may not need to comply with SOC 2.
  2. If you use AI to process or analyze sensitive criminal justice (CJIS) information, ask your AI provider whether their technology stack and team are currently CJIS compliant and/or FedRAMP authorized (a federal program built on NIST security standards). CJIS compliance is a complex topic, and no single accepted set of auditing standards exists. That said, there are many practical things every IT department and vendor should be doing. So ask.

CogAbility’s AI solutions adhere to the latest security and regulatory standards in local government.

Risk #2: Bias and Discrimination 

 


 

AI systems are trained on historical data, which often contains biased or discriminatory information.

As a result, AI can perpetuate and amplify existing biases and discrimination, especially in areas such as criminal justice, housing, and employment.  

To mitigate this risk, local government agencies must ensure that their AI systems are transparent, auditable, and accountable, and that they measure and address any potential biases or disparities. 

So here’s the problem with controlling bias in AI today:  

Most machine learning systems operate as black boxes today, with little to no traceability of how decisions are made. That makes it difficult, if not impossible, to detect every instance of decision bias and understand why it occurred.

To make things worse, the most useful AI models, like GPT-4 and Bard, are built on extremely large and complex neural network architectures that are practically impossible to decipher.

Much research is happening on this front, and new tools are announced weekly. 

But so far, the jury is still out.

So what can a local government officer do?  

TIPS

 

  1. Measure bias in the outputs of your AI recommendation and analytics systems. At a minimum, measure bias on the output/recommendation side of your AI system (see the sketch after this list for one simple starting point).
  2. Ask your AI solution providers, and enterprise software providers who add AI to their platforms, to address bias containment in their proposals: strategies, traceability, and preventative methods. Ask for analytics dashboards that estimate bias in outputs on a regular basis.
  3. Carefully scrutinize training data for systemic bias before you train your system. This is an advanced data science skill that most agencies do not possess, but if you plan to use AI to make decisions or recommendations that may impact large numbers of citizens, you must address this skill gap. Hire a consultant, or award the RFP to a solution provider that addresses this need.
  4. Make sure bias subject matter experts are available to review your output biases and to identify false positives. All AI systems today fundamentally rely on statistics, and the correlations they surface do not necessarily reflect real-world bias. Learnings from SMEs should be fed back to the training team or vendor to use in adjusting the training data for the next training run.
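To make tip #1 concrete, here is a minimal Python sketch of an output-side bias check: compute each demographic group's approval rate from a decision log and flag large gaps for human review. The decision log shown is hypothetical; a real audit would pull from your system's actual output records and use more robust fairness metrics:

```python
from collections import defaultdict

# Hypothetical decision log: (demographic_group, decision) pairs
# pulled from an AI recommendation system's output records.
decisions = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "deny"),
    ("group_b", "approve"), ("group_b", "deny"),    ("group_b", "deny"),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += (decision == "approve")

rates = {g: approvals[g] / totals[g] for g in totals}

# Demographic parity gap: difference between the highest and lowest
# group approval rates. A large gap is a flag for SME review, not
# proof of bias on its own.
parity_gap = max(rates.values()) - min(rates.values())
print(rates)                             # approval rate per group
print(f"parity gap = {parity_gap:.2f}")  # parity gap = 0.33
```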

 

Risk #3: Privacy and Surveillance 

 


 

This one goes without saying, but AI systems often collect, process, and analyze massive amounts of personal data, including facial recognition, biometrics, and geolocation. 

This can raise serious privacy and surveillance concerns, especially if this data is misused or shared without consent. 

Local government agencies must have clear policies and procedures in place to protect citizens’ privacy rights and to provide transparency and accountability regarding their use of AI. 

The subject of mitigating privacy and surveillance issues is complex and politically sensitive, so we won't try to cover it fully here.

Aside from requiring vendor and staff adherence to security frameworks like SOC2, a lot of decisions will remain a judgment call. 

TIPS

 

To help you scrutinize your situation and policies more closely, here are a few good places to start:

  1. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, by the White House:
    https://www.whitehouse.gov/ostp/ai-bill-of-rights/
  2. Beware the Privacy Violations in Artificial Intelligence Applications, by ISACA:
    https://www.isaca.org/resources/news-and-trends/isaca-now-blog/2021/beware-the-privacy-violations-in-artificial-intelligence-applications
  3. Protecting Privacy in an AI-Driven World, by the Brookings Institution:
    https://www.brookings.edu/research/protecting-privacy-in-an-ai-driven-world/

 

Risk #4: Lack of Standards and Regulation 

 

AI is a rapidly evolving field, and there are no universal standards or comprehensive regulations governing its use in local government today.

This lack of guidance can lead to fragmentation, inconsistency, and uncertainty about ethical, legal, and social implications.

There is, of course, no silver bullet for writing good regulations and standards. Solutions here will only come from a time-consuming political process that, for AI, has barely begun.

For this reason, local government agencies must work closely with industry experts, stakeholders, and policymakers to develop and adopt appropriate standards and regulations that strike a balance between innovation and responsibility.  

TIP

 

  1. Invite your AI vendors to participate with you in regulatory activities.  Everyone is learning how to navigate this, so the more brains the better. 

 

Risk #5: Unintended Consequences and Errors 

 

 

AI systems are only as good as their data, algorithms, human feedback, and executive oversight.

Even the best-designed and well-intentioned AI systems can – and do – produce unintended consequences and errors. 

For example, although OpenAI's GPT-4 model is much less prone than earlier versions to "hallucinating" (confidently stating falsehoods), it still gets facts wrong roughly 10% of the time. This is especially true when GPT-4 or ChatGPT attempts to solve arithmetic problems or tries to provide specific, detailed advice that depends on deep human expertise.

In addition, current methods to contain these risks are difficult and expensive to implement and can actually make the problems less predictable – or worse.

For this reason, local government agencies must be prepared to identify and mitigate these risks, especially in critical areas such as public safety, health, and welfare.  

So: choose your AI tech wisely – and don’t jump on the latest bandwagon just because everyone says “it’s great!”. 

CogAbility’s best-in-class chatbot technology uses 8 different methods to deliver the right answer to each constituent’s question – including carefully-worded responses approved by management where necessary.
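To illustrate the "approved responses" idea in general terms (a common guardrail pattern, not a description of CogAbility's actual methods), here is a minimal Python sketch that answers only from management-vetted text when a question matches closely enough, and escalates to a human otherwise:

```python
import difflib

# A generic "approved responses" pattern. Answers come only from
# management-vetted text; everything else is escalated to a human.
# The FAQ entries below are hypothetical.
APPROVED_ANSWERS = {
    "how do i renew my vehicle registration":
        "You can renew online through the county portal or at any branch office.",
    "what are your office hours":
        "Branch offices are open 8am to 5pm, Monday through Friday.",
}

def answer(question: str, cutoff: float = 0.75) -> str:
    q = question.lower().strip(" ?!.")
    match = difflib.get_close_matches(q, list(APPROVED_ANSWERS), n=1, cutoff=cutoff)
    if match:
        return APPROVED_ANSWERS[match[0]]
    # No confident match: escalate to a human instead of letting a
    # language model improvise an answer that might be wrong.
    return "I'm not sure. Let me connect you with a staff member."

print(answer("What are your office hours?"))
# -> Branch offices are open 8am to 5pm, Monday through Friday.
```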

Risk #6: Skills and Talent Gap 

 

UF’s $70mm AI Initiative: preparing students for an AI-driven future (CogAbility is a partner)

 

AI is a complex, interdisciplinary field that requires a diverse set of skills and talents, including data science, machine learning, and human-centered design.

Unfortunately, many local government agencies lack the resources or expertise to effectively implement and manage AI systems themselves. 

To mitigate the skills gap issue, local government agencies must invest in upskilling their workforce, fostering partnerships with academic institutions and industry leaders, and attracting and retaining top AI talent.

Another alternative is to work with a trusted provider of AI solutions designed specifically for local government agencies.

 

Risk #7: Ethical and Moral Dilemmas 

 

AI systems can make decisions and recommendations that have significant ethical and moral implications, such as who to hire or fire, who to arrest or release, and who to surveil or target. 

Mitigating ethical and moral risks requires equal parts art and science.

To prevent career-ending mistakes that may impact thousands of people, local government agencies must engage in open and honest discussions with citizens, stakeholders, and experts to address these dilemmas and ensure that AI is used in a fair, just, and responsible manner. 

NOTE: The tips provided for Risk #2 (Bias and Discrimination) also apply here.

 

Risk #8: Legal Liability and Accountability 

 

 

AI systems raise complex legal and liability issues, such as who is responsible for AI-related damages and errors, who owns the data generated by AI systems, and how to comply with existing laws and regulations that may not anticipate the use of AI. 

Copyright infringement is another unresolved issue with large language models like GPT-4.

Many people do not realize that when you submit queries and training data to a large language model owned by someone else, you may be granting the operator the right to use that data in future generations of the technology, depending on its terms of service.

Care must also be taken not to submit material that infringes on someone else's copyright.

Local government agencies should consult with legal experts, state regulatory bodies, and insurance providers to ensure that they have adequate protection and risk management strategies in place. 

 

Risk #9: Infrastructure and Compatibility 

 

AI systems require robust, secure and reliable infrastructure and compatibility with existing systems and platforms. 

Local government agencies must ensure that their infrastructure can support the data processing and storage needs of AI systems, as well as address any interoperability or integration issues. 

We've already addressed situations that require CJIS, FedRAMP, SOC 2, and PII compliance.

These same frameworks should guide decisions regarding system integration and interoperability.  

For example, in an AI-powered process automation solution, the entire business process and all underlying systems should be equally secure and compliant – not just the AI-enabled tasks inside of it.

 

Risk #10: Public Perception and Trust 

 

 

This one in particular can keep an elected officer awake at night…

AI systems can be perceived as opaque, complex, and potentially threatening by the general public, especially if they are not properly explained or communicated. 

This is especially true today – in a world dominated by AI headlines that aren’t entirely designed to inform.

In short, the public will draw its own conclusions about AI in government based on the stories that reach its media feeds first.

So make sure your messages also get into their feeds.

TIPS 

 

To mitigate this risk, local government agencies – especially public communication officers – must invest in public education and engagement, building trust and confidence in the integrity, safety, and value of their AI systems. 

For an example of good public stewardship in action, watch the following video, produced by Hillsborough County Tax Collector Nancy Milan to introduce her office's new AI-powered virtual agent, Sofie, to her team:

Clearly, Nancy Milan understands the importance of properly educating her team and her community about what she’s doing with AI and how it benefits everyone.

Full disclosure: although Hillsborough is a CogAbility customer, we had no role in producing this video.

 

The Bottom Line

 

It's very clear today that AI is an enormously powerful new technology poised to transform society and government agencies in many ways. It is equally clear that AI poses significant risks and challenges to government agencies and the public if those risks are not properly addressed.

Due to ChatGPT’s popularity, the public has quickly moved from being mostly unaware of AI risks to being keenly aware of this two-sided truth. 

For this reason, local government agencies and elected officers must become vigilant, proactive, and responsible stewards of AI – by addressing security concerns, regulatory concerns, and public safety concerns in a holistic way.

Local government vendors like CogAbility must likewise rise to this challenge. This is something the CogAbility team is investing in heavily right now and will continue to do, as are other leading vendors who serve this market.

By addressing the top 10 threats of AI outlined above, local government officials & their vendors can ensure that applications of AI to local government are safe, ethical, effective, and sustainable for the long term. 

If you’d like to discuss this topic or anything else related to AI solutions for local government, please contact us today – or drop a comment below.
