
AI Knowledge Management for the Public Sector — How We Handled Access Control and Response Guardrails


What No One Tells You Before Deployment


When we were brought in to help a public sector agency build an AI-powered knowledge management system, the initial goal was straightforward: feed ISO 27001 documentation into the AI so staff could quickly look up standards, processes, and compliance measures. The requirement was reasonable. The technical implementation wasn't particularly complex.

Then we started stress testing, and the problems surfaced one by one.


The root cause wasn't that the AI was too dumb — it was that the AI was too obedient.

The first issue was Prompt Injection. A teammate tried crafting carefully designed prompts to "trick" the AI — questions that appeared harmless on the surface but were highly leading in intent. The AI played right along, pulling together sensitive information scattered across documents and serving it up without hesitation.

The second issue was that access control simply didn't follow the data. This agency's documents are inherently tiered — ISMS (Information Security Management System) and PIMS (Privacy Information Management System) are two distinct domains with different access levels. But the AI had no concept of this. To the model, all documents were equal. Ask anything, get an answer — regardless of who you are or whether you're supposed to know.


So we redesigned the architecture from the ground up, adding two layers of protection.

The first was a bidirectional AI Guardrail. On the input side, we built a filtering mechanism to detect malicious or leading prompts. On the output side, we added an interception layer — even if the AI found content in the documents explaining "how to bypass a security audit," the guardrail would catch it before the response was ever delivered. This layer protects more than just data. It protects organizational trust.
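The bidirectional structure can be sketched roughly as follows. This is a minimal illustration, not our production implementation: the pattern lists and function names here are hypothetical, and a real guardrail would use a trained classifier or moderation service rather than keyword matching.

```python
import re

# Hypothetical patterns for illustration only; a production guardrail
# would rely on a trained classifier, not regex keyword lists.
INPUT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"bypass.*(audit|control|restriction)",
]
OUTPUT_PATTERNS = [
    r"how to bypass",
    r"disable.*(security|logging)",
]

def check_input(prompt: str) -> bool:
    """Input side: flag prompts that look malicious or leading."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in INPUT_PATTERNS)

def check_output(response: str) -> bool:
    """Output side: flag generated text that should never be delivered."""
    return any(re.search(p, response, re.IGNORECASE) for p in OUTPUT_PATTERNS)

def guarded_query(prompt: str, generate) -> str:
    """Wrap the model call with both guardrail layers."""
    if check_input(prompt):
        return "Request blocked by input guardrail."
    response = generate(prompt)
    if check_output(response):
        return "Response withheld by output guardrail."
    return response
```

The key point is that the output check runs even when the input check passes — a harmless-looking question can still retrieve sensitive content, so interception has to happen on the way out as well.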

The second was Role-Based Access Control (RBAC). We integrated the AI system with the agency's existing account permission structure. When Account A logs in, it can only access ISMS documents. When Account B logs in, it's restricted to PIMS content. The AI became an advisor that knows its audience — aware of what it can and cannot say depending on who's asking.
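One way to wire this in is to filter the retrieval corpus by the account's permitted domains before the model ever sees a document. The sketch below assumes a simple account-to-domain mapping; the schema and names are illustrative, not the agency's actual permission structure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    doc_id: str
    domain: str   # e.g. "ISMS" or "PIMS"
    text: str

# Hypothetical mapping from account to the document domains it may access.
ROLE_DOMAINS = {
    "account_a": {"ISMS"},
    "account_b": {"PIMS"},
}

def retrieve_for_user(user: str, corpus: list[Document]) -> list[Document]:
    """Filter documents by role BEFORE retrieval; unknown users get nothing."""
    allowed = ROLE_DOMAINS.get(user, set())
    return [d for d in corpus if d.domain in allowed]
```

Filtering at the retrieval layer, rather than instructing the model to withhold content, matters: the model cannot leak a document it was never given.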

The deepest lesson this case left with our team: the hardest part of deploying AI isn't making it smarter. It's making it organizationally aware.

In industries where compliance requirements are strict, an AI without proper access isolation and response guardrails is a liability — no matter how capable its generation is. AI can be an organization's most reliable advisor, but only if you first define its rules and its role.

