Use Cases

How security, compliance, and platform teams use AIOStack to secure agentic AI in production.

Discovering shadow AI

The problem: Developers integrate LLM APIs, deploy agents, and spin up inference services faster than security teams can track. By the time a review happens, dozens of unregistered AI workloads may already be running in production — touching sensitive data with no oversight.

How AIOStack helps: The observer passively detects every LLM API call, ML framework usage, and agent protocol endpoint across all namespaces — including workloads that were never registered or approved. Within minutes of installation you have a complete inventory of what AI is actually running, not just what was declared.


Mapping the identity chain behind every AI action

The problem: When an agent causes a data incident, the question is always: which identity did this? The answer is rarely obvious — agents run under service accounts, assume IAM roles, impersonate GCP SAs, and chain through multiple principals before touching data. Traditional tools show individual permissions but not the full chain.

How AIOStack helps: AIOStack reconstructs the complete identity chain for every observed action — from the user or system that triggered the agent, through the orchestrator, service account, and database principal, to the final destination. When something goes wrong, you have a full evidence chain rather than a partial audit log.
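Conceptually, a reconstructed chain is an ordered list of principals from trigger to destination. A small sketch of that shape, with entirely hypothetical principal names (this is not AIOStack's schema):

```python
from dataclasses import dataclass

@dataclass
class Principal:
    kind: str   # e.g. "user", "service_account", "iam_role", "db_user"
    name: str

# Hypothetical chain behind one observed database read.
chain = [
    Principal("user", "alice@example.com"),
    Principal("service_account", "orchestrator-sa"),
    Principal("iam_role", "arn:aws:iam::123456789012:role/agent-reader"),
    Principal("db_user", "readonly_app"),
]

def render_chain(chain: list[Principal]) -> str:
    """Render the chain in trigger-to-destination order."""
    return " -> ".join(f"{p.kind}:{p.name}" for p in chain)

print(render_chain(chain))
```

The point of keeping the whole ordered chain, rather than individual permission grants, is that the answer to "which identity did this?" is usually the chain itself, not any single link.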


Detecting when sensitive data leaves the environment

DSPM features — including sensitive data classification, field-level tracking, and data flow monitoring — are available with Aurva Enterprise. Contact support@aurva.io to learn more.

The problem: An agent with valid credentials queries a database containing PII, passes it to an LLM, and the response ends up in a Slack message or external storage bucket. Each individual step was authorized. The combination was data exfiltration.

How AIOStack helps: AIOStack tracks the full data flow — which datasources were queried, what was retrieved, which LLM or external API received it, and where it ultimately went. Novel or unexpected egress destinations are flagged automatically so you can investigate before a breach is confirmed.
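The egress-flagging step reduces to comparing observed destinations against a learned baseline. A minimal sketch, assuming a hypothetical baseline set (the real system builds this from observed history):

```python
def flag_novel_egress(destinations: list[str], baseline: set[str]) -> list[str]:
    """Return destinations never seen in the baseline, sorted for stable output."""
    return sorted(set(destinations) - baseline)

# Hypothetical baseline of destinations this agent has historically used.
baseline = {"warehouse.internal", "api.openai.com"}

seen = ["api.openai.com", "hooks.slack.com", "warehouse.internal"]
print(flag_novel_egress(seen, baseline))
```

In the scenario above, the Slack webhook is the novel destination: every hop was individually authorized, but the new egress edge is what gets surfaced for investigation.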


Right-sizing permissions for non-human identities

The problem: Service accounts and IAM roles are typically provisioned broadly and never revisited. An agent that only needs to read one table ends up with access to the entire database. Over time, these over-provisioned non-human identities (NHIs) accumulate into a significant blast radius.

How AIOStack helps: By comparing what each identity is granted against what it actually uses at runtime, AIOStack surfaces over-provisioned service accounts, IAM roles, and GCP SAs. Security and platform teams can use this evidence to implement least-privilege without guessing — the data shows exactly what permissions are genuinely needed.
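At its simplest, the granted-versus-used comparison is a set difference. A sketch with illustrative permission strings (not AIOStack's permission model):

```python
def unused_permissions(granted: set[str], used: set[str]) -> list[str]:
    """Permissions an identity holds but was never observed exercising."""
    return sorted(granted - used)

# Hypothetical grants for one service account vs. what runtime observation saw.
granted = {"db:read:customers", "db:read:orders", "db:write:orders"}
used = {"db:read:customers"}

print(unused_permissions(granted, used))
```

Everything in the output is a candidate for revocation: because the evidence comes from runtime observation rather than policy review, the least-privilege proposal is backed by what the identity actually did.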


Catching purpose drift in deployed agents

The problem: An agent is deployed to handle customer support queries. Six weeks later it's querying financial records, calling an external API that wasn't in the original design, and sending structured responses to an endpoint nobody recognizes. The agent's credentials are valid. Nothing in the access logs looks wrong at first glance.

How AIOStack helps: AIOStack builds a behavioral baseline for each agent and flags deviations — new datasources accessed, new egress destinations, unusual query volumes, or tool invocations outside the expected workflow. Purpose drift is surfaced before it becomes a reportable incident.
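Drift detection compares each dimension of observed behavior against the baseline. A minimal sketch, assuming hypothetical baseline categories (the real baseline covers more dimensions, such as query volumes and tool sequences):

```python
def detect_drift(observed: dict[str, list[str]], baseline: dict[str, set[str]]):
    """Return (category, new_items) pairs for behavior outside the baseline."""
    findings = []
    for category, allowed in baseline.items():
        new = set(observed.get(category, ())) - allowed
        if new:
            findings.append((category, sorted(new)))
    return findings

# Hypothetical baseline for the customer-support agent described above.
baseline = {
    "datasources": {"support_tickets"},
    "egress": {"api.openai.com"},
}
observed = {
    "datasources": ["support_tickets", "financial_records"],
    "egress": ["api.openai.com"],
}
print(detect_drift(observed, baseline))
```

Here the valid-credential problem disappears: the financial-records access is flagged not because it was unauthorized, but because it is outside the agent's established purpose.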


Securing MCP servers and agentic tool use

The problem: MCP servers expose tools that agents invoke autonomously. Without visibility into which tools are being called, with what inputs, and in what sequence, it's impossible to enforce policy or detect abuse.

How AIOStack helps: AIOStack detects MCP endpoints in your environment and observes tool invocations at the protocol level — which tools are called, by which agent, in what context, and what data is accessed as a result. This gives security teams the visibility to set guardrails and detect misuse.


Building an audit trail for AI compliance

Full audit trail capabilities — including database activity monitoring (DAM), query-level logging, and compliance evidence packaging — require Aurva Enterprise. Contact support@aurva.io to learn more.

The problem: Regulators and auditors increasingly ask: which AI systems touched personal data? What did they retrieve? Where did it go? Who authorized it? Answering these questions from fragmented logs, SIEM events, and manual interviews takes weeks and still produces gaps.

How AIOStack helps: AIOStack produces a continuous runtime evidence record — agent identity, data accessed, destinations reached, workflow context — that can be used directly for compliance reporting, privacy impact assessments, and incident investigations. The evidence is grounded in what actually happened, not inferred from static policies.


Investigating AI incidents

The problem: An alert fires. An agent may have exfiltrated data, been hijacked via prompt injection, or caused an unintended side effect. The security team needs to know exactly what happened, fast — but the relevant signals are spread across Kubernetes logs, cloud audit trails, and LLM provider APIs.

How AIOStack helps: Every observed action — query, tool invocation, egress call, identity switch — is recorded in a structured runtime evidence bundle. Investigators can reconstruct the full timeline of what the agent did, which data it touched, which identities were involved, and where data ended up, without having to correlate across multiple log sources manually.
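The work an evidence bundle saves is exactly the cross-source correlation: merging heterogeneous events into one ordered timeline. A sketch with an illustrative event shape (not AIOStack's schema):

```python
# Hypothetical events as they might arrive from different observation points.
events = [
    {"ts": 1700000030, "kind": "egress", "detail": "POST hooks.slack.com"},
    {"ts": 1700000010, "kind": "identity_switch", "detail": "orchestrator-sa -> role/agent-reader"},
    {"ts": 1700000020, "kind": "query", "detail": "SELECT * FROM customers"},
]

# One ordered timeline: identity switch, then the read, then the egress.
timeline = sorted(events, key=lambda e: e["ts"])
for e in timeline:
    print(e["ts"], e["kind"], e["detail"])
```

Because every event already carries the same identity and timestamp context, the investigator reads a narrative instead of reconstructing one.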


Generating an AI Bill of Materials (AIBOM)

The problem: Knowing which AI packages and libraries are actually in use is harder than it sounds. Package manifests and lockfiles tell you what's installed — not what's running. A workload might have PyTorch, TensorFlow, and a dozen model libraries installed but only ever load one of them at runtime. Source-based SBOMs require repository access and still only reflect what's declared, not what executes.

How AIOStack helps: AIOStack detects packages at the point of actual use — by observing memory address access and fopen calls during execution at the kernel level. This means AIOStack knows exactly which libraries were loaded and invoked during a workload's runtime, not just which ones were installed. The result is a precise, evidence-based AIBOM that reflects reality without requiring GitHub access, source code, or container image scanning.
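The installed-versus-loaded distinction can be seen from userspace too: on Linux, `/proc/<pid>/maps` lists the shared objects actually mapped into a running process. This sketch parses that format (AIOStack observes loads at the kernel level; the `/proc`-based version and the sample excerpt below are only illustrative):

```python
def loaded_libraries(maps_text: str) -> set[str]:
    """Names of shared objects mapped into a process, from /proc/<pid>/maps text."""
    libs = set()
    for line in maps_text.splitlines():
        fields = line.split()
        path = fields[-1] if fields else ""
        # Shared objects end in ".so" or carry a version suffix like ".so.6".
        if path.endswith(".so") or ".so." in path:
            libs.add(path.rsplit("/", 1)[-1])
    return libs

# Abbreviated sample of /proc/<pid>/maps output (hypothetical process).
sample = """\
7f2a00000000-7f2a00020000 r-xp 00000000 08:01 123 /usr/lib/x86_64-linux-gnu/libtorch.so
7f2a00020000-7f2a00040000 r--p 00000000 08:01 124 /usr/lib/x86_64-linux-gnu/libc.so.6
7f2a00040000-7f2a00050000 rw-p 00000000 00:00 0
"""
print(sorted(loaded_libraries(sample)))
```

A workload with a dozen ML frameworks installed but only libtorch mapped at runtime would produce an AIBOM listing PyTorch alone, which is the reality a manifest scan cannot see.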


Monitoring third-party and open-source agents

The problem: Teams deploy open-source agent frameworks, third-party copilots, and vendor-supplied AI tools that they don't control. The code isn't auditable, the behavior isn't fully documented, and the security posture is unknown.

How AIOStack helps: AIOStack treats all agents equally — it observes behavior at the network and system call level regardless of what framework or vendor built the agent. Third-party agents get the same identity mapping, data flow tracking, and anomaly detection as internally built ones.
