Privacy Policy — Agentic AI Use of My Data
Audience: Mark Myers (internal reference)
Purpose: Documents how agentic AI systems handle my data across my full AI ecosystem
Effective Date: March 2, 2026
Governing Document: CONSTITUTION.md ("How I Work With AI," v5)
What This Covers
This document describes how agentic AI systems — not just Local Infrastructure, but all AI tools in my ecosystem — handle my personal data. It covers what data enters these systems, how it's processed and retained, what happens to it after processing, and what rights I have over it.
"Agentic AI" means AI systems that operate with persistent memory, autonomous action capabilities, tool use, and multi-session context. This is meaningfully different from a standard chat interaction and creates different data exposure patterns.
Data Sent to AI APIs
Anthropic (Claude) — Primary Provider
What goes in: The conversation context for each request, including system prompts, recent messages, file contents, and tool results.
Retention: Under Anthropic's commercial/API terms, inputs and outputs are retained for 7 days and are not used for model training.
Reference: Anthropic Privacy Center — https://privacy.claude.com
Other AI Providers in My Ecosystem
As of March 2026, my primary AI interaction is through Anthropic's Claude API via Local Infrastructure. If additional providers are added (OpenAI, Google, local models, etc.), their data handling should be documented here.
General principle: Any AI provider added to this ecosystem must be evaluated against the same criteria: What data goes in? How long is it retained? Is it used for training? Can I delete it? The "How I Work With AI" constitution applies regardless of provider.
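The four evaluation questions above could be captured as a simple checklist. A minimal sketch (the record shape and field names are illustrative, not part of any real tooling):

```python
from dataclasses import dataclass

@dataclass
class ProviderEvaluation:
    """Privacy checklist for any AI provider added to the ecosystem."""
    name: str
    data_sent: str           # What data goes in?
    retention: str           # How long is it retained?
    used_for_training: bool  # Is it used for training?
    deletable: bool          # Can I delete it?

    def passes(self) -> bool:
        # Acceptable only if data is not used for training
        # and can be deleted on request.
        return (not self.used_for_training) and self.deletable

anthropic_api = ProviderEvaluation(
    name="Anthropic (API)",
    data_sent="conversation context: prompts, messages, files, tool results",
    retention="7 days",
    used_for_training=False,
    deletable=True,
)
```

Running the same checklist against every new provider keeps the evaluation consistent regardless of vendor.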
How Conversation History and Context Windows Work
Data in Transit vs. at Rest
In transit: When Golgi processes a request, the conversation context — including system prompts, recent messages, file contents, and tool results — is sent to Anthropic's API over HTTPS. This data exists on Anthropic's servers during processing and for the retention period (7 days for API usage).
At rest (local): Session transcripts, memory files, and governance documents are stored on the Mac Mini. These persist indefinitely until I delete them. They are not encrypted at the application level — protection comes from OS file permissions and full-disk encryption (FileVault).
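Since application-level encryption is absent, file permissions carry real weight. One way to spot-check that a local transcript or memory file grants no group/other access (the file name is illustrative):

```python
import os
import stat

def is_owner_only(path: str) -> bool:
    """True if the file grants no group or other permissions."""
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# Example: tighten a memory file so only the owner can read/write it.
# ("MEMORY.md" is a hypothetical path.)
# os.chmod("MEMORY.md", 0o600)
```

FileVault covers the disk at rest; this check covers exposure to other local accounts while the machine is running.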
Context window mechanics: Each API call includes a window of recent conversation. Older messages may be summarized or dropped as the context fills. This means sensitive data from earlier in a session may not persist in later API calls, but it was transmitted during the initial call and is retained per Anthropic's policy.
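The mechanics above can be sketched as a simple sliding window; counting words stands in for a real tokenizer, and the exact drop/summarize strategy varies by implementation:

```python
def fit_context(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages that fit a token budget.

    Older messages are dropped first, mirroring how earlier turns
    fall out of the context window. Dropped messages were still
    transmitted in earlier API calls and retained per policy.
    """
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):  # walk newest to oldest
        cost = len(msg.split())     # crude token estimate
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))     # restore chronological order

history = ["old secret detail", "middle message here", "latest question"]
fit_context(history, budget=6)  # the oldest message is dropped first
```

The privacy point is in the docstring: falling out of the window is not deletion; it only limits what later calls re-transmit.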
What This Means
Every piece of data that enters a conversation with Golgi, whether typed by me, read from a file, or returned by a tool, is transmitted to Anthropic's servers. The data is processed, retained briefly, and deleted. But it existed outside my machine during that window.
What AI Providers Retain, Train On, or Discard
Anthropic (API/Commercial Terms)
Consumer terms (Pro/Max/Free) differ significantly: Under consumer terms (updated August 2025), Anthropic may retain data for up to 5 years in de-identified form for training if the user does not opt out. This does NOT apply to API usage, but it applies to any direct Claude.ai conversations outside the API.
Key distinction: Conversations through Golgi (API) have different privacy protections than conversations through Claude.ai (consumer). If I use Claude.ai directly for personal work, that data may be subject to consumer retention policies unless I opt out.
Personal Data That Enters AI Systems
Through my use of Golgi and other AI tools, the following categories of personal data enter AI processing:
What does NOT enter AI systems:
How Agentic AI Differs from Standard AI Chat
Agentic AI creates data exposure patterns that standard chat does not:
Persistent memory: Golgi remembers across sessions via MEMORY.md and daily notes. This means personal context accumulates over time. A standard chat forgets between sessions — an agent builds a growing profile of its user.
Autonomous action: Golgi can execute shell commands, read files, browse the web, and interact with external services without per-action approval. This means data can flow to external services as part of autonomous operation, not just in response to direct requests.
Tool use: When Golgi uses tools (shell, browser, API calls), the inputs and outputs of those tools enter the conversation context and are transmitted to the API provider. A web search result, a file's contents, a command's output — all become part of the data sent to Anthropic.
Multi-session context: Memory files carry context from previous sessions into new ones. Information shared weeks ago may influence today's responses. This creates a longer data lifecycle than single-session chat.
Cross-service reach: Golgi interacts with Telegram, GitHub, Vercel, Firebase, and potentially other services. Data can move between these services as part of agent operation, creating cross-platform data flows that don't exist in isolated chat.
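The tool-use flow described above can be sketched as follows; the message shapes loosely follow common chat-API conventions and the file path is hypothetical, not Anthropic's actual schema:

```python
def run_tool(command: str) -> str:
    """Stand-in for shell, browser, or API tool execution."""
    return f"(output of {command!r})"

messages = [{"role": "user", "content": "Summarize today's notes"}]

# The agent decides to read a file. The tool result is appended to
# the conversation, so on the next API call the file's contents are
# transmitted upstream -- the data has now left the machine.
tool_output = run_tool("cat notes/2026-03-02.md")
messages.append({"role": "tool", "content": tool_output})

# Every entry in `messages` is part of the payload sent to the provider.
payload = {"model": "claude", "messages": messages}
```

This is why tool inputs and outputs deserve the same privacy scrutiny as text typed directly into the chat.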
Cross-Platform Data Flow
When data moves between AI systems or between an AI system and external services:
Golgi → External services: When Golgi sends a Telegram message, pushes code to GitHub, or queries Firebase, data from the AI conversation enters those platforms' ecosystems with their own retention and privacy policies.
External services → Golgi: When Golgi reads emails, checks calendars, or fetches web content, that external data enters the AI conversation context and is transmitted to Anthropic's API.
AI output → AI input: If I copy Claude output from one context into another (e.g., a Claude.ai draft into a Golgi session), the data crosses privacy boundaries between consumer and API terms.
Implication: My data doesn't just exist in one place. It flows through a network of services, each with its own privacy posture. The "How I Work With AI" constitution is the unifying governance layer that applies regardless of where the data sits.
My Rights
Anthropic (API)
Local Data
General Principle
The "How I Work With AI" constitution establishes that I maintain authority over my data. AI is a tool, not an authority. Data informs, it doesn't decide. I can revoke access, delete data, or change providers at any time without losing the governance framework that defines how my data should be handled.
The Constitutional Framework
The "How I Work With AI" constitution (v5, CONSTITUTION.md) is the governing framework for all agentic data use in my ecosystem. Its core data principles:
These principles apply regardless of which AI provider, platform, or tool is involved.
Institutional Data Boundary
No student records, protected educational data, or institutionally governed data enters any personal AI system. This is not a technical limitation — it is a deliberate ethical and professional boundary. Augusta University's data stays in Augusta University's systems. My personal AI ecosystem handles my personal and professional work only.
*This document is part of the privacy policy suite. See also: PERSONAL_DATA_LOCAL.md, PERSONAL_DATA_EXTERNAL.md, PUBLIC_USER_PRIVACY_POLICY.md.*
*Governing document: CONSTITUTION.md ("How I Work With AI," v5)*