Privacy Policy — Agentic AI Use of My Data

Audience: Mark Myers (internal reference)

Purpose: Documents how agentic AI systems handle my data across my full AI ecosystem

Effective Date: March 2, 2026

Governing Document: CONSTITUTION.md ("How I Work With AI," v5)


What This Covers

This document describes how agentic AI systems — not just Local Infrastructure, but all AI tools in my ecosystem — handle my personal data. It covers what data enters these systems, how it's processed and retained, what happens to it after processing, and what rights I have over it.

"Agentic AI" means AI systems that operate with persistent memory, autonomous action capabilities, tool use, and multi-session context. This is meaningfully different from a standard chat interaction and creates different data exposure patterns.


Data Sent to AI APIs

Anthropic (Claude) — Primary Provider

What goes in:

  • All Golgi conversation content (messages, responses, system prompts)
  • Governance files loaded at session start (CONSTITUTION.md, SOUL.md, USER.md, etc.)
  • File contents read during sessions (documents, code, configuration files)
  • Shell command output and tool use results
  • Memory files and session context
Retention:

  • API usage: 7 days, then auto-deleted (as of September 2025)
  • Not used for model training under commercial/API terms
  • Safety-flagged content: up to 2 years (inputs/outputs), 7 years (trust & safety scores)
  • Extended 30-day retention available via opt-in DPA (not currently opted in)
  • Reference: Anthropic Privacy Center — https://privacy.claude.com

Other AI Providers in My Ecosystem

As of March 2026, my primary AI interaction is through Anthropic's Claude API via Local Infrastructure. If additional providers are added (OpenAI, Google, local models, etc.), their data handling should be documented here.

General principle: Any AI provider added to this ecosystem must be evaluated against the same criteria: What data goes in? How long is it retained? Is it used for training? Can I delete it? The "How I Work With AI" constitution applies regardless of provider.


How Conversation History and Context Windows Work

Data in Transit vs. at Rest

In transit: When Golgi processes a request, the conversation context — including system prompts, recent messages, file contents, and tool results — is sent to Anthropic's API over HTTPS. This data exists on Anthropic's servers during processing and for the retention period (7 days for API usage).

At rest (local): Session transcripts, memory files, and governance documents are stored on the Mac Mini. These persist indefinitely until I delete them. They are not encrypted at the application level — protection comes from OS file permissions and full-disk encryption (FileVault).
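As a quick sanity check on those OS-level protections, a short script can flag local files readable by anyone other than the owner. This is a sketch; the workspace path is an illustrative assumption, not my actual layout.

```python
import stat
from pathlib import Path

def world_or_group_readable(path: Path) -> bool:
    """Return True if group or others have any access bits set."""
    mode = path.stat().st_mode
    return bool(mode & (stat.S_IRWXG | stat.S_IRWXO))

def audit_permissions(workspace: Path) -> list[Path]:
    """List files under the workspace that are not owner-only."""
    return [p for p in workspace.rglob("*")
            if p.is_file() and world_or_group_readable(p)]

# Example cleanup: tighten anything flagged to owner-only (0600).
# for p in audit_permissions(Path.home() / "golgi-workspace"):  # hypothetical path
#     p.chmod(0o600)
```

Anything the audit returns is a file FileVault alone won't protect from another local account.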

Context window mechanics: Each API call includes a window of recent conversation. Older messages may be summarized or dropped as the context fills. This means sensitive data from earlier in a session may not persist in later API calls, but it was transmitted during the initial call and is retained per Anthropic's policy.
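The trimming behavior can be sketched roughly as follows. This is a simplification: real systems count model-specific tokens rather than characters, and Golgi's actual loop may differ.

```python
def trim_context(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages whose combined length fits the budget.

    Older messages fall out first, mirroring how a context window fills.
    Note: dropped messages were still transmitted in earlier API calls,
    so trimming does not undo that exposure.
    """
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):  # walk newest-first
        if used + len(msg) > budget:
            break
        kept.append(msg)
        used += len(msg)
    return list(reversed(kept))  # restore chronological order
```

With a budget of 5 characters, `trim_context(["aaaa", "bb", "cc"], 5)` keeps only the two newest messages.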

What This Means

Every piece of data that enters a conversation with Golgi — whether I type it, it's read from a file, or it arrives as a tool result — is transmitted to Anthropic's servers. The data is processed, retained briefly, and deleted. But it existed outside my machine during that window.


What AI Providers Retain, Train On, or Discard

Anthropic (API/Commercial Terms)

| Data Type | Retained? | Used for Training? | Deletion |
| --- | --- | --- | --- |
| API inputs/outputs | 7 days | No | Automatic |
| Safety-flagged content | Up to 2 years | No (retained for safety review) | After retention period |
| Trust & safety scores | Up to 7 years | No (metadata only) | After retention period |
| Account/billing data | Duration of account | No | Upon account closure + legal holds |

Consumer terms (Pro/Max/Free) differ significantly: Under the consumer terms updated in August 2025, Anthropic may retain data for up to 5 years in de-identified form for training if the user does not opt out. This does NOT apply to API usage, but it does apply to any direct Claude.ai conversations outside the API.

Key distinction: Conversations through Golgi (API) have different privacy protections than conversations through Claude.ai (consumer). If I use Claude.ai directly for personal work, that data may be subject to consumer retention policies unless I opt out.


Personal Data That Enters AI Systems

Through my use of Golgi and other AI tools, the following categories of personal data enter AI processing:

  • Communications: Email drafts, Telegram messages, SMS content, professional correspondence
  • Documents: Reports, proposals, academic writing, dissertation content
  • Professional context: Work details, colleague names, organizational information, budget context
  • Personal context: Daily patterns, preferences, health data (if queried), family/pet details
  • Academic work: Dissertation content, research notes, scholarly writing
  • Financial context: Budget discussions, vendor information (no account numbers or credentials)
  • Operational data: Calendar events, task lists, project status, meeting notes
What does NOT enter AI systems:

  • Student records or FERPA-protected data
  • Institutional data governed by Augusta University policies
  • Banking credentials, SSNs, or government ID numbers
  • Passwords or authentication credentials (handled by the system, not the AI)

How Agentic AI Differs from Standard AI Chat

Agentic AI creates data exposure patterns that standard chat does not:

Persistent memory: Golgi remembers across sessions via MEMORY.md and daily notes. This means personal context accumulates over time. A standard chat forgets between sessions — an agent builds a growing profile of its user.

Autonomous action: Golgi can execute shell commands, read files, browse the web, and interact with external services without per-action approval. This means data can flow to external services as part of autonomous operation, not just in response to direct requests.

Tool use: When Golgi uses tools (shell, browser, API calls), the inputs and outputs of those tools enter the conversation context and are transmitted to the API provider. A web search result, a file's contents, a command's output — all become part of the data sent to Anthropic.

Multi-session context: Memory files carry context from previous sessions into new ones. Information shared weeks ago may influence today's responses. This creates a longer data lifecycle than single-session chat.

Cross-service reach: Golgi interacts with Telegram, GitHub, Vercel, Firebase, and potentially other services. Data can move between these services as part of agent operation, creating cross-platform data flows that don't exist in isolated chat.
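One practical consequence of persistent memory is that it is worth auditing. A sketch that scans memory files for strings that look like sensitive data follows; the patterns and file layout are illustrative assumptions, not what Golgi actually uses, and a real audit would need a broader pattern set.

```python
import re
from pathlib import Path

# Illustrative patterns only; a real audit would cover far more categories.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def audit_text(text: str) -> dict[str, list[str]]:
    """Return pattern name -> matches found in the text."""
    hits = {name: pat.findall(text) for name, pat in PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

def audit_memory(workspace: Path) -> dict[str, dict[str, list[str]]]:
    """Scan every Markdown memory file under the workspace."""
    return {
        str(p): hits
        for p in workspace.rglob("*.md")
        if (hits := audit_text(p.read_text(errors="ignore")))
    }
```

Running this over the memory directory periodically shows what kind of profile has accumulated, and where editing or deletion is warranted.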


Cross-Platform Data Flow

When data moves between AI systems or between an AI system and external services:

Golgi → External services: When Golgi sends a Telegram message, pushes code to GitHub, or queries Firebase, data from the AI conversation enters those platforms' ecosystems with their own retention and privacy policies.

External services → Golgi: When Golgi reads emails, checks calendars, or fetches web content, that external data enters the AI conversation context and is transmitted to Anthropic's API.

AI output → AI input: If I copy Claude output from one context into another (e.g., a Claude.ai draft into a Golgi session), the data crosses privacy boundaries between consumer and API terms.

Implication: My data doesn't just exist in one place. It flows through a network of services, each with its own privacy posture. The "How I Work With AI" constitution is the unifying governance layer that applies regardless of where the data sits.


My Rights

Anthropic (API)

  • Deletion: Data auto-deletes after 7 days; no manual deletion needed for routine use
  • Export: Session transcripts stored locally can be exported at any time
  • Audit: Local session logs provide a record of what was sent and received
  • Opt-out of training: Already opted out by virtue of using API terms (training opt-out is default for API)
Local Data

  • Full control: I can delete any file on my machine at any time
  • Git history: As of March 2, 2026, workspace files are version-tracked for audit
  • Memory management: I can review, edit, or delete Golgi's memory files at any time
  • Session pruning: I can delete old session transcripts to limit data accumulation
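The session-pruning step can be sketched as a small script; the transcript directory, the `.jsonl` extension, and the 30-day window are illustrative assumptions, not Golgi's actual storage format.

```python
import time
from pathlib import Path

def prune_sessions(transcript_dir: Path, max_age_days: int = 30) -> list[Path]:
    """Delete transcript files older than the cutoff; return what was removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for p in transcript_dir.glob("*.jsonl"):  # hypothetical transcript format
        if p.stat().st_mtime < cutoff:
            p.unlink()
            removed.append(p)
    return removed
```

Run on a schedule, this keeps local data accumulation bounded to the same order as the retention I'd expect from a provider.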
General Principle

The "How I Work With AI" constitution establishes that I maintain authority over my data. AI is a tool, not an authority. Data informs; it doesn't decide. I can revoke access, delete data, or change providers at any time without losing the governance framework that defines how my data should be handled.


The Constitutional Framework

The "How I Work With AI" constitution (v5, CONSTITUTION.md) is the governing framework for all agentic data use in my ecosystem. Its core data principles:

  • Transparency: I know what data enters AI systems and how it's handled
  • Human authority: I control my data. The AI serves my interests, not its own or its provider's
  • No monetization: My data is not for sale, sharing, or advertising
  • Data minimization: Only what's needed, only as long as needed
  • Honest disclosure: AI involvement in any work product is disclosed
  • Institutional boundary: No student records or institutionally governed data enters personal AI systems

These principles apply regardless of which AI provider, platform, or tool is involved.


Institutional Data Boundary

No student records, protected educational data, or institutionally governed data enters any personal AI system. This is not a technical limitation — it is a deliberate ethical and professional boundary. Augusta University's data stays in Augusta University's systems. My personal AI ecosystem handles my personal and professional work only.


*This document is part of the privacy policy suite. See also: PERSONAL_DATA_LOCAL.md, PERSONAL_DATA_EXTERNAL.md, PUBLIC_USER_PRIVACY_POLICY.md.*

*Governing document: CONSTITUTION.md ("How I Work With AI," v5)*