
Privacy Policy — External Services and My Data

Audience: Mark Myers (internal reference)

Purpose: Documents how third-party services connected to my AI ecosystem handle my data

Effective Date: March 2, 2026

Governing Document: CONSTITUTION.md ("How I Work With AI," v5)


What This Covers

This document tracks every third-party service that Golgi or my broader AI ecosystem connects to, what data each service receives, and what exposure risk each creates. If a service touches my data through an AI-mediated process, it belongs here.


Connected Services

Tier 1: High Integration (Direct Agent Access)

Telegram

What Golgi accesses: Messages sent to/from the bot, group chat messages where the bot is present

What data flows: Full message content (text, images if sent), sender metadata, timestamps

Token scope: Bot API token — can read/send messages in permitted chats, manage bot-specific settings

Retention: Messages persist on Telegram's servers per their policy; bot can access message history within API limits

Risk level: High — primary communication channel. Bot token compromise = impersonation + message history access

Their privacy policy: https://telegram.org/privacy

GitHub

What Golgi accesses: Repositories, code, issues, pull requests, commit history

What data flows: Code content, commit messages, issue/PR text, repository metadata

Token scope: Varies by configuration — potentially repo read/write, issues, PRs

Retention: All data persists on GitHub per their terms; public repos are permanently public

Risk level: Medium — code and project data exposure. Token compromise = unauthorized commits, data exfiltration from private repos

Their privacy policy: https://docs.github.com/en/site-policy/privacy-policies/github-general-privacy-statement

Firebase (Google Cloud)

What Golgi accesses: Firestore database (SMS message queue, contact lists), authentication config

What data flows: SMS message content, phone numbers, contact metadata, delivery status

Token scope: Firebase Admin SDK or service account — Firestore read/write, potentially auth management

Retention: Data persists until explicitly deleted; Firestore has no automatic expiration by default

Risk level: High — contains phone numbers, message content, and contact information. Credential compromise = access to all stored messages and contacts

Their privacy policy: https://firebase.google.com/support/privacy
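Since Firestore has no automatic expiration by default, a TTL policy can cap retention on the SMS queue. A minimal sketch using the `gcloud` CLI — the collection group name (`messages`) and timestamp field (`expireAt`) are assumptions; substitute the actual schema:

```shell
# Sketch: enable a Firestore TTL policy so queued SMS documents expire
# automatically once their timestamp field passes. Collection group
# "messages" and field "expireAt" are placeholders for the real schema.
gcloud firestore fields ttls update expireAt \
  --collection-group=messages \
  --enable-ttl
```

Each document would also need an `expireAt` timestamp set at write time; documents without the field are never expired.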

Vercel

What Golgi accesses: Deployment platform for golgi-sms webhook and helm-app dashboard

What data flows: Source code (deployed functions), environment variables, deployment logs, incoming webhook requests

Token scope: Project-level access for deployments

Retention: Deployment history persists; function logs retained per Vercel's plan limits

Risk level: Medium — webhook endpoints receive inbound SMS data via Twilio. Vercel compromise = access to deployed code and environment secrets

Their privacy policy: https://vercel.com/legal/privacy-policy

Tier 2: Moderate Integration (Periodic or Conditional Access)

Twilio

What Golgi accesses: Inbound SMS messages routed through webhook

What data flows: SMS content, sender phone numbers, carrier metadata, delivery status

Token scope: Account SID + Auth Token for SMS send/receive

Retention: Message logs retained per Twilio's policy (default: available via API for duration of account)

Risk level: Medium-High — handles phone numbers and message content. Token compromise = ability to send SMS from the account, access message logs

Their privacy policy: https://www.twilio.com/en-us/legal/privacy

Google Workspace (Calendar, Gmail, Drive)

What Golgi accesses: Calendar events (read), email content (if configured), Drive files (if configured)

What data flows: Event details, email content, document contents — depends on configured scope

Token scope: OAuth scopes determine access — should be limited to minimum necessary

Retention: Data persists in Google's ecosystem per their terms

Risk level: Medium-High — broad personal and professional data. OAuth token compromise = access to email, calendar, and documents within granted scopes

Their privacy policy: https://policies.google.com/privacy

Tier 3: Planned or Low Integration

CampusESP

Status: Mentioned in workspace context but integration scope TBD

What would flow: Parent/family communication data (must be evaluated carefully for institutional data boundaries)

Risk note: Any CampusESP integration must be evaluated against the institutional data boundary — no student records or institutionally governed data enters personal AI systems

Their privacy policy: Review before any integration

Oura / Apple Health

Status: Potential future integration for health/wellness context

What would flow: Sleep data, activity data, readiness scores, health metrics

Risk level: Low-Medium — personal health data entering AI context. Not institutionally sensitive but personally sensitive

Risk note: Health data entering AI conversations is transmitted to Anthropic's API during processing

Their privacy policies: https://ouraring.com/privacy-policy / https://www.apple.com/legal/privacy/


API Token Scope and Access Levels

| Service | Token Type | Access Level | Rotation Schedule |
| --- | --- | --- | --- |
| Telegram | Bot API Token | Bot messages only | As needed |
| GitHub | Personal Access Token | Repo-dependent | Review quarterly |
| Firebase | Service Account / Admin SDK | Firestore read/write | Review quarterly |
| Vercel | Project Token | Deploy + env vars | As needed |
| Twilio | Account SID + Auth Token | SMS send/receive + logs | Review quarterly |
| Google | OAuth Token | Scoped per grant | Expires/refreshes automatically |
| Anthropic | API Key | Model inference | Review quarterly |

Principle: Tokens should be scoped to minimum necessary access. Broad tokens (admin, full-access) should be replaced with scoped alternatives where the service supports it.
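Part of this check can be automated: GitHub reports a classic personal access token's granted scopes in the `X-OAuth-Scopes` response header of any authenticated API call. A minimal sketch of the comparison logic — the allowed set below is an assumption; adjust it to what Golgi actually needs:

```python
# Sketch of a scope audit for a GitHub classic personal access token.
# GitHub returns the token's granted scopes as a comma-separated list
# in the "X-OAuth-Scopes" response header; the allowed set here is an
# assumption, not the real configuration.

def excess_scopes(scope_header: str, allowed: set[str]) -> set[str]:
    """Return scopes the token holds beyond the allowed set."""
    granted = {s.strip() for s in scope_header.split(",") if s.strip()}
    return granted - allowed

# Example: a token that can also delete repos is over-scoped.
print(excess_scopes("repo, read:org, delete_repo", {"repo", "read:org"}))
# → {'delete_repo'}
```

Anything this returns is a candidate for regeneration with a narrower grant.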


How My Data Might Be Exposed Through Integrations

Telegram bot token compromise: Attacker could read messages, send messages as the bot, and access conversation history. This is the highest-impact single credential because it's the primary communication channel.

Firebase credential compromise: Access to SMS message queue, phone numbers, contact lists. Could read private messages or inject fake messages into the queue.

Chained compromise: If an attacker gains access to the Mac Mini (SSH, physical, or malware), they have access to all credentials stored under ~/.local-infra/. This is the single point of failure — all service access flows from local credential storage.

Context window leakage: Data from one service (e.g., email content) enters the AI context and is transmitted to Anthropic's API. The data now exists in two places: the originating service and Anthropic's 7-day retention window. A separate policy violation or data breach at either endpoint exposes the data.

Webhook exposure: The golgi-sms Twilio webhook is a publicly accessible endpoint. If not properly authenticated, it could be targeted with spoofed requests.
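Twilio signs every webhook request, so spoofed requests can be rejected before the body is touched. The documented scheme is HMAC-SHA1 over the full URL plus the sorted POST parameters, base64-encoded and sent in the `X-Twilio-Signature` header. A self-contained sketch (in production the `twilio` library's `RequestValidator` performs the same check; the URL and token here are placeholders):

```python
# Sketch of Twilio webhook signature validation: HMAC-SHA1 over the
# full request URL followed by each POST parameter's key and value in
# sorted key order, base64-encoded, compared against the
# X-Twilio-Signature header. Auth token and URL are placeholders.
import base64
import hashlib
import hmac

def valid_twilio_signature(auth_token: str, url: str,
                           params: dict[str, str], signature: str) -> bool:
    payload = url + "".join(k + params[k] for k in sorted(params))
    digest = hmac.new(auth_token.encode(), payload.encode(),
                      hashlib.sha1).digest()
    expected = base64.b64encode(digest).decode()
    return hmac.compare_digest(expected, signature)
```

Requests failing this check should be dropped with a 403 before any message content is processed or stored.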


Risk Tiers Summary

| Risk Tier | Services | Primary Concern |
| --- | --- | --- |
| Critical | Telegram, Firebase | Message content + contact data + primary communication channel |
| High | Twilio, Google Workspace | Phone numbers, email content, calendar data |
| Medium | GitHub, Vercel | Code, deployment secrets, project data |
| Low | Oura, Apple Health (if integrated) | Personal health metrics |

Audit and Revocation Process

Regular Audit (Quarterly)

  • Review all active API tokens and their scopes
  • Verify each token is still needed and appropriately scoped
  • Check for any services with broader access than necessary
  • Review Firebase data for accumulated messages that should be pruned
  • Check Vercel deployment logs for unexpected access
  • Review GitHub token permissions against actual usage
Emergency Revocation

If a credential is compromised, or compromise is suspected:

  • Revoke the token immediately at the service provider
  • Generate a new token with the same or narrower scope
  • Update the local configuration (~/.local-infra/ config files or environment variables)
  • Audit for unauthorized access — check service logs for unusual activity during the exposure window
  • Document the incident in memory/daily notes
  • Notify Mark if the agent detects or suspects compromise (per SOUL.md security protocols)
How to Cut Access to Any Service

Each service can be disconnected independently:

  • Telegram: Revoke bot token via @BotFather
  • GitHub: Delete token in GitHub Settings > Developer Settings > Personal Access Tokens
  • Firebase: Disable service account in Google Cloud Console
  • Vercel: Remove project token in Vercel dashboard
  • Twilio: Rotate Auth Token in Twilio Console
  • Google: Revoke OAuth grants in Google Account > Security > Third-party apps
Revoking a token immediately cuts Golgi's access to that service. No data is lost locally — only the live connection is severed.


Institutional Data Boundary Reminder

Before integrating any new external service, verify:

  • Does this service handle student data? → Do not connect to personal AI systems
  • Does this service fall under institutional data governance? → Do not connect without institutional approval
  • Could data from this service inadvertently include protected information? → Evaluate carefully, err on the side of exclusion
The boundary is clear: personal and professional data in the AI ecosystem, institutional and student data in institutional systems. No crossover.


*This document is part of the privacy policy suite. See also: PERSONAL_DATA_LOCAL.md, PERSONAL_DATA_AGENTIC.md, PUBLIC_USER_PRIVACY_POLICY.md.*

*Governing document: CONSTITUTION.md ("How I Work With AI," v5)*