
How I Work With AI: General Guidance

Principles applied to daily practice

This document is a companion to How I Work With AI (v5). Where the main framework describes values, philosophy, and governance, this guidance describes how those principles show up in the day-to-day reality of working with AI. It is intended to be practical without being prescriptive, giving enough structure to maintain consistency while leaving room for the judgment the framework prioritizes.

Governing document: How I Work With AI, v5

Before You Start: The Front-Loading Principle

The single most impactful habit in my AI practice is front-loading context before asking for output. This means providing the what, who, why, tone, constraints, and any relevant source material before the first prompt. When I do this well, the first draft is usually close. When I skip it, I’m looking at two to three extra revision rounds that cost more time than the front-loading would have.

Before starting any AI-assisted task, I ask myself: Does the system have everything it needs to do this well on the first pass? If not, I provide it before I ask for output.

Daily Practice Principles

Principle 1: AI drafts. I decide.

Every piece of AI-assisted output goes through my review before it reaches anyone else. This isn’t a bottleneck. It’s the point. AI handles production. I handle judgment. The review step is where my voice, my context, and my relationships with the audience get applied. Skipping it is never acceptable, regardless of how routine the task feels.

Principle 2: Match the tool to the task.

Not every task needs AI, and not every AI tool is right for every task. Before reaching for a tool, I consider whether the task genuinely benefits from AI assistance or whether I’m using it out of habit. Some work is better done by hand. Some thinking is better done alone. The goal is augmentation at the right moments, not automation of everything.

Principle 3: Iterate, don’t accept.

My best AI-assisted work comes from treating conversations as collaborative drafting processes. The first output is a starting point, not a deliverable. I provide feedback, redirect, push back on things that don’t land, and get specific about what’s working and what isn’t. The quality of my output is directly proportional to the quality of my engagement with the process.

Principle 4: Protect what’s sensitive.

Before entering any information into an AI system, I consider whether it contains protected data, confidential information, or content that shouldn’t leave its institutional context. FERPA-protected student information, personnel details, and sensitive institutional data stay within approved systems. When in doubt, I leave it out.

Principle 5: Disclose when it matters.

I don’t hide my AI use, but I also don’t performatively announce it in contexts where it’s irrelevant. My disclosure practice follows common sense: academic work gets cited, professional contexts get transparency when asked or when it matters for trust, and internal working documents don’t need a disclaimer on every draft. The test is whether someone receiving my work would reasonably want to know that AI was involved.

Principle 6: Maintain the voice.

Every output that carries my name should sound like me. This means actively editing AI-generated content for voice, not just accuracy. It means catching and removing AI tells (the “Additionally” transitions, the em-dash habits, the over-structured prose). It means reading output aloud when something feels off. Voice consistency is not optional. It is the first priority.

Principle 7: Stay honest about quality.

If AI-assisted output isn’t good enough, I say so and either revise it or start over. I don’t send mediocre work because it was efficient to produce. The standard for AI-assisted output is the same as the standard for any output that carries my name: would I be proud of this if someone asked me about it?

Principle 8: Keep learning.

AI capabilities change quickly. My practice should evolve with them. This means periodically reassessing which tools I use, how I use them, and whether my habits still serve me. It also means being honest about growth edges: the exploration-to-implementation gap, the scope creep tendency, and the polishing question are all patterns I continue to monitor.

Context-Specific Guidance

Professional communications

AI is most useful here for first drafts and structural thinking. I provide the audience, the purpose, and the tone. I review for voice, accuracy, and whether the message actually says what I need it to say. Sensitive communications (personnel issues, political situations, crisis response) always get more careful human attention and are never sent without significant personal revision.

Academic and scholarly work

AI supports my research through literature synthesis, concept organization, and drafting assistance. I cite AI use in accordance with APA 7 guidelines and my institution’s academic integrity policy. My scholarly voice, my analysis, and my arguments are my own. AI helps me organize and articulate. It does not think for me.

Teaching and curriculum

AI helps me develop workshop designs, curriculum frameworks, and instructional materials. When I teach students about AI use, I model the same practices I describe in this document: transparency, integrity, and the understanding that AI is a tool that requires ethical judgment to use well.

Student-facing materials

Materials that go to students and families carry particular responsibility. These audiences are navigating significant life transitions and deserve accuracy, warmth, and care. AI drafts are always reviewed with the audience’s experience in mind, not just the content’s correctness.

The Daily Check

At the end of any significant AI-assisted work session, I ask:

Did I front-load enough context, or did I waste time on avoidable revisions? Does the output sound like me? Is it accurate? Would I be comfortable if the full process were visible? Am I using AI to be better at my work, or am I using it to avoid doing my work?

If those answers are honest, the practice stays healthy.

This guidance document is governed by and subordinate to How I Work With AI, v5. In any conflict between this guidance and the main framework, the main framework takes precedence.

See also: Agent Framework for governance of AI agent-specific operations.