The Human–AI Interface: Creating Structured Prompts for Boards and Presidents
By Rick Aman
Week 1 – Setting the Relationship with AI: Role, Level, and Limits
“The problem is not that machines will think like humans, but that they will pursue objectives without regard to human values.” — Stuart Russell, Human Compatible
Over the past several weeks, I have framed the Human–AI Interface around three leadership responsibilities: WHY, WHAT, and HOW. That structure matters because it protects governance clarity in an AI-enabled organization:
WHY belongs entirely to humans. In a post-secondary institution, trustees steward mission, identity, and long-term obligation to community. AI does not define purpose. It does not carry fiduciary responsibility. It does not weigh moral tradeoffs or generational impact. Those responsibilities remain squarely with human leadership.
WHAT is directional. Boards and presidents determine strategic priorities and articulate a preferred future. AI can illuminate patterns and test assumptions, but it does not choose direction.
HOW is execution. This is where AI does its best work. It organizes information, detects drift, and increases visibility across complex systems that no leadership team can consistently hold together.
That framework has guided this series. Now we move from concept to practice.
If boards and presidents are going to use AI responsibly, the first discipline is not technical. It is relational. Before asking a single question, board members should define the relationship between themselves and the tool. In governance, clarity of relationship precedes clarity of insight. Board members should define three areas of AI clarity through nuanced prompts: role, level, and limits.
First: An AI Prompt Defines ROLE
When a board member begins an AI prompt with, “You are an AGENT experienced in working with governing boards in two-year colleges, providing practical and exercisable ideas for governance,” something important happens. AI adjusts its vocabulary. It shifts its level of abstraction. It adopts a governance orientation. It narrows toward sector-specific awareness. Even tone becomes more appropriate to board work. Defining role improves relevance. But the word agent is problematic.
In governance language, an agent acts on behalf of someone with delegated authority. Boards appoint presidents as agents of the institution. An agent has discretion. An agent represents. An agent carries delegated responsibility. When a trustee casually refers to AI as an agent, the language subtly elevates the tool beyond its proper place. Even if the trustee does not intend it, the framing matters.
If AI is treated as an agent, the conversation can begin to sound like this: What should we do? What strategy should we adopt? What actions should we take? That language shifts authority. It blurs the boundary between advisory synthesis and delegated leadership.
If instead a trustee prompts the AI with clarity, “You are a governance-level analytical advisor experienced in working with two-year college boards,” the posture changes. An advisor synthesizes. An advisor clarifies. An advisor surfaces patterns and tensions. An advisor does not act on behalf of the board. An advisor does not decide. That distinction reinforces the Human–AI Interface framework. WHY remains human. AI does not define purpose. WHAT is clarified, not chosen, by AI. HOW is strengthened, not governed, by AI.
Choosing the correct role preserves authority.
Role framing shapes output. AI adjusts tone, vocabulary, and scope based on how it is positioned. But role framing also shapes perception. For trustees, language carries authority. If boards are not careful about how they describe AI, they risk elevating it rhetorically before they elevate it practically. Personality influences tone. Role influences orientation. Discipline, however, is determined by scope and limits.
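For presidents' staff who set these conversations up in software rather than in a chat window, the same role framing can be fixed in place before any trustee question is asked. The sketch below is illustrative only: it assumes the OpenAI Python client, and the model name, wording, and helper name are placeholders rather than recommendations.

```python
# Minimal sketch: pinning the advisor role before any trustee question is asked.
# Assumes the OpenAI Python client (openai >= 1.0); the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ROLE_FRAME = (
    "You are a governance-level analytical advisor experienced in working "
    "with two-year college boards. You synthesize, clarify, and surface "
    "patterns and tensions. You do not act on behalf of the board, and you "
    "do not decide."
)

def ask_as_advisor(question: str) -> str:
    """Send a trustee question with the advisor role fixed as the system message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": ROLE_FRAME},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_as_advisor("What tensions should our board explore in the draft strategic plan?"))
```

Fixing the role in the system message keeps the advisor posture constant across an entire conversation, so individual trustees do not have to restate it with every question.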
Second: An AI Prompt Establishes LEVEL
Even if a board member’s prompt frames AI as an experienced governance advisor, without discipline of governance scope the response can still drift. AI is trained to be helpful. And “helpful” often means offering solutions. It may move quickly into staff directives, tactical adjustments, process improvements, or prescriptive recommendations. That is not a flaw in the technology. It is a misalignment of governance scope.
Trustees govern at a strategic level. They are responsible for mission alignment, long-term direction, risk oversight, and institutional integrity over time. They do not manage workflows. They do not redesign course schedules. They do not restructure advising offices. Those responsibilities belong to the president and executive team.
If AI is not explicitly constrained to operate at the governance level, it will naturally slide into management terrain. It may recommend reallocating resources, changing staffing models, altering program structures, or adjusting operational processes. The content may sound intelligent and reasonable. But it will be misaligned with the trustee role. That is where subtle governance erosion begins, not through bad intent, but through blurred scope.
Governance scope answers a simple question: At what level of responsibility is this conversation happening?
At the governance level, AI should remain focused on:
Pattern recognition
Strategic interpretation
Risk visibility
Alignment assessment
Tension identification
Scenario implications
It should not provide:
Staff directives
Workflow redesign
Tactical enrollment strategies
Marketing plans
Budget reallocations
Implementation steps
The board governs the conditions under which execution occurs. The president governs execution itself. When AI is allowed to operate below the governance horizon, trustees risk slipping into operational oversight. That weakens the board–president partnership. It can unintentionally undermine executive authority. It can create confusion about who is responsible for what.
Discipline of governance scope protects that partnership. As execution accelerates and dashboards multiply, the temptation is to dive deeper, optimize faster, and intervene earlier. But governance is not strengthened by proximity to operations. It is strengthened by clarity of oversight. AI can illuminate execution without the board managing it. It can highlight emerging patterns without prescribing staff action. It can surface risk signals without dictating response.
When trustees are clear about level, AI remains a strategic lens rather than a management substitute. The board stays where it belongs, governing direction and boundaries, while the president remains accountable for execution within them. That clarity does not slow the institution. It stabilizes it.
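The same discipline can be written down once and reused. The helper below is one possible sketch of attaching governance-level scope to any board question before it reaches the tool; the wording simply restates the focus and exclusion lists above, and the function name is illustrative, not a fixed standard.

```python
# Sketch: attaching a governance-level scope statement to any board question.
# The scope wording restates the lists above; a board would adjust it to its own policy.

GOVERNANCE_SCOPE = (
    "Operate at the governance level only. Stay focused on pattern recognition, "
    "strategic interpretation, risk visibility, alignment assessment, tension "
    "identification, and scenario implications. Do not provide staff directives, "
    "workflow redesign, tactical enrollment strategies, marketing plans, budget "
    "reallocations, or implementation steps."
)

def with_governance_scope(question: str) -> str:
    """Return the board's question with the governance-level scope attached."""
    return f"{question}\n\n{GOVERNANCE_SCOPE}"

# Example
print(with_governance_scope(
    "What signals in our public enrollment data deserve board-level attention?"
))
```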
Third: An AI Prompt Defines LIMITS
This is the most important discipline of all.
ROLE establishes posture. LEVEL establishes scope. LIMITS establish authority.
Without clear limits, AI will naturally extend itself. It will connect dots, draw inferences, recommend actions, and sometimes move beyond what the available data justifies. That is not malfunction. That is how generative systems are designed to operate. They complete patterns. They fill gaps. They attempt to be helpful.
In governance, that instinct must be constrained.
AI may not redefine mission. Mission is not an optimization problem. It is a declaration of identity and obligation. That remains firmly in the human domain. AI may not prescribe executive action. Decisions about staffing, structure, budget allocation, and program expansion or contraction belong to the president and executive team. The board governs accountability, not tactics. AI may not infer performance beyond the data provided. Governance discipline requires clarity about what is known and what is conjecture. AI may not replace judgment. It can surface tradeoffs. It cannot weigh values. It cannot assume responsibility for consequences. And finally, AI may not blur authority between board and president.
AI supports inquiry. It does not assume authority. In governance, authority framing matters. The maturity of governance is not shown in how confidently AI speaks, but in how carefully trustees constrain what it is allowed to do. In an AI-enabled institution, the danger is not that machines will take control. The danger is that humans will become casual about boundaries.
Before asking AI for insight, define its role, its level, and its limits.
Putting AI Prompting Into Practice
Consider this generic prompt by a board member:
“Using publicly available information, identify patterns or signals that may indicate emerging opportunity, risk, or strategic drift for our college over the next 12–36 months.”
It sounds responsible. It is forward-looking. But it lacks role clarity, governance scope discipline, and authority boundaries. The output may be polished. It may even sound insightful. But it will likely drift toward generalization or prescription.
Now compare it with a more refined and nuanced prompt:
“Act as a governance-level advisor experienced with two-year college boards. Using only publicly available institutional information and the context provided, identify patterns or signals that may indicate emerging opportunity, risk, or strategic drift for our college over the next 12–36 months. Provide governance-level analysis only. Do not recommend operational actions or tactical changes. Do not infer performance beyond the information supplied. Present findings as structured observations, reinforcing signals, and board-level questions to be explored in partnership with the president.”
Why does this work better? Because it anchors role clearly at the governance level. It defines governance scope explicitly. It prevents operational drift. It constrains speculation. It encourages inquiry rather than prescription. It reinforces the board–president partnership. And it keeps AI in the WHAT and HOW domains without invading the WHY.
The difference is subtle, but governance discipline often lives in subtle distinctions. The first prompt invites plausibility. The refined prompt invites disciplined insight. This is the foundation of the Human–AI Interface in governance.
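For teams that want to keep this discipline consistent from meeting to meeting, the three framings can be assembled into one reusable template. The sketch below is one possible composition, assuming the OpenAI Python client; the wording mirrors the refined prompt above, and the model name and function name are illustrative.

```python
# Sketch: composing ROLE, LEVEL, and LIMITS into one reusable board prompt.
# Assumes the OpenAI Python client (openai >= 1.0); the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

ROLE = (
    "Act as a governance-level advisor experienced with two-year college boards."
)
LEVEL = (
    "Provide governance-level analysis only. Do not recommend operational "
    "actions or tactical changes."
)
LIMITS = (
    "Use only publicly available institutional information and the context "
    "provided. Do not infer performance beyond the information supplied. "
    "Present findings as structured observations, reinforcing signals, and "
    "board-level questions to be explored in partnership with the president."
)

def board_prompt(task: str) -> str:
    """Assemble the role, level, and limits framing around a board-level task."""
    return "\n\n".join([ROLE, task, LEVEL, LIMITS])

task = (
    "Identify patterns or signals that may indicate emerging opportunity, risk, "
    "or strategic drift for our college over the next 12-36 months."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": board_prompt(task)}],
)
print(response.choices[0].message.content)
```

Keeping the role, level, and limits as separate pieces makes it easy for a board to review and amend each boundary on its own without rewriting the whole prompt.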
Next week, we will examine what happens when AI is used without sufficient context and how seeding prompts appropriately moves boards from generic pattern recognition to institution-specific insight.
-----
Aman & Associates works with governing boards and executive teams to clarify purpose, strengthen strategic direction, and use AI as disciplined support for leadership, not a substitute for it. Through board retreats, strategic futuring, and executive advisory work, we help leaders shape direction early rather than react after results are already set.
Rick Aman, PhD
Aman & Associates
rick@rickaman.com | www.rickaman.com/articles