AI Transparency.
Settled Ground, Inc. uses artificial intelligence to support research, analysis, and operational efficiency. All strategy, all decisions, and all relationships with participants are human-led. These public commitments are backed by internal policies and by an AI Use Disclosure, which participants sign at onboarding.

Operating principle.
AI operates within defined internal controls and is used only in ways that strengthen, not replace, human judgment. Every conversation with a participant is human. Every recommendation that reaches a participant is reviewed and approved by a human. Sensitive participant data is handled per a documented internal data-handling policy.
Where AI supports the work, and where humans only.
Where AI supports the work
- Research — verifying funder details, reading public documents at scale, surfacing patterns in market and competitive research.
- Drafting support — first drafts of materials authored by program staff in support of participant-led institution-building. Every draft is reviewed and edited by a human before any output reaches a participant.
- Analysis — pressure-testing recommendations, surfacing considerations the human team might miss, structuring complex strategic questions.
- Operational efficiency — internal documentation, scheduling, and administrative work where AI saves staff time without affecting program outcomes.
Where humans only
- Conversations with participants — every meeting, call, and substantive email is a human conversation. Participants will not speak to an AI assistant claiming to represent Settled Ground, Inc.
- Final recommendations — every recommendation is reviewed and signed by a human. AI informs thinking; AI does not make decisions.
- Confidential participant data — sensitive organizational data, financial information, and personal data about communities served do not enter AI tools that have not been approved under the internal data-handling policy.
- Strategic judgment — judgment about a specific organization, its leadership, its community, and its trajectory remains human.
- The relationship — staff presence with the participant through the work. AI does not replace presence.
The AI Use Disclosure.
Every participating organization signs an AI Use Disclosure at onboarding. The disclosure documents:
- the specific AI tools used by program staff;
- the categories of work for which each tool is used;
- the categories of participant data that are and are not shared with each tool;
- the human review process between AI output and any recommendation; and
- the participant's right to request that specific tools not be used in their engagement.
The Disclosure is reviewed annually and updated when new tools are adopted or existing tools change in capability.
Sample AI Use Disclosure
Drafted by counsel before launch.
Internal policies.
Public commitments are operationalized through internal policies.
- Internal AI Tool Approval Register — the list of approved tools, the categories of data each tool may handle, and the review cadence. Updated quarterly.
- Internal Data-Handling Policy — procedures by which participant data is segmented, anonymized where appropriate, and prevented from entering tools not approved for the data category.
- Internal Human Review Standard — the documented review process between AI output and any recommendation that reaches a participant.
Participants and funders may request the current version of any internal policy in writing.