Not because the relationships weren't happening. Some of them clearly were. It was because no one had defined what a successful mentoring relationship looked like in this context, no one was measuring whether the conversations were actually happening, and the "program" had no connection to the outcomes the school genuinely cared about — attendance, belonging, and making it to graduation.
What happened next wasn't a rebuild from scratch. It was a tightening — taking what existed, stripping out the ambiguity, and adding just enough structure to make the outcomes visible.
Defining what success actually meant
The first thing the team did was stop calling it a buddy program. That framing implied casual friendship — which was fine, but it wasn't a measurable intervention. The new framing was explicit: this was a structured peer mentoring program designed to improve ninth-grade retention and reduce the sense of isolation that drives early dropout.
With that framing in place, the team could name the outcomes they were tracking: attendance rate for mentored ninth graders vs. a comparison group, belonging survey scores at 30 and 90 days, and on-track status for credits at the semester mark. None of those metrics required new software. They were already being collected. What changed was that someone was now responsible for looking at them.
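To make that concrete, here is a rough sketch of what "someone looking at the data" can amount to in practice: a few lines run against an export the school already produces. The file name, column names, and the 0.75 on-track threshold are placeholders for illustration, not the school's actual schema.

```python
# A minimal pass over data the school already collects. Nothing here is new
# instrumentation; it just puts the three outcome metrics side by side.
import pandas as pd

# Placeholder file and columns: student_id, mentored (True/False),
# days_present, days_enrolled, belonging_30d, belonging_90d,
# credits_earned, credits_attempted
students = pd.read_csv("ninth_graders.csv")

students["attendance_rate"] = students["days_present"] / students["days_enrolled"]
# "On track" here is an assumed threshold, not an official definition.
students["on_track"] = (
    students["credits_earned"] / students["credits_attempted"]
) >= 0.75

summary = students.groupby("mentored").agg(
    attendance_rate=("attendance_rate", "mean"),
    belonging_30d=("belonging_30d", "mean"),
    belonging_90d=("belonging_90d", "mean"),
    on_track_rate=("on_track", "mean"),
)
print(summary)  # one row for the mentored group, one for the comparison group
```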
Renaming the program from "buddy system" to "structured peer mentoring" wasn't cosmetic. It changed what advisors expected of it, what training upperclassmen received, and — critically — what the principal was willing to put on the schedule. Language shapes investment.
The 11-component framework as scaffolding
One of the biggest practical changes was giving peer mentors an actual conversation structure. Previously, upperclassmen were told to "just check in" — which sounds simple but is genuinely uncomfortable for a sixteen-year-old who doesn't know what they're supposed to ask or how to handle what they hear.
The team introduced a lightweight session guide — not a script, but a set of topics to move through across the school year. Each conversation had a loose agenda: a check-in on that week's attendance, one academic challenge, one thing going well, and a specific next step. Fifteen minutes. Written down. Reviewed by an advisor monthly.
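If the written record lives in a form export or a spreadsheet, it does not need to be elaborate. Below is one hypothetical shape for a logged session, with fields mirroring the four agenda items, plus the kind of monthly pass an advisor could run to catch mentees whose sessions have stopped. The field names and the two-missed-sessions flag are illustrative assumptions, not the school's actual review criteria.

```python
# Hypothetical session record mirroring the four-part agenda, plus a simple
# monthly check an advisor could run. Field names and the two-missed-sessions
# threshold are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class SessionLog:
    mentee_id: str
    week: int                 # week of the school year
    attendance_checkin: str   # how attendance went this week
    academic_challenge: str   # one academic challenge
    going_well: str           # one thing going well
    next_step: str            # the specific next step agreed on

def flag_for_advisor(logs: list[SessionLog], roster: list[str],
                     weeks_elapsed: int) -> list[str]:
    """Return mentees on the roster with two or more missing weekly logs."""
    weeks_logged: dict[str, set[int]] = {mentee: set() for mentee in roster}
    for log in logs:
        weeks_logged.setdefault(log.mentee_id, set()).add(log.week)
    return [mentee for mentee, weeks in weeks_logged.items()
            if weeks_elapsed - len(weeks) >= 2]
```

A mentee who has gone quiet shows up in that list before anyone has to notice by accident, which is the point of reviewing monthly instead of trusting that the meetings happen.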
The difference was immediate. Mentors stopped saying "I don't know what to talk about." Mentees stopped feeling like the meetings were pointless. And advisors had something to review instead of just trusting that the meetings were happening.
What the data showed at semester one
By the end of the first semester under the new model, the attendance gap between mentored ninth graders and non-mentored peers had narrowed measurably. Belonging scores — collected via a five-question survey the school already administered — were notably higher in the mentored group. The on-track credit rate was 11 percentage points higher.
None of those numbers would survive peer review. But they were real, they were the school's own data, and they were enough for the principal to protect the time on the schedule rather than cede it to test prep.
Training the peer mentors, not just recruiting them
The previous model recruited upperclassmen based on GPA and teacher recommendations, gave them a one-hour orientation in August, and then left them to it. The new model added three 45-minute training sessions across the year — one before launch, one at the six-week mark, and one at semester break.
Those sessions covered active listening basics, how to recognize when a mentee needs to be connected to a counselor rather than handled peer-to-peer, and how to re-engage a mentee who has gone quiet. They also served as a community of practice: mentors shared what was and wasn't working, which surfaced problems the advisor team didn't know existed.
Upperclassmen who went through the training reported feeling significantly more confident and, notably, more invested in the program's outcomes. When students feel prepared rather than thrown in, they stay.
Scaling it across six campuses
Year two brought the question every network-level leader eventually faces: how do you maintain quality when you're running this across six buildings with different principals, different schedules, and different advisor cultures? The answer wasn't uniform implementation — it was a shared framework with local flexibility.
Each campus kept the same outcome metrics, the same session structure, and the same training cadence. Everything else — how pairs were formed, when they met, what recognition looked like — was up to each campus. That flexibility made buy-in possible. Forcing a single rigid model across six cultures would have produced six half-hearted versions of the same thing.
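One way to picture the split is a sketch of which pieces the network fixes and which each campus decides. The specific keys and values below are hypothetical stand-ins; the shape is what mattered.

```python
# Illustrative only: the network-fixed core versus the locally decided rest.
NETWORK_FIXED = {
    "outcome_metrics": ["attendance_rate", "belonging_30d",
                        "belonging_90d", "on_track_credits"],
    "session_agenda": ["attendance_checkin", "academic_challenge",
                       "going_well", "next_step"],
    "training_cadence": ["pre-launch", "week 6", "semester break"],
}

campus_overrides = {
    "pairing_method": "advisor-matched",     # another campus: interest survey
    "meeting_slot": "Tuesday advisory",      # another campus: lunch block
    "recognition": "end-of-semester lunch",  # another campus: assembly shout-out
}
```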
What to take from this
- Renaming a casual peer program as a structured intervention changes what funding, scheduling, and training it receives — the language isn't just optics.
- You don't need new data to prove program value. You need someone assigned to look at the data you already have.
- Conversation structure reduces mentor anxiety and dramatically improves session quality — especially for younger peer mentors.
- Training cadence matters more than training depth. Three spaced sessions beat one intensive orientation.
- Network-scale programs survive on shared metrics and local flexibility — not top-down uniformity.