How to Spot Real AI in a Conference Full of Buzzwords

If you're heading to NACAC next month, prepare to hear the word "AI" a lot. It will show up in product demos, slide decks, signage, and sales conversations. It may be listed as a feature, a differentiator, or a strategic advantage. But in most cases, it won’t be clear what it actually means.


That's not a criticism. It’s just the state of the market. Artificial intelligence is being positioned as a solution to almost every enrollment problem. And to be fair, in some cases, it really is. The challenge is separating what’s useful from what’s simply branded.


This past week, we hosted a video podcast exploring how institutions can think more clearly about AI in enrollment. The conversation touched on some of the real advances being made—where machine learning is showing up in daily workflows, and how it's helping teams make better decisions without replacing the human work of relationship-building.


There was no attempt to forecast five years out or speculate about the future of recruitment. Instead, we focused on what's already working inside enrollment offices today: where machine learning is quietly powering prioritization, identifying risk, and giving staff more clarity in their daily work.


If you missed the session, you can still watch the recording here. But whether or not you attend, here’s a practical takeaway for fall: you need a way to evaluate AI claims quickly and meaningfully. So below is a short list of questions to bring with you into vendor conversations. These aren’t meant to be confrontational. They’re meant to surface what’s behind the label.


1. Is the system updated daily—or is it based on static reports?

The best AI systems adjust constantly as student behavior changes. If a model is only refreshed once a cycle, it won’t be helpful in real time.


2. Does it prioritize action—or just surface analysis?

AI that shows you what happened is helpful. But AI that tells you who to focus on next is what drives outcomes. Insight without direction still leaves the counselor guessing.


3. Is it enrollment-specific—or adapted from another use case?

There’s a big difference between technology that was built for admissions teams and technology that was repurposed from other industries. Enrollment-specific AI understands the timing, urgency, and nuance of the student decision cycle.


4. Can a counselor use it without a data science background?

If a tool requires a specialist to interpret the results, it’s unlikely to scale across your team. A good system integrates into the counselor’s workflow and gives clear, daily direction.


5. Does it help manage the middle of the funnel?

Most AI products still live at the top of the funnel (lead generation) or the bottom (yield nudging). Very few are solving the execution gap in the middle, where interest gets lost and melt risk begins. That’s the space where behavior-driven AI is proving most useful—and where most teams need the help.


We covered many of these principles in the video podcast, though the conversation wasn’t limited to a single framework. It touched on governance. On how to talk about AI with leadership. On what responsible implementation looks like in practice. And on what happens when enrollment teams use data to focus—not just to report.


If there was one theme, it’s that AI is not an abstraction anymore. It’s not just theory or hype. It’s being used now, in admissions offices you know, by counselors who are seeing the benefit in their daily work. But not all systems are equal. And not every AI label leads to better outcomes.


The institutions that gain the most this fall will be the ones asking the right questions. Not just about what a tool is, but what it helps them do.


Missed the conversation? Watch the August 26 video podcast here: crowdcast.io/c/vpjohn
