Most support teams think they're AI-fluent.
Zapier's new rubric says otherwise.
They just published an AI fluency framework for support, and it's the clearest bar I've seen for measuring where your team actually stands.
Four levels:
Level 1, Unacceptable: "AI is a slightly faster Google, nothing more."
Level 2, Capable: "I use AI to operate at a meaningfully higher level."
Level 3, Adoptive: "I orchestrate AI and build systems that elevate how I work."
Level 4, Transformative: "I re-engineer how work happens."
That last one is the bar. Not just faster work, but different work: whole categories of tasks either disappear or run without human involvement.
Most teams I talk to sit somewhere between Unacceptable and Capable: using AI for drafts and lookups, but not changing how the work actually gets done.
And here's the part managers need to hear: a leader who is personally AI-fluent, but whose team still works the old way, doesn't meet Zapier's bar. You have to show you've led your team there too.
This is exactly why I just opened a new role focused on internal AI enablement. Not customer-facing AI. Internal. Helping my team level up, build workflows, and move from Capable to Transformative.
More on that this week.
Where does your team fall on this rubric?