AI Readiness Self-Assessment
A practical guide to understanding where you are — and what to do next.
This assessment is for you. It's not a scorecard for your institution or a test of what your university is "ready" for. It's a personal snapshot to help you understand your current AI literacy and identify concrete next steps.
Rate each statement honestly based on your current daily practice — not your ideal workflow. There are no wrong answers.
Section 1: Delegation & Task Proficiency
"Can I Hand This Off?"
This section measures whether you can effectively delegate work to AI — scoping tasks, providing context, and evaluating output like a good manager would.
I use AI to extract structured information from funding announcements (deadlines, eligibility, budget caps, review criteria) rather than reading them manually cover to cover.
I use AI to draft or refine professional communications (emails to faculty, opportunity summaries, policy explanations) and then edit the output with my own judgment before sending.
When I ask AI to do work, I provide it with clear context — the task, the audience, the constraints, and what "done well" looks like — rather than just asking a question.
I can look at AI-generated output and quickly identify what's right, what's wrong, and what's missing — the same way I'd review work from a new hire.
I use AI for more than one type of task in my regular work (e.g., summarization AND analysis AND drafting AND data formatting — not just one thing).
Section 2: Workflow Thinking
"Am I Rebuilding or Just Bolting On?"
This section measures whether you're moving beyond "paste and pray" — using AI to rethink how work gets done, not just speed up existing processes.
I have at least one recurring workflow where AI is a built-in step (not an occasional experiment — a regular part of how I do the work).
I've taken a task or workflow and redesigned it from scratch with AI as a partner — rather than just using AI to speed up the old process.
I can look at a problem and roughly identify what kind of AI solution it needs: a quick chat interaction, a reusable prompt or tool, a structured data approach, or something custom-built.
I use AI to do things I couldn't do at all before (not just things I could do but slower) — like analyzing data in new ways, building tools, or exploring questions I never had time for.
I save or document effective approaches (prompts, workflows, templates) so I can reuse and improve them rather than starting from scratch each time.
Section 3: Adaptation & Mindset
"Am I Building the Muscle?"
This section measures the traits and habits that determine whether you'll keep up as the tools keep changing — because they will.
If an AI tool gives me a bad answer, I adjust my approach (rephrasing, providing more context, trying a different tool) rather than giving up or concluding "AI doesn't work."
I can clearly identify where AI is safe to use in my work and where it isn't — and I can explain why to a colleague.
I have updated my understanding of what AI can do at least once in the last 3 months — by trying new tools, reading about new capabilities, or experimenting with new approaches.
I can honestly assess which parts of my work require my judgment and which parts an AI could handle as well or better — without feeling threatened by the answer.
I feel motivated to meaningfully deepen my AI practice in the next 90 days — not just try it once, but build it into how I work on an ongoing basis.