Indian campuses are racing to adopt AI, but teachers and students are split between AI-ready and AI-shy. This article shows how uneven AI literacy is creating a new digital divide and proposes a simple ladder of skills and quick-win interventions that any university can deploy in the next academic year.
SwissCognitive Guest Blogger: Dr. Hemachandran Kannan, Director, AI Research Centre & Vice Dean, School of Business, Woxsen University and Dr. Raul V. Rodriguez, Vice President, Woxsen University – “AI-Ready vs AI Shy: The New Divide”
On paper, both young lecturers look the same.
They are in their early thirties, teach second-year BBA students, and work at busy private universities that proudly call themselves “AI-ready”. Their students all have smartphones, most know how to open a chatbot, and nearly everyone has tried at least one generative AI tool.
But their nights look very different.
One of them spends her evening with her students’ ideas. She has built an assignment where they are allowed to use an AI tool for brainstorming and first drafts, provided they clearly mark which parts came from the model and then explain what the model got wrong. In class, they laugh at some absurd answers, debate whether a confident output is actually true, and talk about why hallucinations happen.
The other spends her evening alone, pasting paragraphs from essays into AI detectors. No one has trained her on AI. She is not sure what her university’s AI policy really says. She only knows that plagiarism must be stopped and that she will be blamed if “AI cheating” spreads. To stay safe, she bans AI in her course and quietly hopes students obey.
Both care deeply about learning. Only one is being supported to teach in an AI-present world.
The new AI divide on Indian campuses
Across India, AI has arrived in higher education faster than most of us expected. A recent EY–FICCI–P survey shows that over half of Indian higher education institutions already have some form of AI policy, and more than 80% see widespread student use of generative AI tools. World Bank and other reports underscore how quickly AI is reshaping teaching, assessment and student support.
At the same time, a global faculty survey finds that only a small minority of educators consider themselves “advanced” in AI, and even fewer say their institutions give them enough AI literacy support. On many Indian campuses, this gap is visible in daily practice:
- AI‑ready campuses, where staff have at least basic AI literacy, students get guidance, and AI use is discussed openly in classrooms, policies and governance forums.
- AI‑shy campuses, where faculty avoid or punish AI mainly out of fear or confusion, while students learn in the shadows—from friends, influencers and public models, without guardrails.
India’s national story is about becoming an AI and digital skills powerhouse. Letting this new divide grow between AI-ready and AI-shy institutions would undercut that ambition at the campus level.
What AI literacy really means for a BBA student
AI literacy can sound like a big, abstract term from a UNESCO report. In reality, it can be boiled down to a simple test.
For students, ask: can your average second-year BBA student:
- Explain, in their own words, what an AI “hallucination” is and why it matters?
- Recognise when not to trust a model’s answer without checking it somewhere else?
- State clearly, in an assignment, where they used AI, for what purpose, and what they did themselves?
For faculty, ask: can your average lecturer:
- Read the university’s AI policy and understand what it means for their course this semester?
- Design at least one assignment that assumes students will use AI—and turns that into a learning moment instead of a threat?
- Talk to students about bias, data privacy and overreliance in the tools they are already using?
If the honest answer on your campus is “not really” to most of these, then AI literacy is not yet where it needs to be, regardless of how many AI tools have been procured.
A simple AI literacy ladder you can adopt now
Universities do not need a perfect national blueprint to begin. They can start with a lightweight AI literacy ladder for students and staff that fits into the next one or two academic years.
For students, three rungs:
Level 1 – Understand
Students know what generative AI is, can name common tools, and understand basics like prompts, hallucinations, and bias.
Level 2 – Use with care
Students can use AI for specific tasks such as summaries, idea generation and language support, while checking outputs and clearly disclosing where and how they used the tool.
Level 3 – Create and critique
Students can design AI‑supported workflows for projects, explain limitations, and reflect on ethical and professional implications in their field (business, law, engineering, design, etc.).
For faculty, three parallel rungs:
Level A – Aware
Faculty understand key AI terms, know their institution’s AI policy, and can recognise typical student use cases and risks in their discipline.
Level B – Integrate
Faculty have redesigned at least one assignment or activity to assume AI is present, using it as a teaching tool rather than pretending it doesn’t exist.
Level C – Lead
Faculty champions co-run workshops, support peers, and represent their departments in AI governance discussions, connecting classroom realities to institutional strategy.
The aim is simple: no graduate should leave campus AI-illiterate, and no faculty member should be left alone to fight AI in the dark.
Three formats Indian HEIs can roll out quickly
Most campuses are already stretched. The question is how to build this ladder without overwhelming people. Three practical formats can move the needle within one academic cycle.
Small, credit-bearing AI literacy modules for students
Offer 1–2 credit AI literacy modules that any student can take in the second year, regardless of programme. These should be hands-on: students experiment with tools, see real hallucinations, analyse biased outputs, practise disclosure, and learn how to combine AI with credible sources. This is where the “hallucination” and “when not to trust a model” questions are answered in practice, not just in theory.
Focused, discipline-specific FDPs for faculty
Move beyond generic “Future of AI in Education” webinars. Instead, run short Faculty Development Programmes where teachers from the same discipline redesign one assessment together: a case study in management, a lab report in engineering, a moot problem in law, a studio brief in design, each with explicit AI assumptions. Recognise participation in appraisal or teaching excellence criteria so that this work is seen as core, not optional.
Peer labs and student–faculty AI clinics
Create informal “AI labs” on campus: regular sessions where interested faculty and students explore tools, share classroom examples, and discuss policy and ethics. These can be anchored by the AI Research Centre, teaching–learning centres, or student clubs. The goal is to lower the temperature: help AI-shy teachers see how colleagues are handling AI, and give AI-curious students a safe space to ask questions.
In 2025, Indian higher education showed that it could move quickly on AI adoption and policy drafts. In 2026, the harder task is making sure that speed does not leave people behind, especially the teachers who hold the classroom, and the students whose careers will depend on how wisely they can use, question, and sometimes say “no” to AI.
The choice is not between being AI-positive or AI-negative. It is between a future where only a few campuses are truly AI-ready, and one where every graduate, from every institution, has the minimum AI literacy needed to stand confidently in an AI-shaped world.
About the Authors:
Dr. Raul Villamarin Rodriguez is the Vice President of Woxsen University. He is an Adjunct Professor at Universidad del Externado, Colombia, a member of the International Advisory Board at IBS Ranepa, Russian Federation, and a member of the IAB, University of Pécs Faculty of Business and Economics. He is also a member of the Advisory Board at PUCPR, Brazil, Johannesburg Business School, SA, and Milpark Business School, South Africa, along with PetThinQ Inc, Upmore Global and SpaceBasic, Inc. His specific areas of expertise and interest are Machine Learning, Deep Learning, Natural Language Processing, Computer Vision, Robotic Process Automation, Multi-agent Systems, Knowledge Engineering, and Quantum Artificial Intelligence.
Dr. Hemachandran Kannan is the Director of the AI Research Centre and Professor at Woxsen University. A passionate teacher with 15 years of teaching experience and 5 years of research experience, he is an educational professional with a scientific bent of mind, highly skilled in AI and Business Analytics. He has served as a resource person at various national and international scientific conferences and has lectured widely on topics related to Artificial Intelligence. He has rich working experience in Natural Language Processing, Computer Vision, building video recommendation systems, building chatbots for HR policies and the education sector, automating interview processes, and autonomous robots.