Interview

“Critical Thinking, Creativity, and Communication Will Become Even More Valuable in an AI-Driven Workplace”

Rajeev Ranjan

As artificial intelligence rapidly reshapes industries, the future of work is becoming increasingly tied to continuous learning, adaptability, and human-centric innovation. In this exclusive interview, Rajeev Ranjan, Editor, Digital Terminal, speaks with Vinay Kumar Swamy, Country Head, Pearson India, about how organizations must rethink learning in the AI era, the growing importance of critical human skills, emerging AI-driven competencies, and the collaborative efforts needed to build a future-ready workforce in India.

Rajeev: With the acceleration of AI-led disruption, how should organizations rethink learning as a continuous, embedded part of work rather than a one-time investment?

Vinay: Look, the biggest shift leaders need to make is to stop treating learning like an annual program and start treating it as part of the operating system of the business. What Pearson’s AI Readiness report calls “pace friction” is real: work is changing faster than our training cycles. In India, where digital adoption is broad and roles are evolving quickly, that gap shows up fast in productivity and quality.

So, I’d focus on a few practical shifts. Make learning task-based: identify where AI can remove friction in real-world workflows, and train people on the judgment moments—when to verify, when to override, and who stays accountable. Bring learning into the flow of work. Our “Mind the Learning Gap” report is clear: if you deploy tools without investing in human learning at the same time, adoption stalls or becomes risky. And measure capability, not just course completion, so you can link upskilling to business outcomes and responsible use.

If we do this well, AI doesn’t just automate work—it upgrades roles, improves decision-making, and keeps people at the center of outcomes.

Rajeev: Personalization is becoming central to learning. How is AI enabling more adaptive, learner-centric experiences at scale?

Vinay: What AI enables, at scale, is structured practice: a learner can attempt a task, get immediate feedback, and try again—almost like having a coach on demand. That matters because employers are telling us the missing piece is hands-on application. And where institutions don’t provide that structure, students build “shadow” habits. The same AI Readiness report shows learners often rely on tools they’ve found themselves—49% use independent tools for writing compared to 20% using university-provided tools—which raises real questions about quality, integrity, and data privacy if we don’t guide responsible use.

AI can personalise learning at scale by adapting practice to each learner’s level, language, and goals. It can recommend the next best activity, provide instant feedback, and adjust difficulty based on performance—so learners spend time where they need it most.

Rajeev: How do you see the balance between human skills (critical thinking, creativity) and technical skills shifting in the AI era?

Vinay: If you step back, the balance is shifting in a very clear way: technical fluency is becoming baseline, and human judgment is becoming the differentiator. The AI Readiness report puts numbers on what employers value most in graduates: communication and collaboration (50%) and adaptability (45%) come out ahead of pure technical depth, alongside the ability to combine human judgment with AI. And there’s a sharper warning in the data too—58% of employers rate graduates’ ability to critically verify AI outputs as their weakest competency. That tells you exactly where the premium is moving.

What that means in the real world is: yes, people need enough AI and data fluency to be productive—that’s becoming baseline. But the real step up is evaluative judgment: can you question an output, spot errors or bias, and make accountability explicit? And then the durable human skills—critical thinking, creativity, collaboration, communication—become even more valuable, because they’re what help you apply AI well in messy, ambiguous situations where there isn’t a single right answer.

From a leadership standpoint, the implication is straightforward: we need talent models that reward both—AI fluency and the human capability to lead intelligent systems responsibly.

Rajeev: What are the biggest barriers preventing large-scale upskilling in India, and how can they be addressed collaboratively?

Vinay: In India, the constraint isn’t ambition—it’s execution at scale. The big barriers are very practical: not enough hands-on learning tied to real job workflows, a persistent gap between what’s taught and what’s used at work, and limited time and incentives for working adults to keep reskilling. Until we fix these three, upskilling will stay fragmented and slower than the pace of change.

The solution has to be collaborative and practical. We should co-design learning with employers—so curricula, tools, and assessments reflect real workflows. We should embed learning into the job—short, contextual pathways plus manager coaching—because that’s how you close the learning gap between technology and people readiness. And we need trusted credentials and skill signals that are portable, so learners can prove capability in a fast-moving market.

Rajeev: What new skill categories are emerging because of AI, and how should learners prioritise them?

Vinay: AI is creating what I’d call “compound skill” requirements—people need to create value with these tools, manage the risks, and keep adapting as the tools change. The mistake learners make is chasing every new platform. What lasts is the underlying capability, and the AI Readiness framework is helpful because it’s very clear about the skill categories that matter: Functional AI Proficiency, Strategic Intelligence, Ethical Stewardship, and Critical Human Skills.

When learners ask me what to prioritise, I keep it simple. Start with Functional AI Proficiency—can you use AI tools for real tasks like research, drafting and analysis, and can you prompt or instruct them well enough to get reliable outputs? Then build Strategic Intelligence—do you know where AI genuinely adds value in a workflow, and where it creates new risk or doesn’t belong at all? Alongside that, treat Ethical Stewardship as non-negotiable: verifying outputs for accuracy, understanding bias and limitations, and protecting privacy. And finally, don’t underestimate Critical Human Skills—adaptability, communication, collaboration and judgment—because accountability can’t be automated.

And finally, I always tell learners: build a portfolio of applied work. Credentials matter, but what really stands out is evidence that you used AI responsibly to solve a real problem—because that judgment travels with you even as tools evolve.
