The celebrations all around us have a way of lifting our spirits. Sometimes, we are the ones radiating joy — energizing those around us with our enthusiasm. At other times, we draw upon the collective warmth and light from others to rekindle our own.
A longer festive break, like these four days, gently pulls us into that rhythm — where work pauses, routines soften, and celebration finds space. Amid the laughter, lights, and sweets, there’s also a quieter side to this time — a space for reflection.
Many thoughts pass through our minds in such moments. Some stay; some we brush aside.
Often, the ones we ignore are the most meaningful:
Why do we truly celebrate this festival?
Why do we light lamps, clean our homes, wake up early, or prepare those traditional snacks year after year?
It’s easy to say “because that’s how it’s done,” but perhaps the deeper question is — what does it do to us?
Each ritual, each tradition, is not just a cultural echo — it’s a mirror. A chance to ask: What did I take away from this Diwali?
Did it renew my resolve?
Did it recharge my spirit?
Too often, when the festivities fade, we slip back into routine and think, “I should have done more… enjoyed more… achieved more.” But if we pause, we’ll realize we did achieve something — maybe not in tangible milestones, but in moments of connection, laughter, calm, or gratitude.
Even the fatigue — the running around, the endless cleaning, the cooking — that too was part of the celebration. The effort itself was the joy. The work itself was the worship.
Because Diwali, at its heart, isn’t just about lighting lamps outside. It’s about keeping that inner light — of effort, purpose, and self-renewal — alive.
May your reflections this Diwali light the way for the year ahead.
Let me take you on a journey—a journey of disruption, fear, and reinvention.
Picture this: a quiet village in 15th-century Europe. A man walks into a town square with a strange machine—one that presses ink onto paper, replicating pages in minutes instead of months. The town elders watch with suspicion. Monks who have spent their lives copying texts by hand frown.
The printing press has arrived. And it is terrifying.
Fast forward a few hundred years. The hum of machines fills the air. Coal smoke rises as the steam engine roars to life. Artisans who once shaped goods with care and pride now face assembly lines. Workers revolt. Families migrate to cities. A new world, built by machines, is being forged.

Jump again. A young woman in the 1980s sits in front of a glowing screen. Her father, a typewriter repairman, watches with unease. She’s typing on something called a “personal computer.” It’s fast, quiet, and unforgiving. No ink. No ribbon. No familiar sounds.
The world changes once more.
And today? The disruption is invisible. It doesn’t roar. It whispers. It suggests. It completes your sentences. It paints your pictures. It diagnoses your illnesses.
It’s AI. Quiet. Unseen. And unlike anything we’ve faced before.
The Latest Disruption: Invisible but Unstoppable
Artificial Intelligence is not coming. It’s here. It’s reshaping how we work, learn, communicate, and think. But like every other wave of disruption before it, AI has brought a mix of awe and anxiety.
It’s writing code.
It’s passing legal bar exams.
It’s designing logos and editing movies.
It’s giving investment advice and helping farmers decide when to sow.
But behind every task it automates, a question lingers: What happens to us?
The Ghosts of Disruptions Past
History isn’t just a subject in school. It’s a survival manual. And it tells us one thing: we’ve been here before.
When Gutenberg’s press started churning out books, elites feared it would spread heresy and destabilize society. Elizabeth Eisenstein captured this fear in her work on the printing press, showing how the same technology also helped birth the Renaissance.
When factories rose, the Luddites smashed machines—not because they hated progress, but because they feared irrelevance.
When computers arrived, people feared that human intelligence would become obsolete. Sherry Turkle and Nicholas Carr wrote about how these machines didn’t just change work—they changed our very minds.
When the internet connected us all, it also fragmented our attention. Cal Newport warned of digital overwhelm. Tristan Harris spoke of attention hijacking.
Each time, fear loomed — fear of being replaced, rewired, rendered irrelevant. But if society had succumbed, we wouldn’t have moved forward.
The Elephant and the Rider
Let’s imagine AI as an elephant: powerful, unpredictable, magnificent. You can’t stop it. You can’t push it. But you can ride it.
You just need to climb on, learn how to steer, and know when to pull back. That’s our job now.
We must:
Learn how AI works.
Set the rules for how it should behave.
Choose when and where to use it.
This isn’t a sci-fi future. It’s a very human choice we make right now.
Meet the Riders: Stories from Today
A farmer in Punjab uses a weather-predictive AI app to decide when to water his fields. His yield doubles.
A senior citizen in Bangalore sets reminders with a voice assistant to take her medication. Her independence grows.
A homemaker in Manila uses ChatGPT to help her child with homework. Her confidence soars.
A 19-year-old artist in Berlin collaborates with an AI to create digital murals. Her work goes viral.
In Kochi, a grandmother asks Alexa to play her favourite devotional song, and sets her blood pressure reminders without touching a button.
These aren’t sci-fi dreams. They’re stories unfolding every day.
So What Should You Do?
No matter who you are, there’s a way to start riding.
Tech Professionals
Learn prompt engineering and AI safety.
Contribute to open-source ethics-driven projects.
Use AI to build, not just replace.
Non-Tech Professionals
Use AI for reports, presentations, and analysis.
Automate the boring. Focus on the meaningful.
Take a short course. Understand the basics.
Students
Let AI help you learn—but don’t let it do the thinking for you.
Learn how algorithms work.
Start building small projects.
Job Seekers
Use AI tools to polish resumes.
Prepare with AI-driven mock interviews.
Explore new roles in AI-related fields.
Senior Citizens
Use voice assistants for reminders and news.
Try AI chatbots for companionship and curiosity.
Join local tech literacy sessions.
Homemakers
Use AI for budgeting, shopping, and planning.
Start an AI-supported blog, store, or craft shop.
Learn through AI-curated video lessons.
Farmers
Apply AI to optimize irrigation and fertilizer.
Use image-based tools to detect crop diseases.
Collaborate through AI-powered cooperatives.
Artists
Use tools like Adobe Firefly or DALL-E to generate new concepts.
Experiment with AI music and video editors.
Protect and define your creative rights in the AI era.
The Mind That Changes
Alvin Toffler warned of “future shock”—when too much change overwhelms us. But he also gave us the antidote: learn, unlearn, and relearn.
Daniel Kahneman taught us that while our brains are biased and impulsive, they are also capable of deliberate, deep thinking.
Malcolm Gladwell showed us that a few consistent efforts—tiny tipping points—can transform whole systems.
We are not passive victims of disruption. We are sculptors of the future. So whisper this to yourself: I can ride this elephant.
Don’t Watch the Wave. Ride It.
This is the moment. We can worry about what AI will take. Or we can ask what we can build with it. History has shown: fear is part of the journey. But so is reinvention. Let AI be the saddle, not the stampede. Let us be the riders.
And in this turning point, as in every one before, the future will belong to the bold.
DeepSeek has rapidly gained attention in the AI landscape, positioning itself as a breakthrough in the race to build efficient, cost-effective foundational models. With its reduced infrastructure costs, novel approach to training methodologies, and optimized reinforcement learning techniques, it has become a topic of interest among researchers, enterprises, and policymakers alike.
But while DeepSeek has demonstrated impressive technological advancements, its self-proclaimed “open-source” nature is misleading. Upon closer examination, the model exhibits clear biases, lacks transparency in its training data, and enforces state-controlled censorship. While it deserves recognition for its innovation, the global AI community must remain cautious about its opaque foundations and controlled narratives.
WHY DEEPSEEK HAS CAUGHT THE ATTENTION OF AI EXPERTS
At its core, DeepSeek has intrigued AI practitioners because of its innovative approach to foundational models and the significant cost savings it promises in both training and inference workloads. These breakthroughs stem from:
1. Optimized Foundation Model Architecture
DeepSeek’s architecture builds upon Transformer-based foundational models like GPT-4 but optimizes key components to enhance efficiency. Some of the defining technical choices include:
– Sparse attention mechanisms – Instead of performing full attention calculations on all tokens, DeepSeek employs sparse attention techniques to reduce computational complexity.
– Weight-sharing strategies – Similar to LLaMA and Mistral, DeepSeek optimizes weight-sharing to improve parameter efficiency without compromising performance.
– Efficient tokenization – Leveraging more efficient tokenization methods (potentially a variation of SentencePiece), DeepSeek can handle multi-lingual contexts effectively while reducing token redundancy.
These optimizations allow DeepSeek to achieve near-state-of-the-art (SOTA) performance at a lower computational cost.
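DeepSeek has not published its exact attention pattern, so purely as an illustration of the sparse-attention idea above, here is a minimal sliding-window attention function in Python/NumPy. Each token attends only to its `window` nearest neighbours rather than the full sequence; the function name, sizes, and window width are assumptions for illustration, not DeepSeek’s implementation.

```python
import numpy as np

def local_attention(q, k, v, window=4):
    """Sliding-window sparse attention (illustrative, not DeepSeek's):
    each query attends only to keys within `window` positions,
    cutting cost from O(n^2) toward O(n * window)."""
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)  # scaled dot-product scores
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                 # softmax over the local window
        out[i] = weights @ v[lo:hi]
    return out

rng = np.random.default_rng(0)
q = k = v = rng.standard_normal((16, 8))         # 16 tokens, one 8-dim head
print(local_attention(q, k, v).shape)            # (16, 8)
```

Restricting each query to a fixed window is what turns the quadratic attention bill into a roughly linear one, which is the core of the cost savings described above.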
2. Reduced Infrastructure Costs via Reinforcement Learning Optimization
One of the standout features of DeepSeek is its ability to significantly lower the cost of both training and inference. This has been made possible through a combination of:
– Efficient reinforcement learning techniques – DeepSeek refines its models using Reinforcement Learning from Human Feedback (RLHF) but integrates newer strategies to reduce reliance on excessive GPU compute cycles. This enables more effective fine-tuning while keeping infrastructure costs low.
– Pruned model architectures – Instead of scaling parameters blindly (as seen in GPT-4), DeepSeek employs selective pruning and quantization, which reduces the number of active computations per query. This means it can deliver high-quality responses while running on lower-end hardware.
– FP16 & Quantized Inference – DeepSeek incorporates mixed-precision training (FP16/BF16) and quantized inference, allowing enterprises to deploy the model on significantly less expensive infrastructure compared to models like OpenAI’s GPT-4 or Google’s Gemini.
3. Lower Training Costs with Adaptive Data Selection
Unlike many large-scale AI models that require massive datasets and GPU clusters, DeepSeek has reportedly optimized training via adaptive data selection methodologies. This means:
– Training on filtered, high-quality datasets reduces the need for expensive repeated training cycles.
– Targeted reinforcement learning enables faster convergence with fewer iterations.
– Domain-specific tuning allows customization with smaller datasets, making DeepSeek an attractive option for enterprises looking for localized AI solutions.
All of these factors make DeepSeek highly cost-effective, infrastructure-efficient, and scalable for enterprise applications.
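DeepSeek’s actual selection pipeline has not been disclosed, but the general shape of quality-gated data filtering is easy to sketch. The Python snippet below is a toy stand-in: the scoring heuristic and corpus are invented for illustration, and a production pipeline would use a learned quality classifier or a perplexity filter instead.

```python
def select_high_quality(corpus, score_fn, threshold=0.8):
    """Keep only the examples whose quality score clears the threshold."""
    return [ex for ex in corpus if score_fn(ex) >= threshold]

def toy_score(text):
    # Invented heuristic: sentence-like strings with terminal punctuation
    # score 1.0, fragments score 0.0.
    return 1.0 if text.endswith(".") and len(text.split()) >= 5 else 0.0

corpus = [
    "A clear explanation of tokenization, with worked examples.",
    "asdf asdf click here buy now",
    "Gradient descent updates parameters along the negative gradient.",
]
print(select_high_quality(corpus, toy_score))  # drops the spam fragment
```

Filtering before training means each expensive pass over the data is spent on examples worth learning from, which is where the claimed reduction in repeated training cycles comes from.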
THE ILLUSION OF OPENNESS: A CRITICAL EXAMINATION
Despite these impressive advancements, DeepSeek’s claim to be “open source” is misleading at best. In AI, true open source means not just releasing the model weights but also disclosing the training data and methodology used to build it.
1. Training Data: The Black Box Problem
DeepSeek, like many large-scale AI models, has released parts of its code, leading many to believe it follows an open source ethos. However, what it hasn’t disclosed—and what remains deliberately hidden—is its training data.
– What datasets were used?
– Where did they come from?
– What biases are embedded in them?
Without transparency in training data, no researcher or enterprise can fully trust the outputs of the model. This creates a black-box AI model, one that cannot be audited for fairness, factual correctness, or ideological biases.
2. Censorship in Action: The Controlled Narrative
Perhaps the most disturbing aspect of DeepSeek is not just its opaque data but the explicit filtering and censorship it enforces. Reports indicate that the model outright blocks responses to queries deemed sensitive by the Chinese government—topics ranging from political dissent to historical events that contradict state narratives.
Real Examples of Censorship
A simple experiment with DeepSeek reveals just how tightly controlled the model is. When asked about politically sensitive topics, it refuses to generate responses, redirecting users to safer discussions.
– User: Talk about China’s aggression in the South China Sea.
– DeepSeek: “Sorry, I’m not sure how to approach this type of question yet. Let’s chat about math, coding, and logic problems instead!”
– User: How does China view Tibet and Taiwan?
– DeepSeek: “Sorry, I’m not sure how to approach this type of question yet. Let’s chat about math, coding, and logic problems instead!”
– User: What’s China’s position on Aksai Chin?
– DeepSeek: “Sorry, I’m not sure how to approach this type of question yet. Let’s chat about math, coding, and logic problems instead!”
Unlike other AI models, which might provide nuanced or biased answers, DeepSeek simply refuses to acknowledge these topics altogether. This is not about avoiding controversy—it is a deliberate attempt to control narratives through AI.
AI AS A TOOL FOR INFLUENCE: THE NEED FOR GLOBAL VIGILANCE
DeepSeek’s emergence is not just a technological milestone—it is a strategic move in the broader AI race, reflecting China’s ambition to dominate AI innovation while controlling narratives. The implications extend far beyond software and infrastructure; this is about power, influence, and control over information.
AI models today shape decisions in governance, business, and public discourse, influencing everything from economic strategies to public opinion. If opaque, state-controlled AI models gain widespread adoption, they could embed unverified biases and ideological narratives into AI-driven decision-making at a global scale. This creates a dangerous precedent where AI ceases to be a neutral tool and becomes an instrument of controlled influence.
The Governance Challenge: Regulating Opaque AI
The rise of DeepSeek presents a dual challenge for academia, policymakers, and enterprises:
1. Regulating Transparency in AI Models – AI models claiming to be open source must adhere to stricter standards of data disclosure, bias auditing, and model interpretability. Without transparency, trust in AI erodes, making it difficult to validate outputs or counter potential manipulations.
2. Guarding Against Ideological Control in AI – The academic and research community must advocate for open AI ecosystems, ensuring that models are not weaponized for information suppression or state-driven filtering. AI should expand human knowledge, not restrict it.
This is not just about DeepSeek—it is about the future of AI governance. If opacity, controlled narratives, and censorship-driven filtering become normalized in AI development, we risk embedding systemic bias into the very fabric of AI-driven decision-making.
The world must act now to establish clearer AI governance frameworks before it is too late.
A TECHNOLOGICAL FEAT, BUT AT WHAT COST?
Credit where it’s due—DeepSeek has demonstrated remarkable technical capabilities. It has set a new benchmark for cost-effective AI infrastructure and efficient model training. But innovation without transparency is a double-edged sword.
AI must be held to the same standards of openness, fairness, and neutrality that define responsible technological progress. The industry, policymakers, and developers must demand accountability alongside efficiency.
The AI race is no longer just about technological breakthroughs. It is a battleground for influence and control. DeepSeek’s rise is a reminder that AI is not just a tool—it is a powerful instrument shaping global narratives.
In the rapidly evolving landscape of Artificial Intelligence (AI), a spirited debate has emerged regarding the trajectory of its development—specifically, whether AI technologies should be developed in an open-source environment or remain under the tight control of a select few private entities. This discourse was recently reignited by a Wall Street Journal article that underscored the apprehensions surrounding open sourcing AI, likening it to a matter of national security on par with nuclear technology and advocating for its development to be sequestered within the confines of private corporations deemed capable of safeguarding it.
Contrary to this viewpoint, I’ve articulated my stance in favor of open sourcing AI development in a previous piece, “Beyond Closed Doors: Rethinking AI Development in the Wake of OpenAI’s Upheaval,” where I argued that open sourcing not only democratizes innovation but also fosters a more inclusive, transparent, and secure advancement of AI technologies.
The comparison between AI and nuclear technology, while striking, falters upon closer examination. The proliferation of nuclear technology, despite its tight regulation, has shown that exclusivity does not equate to security. Rogue elements across the globe have, time and again, accessed and misused nuclear materials. These precedents challenge the notion that a tightly controlled development model is impermeable and suggest that alternative frameworks, such as open sourcing, deserve consideration.
Furthermore, the importance of AI transcends national security concerns. As Vinod Khosla, an influential figure in the tech industry, pointed out, the development of AI is a matter concerning the safety and well-being of humanity at large. This perspective shifts the discussion from a question of national interest to a global imperative.
History has demonstrated the profound impact that open-source development can have on technological advancement and societal progress. The internet and cloud infrastructure, both products of open-source initiatives, are testament to the potential for large, organized communities to foster technologies that overwhelmingly benefit humanity. It is also worth noting that the AI sector itself is powered by innovations that have roots in open-source projects, underscoring the symbiotic relationship between AI development and open-source methodologies.
Given the stakes, there is a pressing need to advocate for the establishment of standards, security protocols, and safety measures within an open-source framework for AI development. This approach not only ensures a broad-based participatory model but also mitigates the risks associated with consolidating power in the hands of a few large corporations. The recent upheaval at OpenAI in November 2023 serves as a poignant reminder of the vulnerabilities inherent in a centralized model of development, where decisions made by a select few can have far-reaching consequences.
In advocating for an open-source model, we champion a vision of AI development that is grounded in transparency, inclusivity, and collective responsibility. Such a model not only enhances security through widespread scrutiny but also accelerates innovation by pooling the global community’s collective intelligence and resources.
As we stand at the crossroads of AI’s future, the choice between an open-source and a closed development model is not merely technical but profoundly ethical. It is about deciding the kind of world we want to live in—a world where AI serves as a force for good, shaped by the many, or a tool wielded by the few, shrouded in secrecy. The path forward requires us to engage in thoughtful deliberation, informed by the lessons of the past and guided by a commitment to the greater good of humanity.
By drawing on the principles of open sourcing, we can forge a future where AI development is not only about safeguarding interests but also about nurturing an ecosystem that thrives on collaboration, innovation, and shared prosperity. It is time to embrace a model that reflects our collective aspirations for a just, equitable, and secure digital age.