
A Conversation with Professor Bohuslav Přikryl
Professor Bohuslav Přikryl, former Rector of the University of Defence and retired Brigadier General, now serves as Vice President for Research, Development and Innovation at CSG Aerospace, a division of the Czechoslovak Group (CSG). From this unique vantage point, he monitors technological developments and identifies innovations that could shape the future of defense and society. With a career spanning academia, the military, and the private sector, he brings a profound understanding of how technology influences our world. In this thought-provoking interview, Professor Přikryl shares his insights on the accelerating pace of technological change, the ethical dilemmas it raises, and the crucial question: are we guiding technology—or is it guiding us?
Technologies are evolving exponentially, while human nature remains fundamentally the same. How do you perceive this dynamic? Are we adapting technologies to suit ourselves, or are technologies subtly changing us more than we realize?
This dynamic works both ways, but unevenly. Human nature evolves slowly—our needs and emotions have remained nearly unchanged for centuries. Meanwhile, technology races ahead, reshaping the world faster than our minds can fully absorb. At first glance, technologies appear as neutral tools—phones for easier communication, social networks for connection.
But tools never stay neutral. They begin to shape our habits, relationships, and even our values. Social media, originally designed to connect people, has changed how we communicate, understand privacy, and see ourselves.
Technologies may be born from our needs, but over time they shape us—often more than we admit. That’s why it’s crucial to stay aware of their influence and manage it consciously.
It is said that artificial intelligence can only be as good or as bad as its teachers—us, humans. If AI reflects our behavioral patterns, are we truly good teachers for it? Are we teaching it rationality and progress, or are we unconsciously passing on our biases and destructive tendencies?
AI is not neutral—it’s a mirror. It learns only from the data we give it, which reflects our history, culture, and behavior. We pass on both our knowledge and innovation—and our biases, stereotypes, and misrepresentations.
If we lean toward inequality, discrimination, or destructive tendencies, AI will detect and replicate them—often with alarming precision and scale. Without clear rules and corrections, AI becomes an amplifier of our flaws. And that makes responsible guidance more essential than ever.
If AI were to one day take on the role of a strategic decision-maker—such as in warfare—would that be better or worse than when humans make those decisions? Is humanity capable of creating a just and rational digital leader, or will we inevitably create a more sophisticated version of our own flaws?
This is a crucial—and deeply unsettling—ethical question. Handing strategic decisions such as warfare over to AI can be framed in two simplified scenarios, each with distinct risks and benefits.
Optimistic Scenario – AI as a Fairer, More Rational Leader
AI lacks emotions, ego, or fear of death, potentially allowing for more objective decisions.
It can process vast data sets far faster than humans, possibly making more precise choices and even preventing conflict.
With the right programming, AI could follow the rules of warfare strictly—no exceptions, no personal motives.
Pessimistic Scenario – AI as an Amplifier of Our Flaws
AI reflects its human creators. If we embed bias, errors, or questionable ethics, it will replicate and automate them.
Without empathy or conscience, it might make decisions that are efficient but morally indefensible.
If we give AI full autonomy, human oversight and accountability could vanish.
Reality will likely fall somewhere in between. Can we create a “good” digital leader? That depends on two key factors:
We must define clear ethical boundaries and values to embed into AI.
And we need robust control mechanisms to oversee and, when needed, override its decisions.
Right now, AI would likely become just a faster, more sophisticated version of our flaws. Before building perfect machines, we must first examine ourselves—and develop a framework to consciously transmit our values. Utopian, perhaps. But necessary.
At the recent Munich Security Conference, harsh words were spoken about internal threats, democracy, and technological progress. Do you currently see technological dominance as a means of securing stability or as a potential catalyst for new global conflicts?
Technological dominance is a double-edged sword—it can stabilize, or it can provoke conflict. Technology is never neutral; it reflects the power and ambitions of those who control it.
Technology as a stabilizer:
Advanced military tech can deter adversaries through overwhelming superiority.
It can increase transparency, reduce misunderstandings, and limit escalation.
In democratic hands, it can promote values like freedom and privacy.
Technology as a destabilizer:
States without access to such tech may feel threatened, sparking arms races or preemptive moves.
New arenas—cyber, AI, information—make conflicts harder to control and easier to escalate.
And the same tools designed to protect democracies can be turned against them—from surveillance to disinformation.
Current tensions between the U.S., China, and Russia show that technological dominance is becoming a strategic fault line. It fuels mistrust, rivalry, and the risk that small incidents spiral into larger crises.
The Munich Security Conference captured this reality well. Technology is neither inherently stabilizing nor inherently dangerous—but in the wrong hands, or used without caution, it becomes a powerful accelerant.
In the past, military technologies shaped civilian innovations—such as the internet, GPS, and aviation. However, you have repeatedly stated that today the trend is reversed: civilian technologies are surpassing military ones. We see this with drones, AI, and cybernetics. Does this mean that states are losing control over technological development? And if so, what does that mean for the future of defense?
Exactly. Historically, the military drove innovation that later entered civilian life. Today, civilian sectors—especially private companies, startups, and universities—lead in key areas like AI, robotics, cybernetics, and quantum tech. This marks a fundamental shift.
What this means:
State control over strategic technologies is weakening. Private actors now drive progress and aren’t always aligned with national interests.
The balance of power is shifting. Democracies must rethink their relationship with the private sector and increasingly adopt, not direct, innovations.
Security risks are multiplying. Small actors can now access technologies once reserved for great powers, creating new threats.
Defense strategy must adapt. States must become integrators—collaborating closely with industry and academia, not acting as sole owners or drivers of innovation.
It’s a paradigm shift. Future defense must be flexible, fast, and based on civilian-military cooperation.
At the Munich Security Conference, JD Vance harshly questioned Europe’s ability to defend itself, but went even further—suggesting that the greatest threat is neither Russia nor China, but Europe itself. That it is losing internal cohesion and the strength of its own values. What do you think? What role does technology play in maintaining democratic principles—can it strengthen them, or paradoxically weaken them?
Vance touches a nerve. Europe faces not only external threats but internal ones—identity crises, rising populism, fragmentation, and declining trust in liberal-democratic values.
Why Europe could threaten itself:
It’s lost a unifying narrative. Foundational values like solidarity are under pressure.
It lacks strategic autonomy—still reliant on U.S. protection, underinvested in defense and resilience.
Polarization weakens unity and mutual trust.
Technology can both strengthen and undermine democracy:
On one hand, it boosts participation, transparency, and access to information.
But it also fragments societies through algorithmic bubbles, enables surveillance, and puts power in the hands of a few tech giants.
What Europe must do:
Invest in its own tech capacity—not just militarily, but to ensure digital sovereignty.
Rebuild a shared vision based on clear values and strategic goals.
Whether tech helps or harms democracy depends on how Europe chooses to use it. The next decade will be decisive.
Every major innovation in history has been a double-edged sword—gunpowder, nuclear energy, the internet. How do we distinguish between “good” and “bad” technology today? Who should be the arbiter in deciding how far humanity should go?
Indeed—every major innovation has held the potential for both progress and destruction. The same applies today.
How we might distinguish good from bad tech:
By its societal impact—does it increase freedom and safety, or foster inequality and control?
By control and transparency—can it be regulated, or will it escape oversight?
By ethical standards—are we considering its long-term consequences?
These decisions shouldn’t rest with experts alone. They require broad public debate. And for globally impactful tech, we need international agreements—like we did with nuclear weapons. Technology is never good or bad in itself. The key lies in how we choose to use it.
Europe is facing growing skepticism about its own defense capabilities while simultaneously dealing with internal disputes over the regulation of technologies, AI, and freedom of speech. Are we witnessing technological fragmentation between Europe, the USA, and China? And what will be the impact on global security?
Yes, we’re seeing growing fragmentation—three distinct technological ecosystems are emerging, shaped by different rules, values, and strategic goals.
Why fragmentation is happening:
The U.S. views tech through a national security lens.
China seeks dominance and self-sufficiency.
Europe is trying to find a regulated, value-based middle ground.
What this means for security:
It increases mistrust, reduces interoperability, and complicates alliances.
New tech standards may not align—making global regulation harder.
Fragmentation deepens geopolitical rivalry, even among allies.
Europe’s response must be:
Invest in its own tech development.
Strengthen partnerships, especially with the U.S.
Define clear value-based rules for freedom, security, and tech governance.
Without this, Europe risks becoming technologically dependent—and geopolitically sidelined.
Science once served primarily as a means of knowledge. Today, it is increasingly intertwined with geopolitics, economics, and defense. Is it possible for science to remain apolitical, or is that notion now a utopia?
Science in its purest form aspires to be a neutral and objective search for truth. In practice, however, and especially in its applications, it has never been completely isolated from politics, economics, or defense; only the degree of this connection has varied from period to period. Today, these connections are more intense and complex than ever before.
Why science can’t stay fully apolitical:
It’s now a strategic tool—used to gain geopolitical advantage.
It’s economically entangled—funded by entities with commercial agendas.
It has dual-use implications—advances in AI or biotech affect both civilian and military domains.
But neutrality is still possible—partially:
In basic research, free from immediate applications.
Through transparency and open collaboration.
Via strong ethical self-regulation within the scientific community.
The complete depoliticization of science is currently an idealistic notion rather than a realistic possibility. In short, science today operates in the context of politics, economics, and security.
Scientists and innovators often ask themselves: “Just because we can do it, should we do it?” How do you personally set ethical boundaries when deciding which areas of research to invest in?
I believe innovation must serve long-term values—not just progress for its own sake. If a project contradicts my principles, I step back or seek alternatives.
Every research decision is also a moral one. The question “Should we do it?” must be answered through reflection, integrity, and accountability—not just to our generation, but to the next.
In military research, the concept of “minimum necessary force” is often discussed—meaning that technology should be developed to cause only as much harm as is necessary to achieve the objective. But where is the line drawn? Who determines what is “necessary”?
This principle is sound, but its application is complex and context-dependent because it rests on subjective judgments and changing circumstances. At its core, it holds that military force should be used as sparingly as possible to achieve a legitimate aim.
It relies on:
Proportionality—force must match the significance of the target.
Necessity—only use force if no less harmful option exists.
Who decides?
Military commanders, who assess the situation.
Political leaders, who set the framework and objectives.
International law, which sets limits but leaves room for interpretation.
Public opinion, which increasingly influences what is deemed acceptable.
The line is never fixed. It always depends on the nature of the threat, the objectives of the operation, the availability of alternatives, and the cultural, ethical and moral values of the actor.
Throughout your life, you have witnessed many key technological leaps—from command automation to the advent of autonomous systems and the rise of AI. Which of these changes has fascinated you the most, and why?
Artificial intelligence, without a doubt. It’s not just a technological step forward—it’s a qualitative leap. AI changes how we perceive, decide, and define ourselves.
It’s the first tool that doesn’t merely extend our abilities—but interprets the world on its own terms. Its pace is breathtaking, and its trajectory unpredictable. That’s what fascinates—and unnerves—me most.
It forces us to ask: What kind of future do we want? And what kind of humans do we want to be?
Imagine traveling 30 years into the future and then returning. What technological changes would not surprise you? And which ones would shock you?
No surprise: autonomous systems, omnipresent AI, biotech breakthroughs, space expansion.
Shock: general AI, brain-machine integration, radical life extension, the collapse of the internet as we know it.
Some changes feel like natural evolution. Others—true revolutions—could alter our very nature. Anticipating the difference between the two is key to steering the future wisely.
You have had a long and successful career in academia, the military, and the private sector. What has been the most significant shift in your thinking over the years? Is there something you once believed in but now see differently?
Yes. I used to believe rationality and logic could solve everything—that clear data and methods would always lead to good outcomes.
But I’ve learned that’s only part of the picture. Often, we fail not because we don’t know what to do, but because of fear, ego, or inertia.
Each sector taught me something:
The military showed me that no tech works without strategy and clarity.
Academia taught me that elegant theories often fail in messy reality.
The private sector reminded me that innovation needs relevance and delivery.
This journey made me more humble—and more aware of the human element in every solution.
Do you think progress is inherently good? Or is there a point where excessive technological development becomes more of a threat than a benefit? And if so—how do we recognize when that point is reached?
Progress is not inherently good or bad—it’s change. And its value depends entirely on how we shape and use it.
It becomes dangerous when it outpaces our ability to manage it—when it disrupts values, freedoms, or the fabric of society. But this danger rarely arrives all at once. It creeps in, subtly.
How do we know the tipping point has come?
When we lose control, or stop asking the hard questions.
When tech’s side effects grow and go unaddressed.
When we prioritize speed over reflection.
We’ll only recognize the danger if we commit to ethical vigilance. Asking the right questions may matter more than finding fast answers.
Interviewed by: Katerina Urbanova
Photo Credit: Bohuslav Přikryl