Türkiye’s 2026 AI strategy and the logic of middle power autonomy
"Türkiye’s 2026 program offers a revealing case study of how a middle power seeks to translate technological capability into strategic resilience." (Illustration by Erhan Yalvaç)

The country's 2026 program treats AI as a core element of state capacity and strategic autonomy



Türkiye’s technological development has been discussed for decades, especially in relation to the defense sector. However, the release of Türkiye’s 2026 Presidential Annual Program a few weeks ago marks more than a routine policy update when seen from a technological perspective. Indeed, this program signals a deeper transformation in how the Turkish state understands power, governance and technological change. Artificial intelligence (AI), long discussed as an enabler of modernization and efficiency, is now framed as a core component of state capacity itself. Rather than treating AI as a sectoral innovation tool, the 2026 program places it at the center of public administration, strategic autonomy, defense planning and more.

This shift matters not only for Türkiye but also for how middle powers navigate the emerging global order. As competition over data, computing and advanced technologies intensifies, AI is increasingly shaping how states preserve autonomy, absorb shocks and sustain institutional performance. Türkiye’s 2026 program offers a revealing case study of how a middle power seeks to translate technological capability into strategic resilience.

A shift in state logic

Presidential Annual Programs traditionally outline policy priorities for the coming year. Yet the 2026 document stands out for the coherence and depth with which AI is integrated across state functions. Previous programs acknowledged digital transformation and emerging technologies, but largely treated AI as an auxiliary upgrade – something to be adopted selectively or piloted in limited domains. The 2026 program departs from this approach.

AI is no longer confined to innovation chapters or technology road maps. It appears throughout the document as a cross-cutting capability embedded in fiscal administration, customs and risk management, health care delivery, agricultural planning, social security systems, public communication platforms and workforce training. This breadth reflects a conceptual shift: AI is not positioned as an optional enhancement, but as an operational layer of governance.

Equally important is the program’s emphasis on regulation and oversight. For the first time, a comprehensive framework is proposed to govern public-sector AI use, including ethical principles, risk assessment mechanisms, certification procedures and monitoring standards. This move suggests that AI is no longer seen as experimental. It is treated as a permanent feature of state operations – one that requires institutional discipline rather than ad hoc adoption.

AI as capacity insurance

To understand the logic behind this shift, it is useful to view AI not primarily as a growth accelerator, but as capacity insurance. For middle powers operating in an increasingly fragmented global system, AI offers a way to protect institutional effectiveness under conditions of uncertainty.

Unlike major technology powers, middle states face structural vulnerabilities: exposure to supply-chain disruptions, susceptibility to sanctions or export controls, demographic pressures on public services, and limited fiscal space to expand bureaucracies indefinitely. In this context, AI becomes a tool for preserving state capacity rather than projecting dominance.

The 2026 program reflects this logic clearly. Machine-learning-based risk analysis in taxation and customs, automated decision-support systems in health care, predictive analytics in social policy and continuous digital assistance for citizens are all designed to reduce overload on state institutions. These applications are less about technological prestige and more about ensuring that the state can continue to function effectively under strain.

Seen through this lens, Türkiye’s AI strategy is defensive as much as it is ambitious. It seeks to hedge against future shocks, whether economic volatility, geopolitical pressure or administrative bottlenecks, by embedding adaptive intelligence into the machinery of governance. For a middle power, this form of resilience can be as strategically significant as military modernization. It is precisely here that concepts such as strategic autonomy and digital autonomy become more important than ever.

Strategic and digital autonomy

The concept of strategic autonomy runs implicitly throughout the 2026 program, and AI is increasingly central to how that autonomy is defined. Digital dependency today extends beyond hardware imports or software licenses. It encompasses data access, model architectures, cloud infrastructure and computational capacity. Dependence on any of these layers can translate into political or strategic vulnerability.

The program’s emphasis on domestic AI models, secure data infrastructures and national compute capacity reflects an awareness of these risks. Importantly, the objective is not technological isolation or self-sufficiency at all costs. Rather, it is to avoid forms of dependency that could constrain policy choices or expose critical systems to external leverage.

This approach aligns with a broader trend among middle powers seeking “operational sovereignty” rather than technological supremacy. Control over key digital infrastructures allows states to deploy AI in sensitive domains, such as public finance, security or social services, without relying on opaque external systems. In this sense, digital autonomy is not only about technological innovation, but about the ability to translate innovation into coherent policy, strategic control and effective application across state institutions. This is an approach explicitly reflected in the 2026 program.

Dual-use AI and military power

The defense dimension of Türkiye’s AI strategy is where these dynamics become most visible. The 2026 program explicitly identifies AI as a foundational element of defense modernization, supporting multiple research and development projects under the coordination of defense institutions. These initiatives include autonomous platforms, coordinated drone operations, advanced intelligence, surveillance and reconnaissance (ISR) capabilities, and cognitive electronic warfare systems.

What distinguishes this approach is its focus on horizontal and comprehensive integration rather than standalone systems. AI is not treated as an add-on to existing platforms but as a connective layer linking sensors, decision-support tools and operational coordination. In contemporary conflict environments, speed, adaptability and information superiority increasingly outweigh sheer firepower. AI enables precisely these qualities.

Equally significant is the program’s emphasis on dual-use technology circulation and civil-military application of AI. Military AI innovations, such as real-time analytics or autonomous coordination algorithms, can be transferred to civilian domains like logistics, energy optimization and smart manufacturing. Conversely, advances in civilian AI applications, including natural language processing or computer vision, can be rapidly adapted for defense needs.

For middle powers, the significance of dual-use AI lies not simply in efficiency gains or cost sharing but in how states respond to the broader technological imperative. Throughout modern history, states have been compelled to adopt transformative technologies to remain competitive economically, militarily and institutionally. Great powers, however, have rarely treated this compulsion as a burden. Instead, they have systematically converted it into an advantage by structuring innovation so that civilian and military applications reinforce one another. Dual-use technologies have thus functioned as accelerators of both strategic power and industrial development. Never before has the dual-use nature of a technology been as visible as in the case of AI.

The 2026 program reflects Türkiye’s awareness of this dynamic. By explicitly emphasizing dual-use AI and civil-military application, the program signals an effort to align innovation policy, public-sector deployment and defense requirements within a single strategic framework. In this model, military demand does not operate in isolation, nor does civilian innovation remain detached from national priorities. Advances in AI, whether developed for public administration, commercial use or defense, are treated as mutually reinforcing components of a unified technological trajectory.

If managed effectively, this approach could open significant opportunities for Türkiye. By organizing AI development around dual-use circulation rather than sectoral silos, the country can turn the technological imperative itself into a source of leverage. The same dynamics that have long favored great powers, namely continuous innovation driven by cross-domain demand, rapid diffusion of capability and institutional learning, can strengthen Türkiye’s strategic position, albeit on a different scale. In this sense, the challenge posed by global technological competition is not merely something to adapt to, but something that can be actively shaped.

Institutionalizing AI at scale

Another important and consequential aspect of the 2026 program is its focus on the institutionalization of technology. Deploying AI in isolated projects is relatively straightforward. Governing AI at scale, across ministries, agencies and public services, is far more challenging. The program addresses this challenge directly. It proposes standardized certification processes for public-sector AI tools, formal risk assessment procedures, and ethical guidelines designed to prevent misuse or unintended consequences. These measures are intended to ensure consistency, accountability, and transparency as AI becomes embedded in daily administrative routines.

This institutional turn is critical. Without robust governance frameworks, AI adoption can undermine trust, create opacity in decision-making, or erode professional expertise within the bureaucracy. The program implicitly recognizes that AI should augment human judgment rather than replace it, and that sustaining institutional capacity requires a careful balance between automation and oversight.

In this respect, Türkiye’s approach reflects a lesson reinforced by both international experience and its own digital governance trajectory: the success of AI in government depends less on advanced algorithms than on institutional coordination, regulatory clarity, and organizational readiness. By foregrounding governance alongside deployment, the 2026 program signals an awareness of these long-term risks.

What the program signals

Taken as a whole, Türkiye’s 2026 program represents a strategic recalibration. AI is no longer framed as a technological trend to be followed, but as a structural component of state power. Its integration across public administration, defense and regulatory frameworks reflects a shift from reactive adaptation to proactive design.

For Türkiye, this strategy highlights a distinct path in the global AI landscape. Rather than competing for technological primacy, the emphasis is on resilience, autonomy and institutional effectiveness. AI is deployed not to replace the state, but to reinforce it.

Whether this model succeeds will depend not on algorithms alone, but on governance quality, coordination across institutions and sustained political commitment. What is clear, however, is that the 2026 program marks an important moment: a recognition that in the age of AI, state capacity itself has become a strategic asset, and one that must be deliberately cultivated.