Turkish Intelligence Academy report warns AI reshaping cyber threats
A temporary structure with an AI theme for a festival, Kolkata, India, Sept. 24, 2025. (Getty Images Photo)


A new report by Türkiye’s National Intelligence Academy was published Friday warning that rapid advances in artificial intelligence (AI) are fundamentally transforming cyber threats and require a coordinated, whole-of-society response to safeguard national security, public trust and critical infrastructure.

The report, titled "Cybersecurity in the Age of Artificial Intelligence and Türkiye’s Strategic Priorities,” said AI is no longer just a tool for boosting efficiency, automation and decision-making, but a "force multiplier” that is simultaneously increasing the scale, speed and complexity of cyberattacks.

"Artificial intelligence is not only enhancing productivity and analytical capacity, but also creating a new security domain where threats are more sophisticated and impactful,” the report said, underscoring the need to rethink cybersecurity beyond purely technical defenses.

According to the study, cyber risks can no longer be viewed solely as issues of system protection. Instead, they must be assessed in a broader framework that includes data security, institutional continuity, uninterrupted delivery of public services and the preservation of societal trust.

In a foreword to the report, academy head Talha Köse stressed that emerging AI-driven risks particularly those linked to large language model-based systems require a strategic approach that accounts for their potential impact on critical infrastructure and decision-making processes.

Köse said adapting to technological change alone is insufficient, emphasizing that institutions must anticipate risks and take preventive measures in a timely manner. He added that Türkiye’s digital transformation must advance simultaneously in areas such as regulation, coordination, security and human capital.

"The key issue today is not just keeping up with technology, but foreseeing the risks it may generate and taking the necessary institutional precautions in advance,” Köse said.

The report highlights that AI is lowering the cost of cyberattacks while increasing their effectiveness, enabling malicious actors to operate at unprecedented scale. At the same time, it notes that while AI adoption is accelerating across public institutions, the private sector and critical infrastructure, governance, oversight and security mechanisms are not evolving at the same pace.

This imbalance, the report warns, is increasing systemic vulnerabilities and deepening digital dependency. As a result, cybersecurity in the AI era must be addressed not only through technical safeguards, but also through national capacity-building, governance frameworks and strategic preparedness.

The study also identifies a new generation of vulnerabilities linked to AI systems, particularly large language models. These include prompt injection attacks, insecure output handling, sensitive data leakage, supply chain weaknesses and risks stemming from excessive system privileges or overreliance on automated outputs.

Such risks should not be treated merely as technical flaws, the report says, but as broader challenges involving data governance, accountability, auditability and the quality of institutional decision-making.

It further warns that AI-enabled cyber threats extend beyond digital systems, with direct implications for national security, institutional resilience and public confidence. One of the most pressing concerns is the rise of deepfakes and synthetic media, which can distort information ecosystems, erode institutional legitimacy and weaken public trust, particularly during times of crisis.

"AI-driven threats are not limited to cybersecurity. They also have strategic consequences for information security, public authority and social stability,” the report said.

Given these risks, the study advocates for a "human-AI collaboration model” in cybersecurity. While AI systems can rapidly detect anomalies, attack patterns and unusual behavior across large datasets, human expertise remains essential for interpreting context, filtering false positives and making critical decisions.

"The most realistic and sustainable approach is a hybrid defense model that combines the speed and scale advantages of AI with human oversight,” the report noted, adding that automation must be secure, auditable and clearly accountable.

Türkiye’s roadmap determined

The report also outlined a three-stage roadmap for Türkiye’s policy response, spanning short-, medium- and long-term priorities.

In the short-term, it calls for strengthening central coordination, establishing a unified institutional framework for AI-driven cyber risks and creating mandatory inventories of AI systems used in public institutions and critical infrastructure.

It also emphasizes the need for transparency regarding the types of data these systems process, the decisions they influence, their level of authority and their reliance on external service providers. Minimum security standards including data classification, logging, output verification and human oversight should be defined for large language models and agent-based systems.

In the medium term, the report highlights the importance of institutionalizing regulations, standards, auditing mechanisms and sectoral resilience frameworks. It recommends developing technical standards for AI systems used in critical infrastructure and public services, as well as incorporating strict requirements for security, auditability, record-keeping, incident reporting and supply chain transparency in public procurement processes.

Long-term priorities include building a strong national capacity capable of managing the security implications of reliance on foreign technologies. This involves developing testing, verification, certification and auditing capabilities, as well as strengthening the skilled workforce through sustainable cooperation between government, industry, academia and civil society.

The report also calls for supporting the development of a domestic cybersecurity and AI ecosystem, while increasing public awareness of risks such as identity manipulation, deepfakes and synthetic media.

Ultimately, the study concludes that cybersecurity in the age of AI is not just about protecting systems, but about managing state capacity, institutional decision-making quality, public trust and strategic autonomy in an integrated manner.