The Global AI Competition: Stargate, DeepSeek, and the EU AI Act in Collision
Entering 2025, global AI competition has been reorganizing into a three-pole structure: the United States, China, and Europe. The US is seeking to consolidate AI infrastructure dominance through the Stargate project, a public-private initiative of up to USD 500 billion, while China signaled the "democratization of AI" by delivering GPT-4-class performance at a fraction of the cost with its DeepSeek R1 and V3 models. Europe began phased implementation of the EU AI Act (Artificial Intelligence Act) in February 2025, aiming to establish its high-risk AI regulatory framework as a global standard.
All three of these forces bear directly on Korean public sector AI policy. Following the inauguration of the new government, the Ministry of Economy and Finance (MOEF) announced the "AI Utilization Policy Direction for Public Institutions," setting out concrete guidance on how Korean public agencies should adopt AI and manage it safely within the global competitive landscape. KOTRA, K-SURE (the Korea Trade Insurance Corporation), and other trade and investment public agencies are the direct targets of this policy, which shapes the entire strategy for expanding AI-based trade support infrastructure.
The US Stargate Project: Opening of the AI Infrastructure Supremacy Race
In January 2025, President Trump announced the "Stargate" project, a joint initiative with OpenAI, SoftBank, and Oracle. The plan calls for up to USD 500 billion in investment over four years to build AI data centers and supercomputing infrastructure across the United States. The first phase — construction of ten NVIDIA GB200-based data centers in Abilene, Texas, targeting completion by 2026 — is already underway.
Stargate goes beyond a private investment — it is directly tied to the US government's AI hegemony strategy. If US AI companies such as OpenAI, Anthropic, and Microsoft come to dominate the foundational infrastructure of global AI services, public agencies and corporations worldwide will become increasingly dependent on US cloud and AI services. Korean public institutions are no exception. MOEF explicitly identifies finding the balance between reducing this dependency and accessing state-of-the-art AI capabilities as one of the central challenges of public sector AI policy.
The DeepSeek Shock: Implications of Low-Cost, High-Performance AI for Public Agency Adoption
In January 2025, China's DeepSeek shocked the global AI industry with its R1 model. The company claimed reasoning performance matching OpenAI's o1 at a reported training cost of roughly USD 6 million, and it released the model weights under a permissive open license for anyone to use, upending established assumptions about the capital required for frontier AI. Its V3 model had already posted benchmark results matching or surpassing GPT-4o in coding, mathematics, and natural language understanding, marking an inflection point at which the "democratization of AI costs" became real.
DeepSeek carries dual implications for Korean public sector AI policy. On the positive side, deploying the openly licensed DeepSeek models on-premise within a public agency can remove dependence on US cloud APIs while delivering high-performance AI at comparatively low cost. On the security and data sovereignty side, however, there are clear concerns about feeding public data into a Chinese AI model or using services connected to Chinese servers. MOEF's policy direction leaves open the possibility of using open-source models like DeepSeek, but makes rigorous security validation a prerequisite.
| Model | Benchmark Performance | Training Cost | License | Public Agency Suitability |
|---|---|---|---|---|
| DeepSeek-R1 | AIME 79.8%, MATH-500 97.3% | Approx. $6M | MIT (open source) | On-premise deployment after security validation |
| DeepSeek-V3 | Coding/math surpasses GPT-4o | Approx. $5.5M | MIT (open source) | On-premise deployment after security validation |
| DeepSeek-R1-Distill | Qwen/Llama-based lightweight | Minimal additional cost | MIT (open source) | Edge server deployment feasible |
| GPT-4o (OpenAI) | Top-tier overall | API metered billing | Commercial API | Cloud dependency; personal data leakage risk |
| Claude 3.5 Sonnet | Top-tier reasoning and coding | API metered billing | Commercial API | Cloud dependency; US server storage |
| HyperCLOVA X (Naver) | Top-tier Korean language | High initial investment | Commercial (public contract) | Domestic servers; suitable for public procurement |
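The trade-offs in the table above can be condensed into a simple decision rule. The sketch below is purely illustrative: the function name, input categories, and thresholds are assumptions made for this article, not criteria from MOEF guidance.

```python
# Illustrative decision sketch for the deployment options in the table above.
# Categories and rules are simplified assumptions, not official MOEF criteria.

def recommend_deployment(data_sensitivity: str, eu_facing: bool, budget: str) -> str:
    """Map a simplified agency profile to a deployment pattern from the table.

    data_sensitivity: "public" | "internal" | "personal"
    budget: "low" | "high"
    """
    if data_sensitivity == "personal":
        # Personal or sensitive public data: keep inference inside the agency.
        if budget == "high":
            return "domestic commercial model on domestic servers (HyperCLOVA X-style contract)"
        return "open-weight model on-premise after security validation (DeepSeek-Distill-style)"
    if eu_facing:
        # EU-facing services add EU AI Act checks regardless of where they are hosted.
        return "on-premise or domestic hosting plus EU AI Act conformity review"
    # Non-sensitive, domestic-only workloads may use commercial cloud APIs.
    return "commercial cloud API with data-handling guidelines"

print(recommend_deployment("personal", False, "low"))
print(recommend_deployment("public", False, "low"))
```

The point of the sketch is the ordering of the checks: data sensitivity dominates, EU exposure comes second, and cost is only a tiebreaker within the on-premise branch.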
The EU AI Act: A Regulatory Framework Korean Public Agencies Must Understand
The EU AI Act (Artificial Intelligence Act) entered into force in August 2024, with prohibitions on certain AI practices applying from February 2025. Full obligations for high-risk AI systems take effect in August 2026, and once the remaining transitional provisions expire in August 2027, an effectively comprehensive enforcement regime will be in place. Although the Act primarily governs AI systems placed on the EU market or whose output is used within the EU, this extraterritorial reach means Korean export companies and public agencies face direct compliance obligations when they provide AI-based services to EU counterparts or target EU markets.
EU AI Act compliance obligations for Korean public agencies arise through two pathways. First, when KOTRA, the Korea Trade Insurance Corporation, and similar agencies provide AI-based services, such as buyer matching or risk analysis, to customers or partners in EU member states, the extraterritorial application of the Act brings those AI systems within its scope. Second, when Korean companies operating in EU markets use KOTRA's AI support tools to transact with EU buyers, the AI compliance of the KOTRA platform may indirectly come under verification requirements. MOEF's policy direction actively recommends using EU AI Act standards as the reference framework for Korean public sector AI governance.
MOEF New Government Public Institution AI Policy Direction: Core Content Analysis
Following the inauguration of the new government, MOEF announced the "AI Utilization Policy Direction for Public Institutions," presenting three core pillars: active utilization (accelerating AI adoption), safe management (minimizing risk), and productivity innovation (improving operational efficiency). This direction was formulated with direct reference to the three global variables — Stargate, DeepSeek, and the EU AI Act — targeting a balance that keeps pace with global AI competition while maintaining public trustworthiness.
| Task Area | Details | Target | Implementation Timeline |
|---|---|---|---|
| AI Adoption Acceleration | Establish AI-First principle; target 10pp annual increase in task automation rate | All public institutions | Immediate from 2025 |
| Generative AI Utilization | Deploy guidelines for ChatGPT-type generative AI in government work; define permissible scope by security level | Individual staff usage | 2025 Q1 |
| Public-Sector Specialized AI | Support fine-tuned model development for institution-specific tasks; provide AI cloud platform | Medium-to-large public institutions | 2025-2026 |
| AI Safety Management Framework | Mandatory AI impact assessment; set algorithm audit cycles; Human-in-the-Loop (HITL) procedures | Institutions using high-risk AI | H1 2026 |
| Data Governance | Standardize procedures for using public data in AI training; personal data de-identification requirements | Data-holding institutions | 2025-2026 |
| AI Talent Development | Train 2,000 AI specialists across public institutions; AI literacy education for all staff | All public institutions | 2025-2028 |
| Performance Measurement | Standardize AI adoption KPIs; incorporate AI utilization metrics into institutional management evaluations | Management evaluation targets | Reflected in 2026 evaluation |
| International Cooperation | Adopt OECD AI Principles and EU AI Act standards; participate in G20 AI governance cooperation | Institutions with external AI service linkages | 2025-2027 |
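The Human-in-the-Loop (HITL) procedures in the framework table can be illustrated with a minimal approval gate: low-risk AI outputs pass through automatically, while higher-risk ones are queued for a human decision. Everything below (class names, the risk scale, the 0.5 threshold) is a hypothetical sketch, not a mechanism prescribed by MOEF.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical Human-in-the-Loop (HITL) gate: AI recommendations above a risk
# threshold are held for human review instead of being auto-executed.
# The threshold and the 0.0-1.0 risk scale are illustrative assumptions.

@dataclass
class Recommendation:
    subject: str
    action: str
    risk_score: float  # 0.0 (benign) .. 1.0 (high risk), assumed scale

@dataclass
class HitlGate:
    threshold: float = 0.5
    review_queue: List[Recommendation] = field(default_factory=list)

    def submit(self, rec: Recommendation) -> str:
        if rec.risk_score >= self.threshold:
            self.review_queue.append(rec)  # a human must approve or reject
            return "pending_human_review"
        return "auto_approved"

gate = HitlGate(threshold=0.5)
print(gate.submit(Recommendation("buyer-match #1", "send introduction", 0.2)))
print(gate.submit(Recommendation("credit-limit change", "raise limit", 0.9)))
```

The design choice worth noting is that the gate returns a status rather than executing anything: the AI system proposes, and only the human review step can turn a high-risk proposal into an action.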
The most notable shift in the policy direction is the "AI-First" principle. Where the previous default for new work systems was a traditional IT build with AI as an optional add-on, the new procedure requires that every new system or service design begin with the question "Can AI handle this?" KOTRA's Navi AI buyer matching and TriBIG market analysis platform are emerging as flagship examples of the AI-First principle and a model for AI adoption by other public agencies.
Public Institution AI Safety Management Framework: Practical Implementation Guide
One of the core elements of MOEF's policy direction is the mandatory AI safety management framework. From the first half of 2026, public agencies using high-risk AI must conduct a pre-deployment AI Impact Assessment and publish the results. The standard for "high-risk" is broadly aligned with the EU AI Act, with additional Korea-specific criteria covering administrative service AI and AI processing large volumes of personal data.
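A pre-deployment screening step of the kind described above can be sketched as a checklist that flags whether an impact assessment is required. The domain labels below loosely paraphrase EU AI Act Annex III themes plus the Korea-specific additions mentioned in this section; the function, set names, and category strings are illustrative assumptions, not official classifications.

```python
# Illustrative screening sketch: does a planned system trigger a pre-deployment
# AI impact assessment? Domain labels paraphrase EU AI Act Annex III themes and
# the Korea-specific additions discussed above; all names are assumptions.

HIGH_RISK_DOMAINS = {
    "credit_scoring", "employment", "essential_public_services",
    "law_enforcement", "migration", "education_scoring",
}
KOREA_SPECIFIC = {"administrative_service_ai"}

def impact_assessment_required(domain: str, processes_bulk_personal_data: bool) -> bool:
    """Return True when the sketch's criteria call for a pre-deployment assessment."""
    if domain in HIGH_RISK_DOMAINS or domain in KOREA_SPECIFIC:
        return True
    # Korea-specific criterion from the policy direction: large-volume
    # personal data processing triggers an assessment on its own.
    return processes_bulk_personal_data

print(impact_assessment_required("buyer_matching", False))
print(impact_assessment_required("credit_scoring", False))
```

In practice the real determination would rest on legal review rather than a lookup table, but the sketch captures the two-pathway logic: EU-aligned domain categories plus Korea's own data-volume criterion.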
The three global forces of Stargate, DeepSeek, and the EU AI Act are pressing on Korean public sector AI policy from different directions at once. Stargate deepens Korean dependency as US AI infrastructure dominance consolidates; DeepSeek opens the possibility of public agencies running their own AI at low cost; and the EU AI Act introduces new compliance burdens. MOEF's AI-First principle and safety management framework represent a structural response designed to keep these three forces in balance. Trade and investment public agencies including KOTRA must accelerate AI adoption on this policy foundation while upholding three principles: data sovereignty, algorithmic transparency, and human oversight. Overseas outposts such as the KOTRA Dhaka Trade Center are the field-level embodiment of this strategy, tasked with proactively adapting to a shifting global regulatory environment.