
Korean Public Sector AI Policy Direction: Stargate, DeepSeek, and the EU AI Act

The Global AI Competition: Stargate, DeepSeek, and the EU AI Act in Collision

Since 2025, the global AI competition has been reorganizing into a three-pole structure: the United States, China, and Europe. The US is seeking to consolidate AI infrastructure dominance through the Stargate project, a public-private USD 500 billion initiative, while China's DeepSeek signaled the "democratization of AI" by delivering GPT-4-level performance at a fraction of the cost with its R1 and V3 models. Europe is pursuing phased implementation of the EU AI Act (Artificial Intelligence Act) from February 2025, aiming to establish its high-risk AI regulatory framework as a global standard.

All three of these forces directly impact Korean public sector AI policy. Following the inauguration of the new government, the Ministry of Economy and Finance (MOEF) announced the "AI Utilization Policy Direction for Public Institutions," setting out concrete guidance on how Korean public agencies should adopt AI and manage it safely within this global competitive landscape. KOTRA, the Korea Trade Insurance Corporation (K-SURE), and other trade and investment public agencies are the direct targets of this policy, and it shapes their entire AI-based trade support infrastructure expansion strategy.

Stargate Investment: $500B (US public-private AI infrastructure investment over 4 years)
DeepSeek Training Cost: $6M (R1 training cost, approx. 1/100 of GPT-4)
EU AI Act High-Risk Rules: August 2026 (full mandatory application for high-risk AI systems)
Korean Public AI Adoption: 38% (2025 public institution AI utilization rate, MOEF)
Public AI Budget: KRW 2.1T (2026 government AI-related budget allocation)
AI Safety Obligation: 350+ agencies (MOEF-designated AI safety management targets)
Global AI Market: $600B (2030 forecast; IDC, 2025)
KOTRA AI Tools Target: 12 tools (AI tools to be released sequentially by 2030)

The US Stargate Project: Opening of the AI Infrastructure Supremacy Race

In January 2025, President Trump announced the "Stargate" project, a joint initiative with OpenAI, SoftBank, and Oracle. The plan calls for up to USD 500 billion in investment over four years to build AI data centers and supercomputing infrastructure across the United States. The first phase — construction of ten NVIDIA GB200-based data centers in Abilene, Texas, targeting completion by 2026 — is already underway.

Stargate goes beyond a private investment — it is directly tied to the US government's AI hegemony strategy. If US AI companies such as OpenAI, Anthropic, and Microsoft come to dominate the foundational infrastructure of global AI services, public agencies and corporations worldwide will become increasingly dependent on US cloud and AI services. Korean public institutions are no exception. MOEF explicitly identifies finding the balance between reducing this dependency and accessing state-of-the-art AI capabilities as one of the central challenges of public sector AI policy.

Stargate Phase 1 (2025-2026): Infrastructure Foundation
Investment: First tranche of $100B deployed immediately
Key Partners: OpenAI, SoftBank, Oracle
Scale: 10 data centers in Texas under construction
Korea Impact: Growing public services dependent on OpenAI APIs
Stargate Phase 2 (2027-2028): Global AI Service Expansion
Investment: Additional $400B across tranches 2-4
Key Partners: NVIDIA, ARM, Microsoft linkage
Scale: 20+ national hubs completed
Korea Impact: US AI standards becoming global de facto norms
Korea's Response Strategy: Balancing Independence and Utilization
Domestic AI Development: Support for Korean LLMs and public-sector specialized models
Global AI Utilization: Permitting adoption of verified foreign AI APIs
Data Sovereignty: Strengthening regulations on cross-border public data transfer
Infrastructure Investment: Construction of 4 national AI computing centers

The DeepSeek Shock: Implications of Low-Cost, High-Performance AI for Public Agency Adoption

In January 2025, China's DeepSeek shocked the global AI industry with its R1 model. The claim of achieving reasoning performance matching OpenAI's o1 model at a training cost of just USD 6 million — combined with openly releasing the model weights for free use by anyone — fundamentally disrupted established assumptions. The V3 model delivered benchmark results surpassing GPT-4o in coding, mathematics, and natural language understanding, marking an inflection point where "democratization of AI costs" became real.

DeepSeek carries dual implications for Korean public sector AI policy. On the positive side, installing the open-source DeepSeek model on-premise within a public agency eliminates dependence on US cloud APIs while enabling high-performance AI at comparatively low cost. On the security and data sovereignty side, however, there are concerns about feeding public data into a Chinese AI model or using services connected to Chinese servers. MOEF's policy direction leaves open the possibility of using open-source models like DeepSeek, but presents rigorous security validation as a prerequisite.

DeepSeek Key Model Performance and Cost Comparison (as of 2025)
Model | Benchmark Performance | Training Cost | License | Public Agency Suitability
DeepSeek-R1 | AIME 79.8%, MATH-500 97.3% | Approx. $6M | MIT (open source) | On-premise deployment after security validation
DeepSeek-V3 | Coding/math surpasses GPT-4o | Approx. $5.5M | MIT (open source) | On-premise deployment after security validation
DeepSeek-R1-Distill | Qwen/Llama-based lightweight | Minimal additional cost | MIT (open source) | Edge server deployment feasible
GPT-4o (OpenAI) | Top-tier overall | API metered billing | Commercial API | Cloud dependency; personal data leakage risk
Claude 3.5 Sonnet | Top-tier reasoning and coding | API metered billing | Commercial API | Cloud dependency; US server storage
HyperCLOVA X (Naver) | Top-tier Korean language | High initial investment | Commercial (public contract) | Domestic servers; suitable for public procurement
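For agencies weighing the on-premise route described above, the sketch below shows one common way to run an open-weight DeepSeek distill model on an internal server with the Hugging Face transformers library, so that prompts and public data never leave the agency network. The model ID, prompt, and generation settings are illustrative assumptions rather than part of the MOEF guidance, and the sketch presumes a GPU server with the weights already downloaded.

```python
# Minimal sketch: running an open-weight DeepSeek distill model on an
# agency-controlled (on-premise) server with Hugging Face transformers.
# The model ID and prompt are illustrative; no outbound API calls are made.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # open-weight, MIT-licensed

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",   # pick fp16/bf16 automatically on GPU
    device_map="auto",    # spread layers across available devices (needs accelerate)
)

# A trade-support style prompt; the data stays inside the internal network.
messages = [
    {"role": "user",
     "content": "Summarize the key import regulations for textiles in Bangladesh."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same open weights can also be served behind an internal, OpenAI-compatible endpoint (for example with a local inference server), which lets existing tools switch from a US cloud API to the in-house model without code changes.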

The EU AI Act: A Regulatory Framework Korean Public Agencies Must Understand

The EU AI Act (Artificial Intelligence Act) entered into force in August 2024, and its prohibitions on unacceptable-risk AI practices began to apply in February 2025. Full obligations for high-risk AI systems take effect in August 2026, and once certain transitional exemptions expire in August 2027, an effectively comprehensive enforcement regime will be in place. Although the Act is EU legislation, it reaches AI systems placed on the EU market or whose outputs are used within the EU, so Korean export companies and public agencies face direct compliance obligations when they provide AI-based services to EU counterparts or target EU markets.

EU AI Act Risk-Based Regulatory Tiers
Unacceptable Risk
Social credit scoring, real-time remote biometric identification in public spaces, emotion recognition in workplaces and education → Prohibited in principle; applies from February 2025
High Risk
Recruitment, university admissions, credit, medical, judicial, educational AI → CE marking, conformity assessment, human oversight obligation; full application from August 2026
Limited Risk
Chatbots, AI-generated content → Transparency obligation (disclose AI nature); deepfake labeling requirement
Minimal Risk
Spam filters, AI games, etc. → Self-regulation; no separate obligations
General-Purpose AI (GPAI)
GPT-4, DeepSeek-class models → Transparency obligation; high-impact models treated as posing systemic risk face additional obligations (model evaluation, incident reporting, etc.)
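As a first-pass aid for mapping a planned AI use case onto these tiers before legal review, a simple lookup along the lines of the sketch below could be used; the categories, keywords, and helper function are hypothetical illustrations and not a legal determination under the Act.

```python
# First-pass screening of an AI use case against the EU AI Act risk tiers.
# The tier summaries mirror the list above; the use-case categories are
# simplified illustrations, not a legal classification.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited in principle (applies from Feb 2025)"
    HIGH = "CE marking, conformity assessment, human oversight (from Aug 2026)"
    LIMITED = "transparency obligation (disclose AI nature, label deepfakes)"
    MINIMAL = "self-regulation, no separate obligations"

# Illustrative mapping from use-case category to tier.
USE_CASE_TIERS = {
    "social_credit_scoring": RiskTier.UNACCEPTABLE,
    "realtime_biometric_surveillance": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "credit_decision_support": RiskTier.HIGH,
    "buyer_matching_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}

def screen(use_case: str) -> str:
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        return f"{use_case}: not mapped, escalate to legal/compliance review"
    return f"{use_case}: {tier.name} risk -> {tier.value}"

if __name__ == "__main__":
    for case in ("recruitment_screening", "buyer_matching_chatbot", "unknown_case"):
        print(screen(case))
```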

The EU AI Act compliance need for Korean public agencies arises through two pathways. First, when KOTRA, the Korea Trade Insurance Corporation, and similar agencies provide AI-based services — buyer matching, risk analysis, and so on — to customers or partners in EU member states, the extraterritorial application principle of the EU AI Act makes those AI systems subject to regulation. Second, when Korean companies operating in EU markets use KOTRA's AI support tools to transact with EU buyers, the AI compliance of the KOTRA platform may indirectly come under verification requirements. MOEF's policy direction actively recommends using EU AI Act standards as the reference framework for Korean public sector AI governance.

01. Direct EU AI Act Application Requirements for Korean Public Agencies
The types of public agency AI use to which EU high-risk AI regulations apply are as follows: ① AI used in hiring and personnel evaluation (high-risk); ② AI that determines eligibility for public service benefits (high-risk); ③ AI supporting law enforcement and judicial functions (high-risk); ④ AI evaluating educational training outcomes (high-risk). KOTRA's buyer matching and market analysis AI is likely to be classified as "limited risk" or "minimal risk," but AI that directly supports credit or investment decisions for EU companies may be reviewed as high-risk.
02. Obligations When Using General-Purpose AI (GPAI) Models
EU AI Act GPAI provisions become relevant when public agencies use general-purpose AI models such as GPT-4, DeepSeek V3, or Claude. The core obligations, which primarily fall on the model provider but matter to the agencies deploying those models, are: ① label AI-generated outputs as AI-generated; ② establish policies to prevent copyright infringement; ③ prepare technical documentation for the model. GPAI models presumed to pose systemic risk (training compute exceeding 10²⁵ FLOP) face additional obligations such as model evaluation, adversarial testing, and serious incident reporting. Korean public agencies must review in advance whether the AI models they use meet these standards.
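The 10²⁵ FLOP line can be sanity-checked with the widely used approximation that dense-model training compute is roughly 6 × parameters × training tokens. The helper below applies that heuristic to hypothetical model sizes; the figures are illustrative assumptions, not disclosed values for any named model, and actual determinations rest on the provider's own compute accounting.

```python
# Rough check of a model against the EU AI Act's 10^25 FLOP threshold for
# GPAI models presumed to carry systemic risk. Uses the common heuristic
# training_flop ~= 6 * parameters * training_tokens; this is an approximation.

GPAI_SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

def estimated_training_flop(params: float, tokens: float) -> float:
    """Approximate dense-training compute in FLOP (6ND heuristic)."""
    return 6.0 * params * tokens

def exceeds_threshold(params: float, tokens: float) -> bool:
    return estimated_training_flop(params, tokens) > GPAI_SYSTEMIC_RISK_THRESHOLD_FLOP

if __name__ == "__main__":
    # Hypothetical examples, not figures for any named model.
    examples = {
        "mid-size model (70B params, 2T tokens)": (70e9, 2e12),
        "frontier-scale model (1T params, 20T tokens)": (1e12, 20e12),
    }
    for name, (p, t) in examples.items():
        flop = estimated_training_flop(p, t)
        flag = "above" if exceeds_threshold(p, t) else "below"
        print(f"{name}: ~{flop:.2e} FLOP ({flag} the 1e25 threshold)")
```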
03. EU AI Act Compliance and Alignment with Korean AI Safety Regulations
Through the 2025 AI Basic Act Enforcement Decree, the Korean government has introduced a mandatory pre-deployment impact assessment and post-deployment audit system for high-risk AI, and discussions on mutual recognition arrangements with the EU AI Act are underway. Building system documentation and human oversight frameworks to EU AI Act standards therefore increasingly satisfies Korea's AI Basic Act requirements at the same time: rather than bearing a double regulatory burden, an agency can operate one unified compliance system that serves both regimes.
04. EU AI Act Penalty Levels for Violations
Using prohibited AI (unacceptable risk category) can result in fines of 7% of global annual revenue or EUR 35 million, whichever is higher. Violation of high-risk AI obligations: 3% or EUR 15 million. Providing false information: 1.5% or EUR 7.5 million. For Korean public agencies and companies connected to EU markets, this represents a realistic risk even if indirect. Especially after 2027, when KOTRA expands AI services targeting EU companies, EU AI Act compliance will become a non-negotiable operational requirement.
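Because each fine is defined as a percentage of global annual revenue or a fixed euro amount, whichever is higher, exposure scales with company size. The short helper below, using hypothetical revenue figures, shows how that cap plays out in practice.

```python
# EU AI Act administrative fines: the applicable cap is the HIGHER of a
# percentage of global annual turnover and a fixed euro amount.
# Tiers follow the figures cited in the text; revenues are hypothetical.

FINE_TIERS = {
    "prohibited_ai": (0.07, 35_000_000),          # 7% or EUR 35M
    "high_risk_obligations": (0.03, 15_000_000),  # 3% or EUR 15M
    "false_information": (0.015, 7_500_000),      # 1.5% or EUR 7.5M
}

def max_fine_eur(violation: str, global_annual_revenue_eur: float) -> float:
    pct, floor_eur = FINE_TIERS[violation]
    return max(pct * global_annual_revenue_eur, floor_eur)

if __name__ == "__main__":
    for revenue in (50_000_000, 2_000_000_000):  # EUR 50M vs EUR 2B turnover
        fine = max_fine_eur("prohibited_ai", revenue)
        print(f"Revenue EUR {revenue:,}: maximum fine EUR {fine:,.0f}")
```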
Complete Analysis: KOTRA AI Trade and Investment Infrastructure Plan. Explore KOTRA's 2025-2030 AI infrastructure expansion roadmap (TriBIG big data platform, Navi AI buyer matching, DX Innovation Lab) in the context of the EU AI Act and the global regulatory environment.

MOEF New Government Public Institution AI Policy Direction: Core Content Analysis

Following the inauguration of the new government, MOEF announced the "AI Utilization Policy Direction for Public Institutions," presenting three core pillars: active utilization (accelerating AI adoption), safe management (minimizing risk), and productivity innovation (improving operational efficiency). This direction was formulated with direct reference to the three global variables — Stargate, DeepSeek, and the EU AI Act — targeting a balance that keeps pace with global AI competition while maintaining public trustworthiness.

MOEF Public Institution AI Utilization Policy Direction: Core Tasks (2025-2027)
Task Area | Details | Target | Implementation Timeline
AI Adoption Acceleration | Establish AI-First principle; target 10pp annual increase in task automation rate | All public institutions | Immediate from 2025
Generative AI Utilization | Deploy guidelines for ChatGPT-type generative AI in government work; define permissible scope by security level | Individual staff usage | 2025 Q1
Public-Sector Specialized AI | Support fine-tuned model development for institution-specific tasks; provide AI cloud platform | Medium-to-large public institutions | 2025-2026
AI Safety Management Framework | Mandatory AI impact assessment; set algorithm audit cycles; Human-in-the-Loop (HITL) procedures | Institutions using high-risk AI | H1 2026
Data Governance | Standardize procedures for using public data in AI training; personal data de-identification requirements | Data-holding institutions | 2025-2026
AI Talent Development | Train 2,000 AI specialists across public institutions; AI literacy education for all staff | All public institutions | 2025-2028
Performance Measurement | Standardize AI adoption KPIs; incorporate AI utilization metrics into institutional management evaluations | Management evaluation targets | Reflected in 2026 evaluation
International Cooperation | Adopt OECD AI Principles and EU AI Act standards; participate in G20 AI governance cooperation | Institutions with external AI service linkages | 2025-2027

The most notable shift in the policy direction is the "AI-First" principle. Whereas new work systems were previously built on traditional IT approaches with AI as an optional add-on, every new system or service design must now begin with the question: "Can AI handle this?" KOTRA's Navi AI buyer matching and TriBIG market analysis platform are emerging as flagship examples of the AI-First principle and a model for AI adoption by other public agencies.

Public Institution AI Safety Management Framework: Practical Implementation Guide

One of the core elements of MOEF's policy direction is the mandatory AI safety management framework. From the first half of 2026, public agencies using high-risk AI must conduct a pre-deployment AI Impact Assessment and publish the results. The standard for "high-risk" is broadly aligned with the EU AI Act, with additional Korea-specific criteria covering administrative service AI and AI processing large volumes of personal data.

Public Institution AI Deployment Safety Management Procedure (MOEF Guidelines)
Step 1: AI Impact Assessment
Pre-deployment risk level classification → commission specialized agency assessment for high-risk cases → publish results
Step 2: Security Suitability Verification
NIS AI security certification or internal security review → obtain public network connection approval
Step 3: Human Oversight Framework
Design HITL (Human-in-the-Loop) procedures → designate AI decision review officer → establish objection channel
Step 4: Operational Monitoring
Monitor performance degradation, bias, and malfunction → quarterly algorithm audit → immediate service suspension protocol on anomaly
Step 5: Post-Evaluation and Improvement
Prepare and publish annual AI performance report → undergo external audit → incorporate improvements into following year plan
Low-Risk AI Use (Lightweight Procedure)
Examples: Document summarization, translation, scheduling, Q&A chatbot
Impact Assessment: Internal checklist (within 2 weeks)
Security Verification: Internal institutional security review
Disclosure Obligation: Annual operational status report
Medium-Risk AI Use (Standard Procedure)
Examples: Civil complaint classification, subsidy screening support, recruitment document filtering
Impact Assessment: Specialist-participated assessment (4-8 weeks)
Security Verification: KISA AI security review
Disclosure Obligation: Assessment results and operational status published
High-Risk AI Use (Enhanced Procedure)
Examples: Welfare benefit determination, civil servant performance evaluation, law enforcement support
Impact Assessment: Independent agency in-depth assessment plus National Assembly reporting
Security Verification: NIS AI security certification mandatory
Disclosure Obligation: Algorithm explanation and objection procedure subject to mandatory public disclosure
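One way an agency could operationalize these tiered procedures is to encode them as a simple configuration structure, roughly as sketched below; the field names and durations paraphrase the guideline summary above and are not an official MOEF schema.

```python
# Sketch of the MOEF tiered safety-management procedure as a config structure.
# Field names and details paraphrase the guideline summary above; this is an
# illustrative internal representation, not an official MOEF schema.
from dataclasses import dataclass

@dataclass
class SafetyProcedure:
    risk_level: str
    impact_assessment: str
    security_verification: str
    disclosure_obligation: str

PROCEDURES = {
    "low": SafetyProcedure(
        risk_level="low (lightweight procedure)",
        impact_assessment="internal checklist (within 2 weeks)",
        security_verification="internal institutional security review",
        disclosure_obligation="annual operational status report",
    ),
    "medium": SafetyProcedure(
        risk_level="medium (standard procedure)",
        impact_assessment="specialist-participated assessment (4-8 weeks)",
        security_verification="KISA AI security review",
        disclosure_obligation="publish assessment results and operational status",
    ),
    "high": SafetyProcedure(
        risk_level="high (enhanced procedure)",
        impact_assessment="independent in-depth assessment + National Assembly reporting",
        security_verification="NIS AI security certification (mandatory)",
        disclosure_obligation="publish algorithm explanation and objection procedure",
    ),
}

def required_steps(risk: str) -> SafetyProcedure:
    """Look up the procedure an AI service must complete before deployment."""
    return PROCEDURES[risk]

if __name__ == "__main__":
    proc = required_steps("medium")  # e.g. a civil complaint classification model
    print(proc.impact_assessment, "/", proc.security_verification)
```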
KOTRA AI Trade and Investment Infrastructure 2025-2030 Roadmap: A comprehensive overview of KOTRA's three AI infrastructure pillars (TriBIG, Navi, and the DX Innovation Lab), plus the 2027 platform integration roadmap and Bangladesh trade support field application plans.

The three global forces of Stargate, DeepSeek, and the EU AI Act are exerting pressure on Korean public sector AI policy from different directions simultaneously. Stargate deepens Korean dependency as US AI infrastructure dominance consolidates; DeepSeek raises the possibility of public agencies running their own AI at low cost; and the EU AI Act introduces new compliance burdens. MOEF's AI-First principle and safety management framework represent a structural response designed to manage these three forces in balance. Trade and investment public agencies including KOTRA must accelerate AI adoption on this policy foundation while upholding three principles: data sovereignty, algorithmic transparency, and human oversight. Overseas outposts such as the KOTRA Dhaka Trade Center serve as the field-level embodiments of this AI infrastructure — taking on the role of proactively adapting to a shifting global regulatory environment.

Tags: AI policy, public sector AI, Stargate, DeepSeek, EU AI Act, AI safety, MOEF, AI governance, public AI, KOTRA