Defense AI Procurement: Why Sovereignty Is Now a Baseline Requirement
Defense and dual-use technology procurement has always been cautious about data sovereignty. The concern is not abstract: operational data, intelligence analysis, logistics systems, and planning tools all carry classification levels and handling requirements that reflect the real-world consequences of their exposure. The procurement frameworks that govern defense technology acquisition were built around this reality.
What has changed is that AI systems now touch these data categories at a scale and depth that previous software categories did not. A conventional logistics planning tool accesses specific operational data only when queried. An AI system that assists with planning, analysis, or decision support may ingest, index, and retrieve vast volumes of sensitive documentation as part of its normal operation. The sovereignty requirements that applied narrowly to specific classified systems must now apply broadly to AI infrastructure — and the procurement frameworks are catching up.
The Regulatory Shift: Sovereignty as Baseline
Across European defense and dual-use contexts, the past two years have seen a consistent movement: data sovereignty requirements shifting from optional premium features to standard procurement requirements. This is visible in procurement language, in the security frameworks that govern AI deployment for defense-adjacent organisations, and in the regulatory guidance emerging from national cybersecurity authorities.
The NIS2 Directive, which extends cybersecurity requirements across critical infrastructure sectors including defense-adjacent industries, requires that organisations manage risks in supply chains and across third-party service providers. For AI systems, this translates to requiring that organisations understand and control where AI inference occurs and what external dependencies exist. A system that sends queries to an external cloud provider for inference creates a supply chain dependency that NIS2 requires to be risk-assessed — and that assessment, in defense-adjacent contexts, tends to produce unfavourable conclusions about cloud AI.
France's SecNumCloud certification framework provides a concrete example of how national sovereignty requirements translate into AI procurement criteria. SecNumCloud requires, among other conditions, that service providers be free from extra-European legal obligations that could compel disclosure of data to foreign jurisdictions. This requirement effectively excludes cloud AI services from providers subject to extra-European legal frameworks — which includes the majority of commercial AI providers. For French defense-adjacent organisations, SecNumCloud alignment is increasingly a procurement requirement rather than an optional certification.
Similar frameworks are developing across other European jurisdictions, with varying specific requirements but consistent underlying logic: AI systems that handle sensitive data must operate in environments where sovereignty can be verified, not simply asserted by a vendor.
What Sovereignty Actually Means for AI Systems
Data sovereignty in the context of AI systems has three dimensions that must each be addressed independently.
Where data is stored. This is the dimension that procurement conversations most commonly address. Specifying that data must be stored within a particular geographic region is a standard contractual requirement. For AI systems, this applies to both the original documents and the derived representations — embeddings, indices, fine-tuning datasets. All of these contain information derived from the original sensitive data and must be subject to the same sovereignty requirements.
Where inference runs. This is the dimension that procurement conversations most often overlook. Inference — the process by which an AI system generates a response to a query — requires that the query content and retrieved document context be processed by the AI model. If inference runs on a cloud provider's infrastructure, query content and retrieved context leave the organisation's sovereignty perimeter during processing, even if the original documents are stored within it. Sovereign storage of documents combined with external inference is not sovereign AI — it is sovereign storage with a sovereignty gap at the point of use.
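To make the sovereignty gap concrete, here is a minimal sketch of where query content travels in a typical retrieval-augmented pipeline. All endpoint URLs and function names are illustrative assumptions, not any real product's API — the point is only that the inference request bundles the query and the retrieved context, so where the endpoint resolves determines whether that bundle exits the perimeter.

```python
from urllib.parse import urlparse

def build_inference_request(query: str, retrieved_docs: list[str], inference_url: str) -> dict:
    """Assemble the payload a RAG pipeline would POST to its inference endpoint.

    The documents may be stored on-premise, but the query AND the retrieved
    context are serialised into this request at the point of use.
    """
    return {"url": inference_url, "body": {"query": query, "context": retrieved_docs}}

def leaves_perimeter(request: dict, internal_hosts: set[str]) -> bool:
    """True if the request would travel to a host outside the organisation."""
    return urlparse(request["url"]).hostname not in internal_hosts

internal = {"llm.internal.example"}  # hypothetical on-premise inference host
docs = ["[retrieved planning document excerpt]"]

on_prem = build_inference_request("resupply options", docs, "https://llm.internal.example/v1/generate")
cloud = build_inference_request("resupply options", docs, "https://api.cloud-llm.example/v1/generate")

print(leaves_perimeter(on_prem, internal))  # False: inference stays inside the perimeter
print(leaves_perimeter(cloud, internal))    # True: query and context exit at inference time
```

Note that nothing about the storage layer changes between the two cases; only the inference endpoint does. That is the architectural fact a procurement review needs to verify.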
Who can access logs. AI systems generate logs: query logs, retrieval logs, output logs. These logs are potentially as sensitive as the documents themselves — they record what questions are being asked, what documents are being retrieved, and what outputs are being generated. In operational contexts, query patterns can reveal planning priorities, capability gaps, and analytical focus areas that are themselves sensitive. Sovereign control over AI logs means that log storage, access, and audit functions all operate within the same sovereignty perimeter as the data they record.
The Cloud AI Problem in Defense-Adjacent Contexts
The cloud AI problem for defense-adjacent organisations is not primarily a question of cloud providers' security competence. Major cloud providers operate sophisticated security programs. The problem is structural: cloud infrastructure, by design, creates dependencies on provider operations, provider personnel, and provider legal obligations that are incompatible with the sovereignty requirements that defense and dual-use contexts impose.
Even private cloud deployments — dedicated infrastructure within a cloud provider's data centre — retain dependencies on provider personnel for maintenance, on provider security controls for physical and logical protection, and on provider legal obligations under the laws of their country of incorporation. A cloud provider incorporated under a jurisdiction with broad data access laws creates a potential legal pathway to data access that exists regardless of the provider's contractual commitments to the customer. Contractual protections cannot override legal obligations.
This is the structural argument for on-premise deployment in defense-adjacent AI contexts: it removes the external legal dependency entirely. On-premise AI infrastructure is operated under the sovereignty of the deploying organisation and the jurisdiction in which that organisation operates. There is no external provider whose legal obligations create a potential access pathway. The sovereignty of the infrastructure matches the sovereignty requirements of the data it processes.
What a Sovereign AI Procurement Checklist Looks Like
Procurement teams evaluating AI for defense-adjacent use are increasingly using structured checklists to assess sovereignty compliance. The critical questions for any AI procurement in this context include:
Inference location. Does inference run on the organisation's own infrastructure, on dedicated on-premise hardware, or on cloud provider infrastructure? Can inference location be verified through architecture documentation rather than contractual assertion? Is there any query content or retrieved context that leaves the organisation's infrastructure during inference?
Embedding sovereignty. Where are document embeddings stored? Are they in the same sovereignty perimeter as the original documents? Who has access to the embedding store? What are the vendor's obligations if the embedding store is compromised?
Log governance. Where are query and retrieval logs stored? Who can access them? Under what conditions, if any, can provider personnel access log content? Are logs subject to legal holds or disclosure obligations under provider jurisdiction?
Supply chain transparency. What third-party components or services are used in the AI pipeline? Do those components create their own external dependencies? Are all components in the inference pipeline subject to the same sovereignty requirements as the AI system itself?
Certification alignment. Does the AI system or its deployment architecture align with relevant national sovereignty certifications? Can compliance with applicable frameworks — NIS2 requirements, national cloud security certifications, sector-specific security standards — be demonstrated through audit rather than assertion?
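The checklist above can be operationalised as machine-checkable criteria, so a procurement review produces a verifiable record rather than a narrative assessment. The sketch below is a hypothetical illustration — the field names are assumptions, not a standard schema — mapping each checklist area to a pass/fail criterion.

```python
# Hypothetical sovereignty criteria, one per checklist area above.
# Field names are illustrative, not drawn from any formal framework.
REQUIRED = {
    "inference_on_premise": True,       # inference runs on the organisation's infrastructure
    "embeddings_in_perimeter": True,    # derived representations stay in the sovereignty perimeter
    "logs_in_perimeter": True,          # query/retrieval logs under organisational control
    "no_external_pipeline_deps": True,  # no third-party components outside the perimeter
    "certification_auditable": True,    # compliance demonstrable through audit, not assertion
}

def assess(vendor_answers: dict) -> list[str]:
    """Return the sovereignty criteria a vendor response fails to meet."""
    return [k for k, required in REQUIRED.items() if vendor_answers.get(k) != required]

# Example: sovereign storage with cloud inference and provider-hosted logs.
vendor = {
    "inference_on_premise": False,
    "embeddings_in_perimeter": True,
    "logs_in_perimeter": False,
    "no_external_pipeline_deps": True,
    "certification_auditable": True,
}
print(assess(vendor))  # ['inference_on_premise', 'logs_in_perimeter']
```

The design choice worth noting is that each criterion is binary and architecture-level: either inference runs inside the perimeter or it does not, which is exactly the shift from policy commitments to verifiable requirements described below.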
The pattern in defense-adjacent procurement is clear: sovereignty requirements are being operationalised into specific, verifiable architectural requirements rather than broad policy commitments. The case for on-premise AI deployment in regulated industries applies with particular force in defense contexts, where the consequences of sovereignty failure are not merely regulatory but operational. Organisations that have not yet aligned their AI procurement criteria with this emerging standard are likely to find that it becomes mandatory in upcoming procurement cycles, as the frameworks that are currently guidance become requirements.
To see how Scabera approaches sovereign AI deployment for defense-adjacent organisations, book a demo.