mirror of
https://github.com/raspberrypi/linux.git
synced 2025-12-14 05:49:55 +00:00
Pull kvm updates from Paolo Bonzini:
"ARM:
- Initial infrastructure for shadow stage-2 MMUs, as part of nested
virtualization enablement
- Support for userspace changes to the guest CTR_EL0 value, enabling
(in part) migration of VMs between heterogenous hardware
- Fixes + improvements to pKVM's FF-A proxy, adding support for v1.1
of the protocol
- FPSIMD/SVE support for nested, including merged trap configuration
and exception routing
- New command-line parameter to control the WFx trap behavior under
KVM
- Introduce kCFI hardening in the EL2 hypervisor
- Fixes + cleanups for handling presence/absence of FEAT_TCRX
- Miscellaneous fixes + documentation updates
LoongArch:
- Add paravirt steal time support
- Add support for KVM_DIRTY_LOG_INITIALLY_SET
- Add perf kvm-stat support for loongarch
RISC-V:
- Redirect AMO load/store access fault traps to guest
- perf kvm stat support
- Use guest files for IMSIC virtualization, when available
s390:
- Assortment of tiny fixes which are not time critical
x86:
- Fixes for Xen emulation
- Add a global struct to consolidate tracking of host values, e.g.
EFER
- Add KVM_CAP_X86_APIC_BUS_CYCLES_NS to allow configuring the
effective APIC bus frequency, because TDX
- Print the name of the APICv/AVIC inhibits in the relevant
tracepoint
- Clean up KVM's handling of vendor specific emulation to
consistently act on "compatible with Intel/AMD", versus checking
for a specific vendor
- Drop MTRR virtualization, and instead always honor guest PAT on
CPUs that support self-snoop
- Update to the newfangled Intel CPU FMS infrastructure
- Don't advertise IA32_PERF_GLOBAL_OVF_CTRL as an MSR-to-be-saved, as
it reads '0' and writes from userspace are ignored
- Misc cleanups
x86 - MMU:
- Small cleanups, renames and refactoring extracted from the upcoming
Intel TDX support
- Don't allocate kvm_mmu_page.shadowed_translation for shadow pages
that can't hold leafs SPTEs
- Unconditionally drop mmu_lock when allocating TDP MMU page tables
for eager page splitting, to avoid stalling vCPUs when splitting
huge pages
- Bug the VM instead of simply warning if KVM tries to split a SPTE
that is non-present or not-huge. KVM is guaranteed to end up in a
broken state because the callers fully expect a valid SPTE, it's
all but dangerous to let more MMU changes happen afterwards
x86 - AMD:
- Make per-CPU save_area allocations NUMA-aware
- Force sev_es_host_save_area() to be inlined to avoid calling into
an instrumentable function from noinstr code
- Base support for running SEV-SNP guests. API-wise, this includes a
new KVM_X86_SNP_VM type, encrypting/measure the initial image into
guest memory, and finalizing it before launching it. Internally,
there are some gmem/mmu hooks needed to prepare gmem-allocated
pages before mapping them into guest private memory ranges
This includes basic support for attestation guest requests, enough
to say that KVM supports the GHCB 2.0 specification
There is no support yet for loading into the firmware those signing
keys to be used for attestation requests, and therefore no need yet
for the host to provide certificate data for those keys.
To support fetching certificate data from userspace, a new KVM exit
type will be needed to handle fetching the certificate from
userspace.
An attempt to define a new KVM_EXIT_COCO / KVM_EXIT_COCO_REQ_CERTS
exit type to handle this was introduced in v1 of this patchset, but
is still being discussed by community, so for now this patchset
only implements a stub version of SNP Extended Guest Requests that
does not provide certificate data
x86 - Intel:
- Remove an unnecessary EPT TLB flush when enabling hardware
- Fix a series of bugs that cause KVM to fail to detect nested
pending posted interrupts as valid wake eents for a vCPU executing
HLT in L2 (with HLT-exiting disable by L1)
- KVM: x86: Suppress MMIO that is triggered during task switch
emulation
Explicitly suppress userspace emulated MMIO exits that are
triggered when emulating a task switch as KVM doesn't support
userspace MMIO during complex (multi-step) emulation
Silently ignoring the exit request can result in the
WARN_ON_ONCE(vcpu->mmio_needed) firing if KVM exits to userspace
for some other reason prior to purging mmio_needed
See commit 0dc902267c ("KVM: x86: Suppress pending MMIO write
exits if emulator detects exception") for more details on KVM's
limitations with respect to emulated MMIO during complex emulator
flows
Generic:
- Rename the AS_UNMOVABLE flag that was introduced for KVM to
AS_INACCESSIBLE, because the special casing needed by these pages
is not due to just unmovability (and in fact they are only
unmovable because the CPU cannot access them)
- New ioctl to populate the KVM page tables in advance, which is
useful to mitigate KVM page faults during guest boot or after live
migration. The code will also be used by TDX, but (probably) not
through the ioctl
- Enable halt poll shrinking by default, as Intel found it to be a
clear win
- Setup empty IRQ routing when creating a VM to avoid having to
synchronize SRCU when creating a split IRQCHIP on x86
- Rework the sched_in/out() paths to replace kvm_arch_sched_in() with
a flag that arch code can use for hooking both sched_in() and
sched_out()
- Take the vCPU @id as an "unsigned long" instead of "u32" to avoid
truncating a bogus value from userspace, e.g. to help userspace
detect bugs
- Mark a vCPU as preempted if and only if it's scheduled out while in
the KVM_RUN loop, e.g. to avoid marking it preempted and thus
writing guest memory when retrieving guest state during live
migration blackout
Selftests:
- Remove dead code in the memslot modification stress test
- Treat "branch instructions retired" as supported on all AMD Family
17h+ CPUs
- Print the guest pseudo-RNG seed only when it changes, to avoid
spamming the log for tests that create lots of VMs
- Make the PMU counters test less flaky when counting LLC cache
misses by doing CLFLUSH{OPT} in every loop iteration"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (227 commits)
crypto: ccp: Add the SNP_VLEK_LOAD command
KVM: x86/pmu: Add kvm_pmu_call() to simplify static calls of kvm_pmu_ops
KVM: x86: Introduce kvm_x86_call() to simplify static calls of kvm_x86_ops
KVM: x86: Replace static_call_cond() with static_call()
KVM: SEV: Provide support for SNP_EXTENDED_GUEST_REQUEST NAE event
x86/sev: Move sev_guest.h into common SEV header
KVM: SEV: Provide support for SNP_GUEST_REQUEST NAE event
KVM: x86: Suppress MMIO that is triggered during task switch emulation
KVM: x86/mmu: Clean up make_huge_page_split_spte() definition and intro
KVM: x86/mmu: Bug the VM if KVM tries to split a !hugepage SPTE
KVM: selftests: x86: Add test for KVM_PRE_FAULT_MEMORY
KVM: x86: Implement kvm_arch_vcpu_pre_fault_memory()
KVM: x86/mmu: Make kvm_mmu_do_page_fault() return mapped level
KVM: x86/mmu: Account pf_{fixed,emulate,spurious} in callers of "do page fault"
KVM: x86/mmu: Bump pf_taken stat only in the "real" page fault handler
KVM: Add KVM_PRE_FAULT_MEMORY vcpu ioctl to pre-populate guest memory
KVM: Document KVM_PRE_FAULT_MEMORY ioctl
mm, virt: merge AS_UNMOVABLE and AS_INACCESSIBLE
perf kvm: Add kvm-stat for loongarch64
LoongArch: KVM: Add PV steal time support in guest side
...
468 lines
12 KiB
C
468 lines
12 KiB
C
/* SPDX-License-Identifier: GPL-2.0 */
|
|
/*
|
|
* AMD Encrypted Register State Support
|
|
*
|
|
* Author: Joerg Roedel <jroedel@suse.de>
|
|
*/
|
|
|
|
#ifndef __ASM_ENCRYPTED_STATE_H
|
|
#define __ASM_ENCRYPTED_STATE_H
|
|
|
|
#include <linux/types.h>
|
|
#include <linux/sev-guest.h>
|
|
|
|
#include <asm/insn.h>
|
|
#include <asm/sev-common.h>
|
|
#include <asm/coco.h>
|
|
|
|
#define GHCB_PROTOCOL_MIN 1ULL
|
|
#define GHCB_PROTOCOL_MAX 2ULL
|
|
#define GHCB_DEFAULT_USAGE 0ULL
|
|
|
|
#define VMGEXIT() { asm volatile("rep; vmmcall\n\r"); }
|
|
|
|
struct boot_params;
|
|
|
|
enum es_result {
|
|
ES_OK, /* All good */
|
|
ES_UNSUPPORTED, /* Requested operation not supported */
|
|
ES_VMM_ERROR, /* Unexpected state from the VMM */
|
|
ES_DECODE_FAILED, /* Instruction decoding failed */
|
|
ES_EXCEPTION, /* Instruction caused exception */
|
|
ES_RETRY, /* Retry instruction emulation */
|
|
};
|
|
|
|
struct es_fault_info {
|
|
unsigned long vector;
|
|
unsigned long error_code;
|
|
unsigned long cr2;
|
|
};
|
|
|
|
struct pt_regs;
|
|
|
|
/* ES instruction emulation context */
|
|
struct es_em_ctxt {
|
|
struct pt_regs *regs;
|
|
struct insn insn;
|
|
struct es_fault_info fi;
|
|
};
|
|
|
|
/*
|
|
* AMD SEV Confidential computing blob structure. The structure is
|
|
* defined in OVMF UEFI firmware header:
|
|
* https://github.com/tianocore/edk2/blob/master/OvmfPkg/Include/Guid/ConfidentialComputingSevSnpBlob.h
|
|
*/
|
|
#define CC_BLOB_SEV_HDR_MAGIC 0x45444d41
|
|
struct cc_blob_sev_info {
|
|
u32 magic;
|
|
u16 version;
|
|
u16 reserved;
|
|
u64 secrets_phys;
|
|
u32 secrets_len;
|
|
u32 rsvd1;
|
|
u64 cpuid_phys;
|
|
u32 cpuid_len;
|
|
u32 rsvd2;
|
|
} __packed;
|
|
|
|
void do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code);
|
|
|
|
static inline u64 lower_bits(u64 val, unsigned int bits)
|
|
{
|
|
u64 mask = (1ULL << bits) - 1;
|
|
|
|
return (val & mask);
|
|
}
|
|
|
|
struct real_mode_header;
|
|
enum stack_type;
|
|
|
|
/* Early IDT entry points for #VC handler */
|
|
extern void vc_no_ghcb(void);
|
|
extern void vc_boot_ghcb(void);
|
|
extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
|
|
|
|
/* PVALIDATE return codes */
|
|
#define PVALIDATE_FAIL_SIZEMISMATCH 6
|
|
|
|
/* Software defined (when rFlags.CF = 1) */
|
|
#define PVALIDATE_FAIL_NOUPDATE 255
|
|
|
|
/* RMUPDATE detected 4K page and 2MB page overlap. */
|
|
#define RMPUPDATE_FAIL_OVERLAP 4
|
|
|
|
/* PSMASH failed due to concurrent access by another CPU */
|
|
#define PSMASH_FAIL_INUSE 3
|
|
|
|
/* RMP page size */
|
|
#define RMP_PG_SIZE_4K 0
|
|
#define RMP_PG_SIZE_2M 1
|
|
#define RMP_TO_PG_LEVEL(level) (((level) == RMP_PG_SIZE_4K) ? PG_LEVEL_4K : PG_LEVEL_2M)
|
|
#define PG_LEVEL_TO_RMP(level) (((level) == PG_LEVEL_4K) ? RMP_PG_SIZE_4K : RMP_PG_SIZE_2M)
|
|
|
|
struct rmp_state {
|
|
u64 gpa;
|
|
u8 assigned;
|
|
u8 pagesize;
|
|
u8 immutable;
|
|
u8 rsvd;
|
|
u32 asid;
|
|
} __packed;
|
|
|
|
#define RMPADJUST_VMSA_PAGE_BIT BIT(16)
|
|
|
|
/* SNP Guest message request */
|
|
struct snp_req_data {
|
|
unsigned long req_gpa;
|
|
unsigned long resp_gpa;
|
|
unsigned long data_gpa;
|
|
unsigned int data_npages;
|
|
};
|
|
|
|
#define MAX_AUTHTAG_LEN 32
|
|
|
|
/* See SNP spec SNP_GUEST_REQUEST section for the structure */
|
|
enum msg_type {
|
|
SNP_MSG_TYPE_INVALID = 0,
|
|
SNP_MSG_CPUID_REQ,
|
|
SNP_MSG_CPUID_RSP,
|
|
SNP_MSG_KEY_REQ,
|
|
SNP_MSG_KEY_RSP,
|
|
SNP_MSG_REPORT_REQ,
|
|
SNP_MSG_REPORT_RSP,
|
|
SNP_MSG_EXPORT_REQ,
|
|
SNP_MSG_EXPORT_RSP,
|
|
SNP_MSG_IMPORT_REQ,
|
|
SNP_MSG_IMPORT_RSP,
|
|
SNP_MSG_ABSORB_REQ,
|
|
SNP_MSG_ABSORB_RSP,
|
|
SNP_MSG_VMRK_REQ,
|
|
SNP_MSG_VMRK_RSP,
|
|
|
|
SNP_MSG_TYPE_MAX
|
|
};
|
|
|
|
enum aead_algo {
|
|
SNP_AEAD_INVALID,
|
|
SNP_AEAD_AES_256_GCM,
|
|
};
|
|
|
|
struct snp_guest_msg_hdr {
|
|
u8 authtag[MAX_AUTHTAG_LEN];
|
|
u64 msg_seqno;
|
|
u8 rsvd1[8];
|
|
u8 algo;
|
|
u8 hdr_version;
|
|
u16 hdr_sz;
|
|
u8 msg_type;
|
|
u8 msg_version;
|
|
u16 msg_sz;
|
|
u32 rsvd2;
|
|
u8 msg_vmpck;
|
|
u8 rsvd3[35];
|
|
} __packed;
|
|
|
|
struct snp_guest_msg {
|
|
struct snp_guest_msg_hdr hdr;
|
|
u8 payload[4000];
|
|
} __packed;
|
|
|
|
struct sev_guest_platform_data {
|
|
u64 secrets_gpa;
|
|
};
|
|
|
|
/*
|
|
* The secrets page contains 96-bytes of reserved field that can be used by
|
|
* the guest OS. The guest OS uses the area to save the message sequence
|
|
* number for each VMPCK.
|
|
*
|
|
* See the GHCB spec section Secret page layout for the format for this area.
|
|
*/
|
|
struct secrets_os_area {
|
|
u32 msg_seqno_0;
|
|
u32 msg_seqno_1;
|
|
u32 msg_seqno_2;
|
|
u32 msg_seqno_3;
|
|
u64 ap_jump_table_pa;
|
|
u8 rsvd[40];
|
|
u8 guest_usage[32];
|
|
} __packed;
|
|
|
|
#define VMPCK_KEY_LEN 32
|
|
|
|
/* See the SNP spec version 0.9 for secrets page format */
|
|
struct snp_secrets_page {
|
|
u32 version;
|
|
u32 imien : 1,
|
|
rsvd1 : 31;
|
|
u32 fms;
|
|
u32 rsvd2;
|
|
u8 gosvw[16];
|
|
u8 vmpck0[VMPCK_KEY_LEN];
|
|
u8 vmpck1[VMPCK_KEY_LEN];
|
|
u8 vmpck2[VMPCK_KEY_LEN];
|
|
u8 vmpck3[VMPCK_KEY_LEN];
|
|
struct secrets_os_area os_area;
|
|
|
|
u8 vmsa_tweak_bitmap[64];
|
|
|
|
/* SVSM fields */
|
|
u64 svsm_base;
|
|
u64 svsm_size;
|
|
u64 svsm_caa;
|
|
u32 svsm_max_version;
|
|
u8 svsm_guest_vmpl;
|
|
u8 rsvd3[3];
|
|
|
|
/* Remainder of page */
|
|
u8 rsvd4[3744];
|
|
} __packed;
|
|
|
|
/*
|
|
* The SVSM Calling Area (CA) related structures.
|
|
*/
|
|
struct svsm_ca {
|
|
u8 call_pending;
|
|
u8 mem_available;
|
|
u8 rsvd1[6];
|
|
|
|
u8 svsm_buffer[PAGE_SIZE - 8];
|
|
};
|
|
|
|
#define SVSM_SUCCESS 0
|
|
#define SVSM_ERR_INCOMPLETE 0x80000000
|
|
#define SVSM_ERR_UNSUPPORTED_PROTOCOL 0x80000001
|
|
#define SVSM_ERR_UNSUPPORTED_CALL 0x80000002
|
|
#define SVSM_ERR_INVALID_ADDRESS 0x80000003
|
|
#define SVSM_ERR_INVALID_FORMAT 0x80000004
|
|
#define SVSM_ERR_INVALID_PARAMETER 0x80000005
|
|
#define SVSM_ERR_INVALID_REQUEST 0x80000006
|
|
#define SVSM_ERR_BUSY 0x80000007
|
|
#define SVSM_PVALIDATE_FAIL_SIZEMISMATCH 0x80001006
|
|
|
|
/*
|
|
* The SVSM PVALIDATE related structures
|
|
*/
|
|
struct svsm_pvalidate_entry {
|
|
u64 page_size : 2,
|
|
action : 1,
|
|
ignore_cf : 1,
|
|
rsvd : 8,
|
|
pfn : 52;
|
|
};
|
|
|
|
struct svsm_pvalidate_call {
|
|
u16 num_entries;
|
|
u16 cur_index;
|
|
|
|
u8 rsvd1[4];
|
|
|
|
struct svsm_pvalidate_entry entry[];
|
|
};
|
|
|
|
#define SVSM_PVALIDATE_MAX_COUNT ((sizeof_field(struct svsm_ca, svsm_buffer) - \
|
|
offsetof(struct svsm_pvalidate_call, entry)) / \
|
|
sizeof(struct svsm_pvalidate_entry))
|
|
|
|
/*
|
|
* The SVSM Attestation related structures
|
|
*/
|
|
struct svsm_loc_entry {
|
|
u64 pa;
|
|
u32 len;
|
|
u8 rsvd[4];
|
|
};
|
|
|
|
struct svsm_attest_call {
|
|
struct svsm_loc_entry report_buf;
|
|
struct svsm_loc_entry nonce;
|
|
struct svsm_loc_entry manifest_buf;
|
|
struct svsm_loc_entry certificates_buf;
|
|
|
|
/* For attesting a single service */
|
|
u8 service_guid[16];
|
|
u32 service_manifest_ver;
|
|
u8 rsvd[4];
|
|
};
|
|
|
|
/*
|
|
* SVSM protocol structure
|
|
*/
|
|
struct svsm_call {
|
|
struct svsm_ca *caa;
|
|
u64 rax;
|
|
u64 rcx;
|
|
u64 rdx;
|
|
u64 r8;
|
|
u64 r9;
|
|
u64 rax_out;
|
|
u64 rcx_out;
|
|
u64 rdx_out;
|
|
u64 r8_out;
|
|
u64 r9_out;
|
|
};
|
|
|
|
#define SVSM_CORE_CALL(x) ((0ULL << 32) | (x))
|
|
#define SVSM_CORE_REMAP_CA 0
|
|
#define SVSM_CORE_PVALIDATE 1
|
|
#define SVSM_CORE_CREATE_VCPU 2
|
|
#define SVSM_CORE_DELETE_VCPU 3
|
|
|
|
#define SVSM_ATTEST_CALL(x) ((1ULL << 32) | (x))
|
|
#define SVSM_ATTEST_SERVICES 0
|
|
#define SVSM_ATTEST_SINGLE_SERVICE 1
|
|
|
|
#ifdef CONFIG_AMD_MEM_ENCRYPT
|
|
|
|
extern u8 snp_vmpl;
|
|
|
|
extern void __sev_es_ist_enter(struct pt_regs *regs);
|
|
extern void __sev_es_ist_exit(void);
|
|
static __always_inline void sev_es_ist_enter(struct pt_regs *regs)
|
|
{
|
|
if (cc_vendor == CC_VENDOR_AMD &&
|
|
cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))
|
|
__sev_es_ist_enter(regs);
|
|
}
|
|
static __always_inline void sev_es_ist_exit(void)
|
|
{
|
|
if (cc_vendor == CC_VENDOR_AMD &&
|
|
cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))
|
|
__sev_es_ist_exit();
|
|
}
|
|
extern int sev_es_setup_ap_jump_table(struct real_mode_header *rmh);
|
|
extern void __sev_es_nmi_complete(void);
|
|
static __always_inline void sev_es_nmi_complete(void)
|
|
{
|
|
if (cc_vendor == CC_VENDOR_AMD &&
|
|
cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))
|
|
__sev_es_nmi_complete();
|
|
}
|
|
extern int __init sev_es_efi_map_ghcbs(pgd_t *pgd);
|
|
extern void sev_enable(struct boot_params *bp);
|
|
|
|
/*
|
|
* RMPADJUST modifies the RMP permissions of a page of a lesser-
|
|
* privileged (numerically higher) VMPL.
|
|
*
|
|
* If the guest is running at a higher-privilege than the privilege
|
|
* level the instruction is targeting, the instruction will succeed,
|
|
* otherwise, it will fail.
|
|
*/
|
|
static inline int rmpadjust(unsigned long vaddr, bool rmp_psize, unsigned long attrs)
|
|
{
|
|
int rc;
|
|
|
|
/* "rmpadjust" mnemonic support in binutils 2.36 and newer */
|
|
asm volatile(".byte 0xF3,0x0F,0x01,0xFE\n\t"
|
|
: "=a"(rc)
|
|
: "a"(vaddr), "c"(rmp_psize), "d"(attrs)
|
|
: "memory", "cc");
|
|
|
|
return rc;
|
|
}
|
|
static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate)
|
|
{
|
|
bool no_rmpupdate;
|
|
int rc;
|
|
|
|
/* "pvalidate" mnemonic support in binutils 2.36 and newer */
|
|
asm volatile(".byte 0xF2, 0x0F, 0x01, 0xFF\n\t"
|
|
CC_SET(c)
|
|
: CC_OUT(c) (no_rmpupdate), "=a"(rc)
|
|
: "a"(vaddr), "c"(rmp_psize), "d"(validate)
|
|
: "memory", "cc");
|
|
|
|
if (no_rmpupdate)
|
|
return PVALIDATE_FAIL_NOUPDATE;
|
|
|
|
return rc;
|
|
}
|
|
|
|
struct snp_guest_request_ioctl;
|
|
|
|
void setup_ghcb(void);
|
|
void early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr,
|
|
unsigned long npages);
|
|
void early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr,
|
|
unsigned long npages);
|
|
void snp_set_memory_shared(unsigned long vaddr, unsigned long npages);
|
|
void snp_set_memory_private(unsigned long vaddr, unsigned long npages);
|
|
void snp_set_wakeup_secondary_cpu(void);
|
|
bool snp_init(struct boot_params *bp);
|
|
void __noreturn snp_abort(void);
|
|
void snp_dmi_setup(void);
|
|
int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct snp_guest_request_ioctl *rio);
|
|
int snp_issue_svsm_attest_req(u64 call_id, struct svsm_call *call, struct svsm_attest_call *input);
|
|
void snp_accept_memory(phys_addr_t start, phys_addr_t end);
|
|
u64 snp_get_unsupported_features(u64 status);
|
|
u64 sev_get_status(void);
|
|
void sev_show_status(void);
|
|
void snp_update_svsm_ca(void);
|
|
|
|
#else /* !CONFIG_AMD_MEM_ENCRYPT */
|
|
|
|
#define snp_vmpl 0
|
|
static inline void sev_es_ist_enter(struct pt_regs *regs) { }
|
|
static inline void sev_es_ist_exit(void) { }
|
|
static inline int sev_es_setup_ap_jump_table(struct real_mode_header *rmh) { return 0; }
|
|
static inline void sev_es_nmi_complete(void) { }
|
|
static inline int sev_es_efi_map_ghcbs(pgd_t *pgd) { return 0; }
|
|
static inline void sev_enable(struct boot_params *bp) { }
|
|
static inline int pvalidate(unsigned long vaddr, bool rmp_psize, bool validate) { return 0; }
|
|
static inline int rmpadjust(unsigned long vaddr, bool rmp_psize, unsigned long attrs) { return 0; }
|
|
static inline void setup_ghcb(void) { }
|
|
static inline void __init
|
|
early_snp_set_memory_private(unsigned long vaddr, unsigned long paddr, unsigned long npages) { }
|
|
static inline void __init
|
|
early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr, unsigned long npages) { }
|
|
static inline void snp_set_memory_shared(unsigned long vaddr, unsigned long npages) { }
|
|
static inline void snp_set_memory_private(unsigned long vaddr, unsigned long npages) { }
|
|
static inline void snp_set_wakeup_secondary_cpu(void) { }
|
|
static inline bool snp_init(struct boot_params *bp) { return false; }
|
|
static inline void snp_abort(void) { }
|
|
static inline void snp_dmi_setup(void) { }
|
|
static inline int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct snp_guest_request_ioctl *rio)
|
|
{
|
|
return -ENOTTY;
|
|
}
|
|
static inline int snp_issue_svsm_attest_req(u64 call_id, struct svsm_call *call, struct svsm_attest_call *input)
|
|
{
|
|
return -ENOTTY;
|
|
}
|
|
static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
|
|
static inline u64 snp_get_unsupported_features(u64 status) { return 0; }
|
|
static inline u64 sev_get_status(void) { return 0; }
|
|
static inline void sev_show_status(void) { }
|
|
static inline void snp_update_svsm_ca(void) { }
|
|
|
|
#endif /* CONFIG_AMD_MEM_ENCRYPT */
|
|
|
|
#ifdef CONFIG_KVM_AMD_SEV
|
|
bool snp_probe_rmptable_info(void);
|
|
int snp_lookup_rmpentry(u64 pfn, bool *assigned, int *level);
|
|
void snp_dump_hva_rmpentry(unsigned long address);
|
|
int psmash(u64 pfn);
|
|
int rmp_make_private(u64 pfn, u64 gpa, enum pg_level level, u32 asid, bool immutable);
|
|
int rmp_make_shared(u64 pfn, enum pg_level level);
|
|
void snp_leak_pages(u64 pfn, unsigned int npages);
|
|
void kdump_sev_callback(void);
|
|
void snp_fixup_e820_tables(void);
|
|
#else
|
|
static inline bool snp_probe_rmptable_info(void) { return false; }
|
|
static inline int snp_lookup_rmpentry(u64 pfn, bool *assigned, int *level) { return -ENODEV; }
|
|
static inline void snp_dump_hva_rmpentry(unsigned long address) {}
|
|
static inline int psmash(u64 pfn) { return -ENODEV; }
|
|
static inline int rmp_make_private(u64 pfn, u64 gpa, enum pg_level level, u32 asid,
|
|
bool immutable)
|
|
{
|
|
return -ENODEV;
|
|
}
|
|
static inline int rmp_make_shared(u64 pfn, enum pg_level level) { return -ENODEV; }
|
|
static inline void snp_leak_pages(u64 pfn, unsigned int npages) {}
|
|
static inline void kdump_sev_callback(void) { }
|
|
static inline void snp_fixup_e820_tables(void) {}
|
|
#endif
|
|
|
|
#endif
|