Add an option to dirty_log_perf_test that configures the vCPUs to run in
L2 instead of L1. This makes it possible to benchmark the dirty logging
performance of nested virtualization, which is particularly interesting
because KVM must shadow L1's EPT/NPT tables.
For now, this support only works on x86_64 CPUs with VMX; otherwise, passing
-n results in the test being skipped.
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20220520233249.3776001-11-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
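
A minimal standalone sketch of the gating condition described above (not the
selftest's own plumbing): detect VMX via CPUID.01H:ECX[5], which is the case
in which the -n option can actually run vCPUs in L2.

#include <stdbool.h>
#include <stdio.h>
#include <cpuid.h>

/* CPUID.01H:ECX.VMX is bit 5. */
static bool cpu_has_vmx(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
		return false;
	return ecx & (1u << 5);
}

int main(void)
{
	if (!cpu_has_vmx()) {
		printf("No VMX: a -n (nested) run would be skipped\n");
		return 0;
	}
	printf("VMX present: nested dirty-log benchmarking is possible\n");
	return 0;
}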
 
This is a VMX-related macro so move it to vmx.h. While here, open code
the mask like the rest of the VMX bitmask macros.
No functional change intended.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20220520233249.3776001-6-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
x86_page_size is an enum used to communicate the desired page size with
which to map a range of memory. Under the hood, its values just encode the
desired level at which to map the page. This ends up being clunky in a
few ways:
- The name suggests it encodes the size of the page rather than the
level.
- In other places in x86_64/processor.c we just use a raw int to encode
the level.
Simplify this by adopting the kernel style of PG_LEVEL_XX enums and passing
around raw ints when referring to the level. This makes the code easier
to understand, since these macros are very common in KVM MMU code.
Signed-off-by: David Matlack <dmatlack@google.com>
Message-Id: <20220520233249.3776001-2-dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
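
A sketch of the kernel-style level encoding the commit adopts; the shift/size
math assumes the standard x86-64 layout of 9 bits per level on top of a 4 KiB
base page, and the exact selftest definitions may differ in detail.

#include <stdio.h>

enum pg_level {
	PG_LEVEL_NONE,
	PG_LEVEL_4K,
	PG_LEVEL_2M,
	PG_LEVEL_1G,
	PG_LEVEL_512G,
	PG_LEVEL_NUM
};

/* Level N covers 12 + 9*(N-1) address bits, e.g. PG_LEVEL_2M -> 21. */
#define PG_LEVEL_SHIFT(level)	(((level) - 1) * 9 + 12)
#define PG_LEVEL_SIZE(level)	(1ull << PG_LEVEL_SHIFT(level))

int main(void)
{
	int level;

	for (level = PG_LEVEL_4K; level < PG_LEVEL_NUM; level++)
		printf("level %d maps 0x%llx bytes\n", level,
		       PG_LEVEL_SIZE(level));
	return 0;
}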
 
Clean up code that was hardcoding masks for various fields,
now that the masks are included in processor.h.
For more cleanup, define PAGE_SIZE and PAGE_MASK just like in Linux.
PAGE_SIZE in particular was defined by several tests.
Suggested-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
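
The Linux-style definitions referred to above boil down to the following;
a small sketch showing how PAGE_MASK rounds an address down to its page.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1ul << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))

int main(void)
{
	uint64_t addr = 0x3fffbffff123ull;

	printf("page base: 0x%llx, offset: 0x%llx\n",
	       (unsigned long long)(addr & PAGE_MASK),
	       (unsigned long long)(addr & ~PAGE_MASK));
	return 0;
}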
 
Red Hat's QE team reported test failure on access_tracking_perf_test:
Testing guest mode: PA-bits:ANY, VA-bits:48, 4K pages
guest physical test memory offset: 0x3fffbffff000
Populating memory : 0.684014577s
Writing to populated memory : 0.006230175s
Reading from populated memory : 0.004557805s
==== Test Assertion Failure ====
lib/kvm_util.c:1411: false
pid=125806 tid=125809 errno=4 - Interrupted system call
1 0x0000000000402f7c: addr_gpa2hva at kvm_util.c:1411
2 (inlined by) addr_gpa2hva at kvm_util.c:1405
3 0x0000000000401f52: lookup_pfn at access_tracking_perf_test.c:98
4 (inlined by) mark_vcpu_memory_idle at access_tracking_perf_test.c:152
5 (inlined by) vcpu_thread_main at access_tracking_perf_test.c:232
6 0x00007fefe9ff81ce: ?? ??:0
7 0x00007fefe9c64d82: ?? ??:0
No vm physical memory at 0xffbffff000
I can easily reproduce it with an Intel(R) Xeon(R) CPU E5-2630 with 46-bit
PA.
It turns out that the address translation for clearing idle page tracking
returned a wrong result: addr_gva2gpa()'s last step, which is based on
"pte[index[0]].pfn", did the calculation with a 40-bit width and the
high 12 bits got truncated. In the above case the GPA to be returned
should be 0x3fffbffff000 for GVA 0xc0000000, but it got truncated to
0xffbffff000 and the subsequent gpa2hva lookup failed.
The width of operations on bit fields greater than 32 bits is
implementation defined, and differs between GCC (which uses the bit field's
precision) and clang (which uses 64-bit arithmetic), so this is a
potential minefield. Remove the bit fields and use manual masking
instead.
Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2075036
Reported-by: Nana Liu <nanliu@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
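
An illustrative sketch of the failure mode and the fix (not the selftest code
itself): a >32-bit bit field used in arithmetic versus plain 64-bit masking of
the raw PTE. The struct layout assumes the usual x86-64 LSB-first bit-field ABI.

#include <stdint.h>
#include <stdio.h>

/* Old style: the PFN as a 40-bit bit field. */
struct pte_bitfield {
	uint64_t present:1;
	uint64_t ignored_11_1:11;
	uint64_t pfn:40;
	uint64_t ignored_63_52:12;
};

/* New style: mask the raw 64-bit PTE, bits 51:12. */
#define PTE_PFN_MASK	0x000ffffffffff000ull

int main(void)
{
	uint64_t pfn = 0x3fffbffffull;	/* needs more than 28 bits */
	uint64_t raw_pte = (pfn << 12) | 1;
	struct pte_bitfield pte = { .present = 1, .pfn = pfn };

	/*
	 * As described above, the width of arithmetic on a >32-bit bit
	 * field is implementation defined, so the high GPA bits can be
	 * silently dropped here depending on the compiler.
	 */
	uint64_t gpa_bitfield = pte.pfn * 4096;

	/* Manual masking is plain 64-bit arithmetic and never truncates. */
	uint64_t gpa_masked = raw_pte & PTE_PFN_MASK;

	printf("bit field GPA: 0x%llx, masked GPA: 0x%llx\n",
	       (unsigned long long)gpa_bitfield,
	       (unsigned long long)gpa_masked);
	return 0;
}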
 
Add cpu_relax() for s390 and x86 for use in arch-agnostic tests. arm64
already defines its own version.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220226001546.360188-28-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
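
For reference, a sketch of the conventional x86 form (PAUSE plus a compiler
barrier); the actual selftest helpers may differ slightly, and the s390
variant is essentially just a barrier. The spin-wait user is illustrative.

#include <stdbool.h>
#include <stdatomic.h>

/* x86: PAUSE ("rep; nop") plus a compiler barrier. */
static inline void cpu_relax(void)
{
	asm volatile("rep; nop" ::: "memory");
}

/* Illustrative user: a polite spin-wait on a flag set by another thread. */
static void spin_until(atomic_bool *flag)
{
	while (!atomic_load_explicit(flag, memory_order_acquire))
		cpu_relax();
}

int main(void)
{
	atomic_bool done = true;	/* pre-set so the demo returns at once */

	spin_until(&done);
	return 0;
}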
 
The main thing that the selftest verifies is that KVM copies x2APIC's
ICR[63:32] to/from ICR2 when userspace accesses the vAPIC page via
KVM_{G,S}ET_LAPIC. KVM previously split x2APIC ICR to ICR+ICR2 at the
time of write (from the guest), and so KVM must preserve that behavior
for backwards compatibility between different versions of KVM.
It will also test other invariants, e.g. that KVM clears the BUSY
flag on ICR writes, that the reserved bits in ICR2 are dropped on writes
from the guest, etc...
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220204214205.3306634-12-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
Introduce a new test for Hyper-V nSVM extensions (Hyper-V on KVM) and add
a test for the enlightened MSR-Bitmap feature:
- Intercept access to MSR_FS_BASE in L1 and check that this works
with enlightened MSR-Bitmap disabled.
- Enable enlightened MSR-Bitmap and check that the intercept still works
as expected.
- Intercept access to MSR_GS_BASE but don't clear the corresponding bit
from the clean-fields mask; KVM is supposed to skip updating MSR-Bitmap02,
and thus the subsequent access to the MSR from L2 will not get intercepted.
- Finally, clear the corresponding bit from the clean-fields mask and check
that access to MSR_GS_BASE is now intercepted.
The test works under the assumption that access to MSR_FS_BASE/MSR_GS_BASE
is not intercepted for L1. If this ever stops being true, the test will
fail, as nested_svm_exit_handled_msr() always checks L1's MSR-Bitmap for
L2 irrespective of clean fields. The behavior is correct, as the enlightened
MSR-Bitmap feature is just an optimization; KVM is not obliged to ignore
updates when the corresponding bit in clean fields stays clear.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220203104620.277031-7-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
There's a copy of 'struct vmcb_control_area' definition in KVM selftests,
update it to allow testing of the newly introduced features.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220203104620.277031-6-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
Similar to VMX, allocate memory for the MSR-Bitmap and fill in 'msrpm_base_pa'
in the VMCB. To use it, tests will need to set the INTERCEPT_MSR_PROT intercept
along with the required bits in the MSR-Bitmap.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220203104620.277031-5-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
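
A sketch of what "the required bits in the MSR-Bitmap" means, following the
MSRPM layout in the AMD APM (three 2 KiB regions, two bits per MSR: read
intercept first, then write intercept). The helper name and the standalone
form are illustrative, not the selftest's actual API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MSRPM_SIZE	(2 * 4096)	/* two pages, per the APM */
#define MSR_FS_BASE	0xc0000100u

static void msrpm_set_intercept(uint8_t *msrpm, uint32_t msr,
				bool read, bool write)
{
	uint32_t base, bit;

	if (msr <= 0x1fff)
		base = 0x0000;		/* MSRs 0x00000000 - 0x00001fff */
	else if (msr >= 0xc0000000 && msr <= 0xc0001fff)
		base = 0x0800;		/* MSRs 0xc0000000 - 0xc0001fff */
	else if (msr >= 0xc0010000 && msr <= 0xc0011fff)
		base = 0x1000;		/* MSRs 0xc0010000 - 0xc0011fff */
	else
		return;			/* not covered by the bitmap */

	bit = (msr & 0x1fff) * 2;	/* 2 bits per MSR, read bit first */

	if (read)
		msrpm[base + bit / 8] |= 1u << (bit % 8);
	if (write)
		msrpm[base + bit / 8] |= 1u << (bit % 8 + 1);
}

int main(void)
{
	static uint8_t msrpm[MSRPM_SIZE];
	unsigned int byte = 0x800 + (MSR_FS_BASE & 0x1fff) * 2 / 8;

	/* e.g. intercept reads and writes of MSR_FS_BASE */
	msrpm_set_intercept(msrpm, MSR_FS_BASE, true, true);
	printf("MSRPM byte 0x%x = 0x%02x\n", byte, msrpm[byte]);
	return 0;
}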
 
Instead of just resetting 'hv_clean_fields' to 0 on every enlightened
vmresume, do the expected cleaning of the corresponding bit on enlightened
vmwrite. Avoid direct access to 'current_evmcs' from evmcs_test to support
the change.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220203104620.277031-3-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
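
The mechanics being tested can be pictured roughly like this (the struct,
field, and bit names below are made up for illustration; the real enlightened
VMCS has its own clean-field groups): an enlightened vmwrite both stores the
value and clears the clean bit covering that field, so L0 knows to re-read it
on the next vmresume.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for part of an enlightened VMCS. */
struct evmcs_sketch {
	uint64_t guest_fs_base;
	uint32_t hv_clean_fields;	/* bit set => L0 may reuse cached copy */
};

#define CLEAN_FIELD_GUEST_GRP_FS	(1u << 3)	/* hypothetical bit */

static void evmcs_write_fs_base(struct evmcs_sketch *evmcs, uint64_t val)
{
	evmcs->guest_fs_base = val;
	/* Writing the field dirties it: clear its clean bit. */
	evmcs->hv_clean_fields &= ~CLEAN_FIELD_GUEST_GRP_FS;
}

int main(void)
{
	struct evmcs_sketch evmcs = { .hv_clean_fields = ~0u };

	evmcs_write_fs_base(&evmcs, 0xdeadbeef);
	printf("clean fields after write: 0x%08x\n", evmcs.hv_clean_fields);
	return 0;
}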
 
There is no need for tests other than amx_test to enable dynamic xsave
states. Remove the call to vm_xsave_req_perm from generic code,
and move it inside the test. While at it, allow customizing the bit
that is requested, so that future tests can use it differently.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
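
Under the hood the permission request is an arch_prctl(2) call; a hedged
sketch of what a test would now do for itself, requesting the AMX tile-data
bit. ARCH_REQ_XCOMP_GUEST_PERM comes from recent kernel headers; the fallback
value below is an assumption for older headers.

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <asm/prctl.h>

#ifndef ARCH_REQ_XCOMP_GUEST_PERM
#define ARCH_REQ_XCOMP_GUEST_PERM	0x1025	/* assumed value, see asm/prctl.h */
#endif

#define XFEATURE_XTILEDATA	18	/* AMX tile-data xstate component */

int main(void)
{
	/* Request dynamic-xstate permission for this process's guests. */
	long rc = syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_GUEST_PERM,
			  (unsigned long)XFEATURE_XTILEDATA);

	if (rc)
		perror("arch_prctl(ARCH_REQ_XCOMP_GUEST_PERM)");
	return rc ? 1 : 0;
}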
 
Some indentation with spaces crept in, likely due to terminal-based
cut and paste. Clean it up.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
Extract the x86 model number from CPUID.01H:EAX.
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220115052431.447232-6-jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
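
The decode follows the processor-signature layout in the SDM/APM: base model
in bits 7:4, extended model in bits 19:16, combined for family 6 and above.
A standalone sketch; the selftest helper may differ in minor details.

#include <stdio.h>
#include <cpuid.h>

static unsigned int x86_family(unsigned int eax)
{
	unsigned int family = (eax >> 8) & 0xf;

	if (family == 0xf)
		family += (eax >> 20) & 0xff;	/* extended family */
	return family;
}

static unsigned int x86_model(unsigned int eax)
{
	unsigned int model = (eax >> 4) & 0xf;

	if (x86_family(eax) >= 0x6)
		model |= ((eax >> 16) & 0xf) << 4;	/* extended model */
	return model;
}

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	__get_cpuid(1, &eax, &ebx, &ecx, &edx);
	printf("family 0x%x, model 0x%x\n", x86_family(eax), x86_model(eax));
	return 0;
}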
 
Move this static inline function to processor.h, so that it can be
used in individual tests, as needed.
Opportunistically replace the bare 'unsigned' with 'unsigned int.'
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220115052431.447232-5-jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
Replace the one ad hoc "AuthenticAMD" CPUID vendor string comparison
with a new function, is_amd_cpu().
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220115052431.447232-4-jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
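
The comparison being factored out looks roughly like this; the selftest reads
the vendor string through its own CPUID wrappers, whereas this standalone
sketch uses <cpuid.h>.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

static bool is_amd_cpu(void)
{
	unsigned int eax, ebx, ecx, edx;
	char vendor[13];

	__get_cpuid(0, &eax, &ebx, &ecx, &edx);
	/* Vendor string is returned in EBX, EDX, ECX order. */
	memcpy(&vendor[0], &ebx, 4);
	memcpy(&vendor[4], &edx, 4);
	memcpy(&vendor[8], &ecx, 4);
	vendor[12] = '\0';

	return strcmp(vendor, "AuthenticAMD") == 0;
}

int main(void)
{
	printf("is_amd_cpu: %d\n", is_amd_cpu());
	return 0;
}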
 
KVM forbids KVM_SET_CPUID2 after KVM_RUN was performed on a vCPU unless
the supplied CPUID data is equal to what was previously set. Test this.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20220117150542.2176196-5-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
These changes avoid a pointer-dereference compile issue when amx_test.c
references state->xsave.
Move the struct kvm_x86_state definition to processor.h.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20211223145322.2914028-3-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
When KVM_CAP_XSAVE2 is supported, userspace is expected to allocate the
buffer for KVM_GET_XSAVE2 and KVM_SET_XSAVE using the size returned
by KVM_CHECK_EXTENSION(KVM_CAP_XSAVE2).
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Guang Zeng <guang.zeng@intel.com>
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <20220105123532.12586-20-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
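
A sketch of that userspace contract using the raw KVM ioctls; error handling
is trimmed and it assumes a linux/kvm.h new enough to define KVM_CAP_XSAVE2.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Query the required size and allocate a buffer big enough for
 * KVM_GET_XSAVE2 / KVM_SET_XSAVE on this VM. */
static struct kvm_xsave *alloc_xsave_buffer(int vm_fd, int *size_out)
{
	int size = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_XSAVE2);

	/* Older kernels report 0 here: fall back to the legacy 4K area. */
	if (size < (int)sizeof(struct kvm_xsave))
		size = sizeof(struct kvm_xsave);

	*size_out = size;
	return calloc(1, size);
}

int main(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR);
	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);
	int size;
	struct kvm_xsave *xsave = alloc_xsave_buffer(vm_fd, &size);

	printf("xsave buffer size: %d bytes\n", size);
	free(xsave);
	return 0;
}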
 
Refactor open-path support out of open_kvm_dev_path_or_exit() and
add a new helper for the SEV device path.
Signed-off-by: Peter Gonda <pgonda@google.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Cc: Marc Orr <marcorr@google.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Message-Id: <20211021174303.385706-5-pgonda@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
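
The refactor amounts to a generic "open or exit" helper plus a thin wrapper
for the SEV device node; a sketch with illustrative names ("/dev/sev" is the
usual SEV device path), not necessarily the selftest code verbatim.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define KVM_DEV_PATH	"/dev/kvm"
#define SEV_DEV_PATH	"/dev/sev"

static int open_path_or_exit(const char *path, int flags)
{
	int fd = open(path, flags);

	if (fd < 0) {
		perror(path);
		exit(1);
	}
	return fd;
}

static int open_kvm_dev_path_or_exit(void)
{
	return open_path_or_exit(KVM_DEV_PATH, O_RDWR);
}

static int open_sev_dev_path_or_exit(void)
{
	return open_path_or_exit(SEV_DEV_PATH, 0);
}

int main(void)
{
	int kvm_fd = open_kvm_dev_path_or_exit();

	close(kvm_fd);
	(void)open_sev_dev_path_or_exit;	/* exercised by SEV tests only */
	return 0;
}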
 
There is no need to clobber a register that is only being read from.
Oops. Drop the XMM register from the clobbers list.
Signed-off-by: Oliver Upton <oupton@google.com>
Message-Id: <20210927223621.50178-1-oupton@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
Compiling the KVM selftests with clang emits the following warning:
>> include/x86_64/processor.h:297:25: error: variable 'xmm0' is uninitialized when used here [-Werror,-Wuninitialized]
>> return (unsigned long)xmm0;
where xmm0 is accessed via an uninitialized register variable.
Indeed, this is a misuse of register variables, which really should only
be used for specifying register constraints on variables passed to
inline assembly. Rather than attempting to read xmm registers via
register variables, just explicitly perform the movq from the desired
xmm register.
Fixes: 783e9e51266e ("kvm: selftests: add API testing infrastructure")
Signed-off-by: Oliver Upton <oupton@google.com>
Message-Id: <20210924005147.1122357-1-oupton@google.com>
Reviewed-by: Ricardo Koller <ricarkol@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
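
The fixed pattern, sketched standalone: rather than binding a register
variable to %xmm0, ask the compiler for an output GPR and move the low 64
bits of the desired XMM register into it explicitly.

#include <stdint.h>
#include <stdio.h>

/* Read the low 64 bits of %xmm0 via an explicit movq (requires SSE2,
 * which every x86-64 CPU has). */
static inline uint64_t read_xmm0(void)
{
	uint64_t val;

	asm volatile("movq %%xmm0, %0" : "=r"(val));
	return val;
}

int main(void)
{
	uint64_t seed = 0x1122334455667788ull;

	/* Load a known value into %xmm0 so the read has something to see. */
	asm volatile("movq %0, %%xmm0" : : "r"(seed) : "xmm0");
	printf("xmm0 low 64 bits: 0x%llx\n",
	       (unsigned long long)read_xmm0());
	return 0;
}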
 
Check that #UD is raised if bit 16 is clear in
HYPERV_CPUID_FEATURES.EDX and an 'XMM fast' hypercall is issued.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reviewed-by: Siddharth Chandrasekaran <sidcha@amazon.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210730122625.112848-5-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD
KVM/arm64 updates for v5.14.
- Add MTE support in guests, complete with tag save/restore interface
- Reduce the impact of CMOs by moving them in the page-table code
- Allow device block mappings at stage-2
- Reduce the footprint of the vmemmap in protected mode
- Support the vGIC on dumb systems such as the Apple M1
- Add selftest infrastructure to support multiple configurations
and apply that to PMU/non-PMU setups
- Add selftests for the debug architecture
- The usual crop of PMU fixes
 
x86, the only arch implementing exception handling, reports unhandled
vectors using port IO at a specific port number. This replicates what
ucall already does.
Introduce a new ucall type, UCALL_UNHANDLED, for guests to report
unhandled exceptions. Then replace the x86 unhandled vector exception
reporting to use it instead of port IO. This new ucall type will be
used in the next commits by arm64 to report unhandled vectors as well.
Tested: Forcing a page fault in the ./x86_64/xapic_ipi_test
halter_guest_code() shows this:
$ ./x86_64/xapic_ipi_test
...
Unexpected vectored event in guest (vector:0xe)
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210611011020.3420067-4-ricarkol@google.com
 
Rename the vm_handle_exception function to a name that indicates more
clearly that it installs something: vm_install_exception_handler.
Reported-by: kernel test robot <oliver.sang@intel.com>
Suggested-by: Marc Zyngier <maz@kernel.org>
Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210611011020.3420067-2-ricarkol@google.com
 
This test exercises the feature KVM_CAP_EXIT_ON_EMULATION_FAILURE. When
enabled, errors in the in-kernel instruction emulator are forwarded to
userspace with the instruction bytes stored in the exit struct for
KVM_EXIT_INTERNAL_ERROR. So, when the guest executes an
'flds' instruction, which KVM is not able to emulate, instead
of failing, KVM sends the instruction to userspace to handle.
For this test to work properly the module parameter
'allow_smaller_maxphyaddr' has to be set.
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Message-Id: <20210510144834.658457-3-aaronlewis@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
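
Enabling the capability from userspace is a one-ioctl affair; a sketch with
VM creation only (vCPU setup and the run loop omitted), assuming a linux/kvm.h
that defines KVM_CAP_EXIT_ON_EMULATION_FAILURE.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR);
	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);
	struct kvm_enable_cap cap;
	int ret;

	memset(&cap, 0, sizeof(cap));
	cap.cap = KVM_CAP_EXIT_ON_EMULATION_FAILURE;
	cap.args[0] = 1;

	/* With this set, emulation failures become KVM_EXIT_INTERNAL_ERROR
	 * exits that carry the offending instruction bytes, as described in
	 * the commit message above. */
	ret = ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
	printf("KVM_ENABLE_CAP(EXIT_ON_EMULATION_FAILURE): %s\n",
	       ret ? "failed" : "ok");
	return 0;
}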
 
Add an x86-only test to verify that x86's MMU reacts to CPUID updates
that impact the MMU. KVM has had multiple bugs where it fails to
reconfigure the MMU after the guest's vCPU model changes.
Sadly, this test is effectively limited to shadow paging, because the
hardware page walk handler doesn't support software disabling of GBPAGES
support, and KVM doesn't manually walk the GVA->GPA on faults for
performance reasons (doing so would largely defeat the benefits of TDP).
Don't require !TDP for the tests as there is still value in running the
tests with TDP, even though the tests will fail (barring KVM hacks).
E.g. KVM should not completely explode if MAXPHYADDR results in KVM using
4-level vs. 5-level paging for the guest.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210622200529.3650424-20-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
Add x86-64 hugepage support in the form of an x86-only variant of
virt_pg_map() that takes an explicit page size. To keep things simple,
follow the existing logic for 4k pages and disallow creating a hugepage
if the upper-level entry is present, even if the desired pfn matches.
Opportunistically fix a double "beyond beyond" reported by checkpatch.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210622200529.3650424-19-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
Drop the EPTP memslot param from all EPT helpers and shove the hardcoded
'0' down to the vm_phy_page_alloc() calls.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20210622200529.3650424-14-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
The initial implementation of the test only tests that access to Hyper-V
MSRs and hypercalls is in compliance with guest visible CPUID feature bits.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210521095204.2161214-31-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
evmcs.h is an x86_64-only thing; move it to the x86_64/ subdirectory.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210521095204.2161214-30-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
These defines can be shared by multiple tests, move them to a dedicated
header.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210521095204.2161214-29-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
Standardize reads and writes of the x2APIC MSRs.
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Oliver Upton <oupton@google.com>
Message-Id: <20210604172611.281819-11-jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
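
For context, the x2APIC register space maps each xAPIC MMIO offset to an MSR
at 0x800 + (offset >> 4); the sketch below shows that mapping plus guest-side
rdmsr/wrmsr helpers. These are privileged instructions, so this is compile-only
outside a guest, and the selftest library's own helper names may differ.

#include <stdint.h>

/* xAPIC MMIO offset -> x2APIC MSR index. */
#define X2APIC_MSR(xapic_offset)	(0x800 + ((xapic_offset) >> 4))

#define APIC_ICR	0x300	/* a single 64-bit register in x2APIC */

static inline uint64_t rdmsr(uint32_t msr)
{
	uint32_t lo, hi;

	asm volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
	return ((uint64_t)hi << 32) | lo;
}

static inline void wrmsr(uint32_t msr, uint64_t val)
{
	asm volatile("wrmsr" : : "c"(msr), "a"((uint32_t)val),
		     "d"((uint32_t)(val >> 32)));
}

static inline uint64_t x2apic_read_reg(unsigned int reg)
{
	return rdmsr(X2APIC_MSR(reg));
}

static inline void x2apic_write_reg(unsigned int reg, uint64_t val)
{
	wrmsr(X2APIC_MSR(reg), val);
}

int main(void)
{
	/* Guest-side usage would look like:
	 *   uint64_t icr = x2apic_read_reg(APIC_ICR);
	 *   x2apic_write_reg(APIC_ICR, icr);
	 * Not executed here since rdmsr/wrmsr fault in user mode. */
	(void)x2apic_read_reg;
	(void)x2apic_write_reg;
	return 0;
}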
 
Move the APIC functions into the library to encourage code reuse and
to avoid unintended deviations.
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Oliver Upton <oupton@google.com>
Message-Id: <20210604172611.281819-10-jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
Processor.h is a hodgepodge of definitions. Though the local APIC is
technically built into the CPU these days, move the APIC definitions
into a new header file: apic.h.
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Oliver Upton <oupton@google.com>
Message-Id: <20210604172611.281819-9-jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
Hyper-V emulation is enabled in KVM unconditionally. This is bad at least
from a security standpoint, as it is an extra attack surface. Ideally, there
should be a per-VM capability explicitly enabled by the VMM, but currently
that is not the case and we can't mandate one without breaking backwards
compatibility. We can, however, check guest-visible CPUIDs and only enable
Hyper-V emulation when the "Hv#1" interface is exposed in
HYPERV_CPUID_INTERFACE.
Note, VMMs are free to act in any sequence they like, e.g. they can try
to set MSRs first and CPUIDs later, so we still need to allow the host
to read/write Hyper-V-specific MSRs unconditionally.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-14-vkuznets@redhat.com>
[Add selftest vcpu_set_hv_cpuid API to avoid breaking xen_vmcall_test. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
kvm_get_supported_hv_cpuid() may come in handy in all Hyper-V related tests.
Split it out of the hyperv_cpuid test and create system-wide and vCPU versions.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210126134816.1880136-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
KVM_GET_SUPPORTED_CPUID
Commit 181f494888d5 ("KVM: x86: fix CPUID entries returned by
KVM_GET_CPUID2 ioctl") revealed that we're not testing KVM_GET_CPUID2
ioctl at all. Add a test for it and also check that from inside the guest
visible CPUIDs are equal to it's output.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20210129161821.74635-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
This test will check the effect of various CPUID settings on the
MSR_IA32_PERF_CAPABILITIES MSR, check that whatever user space writes
with KVM_SET_MSR is _not_ modified from the guest and can be retrieved
with KVM_GET_MSR, and check that invalid LBR formats are rejected.
Signed-off-by: Like Xu <like.xu@linux.intel.com>
Message-Id: <20210201051039.255478-12-like.xu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
When a guest is using xAPIC, KVM allocates a backing page for the required
EPT entry for the APIC access address set in the VMCS. If mm decides to
move that page, the KVM mmu notifier will update the VMCS with the new
HPA. This test induces a page move to test that APIC access continues to
work correctly. It is a directed test for
commit e649b3f0188f ("KVM: x86: Fix APIC page invalidation race").
Tested: ran for 1 hour on a Skylake, migrating the backing page every 1ms.
Depends on patch "selftests: kvm: Add exception handling to selftests"
from aaronlewis@google.com that has not yet been queued.
Signed-off-by: Peter Shier <pshier@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Ricardo Koller <ricarkol@google.com>
Message-Id: <20201105223823.850068-1-pshier@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
Extend the KVM_SET_SREGS test to verify that all supported CR4 bits, as
enumerated by KVM, can be set before KVM_SET_CPUID2, i.e. without first
defining the vCPU model. KVM is supposed to skip guest CPUID checks
when host userspace is stuffing guest state.
Check the inverse as well, i.e. that KVM rejects KVM_SET_SREGS if CR4
has one or more unsupported bits set.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20201007014417.29276-7-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
Add a set of tests that ensure the guest cannot access paravirtual MSRs
and hypercalls that have been disabled in the KVM_CPUID_FEATURES leaf.
Expect a #GP in the case of MSR accesses and -KVM_ENOSYS from
hypercalls.
Cc: Jim Mattson <jmattson@google.com>
Signed-off-by: Oliver Upton <oupton@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Reviewed-by: Aaron Lewis <aaronlewis@google.com>
Message-Id: <20201027231044.655110-7-oupton@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
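
For reference, the guest side of a KVM hypercall uses the vmcall/vmmcall ABI
with the number in RAX and arguments in RBX/RCX/RDX/RSI, and a disabled
hypercall is expected to return -KVM_ENOSYS (-1000). Guest-only code, shown
here as a compile-only sketch with illustrative usage in comments.

#include <stdint.h>

#define KVM_ENOSYS	1000	/* from include/uapi/linux/kvm_para.h */

/* KVM hypercall ABI: nr in RAX, args in RBX/RCX/RDX/RSI, result in RAX.
 * VMCALL is the Intel encoding; AMD guests use VMMCALL instead. */
static inline long kvm_hypercall(unsigned long nr, unsigned long a0,
				 unsigned long a1, unsigned long a2,
				 unsigned long a3)
{
	long ret;

	asm volatile("vmcall"
		     : "=a"(ret)
		     : "a"(nr), "b"(a0), "c"(a1), "d"(a2), "S"(a3)
		     : "memory");
	return ret;
}

int main(void)
{
	/* In the guest, a hypercall whose feature bit was cleared in
	 * KVM_CPUID_FEATURES is expected to fail along the lines of:
	 *
	 *   long r = kvm_hypercall(nr, 0, 0, 0, 0);
	 *   // expect r == -KVM_ENOSYS
	 *
	 * Not executed here: vmcall outside a guest is meaningless. */
	(void)kvm_hypercall;
	return 0;
}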
 
Add the infrastructure needed to enable exception handling in selftests.
This allows any of the exception and interrupt vectors to be overridden
in the guest.
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Reviewed-by: Alexander Graf <graf@amazon.com>
Message-Id: <20201012194716.3950330-4-aaronlewis@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
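
Conceptually the infrastructure boils down to a per-vector handler table that
a common stub dispatches through; the sketch below models just that lookup in
C. The struct and names are illustrative; the real library routes through the
guest IDT rather than a direct call.

#include <stdint.h>
#include <stdio.h>

/* Illustrative register snapshot passed to guest handlers. */
struct ex_regs {
	uint64_t vector;
	uint64_t error_code;
	uint64_t rip;
};

typedef void (*exception_handler_fn)(struct ex_regs *regs);

static exception_handler_fn handlers[256];

static void install_exception_handler(int vector, exception_handler_fn fn)
{
	handlers[vector] = fn;
}

/* In the selftests this dispatch is reached from a per-vector IDT stub;
 * here it is called directly to show the flow. */
static void route_exception(struct ex_regs *regs)
{
	if (handlers[regs->vector]) {
		handlers[regs->vector](regs);
		return;
	}
	printf("Unhandled vector 0x%llx\n",
	       (unsigned long long)regs->vector);
}

static void handle_gp(struct ex_regs *regs)
{
	printf("#GP at rip 0x%llx, skipping faulting instruction\n",
	       (unsigned long long)regs->rip);
	regs->rip += 2;	/* illustrative fix-up only */
}

int main(void)
{
	struct ex_regs regs = { .vector = 13, .rip = 0x401000 };

	install_exception_handler(13, handle_gp);	/* 13 == #GP */
	route_exception(&regs);
	return 0;
}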
 
Fix the layout of 'struct desc64' to match the layout described in the
SDM Vol 3, Chapter 3 "Protected-Mode Memory Management", section 3.4.5
"Segment Descriptors", Figure 3-8 "Segment Descriptor". The test added
later in this series relies on this and crashes if this layout is not
correct.
Signed-off-by: Aaron Lewis <aaronlewis@google.com>
Reviewed-by: Alexander Graf <graf@amazon.com>
Message-Id: <20201012194716.3950330-2-aaronlewis@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
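
For reference, a 64-bit segment descriptor laid out per SDM Figure 3-8 can be
expressed with LSB-first bit fields like this; a sketch mirroring the corrected
ordering (type before S, then DPL and P), assuming the usual x86-64 bit-field ABI.

#include <stdint.h>
#include <stdio.h>

struct desc64 {
	uint16_t limit0;			/* limit 15:0 */
	uint16_t base0;				/* base 15:0 */
	unsigned base1:8, type:4, s:1, dpl:2, p:1;	/* bits 32-47 */
	unsigned limit1:4, avl:1, l:1, db:1, g:1, base2:8; /* bits 48-63 */
	uint32_t base3;				/* base 63:32 (system descriptors) */
	uint32_t zero;				/* reserved */
} __attribute__((packed));

int main(void)
{
	/* Figure 3-8 places Type at bits 43:40 and S at bit 44, so the
	 * C layout above must put 'type' before 's'. */
	printf("sizeof(struct desc64) = %zu\n", sizeof(struct desc64));
	return 0;
}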
 
Add a regression test for commit 671ddc700fd0 ("KVM: nVMX: Don't leak
L1 MMIO regions to L2").
First, check to see that an L2 guest can be launched with a valid
APIC-access address that is backed by a page of L1 physical memory.
Next, set the APIC-access address to a (valid) L1 physical address
that is not backed by memory. KVM can't handle this situation, so
resuming L2 should result in a KVM exit for internal error
(emulation).
Signed-off-by: Jim Mattson <jmattson@google.com>
Reviewed-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Peter Shier <pshier@google.com>
Message-Id: <20201026180922.3120555-1-jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
Rename SECONDARY_EXEC_RDTSCP to SECONDARY_EXEC_ENABLE_RDTSCP in
preparation for consolidating the logic for adjusting secondary exec
controls based on the guest CPUID model.
No functional change intended.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200923165048.20486-4-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
state_test/smm_test use the KVM_CAP_NESTED_STATE check as an indicator of
nested VMX/SVM presence, and this is incorrect. Check for the required
features directly.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20200610135847.754289-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
 
GCC10 fails to build vmx_preemption_timer_test:
gcc -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99
-fno-stack-protector -fno-PIE -I../../../../tools/include
-I../../../../tools/arch/x86/include -I../../../../usr/include/
-Iinclude -Ix86_64 -Iinclude/x86_64 -I.. -pthread -no-pie
x86_64/evmcs_test.c ./linux/tools/testing/selftests/kselftest_harness.h
./linux/tools/testing/selftests/kselftest.h
./linux/tools/testing/selftests/kvm/libkvm.a
-o ./linux/tools/testing/selftests/kvm/x86_64/evmcs_test
/usr/bin/ld: ./linux/tools/testing/selftests/kvm/libkvm.a(vmx.o):
./linux/tools/testing/selftests/kvm/include/x86_64/vmx.h:603:
multiple definition of `ctrl_exit_rev'; /tmp/ccMQpvNt.o:
./linux/tools/testing/selftests/kvm/include/x86_64/vmx.h:603:
first defined here
/usr/bin/ld: ./linux/tools/testing/selftests/kvm/libkvm.a(vmx.o):
./linux/tools/testing/selftests/kvm/include/x86_64/vmx.h:602:
multiple definition of `ctrl_pin_rev'; /tmp/ccMQpvNt.o:
./linux/tools/testing/selftests/kvm/include/x86_64/vmx.h:602:
first defined here
...
The ctrl_exit_rev/ctrl_pin_rev/basic variables are only used in
vmx_preemption_timer_test.c, so just move them there.
Fixes: 8d7fbf01f9af ("KVM: selftests: VMX preemption timer migration test")
Reported-by: Marcelo Bandeira Condotta <mcondotta@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20200608112346.593513-2-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
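
The underlying C issue: GCC 10 defaults to -fno-common, so a variable defined
(rather than merely declared) in a header is emitted by every translation unit
that includes it, and the link fails. The general pattern, with an illustrative
type name standing in for the real one, is:

/* vmx.h-style header: declaration only. */
struct ctrl_msr_sketch {	/* illustrative stand-in type */
	unsigned int set;
	unsigned int clr;
};

extern struct ctrl_msr_sketch ctrl_exit_rev;	/* extern: no storage here */

/* Exactly one .c file provides the definition (or, as in this commit,
 * the variable simply moves into the only test that uses it). */
struct ctrl_msr_sketch ctrl_exit_rev;

int main(void)
{
	ctrl_exit_rev.set = 0;
	return (int)ctrl_exit_rev.set;
}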
 
vmx_tsc_adjust_test fails with:
IA32_TSC_ADJUST is -4294969448 (-1 * TSC_ADJUST_VALUE + -2152).
IA32_TSC_ADJUST is -4294969448 (-1 * TSC_ADJUST_VALUE + -2152).
IA32_TSC_ADJUST is 281470681738540 (65534 * TSC_ADJUST_VALUE + 4294962476).
==== Test Assertion Failure ====
x86_64/vmx_tsc_adjust_test.c:153: false
pid=19738 tid=19738 - Interrupted system call
1 0x0000000000401192: main at vmx_tsc_adjust_test.c:153
2 0x00007fe1ef8583d4: ?? ??:0
3 0x0000000000401201: _start at ??:?
Failed guest assert: (adjust <= max)
The problem is that 'tsc_val' should be u64, not u32, or the reading
gets truncated.
Fixes: 8d7fbf01f9afc ("KVM: selftests: VMX preemption timer migration test")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20200601154726.261868-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>