path: root/arch/x86/kvm/svm/vmenter.S
Commit log (commit message, author, date, diffstat; newest first):
* Do a single prime in vmenter; multiple passes added inside the macro  [HEAD, master]
  Louis Burda, 2023-02-09 (1 file, -3/+1)
* fixup! Save registers to xmm to lower baseline counts and avoid timing issues with apic_oneshot
  Louis Burda, 2023-02-06 (1 file, -0/+4)

    The CPU register state is cleared after vmrun in SEV-ES, so we need to
    reload cpc_ds for probing and cpc_prime_probe. Since the access
    locations are constant, these extra loads will simply end up in the
    baseline. Additionally, APIC precision is not affected, as the
    accesses happen *after* vmrun.
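    A minimal sketch of the reload described above, in GNU as (AT&T)
    syntax; cpc_ds is named in the commit message, but its layout and the
    register it is kept in are assumptions:

        vmrun   %rax                    /* SEV-ES: HW clears guest GPR state on exit */
        movq    cpc_ds(%rip), %rsi      /* reload probe data pointer from memory */
        /* %rsi is valid again for cpc_prime_probe and probing; the load */
        /* targets a constant address, so it folds into the baseline     */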
* Set interrupt flag after prime / clear before probe in vmenter
  Louis Burda, 2023-02-06 (1 file, -13/+8)
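    The ordering this commit establishes, as a sketch (PRIME and PROBE
    stand in for the repository's prime/probe macros, which are not shown
    in this log):

        PRIME                   /* fill the monitored cache sets */
        sti                     /* IF set only after priming completes */
        vmrun   %rax            /* APIC oneshot may land while guest runs */
        cli                     /* IF cleared again before measuring */
        PROBE                   /* measure evictions caused by the guest */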
* Save registers to xmm to lower baseline counts and avoid timing issues with apic_oneshot
  Louis Burda, 2023-02-06 (1 file, -46/+45)

    Swapping general-purpose registers with xmm registers should be
    constant-time, while writing and reading from memory after prime will
    cause cache misses with varying servicing time. This added uncertainty
    decreases stepping accuracy with the APIC.
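    A sketch of the technique, assuming %rbx and %r12 are among the
    registers that must survive the prime/vmrun/probe window (the actual
    register assignment in the patch may differ):

        movq    %rbx, %xmm0     /* stash GPRs in xmm: no memory access, */
        movq    %r12, %xmm1     /* so no extra cache-set evictions      */
        /* ... prime, vmrun, probe ... */
        movq    %xmm1, %r12     /* restore after the measurement window */
        movq    %xmm0, %rbx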
* Fix stepping inconsistency by moving oneshot after prime
  Louis Burda, 2023-02-06 (1 file, -12/+27)
* Handle instruction loads on page boundaries more cleanly
  Louis Burda, 2023-02-02 (1 file, -5/+1)
* Consistent use of cpc shorthand instead of cachepc
  Louis Burda, 2023-01-27 (1 file, -12/+12)
* Add signalled stepping track mode
  Louis Burda, 2023-01-26 (1 file, -0/+11)
* Use prime returned address for probe
  Louis Burda, 2023-01-25 (1 file, -4/+1)
* Enable single-stepping non-SEV-ES guests and long KVM_RUNs to prevent interrupts
  Louis Burda, 2023-01-23 (1 file, -8/+3)
* Implement prime+probe without vcall, move VM pausing
  Louis Burda, 2023-01-21 (1 file, -58/+72)
* Minimize diff to 0aaa1e5 and small restructure
  Louis Burda, 2023-01-11 (1 file, -6/+0)
* Refactor out sevstep into cachepc repository
  Louis Burda, 2022-10-05 (1 file, -0/+0)
* Add page tracking
  Louis Burda, 2022-10-04 (1 file, -63/+66)
* Migrate patch
  Louis Burda, 2022-09-26 (1 file, -2/+89)
* x86: Prepare asm files for straight-line-speculation
  Peter Zijlstra, 2021-12-08 (1 file, -2/+2)

    Replace all ret/retq instructions with RET in preparation of making
    RET a macro. Since AS is case insensitive it's a big no-op without
    RET defined.

        find arch/x86/ -name \*.S | while read file
        do
                sed -i 's/\<ret[q]*\>/RET/' $file
        done

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Link: https://lore.kernel.org/r/20211204134907.905503893@infradead.org
* KVM/SVM: Move vmenter.S exception fixups out of line
  Uros Bizjak, 2021-03-15 (1 file, -15/+20)

    Avoid jump by moving exception fixups out of line.

    Cc: Sean Christopherson <seanjc@google.com>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
    Message-Id: <20210226125621.111723-1-ubizjak@gmail.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
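    The pattern this enables, sketched below (labels and the kvm_rebooting
    check follow upstream convention; details of the real diff may
    differ). The fault handler sits after the normal return, so the hot
    path executes no taken branch:

        1:      vmrun   %rax
        2:      /* normal path: falls through, no jump over fixup code */
                ret

        3:      cmpb    $0, kvm_rebooting   /* a faulting vmrun is expected */
                jne     2b                  /* during reboot, so resume     */
                ud2                         /* otherwise, crash loudly      */
                _ASM_EXTABLE(1b, 3b)        /* route the fault at 1: to 3:  */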
* KVM: SVM: move VMLOAD/VMSAVE to C code
  Paolo Bonzini, 2021-03-15 (1 file, -13/+1)

    Thanks to the new macros that handle exception handling for SVM
    instructions, it is easier to just do the VMLOAD/VMSAVE in C. This is
    safe, as shown by the fact that the host reload is already done
    outside the assembly source.

    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* KVM: SVM: Provide an updated VMRUN invocation for SEV-ES guests
  Tom Lendacky, 2020-12-15 (1 file, -0/+50)

    The run sequence is different for an SEV-ES guest compared to a legacy
    or even an SEV guest. The guest vCPU register state of an SEV-ES guest
    will be restored on VMRUN and saved on VMEXIT. There is no need to
    restore the guest registers directly and through VMLOAD before VMRUN
    and no need to save the guest registers directly and through VMSAVE on
    VMEXIT.

    Update the svm_vcpu_run() function to skip register state saving and
    restoring and provide an alternative function for running an SEV-ES
    guest in vmenter.S.

    Additionally, certain host state is restored across an SEV-ES VMRUN.
    As a result certain register states are not required to be restored
    upon VMEXIT (e.g. FS, GS, etc.), so only do that if the guest is not
    an SEV-ES guest.

    Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
    Message-Id: <fb1c66d32f2194e171b95fc1a8affd6d326e10c1.1607620209.git.thomas.lendacky@amd.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
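    The core of the alternative function, reduced to a sketch (the real
    __svm_sev_es_vcpu_run also saves and restores host callee-saved
    registers and carries exception fixups, all omitted here):

        SYM_FUNC_START(__svm_sev_es_vcpu_run)
                mov     %rdi, %rax      /* arg1: physical address of the VMCB */
                vmrun   %rax            /* HW restores guest GPRs on entry,   */
                                        /* saves them again on VMEXIT         */
                ret
        SYM_FUNC_END(__svm_sev_es_vcpu_run)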
* x86/kvm/svm: Move guest enter/exit into .noinstr.text
  Thomas Gleixner, 2020-07-09 (1 file, -1/+1)

    Move the functions which are inside the RCU off region into the
    non-instrumentable text section.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
    Acked-by: Peter Zijlstra <peterz@infradead.org>
    Acked-by: Paolo Bonzini <pbonzini@redhat.com>
    Message-Id: <20200708195322.144607767@linutronix.de>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
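    Given the one-line diffstat (-1/+1), in an assembly file this amounts
    to swapping the section directive; a sketch of what that looks like:

        .section .noinstr.text, "ax"    /* was: .text */

        SYM_FUNC_START(__svm_vcpu_run)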
* KVM: SVM: Do not setup frame pointer in __svm_vcpu_run
  Uros Bizjak, 2020-04-15 (1 file, -1/+0)

    __svm_vcpu_run is a leaf function and does not need a frame pointer.
    %rbp is also destroyed a few instructions later when guest registers
    are loaded.

    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
    Message-Id: <20200409120440.1427215-1-ubizjak@gmail.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* KVM: SVM: move more vmentry code to assembly
  Paolo Bonzini, 2020-04-14 (1 file, -0/+9)

    Manipulate IF around vmload/vmsave to remove the confusing usage of
    local_irq_enable where interrupts are actually disabled via GIF. And
    stuff the RSB immediately without waiting for a RET to avoid
    Spectre-v2 attacks.

    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
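    The resulting shape of the entry sequence, sketched (exception fixups
    on each SVM instruction are omitted; since GIF is clear at this point,
    setting IF does not actually deliver interrupts):

        sti
        vmload  %rax
        vmrun   %rax
        vmsave  %rax
        cli
        /* refill the RSB before any ret can be speculatively executed */
        FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE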
* KVM: SVM: Split svm_vcpu_run inline assembly to separate file
  Uros Bizjak, 2020-04-03 (1 file, -0/+162)

    The compiler (GCC) does not like the situation where there is an
    inline assembly block that clobbers all available machine registers in
    the middle of the function. This situation can be found in function
    svm_vcpu_run in file kvm/svm.c and results in many register spills and
    fills to/from the stack frame.

    This patch fixes the issue with the same approach as was done for VMX
    some time ago. The big inline assembly is moved to a separate assembly
    .S file, taking into account all ABI requirements.

    There are two main benefits of the above approach:

    * elimination of several register spills and fills to/from the stack
      frame, and consequently smaller function .text size. The binary size
      of svm_vcpu_run is lowered from 2019 to 1626 bytes.

    * more efficient access to a register save array. Currently, the
      register save array is accessed as:

        7b00:  48 8b 98 28 02 00 00   mov    0x228(%rax),%rbx
        7b07:  48 8b 88 18 02 00 00   mov    0x218(%rax),%rcx
        7b0e:  48 8b 90 20 02 00 00   mov    0x220(%rax),%rdx

      whereas passing a pointer to the register array as an argument to a
      function one gets:

        12:    48 8b 48 08            mov    0x8(%rax),%rcx
        16:    48 8b 50 10            mov    0x10(%rax),%rdx
        1a:    48 8b 58 18            mov    0x18(%rax),%rbx

      As a result, the total size, considering that the new function size
      is 229 bytes, gets lowered by 164 bytes.

    Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
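    A sketch of the second point: with @regs passed as a plain second
    argument, guest GPRs load from small fixed offsets (assuming VCPU_*
    offset macros generated by the kernel's asm-offsets mechanism; only
    three loads shown):

        SYM_FUNC_START(__svm_vcpu_run)
                /* %rdi = @vmcb_pa, %rsi = @regs (x86-64 kernel ABI) */
                mov     %rsi, %rax
                mov     VCPU_RCX(%rax), %rcx    /* guest RCX */
                mov     VCPU_RDX(%rax), %rdx    /* guest RDX */
                mov     VCPU_RBX(%rax), %rbx    /* guest RBX */
                /* remaining guest GPRs are loaded the same way */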