path: root/arch/arm64/kvm/sys_regs.c
Age  Commit message  Author
2020-06-17  KVM: arm64: Stop writing aarch32's CSSELR into ACTLR  (James Morse)
commit 7c582bf4ed84f3eb58bdd1f63024a14c17551e7d upstream. aarch32 has pairs of registers to access the high and low parts of 64bit registers. KVM has a union of 64bit sys_regs[] and 32bit copro[]. The 32bit accessors read the high or low part of the 64bit sys_reg[] value through the union. Both sys_reg_descs[] and cp15_regs[] list access_csselr() as the accessor for CSSELR{,_EL1}. access_csselr() is only aware of the 64bit sys_regs[], and expects r->reg to be 'CSSELR_EL1' in the enum, index 2 of the 64bit array. cp15_regs[] uses the 32bit copro[] alias of sys_regs[]. Here CSSELR is c0_CSSELR, which is the same location in sys_reg[]. r->reg is 'c0_CSSELR', index 4 in the 32bit array. access_csselr() uses the 32bit r->reg value to access the 64bit array, so it reads and writes the wrong value. sys_regs[4] is ACTLR_EL1, which is subsequently saved/restored when we enter the guest. ACTLR_EL1 is supposed to be read-only for the guest. This register only affects execution at EL1, and the host's value is restored before we return to host EL1. Convert the 32bit register index back to the 64bit version. Suggested-by: Marc Zyngier <maz@kernel.org> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200529150656.7339-2-james.morse@arm.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
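A minimal standalone sketch of the indexing bug described in this commit message. The index values mirror those named above; everything else is simplified for illustration and is not the kernel code:

```c
#include <stdint.h>
#include <stdio.h>

/* 64-bit sys_regs[] indices, as named in the commit message. */
enum { CSSELR_EL1 = 2, ACTLR_EL1 = 4 };
/* The 32-bit copro[] array aliases the same storage, so its indices are doubled. */
enum { c0_CSSELR = CSSELR_EL1 * 2 };

static uint64_t sys_regs[8];

/* Buggy: uses the 32-bit index to address the 64-bit array, landing on
 * sys_regs[4], i.e. ACTLR_EL1. */
static void write_csselr_buggy(uint64_t val) { sys_regs[c0_CSSELR] = val; }

/* Fixed: convert the 32-bit index back to its 64-bit counterpart first. */
static void write_csselr_fixed(uint64_t val) { sys_regs[c0_CSSELR / 2] = val; }

int main(void)
{
	write_csselr_buggy(0x2);
	printf("buggy:  ACTLR_EL1 slot = %#llx\n", (unsigned long long)sys_regs[ACTLR_EL1]);
	write_csselr_fixed(0x2);
	printf("fixed: CSSELR_EL1 slot = %#llx\n", (unsigned long long)sys_regs[CSSELR_EL1]);
	return 0;
}
```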
2020-01-15  arm64: Introduce ID_ISAR6 CPU register  (Anshuman Khandual)
This adds the basic building blocks required for the ID_ISAR6 CPU register, which identifies support for various instruction implementations on AArch32 state. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: linux-kernel@vger.kernel.org Cc: kvmarm@lists.cs.columbia.edu Acked-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> [will: Ensure SPECRES is treated the same as on A64] Signed-off-by: Will Deacon <will@kernel.org>
2019-12-18  Merge tag 'kvmarm-fixes-5.5-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into kvm-master  (Paolo Bonzini)
KVM/arm fixes for 5.5, take #1 - Fix uninitialised sysreg accessor - Fix handling of demand-paged device mappings - Stop spamming the console on IMPDEF sysregs - Relax mappings of writable memslots - Assorted cleanups
2019-12-12  KVM: arm64: Ensure 'params' is initialised when looking up sys register  (Will Deacon)
Commit 4b927b94d5df ("KVM: arm/arm64: vgic: Introduce find_reg_by_id()") introduced 'find_reg_by_id()', which looks up a system register only if the 'id' index parameter identifies a valid system register. As part of the patch, existing callers of 'find_reg()' were ported over to the new interface, but this breaks 'index_to_sys_reg_desc()' in the case that the initial lookup in the vCPU target table fails because we will then call into 'find_reg()' for the system register table with an uninitialised 'param' as the key to the lookup. GCC 10 is bright enough to spot this (amongst a tonne of false positives, but hey!): | arch/arm64/kvm/sys_regs.c: In function ‘index_to_sys_reg_desc.part.0.isra’: | arch/arm64/kvm/sys_regs.c:983:33: warning: ‘params.Op2’ may be used uninitialized in this function [-Wmaybe-uninitialized] | 983 | (u32)(x)->CRn, (u32)(x)->CRm, (u32)(x)->Op2); | [...] Revert the hunk of 4b927b94d5df which breaks 'index_to_sys_reg_desc()' so that the old behaviour of checking the index upfront is restored. Fixes: 4b927b94d5df ("KVM: arm/arm64: vgic: Introduce find_reg_by_id()") Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20191212094049.12437-1-will@kernel.org
2019-12-06  KVM: arm64: Don't log IMP DEF sysreg traps  (Mark Rutland)
We don't intend to support IMPLEMENTATION DEFINED system registers, but have to trap them (and emulate them as UNDEFINED). These traps aren't interesting to the system administrator or to the KVM developers, so let's not bother logging when we do so. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20191205180652.18671-3-mark.rutland@arm.com
2019-12-06  KVM: arm64: Sanely ratelimit sysreg messages  (Mark Rutland)
Currently kvm_pr_unimpl() is ratelimited, so print_sys_reg_instr() won't spam the console. However, some of its callers try to print some contextual information with kvm_err(), which is not ratelimited. This means that in some cases the context may be printed without the sysreg encoding, which isn't all that useful. Let's ensure that both are consistently printed together and ratelimited, by refactoring print_sys_reg_instr() so that some callers can provide it with an arbitrary format string. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20191205180652.18671-2-mark.rutland@arm.com
2019-10-20  arm64: KVM: Handle PMCR_EL0.LC as RES1 on pure AArch64 systems  (Marc Zyngier)
Of PMCR_EL0.LC, the ARMv8 ARM says: "In an AArch64 only implementation, this field is RES1." So be it. Fixes: ab9468340d2bc ("arm64: KVM: Add access handler for PMCR register") Reviewed-by: Andrew Murray <andrew.murray@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
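A hedged sketch of the resulting behaviour. PMCR_EL0.LC is bit 6; the helper name and structure are illustrative and not the kernel code:

```c
#include <stdbool.h>
#include <stdint.h>

#define PMCR_EL0_LC	(UINT64_C(1) << 6)	/* Long cycle counter enable */

/*
 * On an AArch64-only implementation PMCR_EL0.LC is RES1, so force the bit
 * in the value the guest observes (and in any value it writes back).
 */
static uint64_t sanitise_pmcr(uint64_t pmcr, bool has_aarch32)
{
	if (!has_aarch32)
		pmcr |= PMCR_EL0_LC;
	return pmcr;
}
```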
2019-08-09  KVM: arm64: Don't write junk to sysregs on reset  (Marc Zyngier)
At the moment, the way we reset system registers is mildly insane: We write junk to them, call the reset functions, and then check that we have something else in them. The "fun" thing is that this can happen while the guest is running (PSCI, for example). If anything in KVM has to evaluate the state of a system register while junk is in there, bad things may happen. Let's stop doing that. Instead, we track that we have called a reset function for that register, and assume that the reset function has done something. This requires fixing a couple of sysreg definitions in the trap table. In the end, the very need for this reset check is pretty dubious, as it doesn't check everything (a lot of the sysregs live outside of the sys_regs[] array). It may well be axed in the near future. Tested-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2019-07-05  KVM: arm64: Migrate _elx sysreg accessors to msr_s/mrs_s  (Dave Martin)
Currently, the {read,write}_sysreg_el*() accessors for accessing particular ELs' sysregs in the presence of VHE rely on some local hacks and define their system register encodings in a way that is inconsistent with the core definitions in <asm/sysreg.h>. As a result, it is necessary to add duplicate definitions for any system register that already needs a definition in sysreg.h for other reasons. This is a bit of a maintenance headache, and the reasons for the _el*() accessors working the way they do are a bit historical. This patch gets rid of the shadow sysreg definitions in <asm/kvm_hyp.h>, converts the _el*() accessors to use the core __msr_s/__mrs_s interface, and converts all call sites to use the standard sysreg #define names (i.e., upper case, with SYS_ prefix). This patch will conflict heavily anyway, so the opportunity to clean up some bad whitespace in the context of the changes is taken. The change exposes a few system registers that have no sysreg.h definition, due to msr_s/mrs_s being used in place of msr/mrs: additions are made in order to fill in the gaps. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoffer Dall <christoffer.dall@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Link: https://www.spinics.net/lists/kvm-arm/msg31717.html [Rebased to v4.21-rc1] Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> [Rebased to v5.2-rc5, changelog updates] Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-07-05  KVM: arm/arm64: Rename kvm_pmu_{enable/disable}_counter functions  (Andrew Murray)
The kvm_pmu_{enable/disable}_counter functions can enable/disable multiple counters at once as they operate on a bitmask. Let's make this clearer by renaming the function. Suggested-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Andrew Murray <andrew.murray@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234  (Thomas Gleixner)
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details you should have received a copy of the gnu general public license along with this program if not see http www gnu org licenses extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 503 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Enrico Weigelt <info@metux.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-01  KVM: arm64: Fix ptrauth ID register masking logic  (Kristina Martsenko)
When a VCPU doesn't have pointer auth, we want to hide all four pointer auth ID register fields from the guest, not just one of them. Fixes: 384b40caa8af ("KVM: arm/arm64: Context-switch ptrauth registers") Reported-by: Andrew Murray <andrew.murray@arm.com> Fscked-up-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Tested-by: Andrew Murray <andrew.murray@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-04-24  arm64: KVM: Enable VHE support for :G/:H perf event modifiers  (Andrew Murray)
With VHE, different exception levels are used between the host (EL2) and guest (EL1) with a shared exception level for userspace (EL0). We can take advantage of this and use the PMU's exception level filtering to avoid enabling/disabling counters in the world-switch code. Instead we just modify the counter type to include or exclude EL0 at vcpu_{load,put} time. We also ensure that trapped PMU system register writes do not re-enable EL0 when reconfiguring the backing perf events. This approach completely avoids blackout windows seen with !VHE. Suggested-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Andrew Murray <andrew.murray@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-04-24  KVM: arm/arm64: Context-switch ptrauth registers  (Mark Rutland)
When pointer authentication is supported, a guest may wish to use it. This patch adds the necessary KVM infrastructure for this to work, with a semi-lazy context switch of the pointer auth state. The pointer authentication feature is only enabled when VHE is built into the kernel and present in the CPU implementation, so only VHE code paths are modified. When we schedule a vcpu, we disable guest usage of pointer authentication instructions and accesses to the keys. While these are disabled, we avoid context-switching the keys. When we trap the guest trying to use pointer authentication functionality, we change to eagerly context-switching the keys, and enable the feature. The next time the vcpu is scheduled out/in, we start again. However the host key save is optimized and implemented inside ptrauth instruction/register access trap. Pointer authentication consists of address authentication and generic authentication, and CPUs in a system might have varied support for either. Where support for either feature is not uniform, it is hidden from guests via ID register emulation, as a result of the cpufeature framework in the host. Unfortunately, address authentication and generic authentication cannot be trapped separately, as the architecture provides a single EL2 trap covering both. If we wish to expose one without the other, we cannot prevent a (badly-written) guest from intermittently using a feature which is not uniformly supported (when scheduled on a physical CPU which supports the relevant feature). Hence, this patch expects both types of authentication to be present in a CPU. This switch of key is done from guest enter/exit assembly as preparation for the upcoming in-kernel pointer authentication support. Hence, these key switching routines are not implemented in C code as they may cause pointer authentication key signing errors in some situations. Signed-off-by: Mark Rutland <mark.rutland@arm.com> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks, save host key in ptrauth exception trap] Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Cc: Christoffer Dall <christoffer.dall@arm.com> Cc: kvmarm@lists.cs.columbia.edu [maz: various fixups] Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-04-18  KVM: arm64/sve: sys_regs: Demote redundant vcpu_has_sve() checks to WARNs  (Dave Martin)
Because of the logic in kvm_arm_sys_reg_{get,set}_reg() and sve_id_visibility(), we should never call {get,set}_id_aa64zfr0_el1() for a vcpu where !vcpu_has_sve(vcpu). To avoid the code giving the impression that it is valid for these functions to be called in this situation, and to help the compiler make the right optimisation decisions, this patch adds WARN_ON() for these cases. Given the way the logic is spread out, this seems preferable to dropping the checks altogether. Suggested-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-03-29  KVM: arm64/sve: System register context switch and access support  (Dave Martin)
This patch adds the necessary support for context switching ZCR_EL1 for each vcpu. ZCR_EL1 is trapped alongside the FPSIMD/SVE registers, so it makes sense for it to be handled as part of the guest FPSIMD/SVE context for context switch purposes instead of handling it as a general system register. This means that it can be switched in lazily at the appropriate time. No effort is made to track host context for this register, since SVE requires VHE: thus the host's value for this register lives permanently in ZCR_EL2 and does not alias the guest's value at any time. The Hyp switch and fpsimd context handling code is extended appropriately. Accessors are added in sys_regs.c to expose the SVE system registers and ID register fields. Because these need to be conditionally visible based on the guest configuration, they are implemented separately for now rather than by use of the generic system register helpers. This may be abstracted better later on when/if there are more features requiring this model. ID_AA64ZFR0_EL1 is RO-RAZ for MRS/MSR when SVE is disabled for the guest, but for compatibility with non-SVE aware KVM implementations the register should not be enumerated at all for KVM_GET_REG_LIST in this case. For consistency we also reject ioctl access to the register. This ensures that a non-SVE-enabled guest looks the same to userspace, irrespective of whether the kernel KVM implementation supports SVE. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Julien Thierry <julien.thierry@arm.com> Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-03-29  KVM: arm64: Support runtime sysreg visibility filtering  (Dave Martin)
Some optional features of the Arm architecture add new system registers that are not present in the base architecture. Where these features are optional for the guest, the visibility of these registers may need to depend on some runtime configuration, such as a flag passed to KVM_ARM_VCPU_INIT. For example, ZCR_EL1 and ID_AA64ZFR0_EL1 need to be hidden if SVE is not enabled for the guest, even though these registers may be present in the hardware and visible to the host at EL2. Adding special-case checks all over the place for individual registers is going to get messy as the number of conditionally-visible registers grows. In order to help solve this problem, this patch adds a new sysreg method visibility() that can be used to hook in any needed runtime visibility checks. This method can currently return REG_HIDDEN_USER to inhibit enumeration and ioctl access to the register for userspace, and REG_HIDDEN_GUEST to inhibit runtime access by the guest using MSR/MRS. Wrappers are added to allow these flags to be conveniently queried. This approach allows a conditionally modified view of individual system registers such as the CPU ID registers, in addition to completely hiding registers where appropriate. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
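A rough standalone sketch of the shape of this hook, using the flag names from the commit message; the struct layout and helpers are simplified stand-ins, not the actual kernel definitions:

```c
#include <stdbool.h>

#define REG_HIDDEN_USER		(1U << 0)	/* hide from userspace enumeration/ioctl */
#define REG_HIDDEN_GUEST	(1U << 1)	/* guest MSR/MRS access not allowed      */

struct kvm_vcpu;	/* opaque in this sketch */

struct sys_reg_desc {
	const char *name;
	/* Optional hook: return REG_HIDDEN_* flags for this vcpu, 0 if fully visible. */
	unsigned int (*visibility)(const struct kvm_vcpu *vcpu,
				   const struct sys_reg_desc *rd);
};

static bool sysreg_hidden_from_guest(const struct kvm_vcpu *vcpu,
				     const struct sys_reg_desc *rd)
{
	if (!rd->visibility)
		return false;	/* no hook: always visible */
	return rd->visibility(vcpu, rd) & REG_HIDDEN_GUEST;
}

static bool sysreg_hidden_from_user(const struct kvm_vcpu *vcpu,
				    const struct sys_reg_desc *rd)
{
	if (!rd->visibility)
		return false;
	return rd->visibility(vcpu, rd) & REG_HIDDEN_USER;
}
```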
2019-03-29  KVM: arm64: Propagate vcpu into read_id_reg()  (Dave Martin)
Architecture features that are conditionally visible to the guest will require run-time checks in the ID register accessor functions. In particular, read_id_reg() will need to perform checks in order to generate the correct emulated value for certain ID register fields such as ID_AA64PFR0_EL1.SVE for example. This patch propagates vcpu into read_id_reg() so that future patches can add run-time checks on the guest configuration here. For now, there is no functional change. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-03-15  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull KVM updates from Paolo Bonzini: "ARM: - some cleanups - direct physical timer assignment - cache sanitization for 32-bit guests s390: - interrupt cleanup - introduction of the Guest Information Block - preparation for processor subfunctions in cpu models PPC: - bug fixes and improvements, especially related to machine checks and protection keys x86: - many, many cleanups, including removing a bunch of MMU code for unnecessary optimizations - AVIC fixes Generic: - memcg accounting" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (147 commits) kvm: vmx: fix formatting of a comment KVM: doc: Document the life cycle of a VM and its resources MAINTAINERS: Add KVM selftests to existing KVM entry Revert "KVM/MMU: Flush tlb directly in the kvm_zap_gfn_range()" KVM: PPC: Book3S: Add count cache flush parameters to kvmppc_get_cpu_char() KVM: PPC: Fix compilation when KVM is not enabled KVM: Minor cleanups for kvm_main.c KVM: s390: add debug logging for cpu model subfunctions KVM: s390: implement subfunction processor calls arm64: KVM: Fix architecturally invalid reset value for FPEXC32_EL2 KVM: arm/arm64: Remove unused timer variable KVM: PPC: Book3S: Improve KVM reference counting KVM: PPC: Book3S HV: Fix build failure without IOMMU support Revert "KVM: Eliminate extra function calls in kvm_get_dirty_log_protect()" x86: kvmguest: use TSC clocksource if invariant TSC is exposed KVM: Never start grow vCPU halt_poll_ns from value below halt_poll_ns_grow_start KVM: Expose the initial start value in grow_halt_poll_ns() as a module parameter KVM: grow_halt_poll_ns() should never shrink vCPU halt_poll_ns KVM: x86/mmu: Consolidate kvm_mmu_zap_all() and kvm_mmu_zap_mmio_sptes() KVM: x86/mmu: WARN if zapping a MMIO spte results in zapping children ...
2019-02-22  arm64: KVM: Fix architecturally invalid reset value for FPEXC32_EL2  (Dave Martin)
Due to what looks like a typo dating back to the original addition of FPEXC32_EL2 handling, KVM currently initialises this register to an architecturally invalid value. As a result, the VECITR field (RES1) in bits [10:8] is initialised with 0, and the two reserved (RES0) bits [6:5] are initialised with 1. (In the Common VFP Subarchitecture as specified by ARMv7-A, these two bits were IMP DEF. ARMv8-A removes them.) This patch changes the reset value from 0x70 to 0x700, which reflects the architectural constraints and is presumably what was originally intended. Cc: <stable@vger.kernel.org> # 4.12.x- Cc: Christoffer Dall <christoffer.dall@arm.com> Fixes: 62a89c44954f ("arm64: KVM: 32bit handling of coprocessor traps") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-02-19  arm64: KVM: Describe data or unified caches as having 1 set and 1 way  (Ard Biesheuvel)
On SMP ARM systems, cache maintenance by set/way should only ever be done in the context of onlining or offlining CPUs, which is typically done by bare metal firmware and never in a virtual machine. For this reason, we trap set/way cache maintenance operations and replace them with conditional flushing of the entire guest address space. Due to this trapping, the set/way arguments passed into the set/way ops are completely ignored, and thus irrelevant. This also means that the set/way geometry is equally irrelevant, and we can simply report it as 1 set and 1 way, so that legacy 32-bit ARM system software (i.e., the kind that only receives odd fixes) doesn't take a performance hit due to the trapping when iterating over the cachelines. Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
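For illustration, one hedged way to express that geometry: keep only the line-size field of CCSIDR_EL1 and leave NumSets/Associativity at zero, which (since both fields encode N-1) reads back as 1 set and 1 way. The field position follows the ARMv8 layout; the helper itself is an assumption, not the kernel code:

```c
#include <stdint.h>

#define CCSIDR_EL1_LINESIZE_MASK	0x7u	/* bits [2:0]: log2(words per line) - 2 */

/* Report a trivial 1-set, 1-way geometry while preserving the real line size. */
static uint32_t ccsidr_one_set_one_way(uint32_t hw_ccsidr)
{
	return hw_ccsidr & CCSIDR_EL1_LINESIZE_MASK;
}
```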
2019-02-19  arm64: KVM: Expose sanitised cache type register to guest  (Ard Biesheuvel)
We currently permit CPUs in the same system to deviate in the exact topology of the caches, and we subsequently hide this fact from user space by exposing a sanitised value of the cache type register CTR_EL0. However, guests running under KVM see the bare value of CTR_EL0, which could potentially result in issues with, e.g., JITs or other pieces of code that are sensitive to misreported cache line sizes. So let's start trapping cache ID instructions if there is a mismatch, and expose the sanitised version of CTR_EL0 to guests. Note that CTR_EL0 is treated as an invariant to KVM user space, so update that part as well. Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-02-19  KVM: arm/arm64: consolidate arch timer trap handlers  (Andre Przywara)
At the moment we have separate system register emulation handlers for each timer register. Actually they are quite similar, and we rely on kvm_arm_timer_[gs]et_reg() for the actual emulation anyways, so let's just merge all of those handlers into one function, which just marshalls the arguments and then hands off to a set of common accessors. This makes extending the emulation to include EL2 timers much easier. Signed-off-by: Andre Przywara <andre.przywara@arm.com> [Fixed 32-bit VM breakage and reduced to reworking existing code] Signed-off-by: Christoffer Dall <christoffer.dall@arm.com> [Fixed 32bit host, general cleanup] Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2019-02-19  KVM: arm64: Reuse sys_reg() macro when searching the trap table  (Marc Zyngier)
Instead of having an open-coded macro, reuse the sys_reg() macro that does the exact same thing (the encoding is slightly different, but the ordering property is the same). Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
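The ordering property mentioned above falls out of how sys_reg() packs its fields: Op0 occupies the most significant bits and Op2 the least, so comparing two encodings numerically sorts them the same way as comparing (Op0, Op1, CRn, CRm, Op2) field by field. A standalone sketch follows; the shift values mirror the arm64 <asm/sysreg.h> convention but should be treated as assumptions here:

```c
#include <stdint.h>
#include <stdio.h>

#define Op0_shift	19
#define Op1_shift	16
#define CRn_shift	12
#define CRm_shift	 8
#define Op2_shift	 5

#define sys_reg(op0, op1, crn, crm, op2)				\
	((uint32_t)(op0) << Op0_shift | (uint32_t)(op1) << Op1_shift |	\
	 (uint32_t)(crn) << CRn_shift | (uint32_t)(crm) << CRm_shift |	\
	 (uint32_t)(op2) << Op2_shift)

int main(void)
{
	/* SCTLR_EL1 is (3, 0, 1, 0, 0); TTBR0_EL1 is (3, 0, 2, 0, 0). */
	printf("SCTLR_EL1 = %#x, TTBR0_EL1 = %#x\n",
	       sys_reg(3, 0, 1, 0, 0), sys_reg(3, 0, 2, 0, 0));
	return 0;
}
```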
2019-02-07  arm/arm64: KVM: Don't panic on failure to properly reset system registers  (Marc Zyngier)
Failing to properly reset system registers is pretty bad. But not quite as bad as bringing the whole machine down... So warn loudly, but slightly more gracefully. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Christoffer Dall <christoffer.dall@arm.com>
2019-01-25  arm64: KVM: Don't generate UNDEF when LORegion feature is present  (Marc Zyngier)
We currently hide the LORegion feature, and generate an UNDEF if the guest dares using the corresponding registers. This is a bit extreme, as ARMv8.1 guarantees the feature to be present. The guest should check the feature register before doing anything, but we could also give the guest some slack (read "allow the guest to be a bit stupid"). So instead of unconditionally delivering an exception, let's only do it when the host doesn't support LORegion at all (or when the feature has been sanitized out), and treat the registers as RAZ/WI otherwise (with the exception of LORID_EL1 being RO). Fixes: cc33c4e20185 ("arm64/kvm: Prohibit guest LOR accesses") Suggested-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
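A compact sketch of the decision described above, as pure logic with an enum standing in for the trap handler's possible actions; the names are illustrative, not the kernel's:

```c
#include <stdbool.h>

enum lor_action {
	LOR_INJECT_UNDEF,	/* host has no LORegions: UNDEF as before          */
	LOR_WRITE_TO_RO,	/* LORID_EL1 is read-only: reject the write        */
	LOR_RAZ_WI,		/* everything else: reads-as-zero, writes ignored  */
};

static enum lor_action lor_trap_action(bool host_has_lor, bool is_write, bool is_lorid)
{
	if (!host_has_lor)
		return LOR_INJECT_UNDEF;
	if (is_lorid && is_write)
		return LOR_WRITE_TO_RO;
	return LOR_RAZ_WI;
}
```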
2018-12-26  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull KVM updates from Paolo Bonzini: "ARM: - selftests improvements - large PUD support for HugeTLB - single-stepping fixes - improved tracing - various timer and vGIC fixes x86: - Processor Tracing virtualization - STIBP support - some correctness fixes - refactorings and splitting of vmx.c - use the Hyper-V range TLB flush hypercall - reduce order of vcpu struct - WBNOINVD support - do not use -ftrace for __noclone functions - nested guest support for PAUSE filtering on AMD - more Hyper-V enlightenments (direct mode for synthetic timers) PPC: - nested VFIO s390: - bugfixes only this time" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (171 commits) KVM: x86: Add CPUID support for new instruction WBNOINVD kvm: selftests: ucall: fix exit mmio address guessing Revert "compiler-gcc: disable -ftracer for __noclone functions" KVM: VMX: Move VM-Enter + VM-Exit handling to non-inline sub-routines KVM: VMX: Explicitly reference RCX as the vmx_vcpu pointer in asm blobs KVM: x86: Use jmp to invoke kvm_spurious_fault() from .fixup MAINTAINERS: Add arch/x86/kvm sub-directories to existing KVM/x86 entry KVM/x86: Use SVM assembly instruction mnemonics instead of .byte streams KVM/MMU: Flush tlb directly in the kvm_zap_gfn_range() KVM/MMU: Flush tlb directly in kvm_set_pte_rmapp() KVM/MMU: Move tlb flush in kvm_set_pte_rmapp() to kvm_mmu_notifier_change_pte() KVM: Make kvm_set_spte_hva() return int KVM: Replace old tlb flush function with new one to flush a specified range. KVM/MMU: Add tlb flush with range helper function KVM/VMX: Add hv tlb range flush support x86/hyper-v: Add HvFlushGuestAddressList hypercall support KVM: Add tlb_remote_flush_with_range callback in kvm_x86_ops KVM: x86: Disable Intel PT when VMXON in L1 guest KVM: x86: Set intercept for Intel PT MSRs read/write KVM: x86: Implement Intel PT MSRs read/write emulation ...
2018-12-19  arm64: KVM: Add trapped system register access tracepoint  (Marc Zyngier)
We're pretty blind when it comes to system register tracing, and rely on the ESR value displayed by kvm_handle_sys, which isn't much. Instead, let's add an actual name to the sysreg entries, so that we can finally print it as we're about to perform the access itself. The new tracepoint is conveniently called kvm_sys_access. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-12-19  KVM: arm64: Make vcpu const in vcpu_read_sys_reg  (Christoffer Dall)
vcpu_read_sys_reg should not be modifying the VCPU structure. Eventually, to handle EL2 sysregs for nested virtualization, we will call vcpu_read_sys_reg from places that have a const vcpu pointer, which will complain about the lack of the const modifier on the read path. Signed-off-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-12-18  KVM: arm/arm64: Log PSTATE for unhandled sysregs  (Mark Rutland)
When KVM traps an unhandled sysreg/coproc access from a guest, it logs the guest PC. To aid debugging, it would be helpful to know which exception level the trap came from, along with other PSTATE/CPSR bits, so let's log the PSTATE/CPSR too. Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-12-13  arm64/kvm: hide ptrauth from guests  (Mark Rutland)
In subsequent patches we're going to expose ptrauth to the host kernel and userspace, but things are a bit trickier for guest kernels. For the time being, let's hide ptrauth from KVM guests. Regardless of how well-behaved the guest kernel is, guest userspace could attempt to use ptrauth instructions, triggering a trap to EL2, resulting in noise from kvm_handle_unknown_ec(). So let's write up a handler for the PAC trap, which silently injects an UNDEF into the guest, as if the feature were really missing. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: kvmarm@lists.cs.columbia.edu Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-08-12  KVM: arm64: vgic-v3: Add support for ICC_SGI0R_EL1 and ICC_ASGI1R_EL1 accesses  (Marc Zyngier)
In order to generate Group0 SGIs, let's add some decoding logic to access_gic_sgi(), and pass the generating group accordingly. Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-08-12  KVM: arm/arm64: vgic-v3: Add core support for Group0 SGIs  (Marc Zyngier)
Although vgic-v3 now supports Group0 interrupts, it still doesn't deal with Group0 SGIs. As usual with the GIC, nothing is simple: - ICC_SGI1R can signal SGIs of both groups, since GICD_CTLR.DS==1 with KVM (as per 8.1.10, Non-secure EL1 access) - ICC_SGI0R can only generate Group0 SGIs - ICC_ASGI1R sees its scope refocussed to generate only Group0 SGIs (as per the note at the bottom of Table 8-14) We only support Group1 SGIs so far, so no material change. Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-08-12  KVM: arm64: Remove non-existent AArch32 ICC_SGI1R encoding  (Marc Zyngier)
ICC_SGI1R is a 64bit system register, even on AArch32. It is thus pointless to have such an encoding in the 32bit cp15 array. Let's drop it. Reviewed-by: Eric Auger <eric.auger@redhat.com> Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-07-09  arm64: KVM: Handle Set/Way CMOs as NOPs if FWB is present  (Marc Zyngier)
Set/Way handling is one of the ugliest corners of KVM. We shouldn't have to handle that, but better safe than sorry. Thankfully, FWB fixes this for us by not requiring any maintenance (the guest is forced to use cacheable memory, no matter what it says, and the whole system is guaranteed to be cache coherent), which means we don't have to emulate S/W CMOs, and don't have to track VM ops either. We still have to trap S/W though, if only to prevent the guest from doing something bad. Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-05-25  KVM: arm64: Repurpose vcpu_arch.debug_flags for general-purpose flags  (Dave Martin)
In struct vcpu_arch, the debug_flags field is used to store debug-related flags about the vcpu state. Since we are about to add some more flags related to FPSIMD and SVE, it makes sense to add them to the existing flags field rather than adding new fields. Since there is only one debug_flags flag defined so far, there is plenty of free space for expansion. In preparation for adding more flags, this patch renames the debug_flags field to simply "flags", and updates comments appropriately. The flag definitions are also moved to <asm/kvm_host.h>, since their presence in <asm/kvm_asm.h> was for purely historical reasons: these definitions are not used from asm any more, and not very likely to be as more Hyp asm is migrated to C. KVM_ARM64_DEBUG_DIRTY_SHIFT has not been used since commit 1ea66d27e7b0 ("arm64: KVM: Move away from the assembly version of the world switch"), so this patch gets rid of that too. No functional change. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@arm.com> [maz: fixed minor conflict] Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-04-17  arm64: KVM: Demote SVE and LORegion warnings to debug only  (Marc Zyngier)
While generating a message about guests probing for SVE/LORegions is a useful debugging tool, considering it an error is slightly over the top, as this is the only way the guest can find out about the presence of the feature. Let's turn these messages into kvm_debug so that they can only be seen if CONFIG_DYNAMIC_DEBUG is enabled, and kept quiet otherwise. Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-03-19  KVM: arm64: Defer saving/restoring 32-bit sysregs to vcpu load/put  (Christoffer Dall)
When running a 32-bit VM (EL1 in AArch32), the AArch32 system registers can be deferred to vcpu load/put on VHE systems because neither the host kernel nor host userspace uses these registers. Note that we can't save DBGVCR32_EL2 conditionally based on the state of the debug dirty flag on VHE after this change, because during vcpu_load() we haven't calculated a valid debug flag yet, and when we've restored the register during vcpu_load() we also have to save it during vcpu_put(). This means that we'll always restore/save the register for VHE on load/put, but luckily vcpu load/put are called rarely, so saving an extra register unconditionally shouldn't significantly hurt performance. We can also not defer saving FPEXC32_32 because this register only holds a guest-valid value for 32-bit guests during the exit path when the guest has used FPSIMD registers and restored the register in the early assembly handler from taking the EL2 fault, and therefore we have to check if fpsimd is enabled for the guest in the exit path and save the register then, for both VHE and non-VHE guests. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-03-19  KVM: arm64: Defer saving/restoring 64-bit sysregs to vcpu load/put on VHE  (Christoffer Dall)
Some system registers do not affect the host kernel's execution and can therefore be loaded when we are about to run a VCPU and we don't have to restore the host state to the hardware before the time when we are actually about to return to userspace or schedule out the VCPU thread. The EL1 system registers and the userspace state registers only affecting EL0 execution do not need to be saved and restored on every switch between the VM and the host, because they don't affect the host kernel's execution. We mark all registers which are now deferred as such in the vcpu_{read,write}_sys_reg accessors in sys_regs.c to ensure the most up-to-date copy is always accessed. Note MPIDR_EL1 (controlled via VMPIDR_EL2) is accessed from other vcpu threads, for example via the GIC emulation, and therefore must be declared as immediate, which is fine as the guest cannot modify this value. The 32-bit sysregs can also be deferred but we do this in a separate patch as it requires a bit more infrastructure. Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-03-19  KVM: arm64: Introduce framework for accessing deferred sysregs  (Christoffer Dall)
We are about to defer saving and restoring some groups of system registers to vcpu_put and vcpu_load on supported systems. This means that we need some infrastructure to access system registers which supports either accessing the memory backing of the register or directly accessing the system registers, depending on the state of the system when we access the register. We do this by defining read/write accessor functions, which can handle both "immediate" and "deferrable" system registers. Immediate registers are always saved/restored in the world-switch path, but deferrable registers are only saved/restored in vcpu_put/vcpu_load when supported and sysregs_loaded_on_cpu will be set in that case. Note that we don't use the deferred mechanism yet in this patch, but only introduce infrastructure. This is to improve convenience of review in the subsequent patches where it is clear which registers become deferred. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
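A self-contained model of the two access paths described above. The flag and accessor names follow the commit message; the hardware accesses are stubbed out with a second array purely for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

#define NR_SYS_REGS	8

struct vcpu {
	uint64_t sys_regs[NR_SYS_REGS];	/* in-memory backing store */
	bool sysregs_loaded_on_cpu;	/* set by vcpu_load(), cleared by vcpu_put() */
};

/* Stand-ins for the physical registers of the current CPU. */
static uint64_t hw_regs[NR_SYS_REGS];
static uint64_t hw_read(int reg)              { return hw_regs[reg]; }
static void     hw_write(int reg, uint64_t v) { hw_regs[reg] = v; }

static uint64_t vcpu_read_sys_reg(const struct vcpu *vcpu, int reg)
{
	/* Deferred registers live in hardware while the vcpu is loaded. */
	if (vcpu->sysregs_loaded_on_cpu)
		return hw_read(reg);
	return vcpu->sys_regs[reg];
}

static void vcpu_write_sys_reg(struct vcpu *vcpu, uint64_t val, int reg)
{
	if (vcpu->sysregs_loaded_on_cpu)
		hw_write(reg, val);
	else
		vcpu->sys_regs[reg] = val;
}
```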
2018-03-19  KVM: arm64: Rewrite system register accessors to read/write functions  (Christoffer Dall)
Currently we access the system registers array via the vcpu_sys_reg() macro. However, we are about to change the behavior to sometimes modify the register file directly, so let's change this to two primitives: * Accessor macros vcpu_write_sys_reg() and vcpu_read_sys_reg() * Direct array access macro __vcpu_sys_reg() The accessor macros should be used in places where the code needs to access the currently loaded VCPU's state as observed by the guest. For example, when trapping on cache related registers, a write to a system register should go directly to the VCPU version of the register. The direct array access macro can be used in places where the VCPU is known to never be running (for example userspace access) or for registers which are never context switched (for example all the PMU system registers). This rewrites all users of vcpu_sys_regs to one of the macros described above. No functional change. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-03-19  KVM: arm64: Change 32-bit handling of VM system registers  (Christoffer Dall)
We currently handle 32-bit accesses to trapped VM system registers using the 32-bit index into the coproc array on the vcpu structure, which is a union of the coproc array and the sysreg array. Since all the 32-bit coproc indices are created to correspond to the architectural mapping between 64-bit system registers and 32-bit coprocessor registers, and because the AArch64 system registers are double the size of the AArch32 coprocessor registers, we can always find the system register entry that we must update by dividing the 32-bit coproc index by 2. This is going to make our lives much easier when we have to start accessing system registers that use deferred save/restore and might have to be read directly from the physical CPU. Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-02-26  KVM: arm64: Enable the EL1 physical timer for AArch32 guests  (Jérémy Fanguède)
Some 32-bit guest OSes can use the CNTP timer; however, KVM does not handle the accesses, injecting a fault instead. Use the proper handlers to emulate the EL1 Physical Timer (CNTP) register accesses of AArch32 guests. Signed-off-by: Jérémy Fanguède <j.fanguede@virtualopensystems.com> Signed-off-by: Alvise Rigo <a.rigo@virtualopensystems.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2018-02-26  arm64/kvm: Prohibit guest LOR accesses  (Mark Rutland)
We don't currently limit guest accesses to the LOR registers, which we neither virtualize nor context-switch. As such, guests are provided with unusable information/controls, and are not isolated from each other (or the host). To prevent these issues, we can trap register accesses and present the illusion that LORegions are unsupported by the CPU. To do this, we mask ID_AA64MMFR1.LO, and set HCR_EL2.TLOR to trap accesses to the following registers: * LORC_EL1 * LOREA_EL1 * LORID_EL1 * LORN_EL1 * LORSA_EL1 ... when trapped, we inject an UNDEFINED exception to EL1, simulating their non-existence. As noted in D7.2.67, when no LORegions are implemented, LoadLOAcquire and StoreLORelease must behave as LoadAcquire and StoreRelease respectively. We can ensure this by clearing LORC_EL1.EN when a CPU's EL2 is first initialized, as the host kernel will not modify this. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Vladimir Murzin <vladimir.murzin@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: kvmarm@lists.cs.columbia.edu Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2018-01-16  KVM: arm64: Emulate RAS error registers and set HCR_EL2's TERR & TEA  (Dongjiu Geng)
ARMv8.2 adds a new bit HCR_EL2.TEA which routes synchronous external aborts to EL2, and adds a trap control bit HCR_EL2.TERR which traps all Non-secure EL1&0 error record accesses to EL2. This patch enables the two bits for the guest OS, guaranteeing that KVM takes external aborts and traps attempts to access the physical error registers. ERRIDR_EL1 advertises the number of error records; we return zero, meaning we can treat all the other registers as RAZ/WI too. Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com> [removed specific emulation, use trap_raz_wi() directly for everything, rephrased parts of the commit message] Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-01-16  KVM: arm64: Save/Restore guest DISR_EL1  (James Morse)
If we deliver a virtual SError to the guest, the guest may defer it with an ESB instruction. The guest reads the deferred value via DISR_EL1, but the guest's view of DISR_EL1 is re-mapped to VDISR_EL2 when HCR_EL2.AMO is set. Add the KVM code to save/restore VDISR_EL2, and make it accessible to userspace as DISR_EL1. Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-11-16  Merge tag 'kvm-4.15-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull KVM updates from Radim Krčmář: "First batch of KVM changes for 4.15 Common: - Python 3 support in kvm_stat - Accounting of slabs to kmemcg ARM: - Optimized arch timer handling for KVM/ARM - Improvements to the VGIC ITS code and introduction of an ITS reset ioctl - Unification of the 32-bit fault injection logic - More exact external abort matching logic PPC: - Support for running hashed page table (HPT) MMU mode on a host that is using the radix MMU mode; single threaded mode on POWER 9 is added as a pre-requisite - Resolution of merge conflicts with the last second 4.14 HPT fixes - Fixes and cleanups s390: - Some initial preparation patches for exitless interrupts and crypto - New capability for AIS migration - Fixes x86: - Improved emulation of LAPIC timer mode changes, MCi_STATUS MSRs, and after-reset state - Refined dependencies for VMX features - Fixes for nested SMI injection - A lot of cleanups" * tag 'kvm-4.15-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (89 commits) KVM: s390: provide a capability for AIS state migration KVM: s390: clear_io_irq() requests are not expected for adapter interrupts KVM: s390: abstract conversion between isc and enum irq_types KVM: s390: vsie: use common code functions for pinning KVM: s390: SIE considerations for AP Queue virtualization KVM: s390: document memory ordering for kvm_s390_vcpu_wakeup KVM: PPC: Book3S HV: Cosmetic post-merge cleanups KVM: arm/arm64: fix the incompatible matching for external abort KVM: arm/arm64: Unify 32bit fault injection KVM: arm/arm64: vgic-its: Implement KVM_DEV_ARM_ITS_CTRL_RESET KVM: arm/arm64: Document KVM_DEV_ARM_ITS_CTRL_RESET KVM: arm/arm64: vgic-its: Free caches when GITS_BASER Valid bit is cleared KVM: arm/arm64: vgic-its: New helper functions to free the caches KVM: arm/arm64: vgic-its: Remove kvm_its_unmap_device arm/arm64: KVM: Load the timer state when enabling the timer KVM: arm/arm64: Rework kvm_timer_should_fire KVM: arm/arm64: Get rid of kvm_timer_flush_hwstate KVM: arm/arm64: Avoid phys timer emulation in vcpu entry/exit KVM: arm/arm64: Move phys_timer_emulate function KVM: arm/arm64: Use kvm_arm_timer_set/get_reg for guest register traps ...
2017-11-06  KVM: arm/arm64: Use kvm_arm_timer_set/get_reg for guest register traps  (Christoffer Dall)
When trapping on a guest access to one of the timer registers, we were messing with the internals of the timer state from the sysregs handling code, and that logic was about to receive more added complexity when optimizing the timer handling code. Therefore, since we already have timer register access functions (to access registers from userspace), reuse those for the timer register traps from a VM and let the timer code maintain its own consistency. Signed-off-by: Christoffer Dall <cdall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
2017-11-03  arm64/sve: KVM: Hide SVE from CPU features exposed to guests  (Dave Martin)
KVM guests cannot currently use SVE, because SVE is always configured to trap to EL2. However, a guest that sees SVE reported as present in ID_AA64PFR0_EL1 may legitimately expect that SVE works and try to use it. Instead of working, the guest will receive an injected undef exception, which may cause the guest to oops or go into a spin. To avoid misleading the guest into believing that SVE will work, this patch masks out the SVE field from ID_AA64PFR0_EL1 when a guest attempts to read this register. No support is explicitly added for ID_AA64ZFR0_EL1 either, so that is still emulated as reading as zero, which is consistent with SVE not being implemented. This is a temporary measure, and will be removed in a later series when full KVM support for SVE is implemented. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
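A standalone sketch of the masking applied to the guest's view of ID_AA64PFR0_EL1; the SVE field occupies bits [35:32], and the helper is illustrative rather than the kernel accessor:

```c
#include <stdint.h>

#define ID_AA64PFR0_SVE_SHIFT	32	/* SVE support field, bits [35:32] */

/* Clear the SVE field so the guest never sees a feature that would only trap. */
static uint64_t guest_id_aa64pfr0(uint64_t sanitised)
{
	return sanitised & ~(UINT64_C(0xf) << ID_AA64PFR0_SVE_SHIFT);
}
```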
2017-11-03  arm64: KVM: Hide unsupported AArch64 CPU features from guests  (Dave Martin)
Currently, a guest kernel sees the true CPU feature registers (ID_*_EL1) when it reads them using MRS instructions. This means that the guest may observe features that are present in the hardware but the host doesn't understand or doesn't provide support for. A guest may legitimately try to use such a feature as per the architecture, but use of the feature may trap instead of working normally, triggering undef injection into the guest. This is not a problem for the host, but the guest may go wrong when running on newer hardware than the host knows about. This patch hides from guest VMs any AArch64-specific CPU features that the host doesn't support, by exposing to the guest the sanitised versions of the registers computed by the cpufeatures framework, instead of the true hardware registers. To achieve this, HCR_EL2.TID3 is now set for AArch64 guests, and emulation code is added to KVM to report the sanitised versions of the affected registers in response to MRS and register reads from userspace. The affected registers are removed from invariant_sys_regs[] (since the invariant_sys_regs handling is no longer quite correct for them) and added to sys_reg_descs[], with appropriate access(), get_user() and set_user() methods. No runtime vcpu storage is allocated for the registers: instead, they are read on demand from the cpufeatures framework. This may need modification in the future if there is a need for userspace to customise the features visible to the guest. Attempts by userspace to write the registers are handled similarly to the current invariant_sys_regs handling: writes are permitted, but only if they don't attempt to change the value. This is sufficient to support VM snapshot/restore from userspace. Because of the additional registers, restoring a VM on an older kernel may not work unless userspace knows how to handle the extra VM registers exposed to the KVM user ABI by this patch. Under the principle of least damage, this patch makes no attempt to handle any of the other registers currently in invariant_sys_regs[], or to emulate registers for AArch32: however, these could be handled in a similar way in future, as necessary. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
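The userspace-write rule described above ("writes are permitted, but only if they don't attempt to change the value") boils down to a comparison; a hedged sketch, not the kernel's set_user() implementation:

```c
#include <errno.h>
#include <stdint.h>

/*
 * Accept a userspace write to an emulated ID register only if it matches the
 * value already exposed; this keeps VM snapshot/restore working without
 * letting userspace customise the features visible to the guest.
 */
static int set_emulated_id_reg(uint64_t exposed_val, uint64_t new_val)
{
	return (new_val == exposed_val) ? 0 : -EINVAL;
}
```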