path: root/arch/arm64/include/asm/cpufeature.h
Age | Commit message | Author
2019-10-11 | arm64: Always enable ssb vulnerability detection | Jeremy Linton

[ Upstream commit d42281b6e49510f078ace15a8ea10f71e6262581 ]

Ensure we are always able to detect whether or not the CPU is affected by SSB, so that we can later advertise this to userspace.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
[will: Use IS_ENABLED instead of #ifdef]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
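A minimal sketch of the IS_ENABLED pattern mentioned in the [will: ...] note, with a stand-in macro and a hypothetical probe so it compiles as plain C; the point is that detection always runs, and only the mitigation is gated on the config option:

    #include <stdbool.h>
    #include <stdio.h>

    #define CONFIG_ARM64_SSBD 1             /* pretend CONFIG_ARM64_SSBD=y */
    #define IS_ENABLED(option) (option)     /* stand-in for the kernel macro */

    static bool cpu_affected_by_ssb(void)   /* hypothetical probe */
    {
        return true;
    }

    static bool ssbd_detect_and_enable(void)
    {
        bool affected = cpu_affected_by_ssb();  /* detection always runs... */

        printf("SSB affected: %s\n", affected ? "yes" : "no");
        if (!IS_ENABLED(CONFIG_ARM64_SSBD))     /* ...mitigation is gated */
            return false;
        return affected;
    }

    int main(void)
    {
        return ssbd_detect_and_enable() ? 0 : 1;
    }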
2019-08-06 | arm64: cpufeature: Fix feature comparison for CTR_EL0.{CWG,ERG} | Will Deacon

commit 147b9635e6347104b91f48ca9dca61eb0fbf2a54 upstream.

If CTR_EL0.{CWG,ERG} are 0b0000 then they must be interpreted to have their architecturally maximum values, which defeats the use of FTR_HIGHER_SAFE when sanitising CPU ID registers on heterogeneous machines.

Introduce FTR_HIGHER_OR_ZERO_SAFE so that these fields effectively saturate at zero.

Fixes: 3c739b571084 ("arm64: Keep track of CPU feature registers")
Cc: <stable@vger.kernel.org> # 4.4.x-
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
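A stand-alone model of the two safe-value policies, assuming the semantics described above (0b0000 means "architectural maximum", so the sanitised value must saturate at zero rather than take the numerically higher field):

    #include <assert.h>
    #include <stdint.h>

    static int64_t ftr_higher_safe(int64_t new, int64_t cur)
    {
        return new > cur ? new : cur;       /* plain FTR_HIGHER_SAFE */
    }

    static int64_t ftr_higher_or_zero_safe(int64_t new, int64_t cur)
    {
        if (new == 0 || cur == 0)
            return 0;                       /* 0 == architectural maximum */
        return ftr_higher_safe(new, cur);
    }

    int main(void)
    {
        assert(ftr_higher_safe(0, 4) == 4);          /* wrong for CWG/ERG */
        assert(ftr_higher_or_zero_safe(0, 4) == 0);  /* saturates at zero */
        assert(ftr_higher_or_zero_safe(3, 4) == 4);  /* otherwise: higher */
        return 0;
    }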
2018-06-12 | Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm | Linus Torvalds

Pull KVM updates from Paolo Bonzini:
 "Small update for KVM:

  ARM:
   - lazy context-switching of FPSIMD registers on arm64
   - "split" regions for vGIC redistributor

  s390:
   - cleanups for nested
   - clock handling
   - crypto
   - storage keys
   - control register bits

  x86:
   - many bugfixes
   - implement more Hyper-V super powers
   - implement lapic_timer_advance_ns even when the LAPIC timer is emulated using the processor's VMX preemption timer
   - two security-related bugfixes at the top of the branch"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (79 commits)
  kvm: fix typo in flag name
  kvm: x86: use correct privilege level for sgdt/sidt/fxsave/fxrstor access
  KVM: x86: pass kvm_vcpu to kvm_read_guest_virt and kvm_write_guest_virt_system
  KVM: x86: introduce linear_{read,write}_system
  kvm: nVMX: Enforce cpl=0 for VMX instructions
  kvm: nVMX: Add support for "VMWRITE to any supported field"
  kvm: nVMX: Restrict VMX capability MSR changes
  KVM: VMX: Optimize tscdeadline timer latency
  KVM: docs: nVMX: Remove known limitations as they do not exist now
  KVM: docs: mmu: KVM support exposing SLAT to guests
  kvm: no need to check return value of debugfs_create functions
  kvm: Make VM ioctl do valloc for some archs
  kvm: Change return type to vm_fault_t
  KVM: docs: mmu: Fix link to NPT presentation from KVM Forum 2008
  kvm: x86: Amend the KVM_GET_SUPPORTED_CPUID API documentation
  KVM: x86: hyperv: declare KVM_CAP_HYPERV_TLBFLUSH capability
  KVM: x86: hyperv: simplistic HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE}_EX implementation
  KVM: x86: hyperv: simplistic HVCALL_FLUSH_VIRTUAL_ADDRESS_{LIST,SPACE} implementation
  KVM: introduce kvm_make_vcpus_request_mask() API
  KVM: x86: hyperv: do rep check for each hypercall separately
  ...
2018-05-31 | arm64: ssbd: Restore mitigation status on CPU resume | Marc Zyngier

On a system where firmware can dynamically change the state of the mitigation, the CPU will always come up with the mitigation enabled, including when coming back from suspend. If the user has requested "no mitigation" via a command line option, let's enforce it by calling into the firmware again to disable it.

Similarly, for a resume from hibernate, the mitigation could have been disabled by the boot kernel. Let's ensure that it is set back on in that case.

Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2018-05-31 | arm64: ssbd: Add global mitigation state accessor | Marc Zyngier

We're about to need the mitigation state in various parts of the kernel in order to do the right thing for userspace and guests. Let's expose an accessor that will let other subsystems know about the state.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
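A sketch of the accessor's shape; the state names follow the kernel's ARM64_SSBD_* convention, but treat the exact set and values as an assumption of this sketch:

    enum ssbd_state {
        ARM64_SSBD_UNKNOWN = -1,
        ARM64_SSBD_FORCE_DISABLE,   /* user forced the mitigation off */
        ARM64_SSBD_KERNEL,          /* dynamic, toggled on entry/exit */
        ARM64_SSBD_FORCE_ENABLE,    /* user forced the mitigation on */
        ARM64_SSBD_MITIGATED,       /* unaffected, or firmware-mitigated */
    };

    static enum ssbd_state ssbd_state = ARM64_SSBD_KERNEL;

    /* Other subsystems (e.g. KVM) query the global state through this. */
    static inline enum ssbd_state arm64_get_ssbd_state(void)
    {
        return ssbd_state;
    }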
2018-05-31 | arm64: Add 'ssbd' command-line option | Marc Zyngier

On a system where the firmware implements ARCH_WORKAROUND_2, it may be useful to either permanently enable or disable the workaround for cases where the user decides that they'd rather not get a trap overhead, and keep the mitigation permanently on or off instead of switching it on exception entry/exit. In any case, default to the mitigation being enabled.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
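A plain-C model of the option parsing; the kernel hooks this up via early_param(), and the three value strings ("force-on", "force-off", "kernel") are assumptions of this sketch rather than quoted from the patch:

    #include <string.h>

    enum { SSBD_FORCE_DISABLE, SSBD_KERNEL, SSBD_FORCE_ENABLE };

    static int parse_ssbd(const char *str, int *state)
    {
        if (strcmp(str, "force-on") == 0)
            *state = SSBD_FORCE_ENABLE;     /* mitigation permanently on */
        else if (strcmp(str, "force-off") == 0)
            *state = SSBD_FORCE_DISABLE;    /* mitigation permanently off */
        else if (strcmp(str, "kernel") == 0)
            *state = SSBD_KERNEL;           /* toggle on kernel entry/exit */
        else
            return -1;                      /* unknown value: keep default */
        return 0;
    }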
2018-05-25 | arm64/sve: Move read_zcr_features() out of cpufeature.h | Dave Martin

Having read_zcr_features() inline in cpufeature.h results in that header requiring #includes which make it hard to include <asm/fpsimd.h> elsewhere without triggering header inclusion cycles.

This is not a hot-path function and arguably should not be in cpufeature.h in the first place, so this patch moves it to fpsimd.c, compiled conditionally if CONFIG_ARM64_SVE=y.

This allows some SVE-related #includes to be dropped from cpufeature.h, which will ease future maintenance.

A couple of missing #includes of <asm/fpsimd.h> are exposed by this change under arch/arm64/. This patch adds the missing #includes as necessary.

No functional change.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2018-03-26 | arm64: capabilities: Handle shared entries | Suzuki K Poulose

Some capabilities have different criteria for detection and associated actions based on the matching criteria, even though they all share the same capability bit. So far we have used multiple entries with the same capability bit to handle this. This is prone to errors, as the cpu_enable is invoked for each entry, irrespective of whether the detection rule applies to the CPU or not. It also complicates other helpers, e.g., __this_cpu_has_cap.

This patch adds a wrapper entry to cover all the possible variations of a capability, by maintaining a list of matches + cpu_enable callbacks. To avoid complicating the prototypes for the "matches()", we use arm64_cpu_capabilities to maintain the list and ignore all the other fields except the matches & cpu_enable. This ensures:

 1) The capability is set when at least one of the entries detects it.
 2) Action is taken only for the entries that match, avoiding explicit checks in the cpu_enable() callback.

The only constraint here is that all the entries should have the same "type" (i.e., scope and conflict rules). If a cpu_enable() method is associated with multiple matches for a single capability, care should be taken that either the match criteria are mutually exclusive, or that the method is robust against being called multiple times.

This also reverts the changes introduced by commit 67948af41f2e6818ed ("arm64: capabilities: Handle duplicate entries for a capability").

Cc: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26 | arm64: capabilities: Add support for checks based on a list of MIDRs | Suzuki K Poulose

Add helpers for detecting an erratum on a list of MIDR ranges of affected CPUs which share the same work around.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26 | arm64: Add helpers for checking CPU MIDR against a range | Suzuki K Poulose

Add helpers for checking if the given CPU midr falls in a range of variants/revisions for a given model.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
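A simplified model of such a range check. MIDR_EL1 packs Variant into bits [23:20] and Revision into bits [3:0] (architectural layout); the struct and helper names here are illustrative, not the kernel's exact definitions:

    #include <stdbool.h>
    #include <stdint.h>

    struct midr_range {
        uint32_t model;          /* MIDR with variant/revision zeroed */
        uint8_t rv_min, rv_max;  /* (variant << 4) | revision */
    };

    /* Keep Implementer/Architecture/PartNum, drop Variant/Revision. */
    #define MIDR_CPU_MODEL_MASK 0xff0ffff0u

    static inline uint8_t midr_rv(uint32_t midr)
    {
        return (uint8_t)((((midr >> 20) & 0xf) << 4) | (midr & 0xf));
    }

    static bool is_midr_in_range(uint32_t midr, const struct midr_range *r)
    {
        uint8_t rv = midr_rv(midr);

        return (midr & MIDR_CPU_MODEL_MASK) == r->model &&
               rv >= r->rv_min && rv <= r->rv_max;
    }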
2018-03-26 | arm64: capabilities: Change scope of VHE to Boot CPU feature | Suzuki K Poulose

We expect all CPUs to be running at the same EL inside the kernel with or without VHE enabled, and we have strict checks to ensure that any mismatch triggers a kernel panic. If VHE is enabled, we use the feature based on the boot CPU and all other CPUs should follow. This makes it a perfect candidate for a capability based on the boot CPU, which should be matched by all the CPUs (both when VHE is on and off). This saves us some not-so-pretty hooks and special code, just for verifying the conflict.

The patch also makes the VHE capability entry depend on CONFIG_ARM64_VHE.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26 | arm64: capabilities: Add support for features enabled early | Suzuki K Poulose

The kernel detects and uses some of the features based on the boot CPU and expects that all the following CPUs conform to it. e.g., with VHE and the boot CPU running at EL2, the kernel decides to keep the kernel running at EL2. If another CPU is brought up without this capability, we use custom hooks (via check_early_cpu_features()) to handle it.

To handle such capabilities, add support for detecting and enabling capabilities based on the boot CPU. A bit is added to indicate if the capability should be detected early on the boot CPU. The infrastructure then ensures that such capabilities are probed and "enabled" early on in the boot CPU, and enabled on the subsequent CPUs.

Cc: Julien Thierry <julien.thierry@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26 | arm64: capabilities: Restrict KPTI detection to boot-time CPUs | Suzuki K Poulose

KPTI is treated as a system wide feature and is only detected if all the CPUs in the system need the defense, unless it is forced via kernel command line. This leaves a system with a mix of CPUs with and without the defense vulnerable. Also, if a late CPU needs KPTI but KPTI was not activated at boot time, the CPU is currently allowed to boot, which is a potential security vulnerability.

This patch ensures that KPTI is turned on if at least one CPU detects the capability (i.e., change scope to SCOPE_LOCAL_CPU), and rejects a late CPU that requires the defense when the system hasn't enabled it.

Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26 | arm64: capabilities: Introduce weak features based on local CPU | Suzuki K Poulose

Now that we have the flexibility of defining system features based on individual CPUs, introduce a CPU feature type that can be detected on a local-CPU scope and ignores conflicts on late CPUs. This is applicable for ARM64_HAS_NO_HW_PREFETCH, where it is fine for the system to have CPUs without hardware prefetch turning up later. We only suffer a performance penalty, nothing fatal.

Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26 | arm64: capabilities: Filter the entries based on a given mask | Suzuki K Poulose

While processing the list of capabilities, it is useful to filter out some of the entries based on the given mask for the scope of the capabilities to allow better control. This can be used later for handling LOCAL vs SYSTEM wide capabilities and more. All capabilities should have their scope set to either LOCAL_CPU or SYSTEM. No functional/flow change.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26 | arm64: capabilities: Add flags to handle the conflicts on late CPU | Suzuki K Poulose

When a CPU is brought up, it is checked against the caps that are known to be enabled on the system (via verify_local_cpu_capabilities()). Based on the state of the capability on the CPU vs. that of the system, we could have the following combinations of conflict:

	x----------------------------x
	| Type | System | Late CPU  |
	|----------------------------|
	|  a   |   y    |    n      |
	|----------------------------|
	|  b   |   n    |    y      |
	x----------------------------x

Case (a) is not permitted for caps which are system features, which the system expects all the CPUs to have (e.g., VHE), while (a) is ignored for all errata work arounds. However, there could be exceptions to the plain filtering approach; e.g., KPTI is an optional feature for a late CPU as long as the system already enables it.

Case (b) is not permitted for errata work arounds that cannot be activated after the kernel has finished booting. And we ignore (b) for features. Here, yet again, KPTI is an exception, where if a late CPU needs KPTI we are too late to enable it (because we change the allocation of ASIDs etc.).

Add two different flags to indicate how the conflict should be handled:

 ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU - CPUs may have the capability
 ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU  - CPUs may not have the capability

Now that we have the flags to describe the behavior of the errata and the features, as we treat them, define types for ERRATUM and FEATURE, as shown in the sketch below.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
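A sketch of how the two flags compose with the detection scope into the ERRATUM/FEATURE types; the bit positions are illustrative and the kernel's actual values may differ:

    /* Scope: where the capability is detected. */
    #define ARM64_CPUCAP_SCOPE_LOCAL_CPU            (1 << 0)
    #define ARM64_CPUCAP_SCOPE_SYSTEM               (1 << 1)
    /* A late CPU may have the capability although the system lacks it. */
    #define ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU     (1 << 2)
    /* A late CPU may lack the capability although the system has it. */
    #define ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU      (1 << 3)

    /* Errata: detected on any one CPU; late CPUs without it are fine. */
    #define ARM64_CPUCAP_LOCAL_CPU_ERRATUM \
        (ARM64_CPUCAP_SCOPE_LOCAL_CPU | ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU)

    /* Features: detected system-wide; late CPUs with it are fine. */
    #define ARM64_CPUCAP_SYSTEM_FEATURE \
        (ARM64_CPUCAP_SCOPE_SYSTEM | ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU)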
2018-03-26 | arm64: capabilities: Prepare for fine grained capabilities | Suzuki K Poulose

We use arm64_cpu_capabilities to represent CPU ELF HWCAPs exposed to userspace and the CPU hwcaps used by the kernel, which include cpu features and CPU errata work arounds. Capabilities have some properties that decide how they should be treated:

 1) Detection, i.e., scope: A cap could be "detected" either:
    - if it is present on at least one CPU (SCOPE_LOCAL_CPU), or
    - if it is present on all the CPUs (SCOPE_SYSTEM)

 2) When is it enabled? A cap is treated as "enabled" when the system takes some action based on whether the capability is detected or not, e.g., setting some control register, patching the kernel code. Right now, we treat all caps as enabled at boot-time, after all the CPUs are brought up by the kernel. But there are certain caps which are enabled early during the boot (e.g., VHE, GIC_CPUIF for NMI) and the kernel starts using them even before the secondary CPUs are brought up. We would need a way to describe this for each capability.

 3) Conflict on a late CPU: When a CPU is brought up, it is checked against the caps that are known to be enabled on the system (via verify_local_cpu_capabilities()). Based on the state of the capability on the CPU vs. that of the system, we could have the following combinations of conflict:

	x----------------------------x
	| Type | System | Late CPU  |
	|----------------------------|
	|  a   |   y    |    n      |
	|----------------------------|
	|  b   |   n    |    y      |
	x----------------------------x

    Case (a) is not permitted for caps which are system features, which the system expects all the CPUs to have (e.g., VHE), while (a) is ignored for all errata work arounds. However, there could be exceptions to the plain filtering approach; e.g., KPTI is an optional feature for a late CPU as long as the system already enables it.

    Case (b) is not permitted for errata work arounds that cannot be delayed past boot. And we ignore (b) for features. Here, yet again, KPTI is an exception, where if a late CPU needs KPTI we are too late to enable it (because we change the allocation of ASIDs etc.).

So this calls for a lot more fine grained behavior for each capability. And if we define all the attributes to control their behavior properly, we may be able to use a single table for the CPU hwcaps (which cover errata and features, not the ELF HWCAPs). This is a preparatory step to get there. More bits will be added for the properties listed above.

We are going to use a bit-mask to encode all the properties of a capability. This patch encodes the "SCOPE" of the capability. As such there is no change in how the capabilities are treated.

Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26 | arm64: capabilities: Move errata processing code | Suzuki K Poulose

We have errata work around processing code in cpu_errata.c, which calls back into helpers defined in cpufeature.c. Now that we are going to make the handling of capabilities generic, by adding the information to each capability, move the errata work around specific processing code. No functional changes.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2018-03-26 | arm64: capabilities: Update prototype for enable call back | Dave Martin

We issue the enable() call back for all CPU hwcaps capabilities available on the system, on all the CPUs. So far we have ignored the argument passed to the call back, which had a prototype to accept a "void *" for use with on_each_cpu() and later with stop_machine(). However, with commit 0a0d111d40fd1 ("arm64: cpufeature: Pass capability structure to ->enable callback"), there are some users of the argument who want the matching capability struct pointer where there are multiple matching criteria for a single capability. Clean up the declaration of the call back to make it clear:

 1) Rename it to cpu_enable(), to imply taking necessary actions on the called CPU for the entry.
 2) Pass a const pointer to the capability, to allow the call back to check the entry (e.g., to check if any action is needed on the CPU).
 3) We don't care about the result of the call back, so turn the return type into void.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: James Morse <james.morse@arm.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Dave Martin <dave.martin@arm.com>
[suzuki: convert more users, rename call back and drop results]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
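A sketch of the prototype change described in points 1-3, using a forward declaration so it stands alone:

    struct arm64_cpu_capabilities;  /* forward declaration */

    /* Before: stop_machine()-style callback; the generic argument and
     * the return value were both effectively unused. */
    typedef int (*old_enable_fn)(void *arg);

    /* After: the matching entry itself is passed in, and the result
     * is dropped entirely. */
    typedef void (*new_cpu_enable_fn)(const struct arm64_cpu_capabilities *cap);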
2018-03-09 | arm64/errata: add REVIDR handling to framework | Ard Biesheuvel

In some cases, core variants that are affected by a certain erratum also exist in versions that have the erratum fixed, and this fact is recorded in a dedicated bit in system register REVIDR_EL1.

Since the architecture does not require that a certain bit retains its meaning across different variants of the same model, each such REVIDR bit is tightly coupled to a certain revision/variant value, and so we need a list of revidr_mask/midr pairs to carry this information.

So add the struct member and the associated macros and handling to allow REVIDR fixes to be taken into account.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
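One plausible shape for the revidr_mask/midr pairing (field names are illustrative; the point is the tight coupling of each REVIDR bit to one exact variant/revision):

    #include <stdint.h>

    struct midr_revidr_fix {
        uint32_t midr_rv;       /* exact (variant, revision) this covers */
        uint32_t revidr_mask;   /* REVIDR_EL1 bits meaning "erratum fixed" */
    };

    /* A core is fixed if it is the listed revision and all the listed
     * REVIDR bits are set; otherwise the erratum must be assumed present. */
    static int erratum_fixed(uint32_t midr_rv, uint32_t revidr,
                             const struct midr_revidr_fix *f)
    {
        return midr_rv == f->midr_rv &&
               (revidr & f->revidr_mask) == f->revidr_mask;
    }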
2017-12-14 | arm64/sve: Report SVE to userspace via CPUID only if supported | Dave Martin

Currently, the SVE field in ID_AA64PFR0_EL1 is visible unconditionally to userspace via the CPU ID register emulation, irrespective of the kernel config. This means that if a kernel configured with CONFIG_ARM64_SVE=n is run on SVE-capable hardware, userspace will see SVE reported as present in the ID regs even though the kernel forbids execution of SVE instructions.

This patch makes the exposure of the SVE field in ID_AA64PFR0_EL1 conditional on CONFIG_ARM64_SVE=y.

Since future architecture features are likely to encounter a similar requirement, this patch adds a suitable helper macro for use when declaring config-conditional ID register fields.

Fixes: 43994d824e84 ("arm64/sve: Detect SVE and activate runtime support")
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
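The helper has roughly this shape, assuming a FTR_VISIBLE/FTR_HIDDEN convention for the visibility of an ID register field: the field is reported to userspace only when the config option is built in, and hidden (replaced by its safe value) otherwise:

    #define FTR_VISIBLE_IF_IS_ENABLED(config) \
        (IS_ENABLED(config) ? FTR_VISIBLE : FTR_HIDDEN)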
2017-11-03 | arm64/sve: Detect SVE and activate runtime support | Dave Martin

This patch enables detection of hardware SVE support via the cpufeatures framework, and reports its presence to the kernel and userspace via the new ARM64_SVE cpucap and HWCAP_SVE hwcap respectively.

Userspace can also detect SVE using ID_AA64PFR0_EL1, using the cpufeatures MRS emulation.

When running on hardware that supports SVE, this enables runtime kernel support for SVE, and allows user tasks to execute SVE instructions and make use of the SVE-specific user/kernel interface extensions implemented by this series.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-11-03 | arm64/sve: Probe SVE capabilities and usable vector lengths | Dave Martin

This patch uses the cpufeatures framework to determine common SVE capabilities and vector lengths, and configures the runtime SVE support code appropriately.

ZCR_ELx is not really a feature register, but it is convenient to use it as a template for recording the maximum vector length supported by a CPU, using the LEN field. This field is similar to a feature field in that it is a contiguous bitfield for which we want to determine the minimum system-wide value. This patch adds ZCR as a pseudo-register in cpuinfo/cpufeatures, with appropriate custom code to populate it. Finding the minimum supported value of the LEN field is left to the cpufeatures framework in the usual way.

The meaning of ID_AA64ZFR0_EL1 is not architecturally defined yet, so for now we just require it to be zero.

Note that much of this code is dormant and SVE still won't be used yet, since system_supports_sve() remains hardwired to false.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-11-03 | arm64/sve: Kconfig update and conditional compilation support | Dave Martin

This patch adds CONFIG_ARM64_SVE to control building of SVE support into the kernel, and adds a stub predicate system_supports_sve() to control conditional compilation and runtime SVE support.

system_supports_sve() just returns false for now: it will be replaced with a non-trivial implementation in a later patch, once SVE support is complete enough to be enabled safely.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
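Per the message above, the stub predicate is hardwired to false, which amounts to:

    static inline bool system_supports_sve(void)
    {
        return false;   /* replaced by a real check once SVE support lands */
    }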
2017-05-17 | arm64/cpufeature: don't use mutex in bringup path | Mark Rutland

Currently, cpus_set_cap() calls static_branch_enable_cpuslocked(), which must take the jump_label mutex.

We call cpus_set_cap() in the secondary bringup path, from the idle thread where interrupts are disabled. Taking a mutex in this path "is a NONO" regardless of whether it's contended, and something we must avoid. We didn't spot this until recently, as ___might_sleep() won't warn for this case until all CPUs have been brought up.

This patch avoids taking the mutex in the secondary bringup path. The poking of static keys is deferred until enable_cpu_capabilities(), which runs in a suitable context on the boot CPU. To account for the static keys being set later, cpus_have_const_cap() is updated to use another static key to check whether the const cap keys have been initialised, falling back to the caps bitmap until this is the case.

This means that users of cpus_have_const_cap() should only gain a single additional NOP in the fast path once the const caps are initialised, and should always see the current cap value.

The hyp code should never dereference the caps array, since the caps are initialized before we run the module initcall to initialise hyp. A check is added to the hyp init code to document this requirement.

This change will sidestep a number of issues when the upcoming hotplug locking rework is merged.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Sewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
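A sketch of the two-stage check described above; names follow the commit message, but treat the exact code as an assumption rather than the verbatim kernel implementation:

    static __always_inline bool cpus_have_const_cap(int num)
    {
        /* An extra key answers "are the const cap keys set up yet?". */
        if (static_branch_likely(&arm64_const_caps_ready))
            return static_branch_unlikely(&cpu_hwcap_keys[num]);
        else
            return cpus_have_cap(num);  /* fall back to the caps bitmap */
    }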
2017-04-04 | arm64: cpufeature: Make ID reg accessor naming less counterintuitive | Dave Martin

read_system_reg() can readily be confused with read_sysreg(), whereas these are really quite different in their meaning.

This patch attempts to reduce the ambiguity by reserving "sysreg" for the actual system register accessors.

read_system_reg() is instead renamed to read_sanitised_ftr_reg(), to make it more obvious that the Linux-defined sanitised feature register cache is being accessed here, not the underlying architectural system registers.

cpufeature.c's internal __raw_read_system_reg() function is renamed in line with its actual purpose: a form of read_sysreg() that indexes on (non-compiletime-constant) encoding rather than symbolic register name.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2017-03-10 | arm64: use const cap for system_uses_ttbr0_pan() | Mark Rutland

Since commit 4b65a5db362783ab ("arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1"), system_uses_ttbr0_pan() has used cpus_have_cap() to determine whether PAN is present.

Since commit a4023f682739439b ("arm64: Add hypervisor safe helper for checking constant capabilities"), which was introduced around the same time, cpus_have_cap() doesn't try to use a static key, and must always perform a load, test, and conditional branch (likely a tbnz for the latter two).

Elsewhere, we moved to using cpus_have_const_cap(), which can use a static key (i.e., a non-conditional branch), which is patched at runtime when the feature is detected.

This patch makes system_uses_ttbr0_pan() use cpus_have_const_cap(). The static key is likely a win for hot paths like the uaccess primitives, and this makes our usage consistent regardless.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-02-24 | arm64/cpufeature: check correct field width when updating sys_val | Mark Rutland

When we're updating a register's sys_val, we use arm64_ftr_value() to find the new field value, which in turn uses cpuid_feature_extract_field() to extract it. This implicitly assumes a 4-bit field, so we may extract more bits than we mean to for fields like CTR_EL0.L1ip.

This affects update_cpu_ftr_reg(), where we may extract erroneous values for ftr_cur and ftr_new. Depending on the additional bits extracted in either case, we may erroneously detect that the value is mismatched, and we'll try to compute a new safe value.

Dependent on these extra bits and feature type, arm64_ftr_safe_value() may pessimistically select the always-safe value, or may erroneously choose either the extracted cur or new value as the safe option. The extra bits will subsequently be masked out in arm64_ftr_set_value(), so we may choose a higher value, yet write back a lower one.

Fix this by passing the width down explicitly in arm64_ftr_value(), so we always extract the correct amount.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
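A stand-alone model of the fix: extract exactly 'width' bits rather than a hard-coded four. CTR_EL0.L1Ip is a 2-bit field at bit 14, with DminLine starting at bit 16, so a 4-bit extract pulls in two bits of the neighbouring field:

    #include <assert.h>
    #include <stdint.h>

    static uint64_t extract_field(uint64_t reg, unsigned int shift,
                                  unsigned int width)
    {
        return (reg >> shift) & ((UINT64_C(1) << width) - 1);
    }

    int main(void)
    {
        /* DminLine = 7 (bits [19:16]), L1Ip = 3 (bits [15:14]) */
        uint64_t ctr = (UINT64_C(7) << 16) | (UINT64_C(3) << 14);

        assert(extract_field(ctr, 14, 2) == 3); /* width passed explicitly */
        assert(((ctr >> 14) & 0xf) == 0xf);     /* the old 4-bit assumption */
        return 0;
    }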
2017-01-10 | arm64: cpufeature: Track user visible fields | Suzuki K Poulose

Track the user visible fields of a CPU feature register. This will be used for exposing the value to the userspace. All the user visible fields of a feature register will be passed on as-is, while the others will be filled with their respective safe value.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2017-01-10 | arm64: cpufeature: Document the rules of safe value for features | Suzuki K Poulose

Document the rules for choosing the safe value for different types of features.

Cc: Dave Martin <dave.martin@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-12-13 | Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux | Linus Torvalds

Pull arm64 updates from Catalin Marinas:

 - struct thread_info moved off-stack (also touching include/linux/thread_info.h and include/linux/restart_block.h)
 - cpus_have_cap() reworked to avoid __builtin_constant_p() for static key use (also touching drivers/irqchip/irq-gic-v3.c)
 - uprobes support (currently only for native 64-bit tasks)
 - Emulation of kernel Privileged Access Never (PAN) using TTBR0_EL1 switching to a reserved page table
 - CPU capacity information passing via DT or sysfs (used by the scheduler)
 - support for systems without FP/SIMD (IOW, kernel avoids touching these registers; there is no soft-float ABI, nor kernel emulation for AArch64 FP/SIMD)
 - handling of hardware watchpoint with unaligned addresses, varied lengths and offsets from base
 - use of the page table contiguous hint for kernel mappings
 - hugetlb fixes for sizes involving the contiguous hint
 - remove unnecessary I-cache invalidation in flush_cache_range()
 - CNTHCTL_EL2 access fix for CPUs with VHE support (ARMv8.1)
 - boot-time checks for writable+executable kernel mappings
 - simplify asm/opcodes.h and avoid including the 32-bit ARM counterpart and make the arm64 kernel headers self-consistent (Xen headers patch merged separately)
 - Workaround for broken .inst support in certain binutils versions

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (60 commits)
  arm64: Disable PAN on uaccess_enable()
  arm64: Work around broken .inst when defective gas is detected
  arm64: Add detection code for broken .inst support in binutils
  arm64: Remove reference to asm/opcodes.h
  arm64: Get rid of asm/opcodes.h
  arm64: smp: Prevent raw_smp_processor_id() recursion
  arm64: head.S: Fix CNTHCTL_EL2 access on VHE system
  arm64: Remove I-cache invalidation from flush_cache_range()
  arm64: Enable HIBERNATION in defconfig
  arm64: Enable CONFIG_ARM64_SW_TTBR0_PAN
  arm64: xen: Enable user access before a privcmd hvc call
  arm64: Handle faults caused by inadvertent user access with PAN enabled
  arm64: Disable TTBR0_EL1 during normal kernel execution
  arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1
  arm64: Factor out TTBR0_EL1 post-update workaround into a specific asm macro
  arm64: Factor out PAN enabling/disabling into separate uaccess_* macros
  arm64: Update the synchronous external abort fault description
  selftests: arm64: add test for unaligned/inexact watchpoint handling
  arm64: Allow hw watchpoint of length 3,5,6 and 7
  arm64: hw_breakpoint: Handle inexact watchpoint addresses
  ...
2016-11-21 | arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1 | Catalin Marinas

This patch adds the uaccess macros/functions to disable access to user space by setting TTBR0_EL1 to a reserved zeroed page. Since the value written to TTBR0_EL1 must be a physical address, for simplicity this patch introduces a reserved_ttbr0 page at a constant offset from swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value adjusted by the reserved_ttbr0 offset.

Enabling access to user is done by restoring TTBR0_EL1 with the value from the struct thread_info ttbr0 variable. Interrupts must be disabled during the uaccess_ttbr0_enable code to ensure the atomicity of the thread_info.ttbr0 read and TTBR0_EL1 write.

This patch also moves the get_thread_info asm macro from entry.S to assembler.h for reuse in the uaccess_ttbr0_* macros.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-16 | arm64: Support systems without FP/ASIMD | Suzuki K Poulose

The arm64 kernel assumes that FP/ASIMD units are always present and accesses the FP/ASIMD specific registers unconditionally. This could cause problems when they are absent. This patch adds support for the kernel to handle systems without FP/ASIMD by skipping the register accesses within the kernel. For KVM, we trap the accesses to FP/ASIMD and inject an undefined instruction exception to the VM.

The callers of the exported kernel_neon_begin_partial() should make sure that FP/ASIMD is supported.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: add comment on the ARM64_HAS_NO_FPSIMD conflict and the new location]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-16 | arm64: Add hypervisor safe helper for checking constant capabilities | Suzuki K Poulose

The hypervisor may not have full access to the kernel data structures and hence cannot safely use the cpus_have_cap() helper for checking the system capability. Add a safe helper for hypervisors to check a constant system capability, which *doesn't* fall back to checking the bitmap maintained by the kernel. With this, make cpus_have_cap() only check the bitmask and force constant cap checks to use the new API for quicker checks.

Cc: Robert Ritcher <rritcher@cavium.com>
Cc: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-11-05 | arm64: Fix circular include of asm/lse.h through linux/jump_label.h | Catalin Marinas

Commit efd9e03facd0 ("arm64: Use static keys for CPU features") introduced support for static keys in asm/cpufeature.h, including linux/jump_label.h. When CC_HAVE_ASM_GOTO is not defined, this causes a circular dependency via linux/atomic.h, asm/lse.h and asm/cpufeature.h.

This patch moves the capability macros out of asm/cpufeature.h into a separate asm/cpucaps.h and modifies some of the #includes accordingly.

Fixes: efd9e03facd0 ("arm64: Use static keys for CPU features")
Reported-by: Artem Savkov <asavkov@redhat.com>
Tested-by: Artem Savkov <asavkov@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-10-20 | arm64: cpufeature: Schedule enable() calls instead of calling them via IPI | James Morse

The enable() call for a cpufeature/errata is called using on_each_cpu(). This issues a cross-call IPI to get the work done. Implicitly, this stashes the running PSTATE in SPSR when the CPU receives the IPI, and restores it when we return. This means an enable() call can never modify PSTATE.

To allow PAN to do this, change the on_each_cpu() call to use stop_machine(). This schedules the work on each CPU, which allows us to modify PSTATE. This involves changing the prototype of all the enable() functions.

enable_cpu_capabilities() is called during boot and enables the feature on all online CPUs. This path now uses stop_machine(). CPU features for hotplugged CPUs are enabled by verify_local_cpu_features(), which only acts on the local CPU and can already modify the running PSTATE as it is called from secondary_start_kernel().

Reported-by: Tony Thompson <anthony.thompson@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 | arm64: Work around systems with mismatched cache line sizes | Suzuki K Poulose

Systems with differing CPU i-cache/d-cache line sizes can cause problems with the cache management by software when the execution is migrated from one to another. Usually, the application reads the cache size on a CPU and then uses that length to perform cache operations. However, if it gets migrated to another CPU with a smaller cache line size, things could go completely wrong. To prevent such cases, always use the smallest cache line size among the CPUs. The kernel CPU feature infrastructure already keeps track of the safe value for all CPUID registers including CTR. This patch works around the problem by:

 - For the kernel: dynamically patching the kernel to read the cache size from the system wide copy of CTR_EL0.
 - For applications: trapping read accesses to CTR_EL0 (by clearing SCTLR.UCT) and emulating the mrs instruction to return the system wide safe value of CTR_EL0.

For faster access (i.e., avoiding a lookup of the system wide value of CTR_EL0 via read_system_reg), we keep track of the pointer to the table entry for CTR_EL0 in the CPU feature infrastructure.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 | arm64: Rearrange CPU errata workaround checks | Suzuki K Poulose

Right now we run through the work around checks on a CPU from __cpuinfo_store_cpu. There are some problems with that:

 1) We initialise the system wide CPU feature registers only after the boot CPU updates its cpuinfo. Now, if a work around depends on the variance of a CPU ID feature (e.g., check for cache line size mismatch), we have no way of performing it cleanly for the boot CPU.

 2) It is out of place, invoked from __cpuinfo_store_cpu() in cpuinfo.c. It is not an obvious place for that.

This patch rearranges the CPU specific capability (aka work around) checks:

 1) At the moment we use verify_local_cpu_capabilities() to check if a new CPU has all the system advertised features. Use this for the secondary CPUs to perform the work around check. For that we rename verify_local_cpu_capabilities() => check_local_cpu_capabilities(), which:
    - if the system wide capabilities haven't been initialised (i.e., the CPU is activated at boot), updates the system wide detected work arounds;
    - otherwise (i.e., a CPU hotplugged in later) verifies that this CPU conforms to the system wide capabilities.

 2) The boot CPU updates the work arounds from smp_prepare_boot_cpu(), after we have initialised the system wide CPU feature values.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 | arm64: Use consistent naming for errata handling | Suzuki K Poulose

This is a cosmetic change to rename the functions dealing with the errata work arounds to be more consistent with their naming:

 1) check_local_cpu_errata() => update_cpu_errata_workarounds()
    check_local_cpu_errata() actually updates the system's errata work arounds, so rename it to reflect the same.

 2) verify_local_cpu_errata() => verify_local_cpu_errata_workarounds()
    Use errata_workarounds instead of _errata.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-09 | arm64: Set the safe value for L1 icache policy | Suzuki K Poulose

Right now we use 0 as the safe value for CTR_EL0:L1Ip, which is not defined at the moment. The safer value for the L1Ip should be the weakest of the policies, which happens to be AIVIVT. While at it, fix the comment about safe_val.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-09-07 | arm64: Use static keys for CPU features | Catalin Marinas

This patch adds static keys transparently for all the cpu_hwcaps features by implementing an array of default-false static keys and enabling them when detected. The cpus_have_cap() check uses the static keys if the feature being checked is a constant, otherwise the compiler generates the bitmap test.

Because of the early call to static_branch_enable() via check_local_cpu_errata() -> update_cpu_capabilities(), the jump labels are initialised in cpuinfo_store_boot_cpu().

Cc: Will Deacon <will.deacon@arm.com>
Cc: Suzuki K. Poulose <Suzuki.Poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
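A sketch of the dual path this describes, shaped after the commit message rather than taken verbatim from the kernel: a compile-time-constant cap number compiles down to a patched static branch, anything else tests the bitmap:

    static inline bool cpus_have_cap(unsigned int num)
    {
        if (num >= ARM64_NCAPS)
            return false;
        if (__builtin_constant_p(num))          /* resolved at compile time */
            return static_branch_unlikely(&cpu_hwcap_keys[num]);
        else
            return test_bit(num, cpu_hwcaps);   /* generic bitmap test */
    }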
2016-08-31 | arm64: cpufeature: expose arm64_ftr_reg struct for CTR_EL0 | Ard Biesheuvel

Expose the arm64_ftr_reg struct covering CTR_EL0 outside of cpufeature.o so that other code can refer to it directly (i.e., without performing the binary search).

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-08-31 | arm64: cpufeature: constify arm64_ftr_regs array | Ard Biesheuvel

Constify the arm64_ftr_regs array by moving the mutable arm64_ftr_reg fields out of the array itself. This also streamlines the bsearch, since the entire array can be covered by fewer cachelines. Moving the payload out of the array also allows us to have a special, explicitly defined struct instance in case other code needs to refer to it directly.

Note that this replaces the runtime sorting of the array with a runtime BUG() check that the array is sorted correctly in the code.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-08-31 | arm64: cpufeature: constify arm64_ftr_bits structures | Ard Biesheuvel

The arm64_ftr_bits structures are never modified, so make them read-only.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-08-02 | Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm | Linus Torvalds

Pull KVM updates from Paolo Bonzini:

 - ARM: GICv3 ITS emulation and various fixes. Removal of the old VGIC implementation.
 - s390: support for trapping software breakpoints, nested virtualization (vSIE), the STHYI opcode, initial extensions for CPU model support.
 - MIPS: support for MIPS64 hosts (32-bit guests only) and lots of cleanups, preliminary to this and the upcoming support for hardware virtualization extensions.
 - x86: support for execute-only mappings in nested EPT; reduced vmexit latency for TSC deadline timer (by about 30%) on Intel hosts; support for more than 255 vCPUs.
 - PPC: bugfixes.

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (302 commits)
  KVM: PPC: Introduce KVM_CAP_PPC_HTM
  MIPS: Select HAVE_KVM for MIPS64_R{2,6}
  MIPS: KVM: Reset CP0_PageMask during host TLB flush
  MIPS: KVM: Fix ptr->int cast via KVM_GUEST_KSEGX()
  MIPS: KVM: Sign extend MFC0/RDHWR results
  MIPS: KVM: Fix 64-bit big endian dynamic translation
  MIPS: KVM: Fail if ebase doesn't fit in CP0_EBase
  MIPS: KVM: Use 64-bit CP0_EBase when appropriate
  MIPS: KVM: Set CP0_Status.KX on MIPS64
  MIPS: KVM: Make entry code MIPS64 friendly
  MIPS: KVM: Use kmap instead of CKSEG0ADDR()
  MIPS: KVM: Use virt_to_phys() to get commpage PFN
  MIPS: Fix definition of KSEGX() for 64-bit
  KVM: VMX: Add VMCS to CPU's loaded VMCSs before VMPTRLD
  kvm: x86: nVMX: maintain internal copy of current VMCS
  KVM: PPC: Book3S HV: Save/restore TM state in H_CEDE
  KVM: PPC: Book3S HV: Pull out TM state save/restore into separate procedures
  KVM: arm64: vgic-its: Simplify MAPI error handling
  KVM: arm64: vgic-its: Make vgic_its_cmd_handle_mapi similar to other handlers
  KVM: arm64: vgic-its: Turn device_id validation into generic ID validation
  ...
2016-07-03 | arm64: Add ARM64_HYP_OFFSET_LOW capability | Marc Zyngier

As we need to indicate to the rest of the kernel which region of the HYP VA space is safe to use, add a capability that will indicate that KVM should use the [VA_BITS-2:0] range.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2016-07-01 | arm64: errata: Calling enable functions for CPU errata too | Andre Przywara

Currently we call the (optional) enable function for CPU _features_ only. As CPU _errata_ descriptions share the same data structure and having an enable function is useful for errata as well (for instance to set bits in SCTLR), let's call it when enumerating errata too.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2016-04-25 | arm64: Verify CPU errata work arounds on hotplugged CPU | Suzuki K Poulose

CPU errata work arounds are detected and applied to the kernel code at boot time and the data is then freed up. If a new hotplugged CPU requires a work around which was not applied at boot time, there is nothing we can do but simply fail the boot.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-04-25 | arm64: Allow a capability to be checked on a single CPU | Marc Zyngier

Now that the capabilities are only available once all the CPUs have booted, we're unable to check for a particular feature in any subsystem that gets initialized before then.

In order to support this, introduce a local_cpu_has_cap() function that tests for the presence of a given capability independently of the whole framework.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[ Added preemptible() check ]
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[will: remove duplicate initialisation of caps in this_cpu_has_cap]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2016-04-25 | arm64: cpufeature: Add scope for capability check | Suzuki K Poulose

Add a scope parameter to arm64_cpu_capabilities::matches(), so that this can be reused for checking the capability on a given CPU vs. the system wide. The system uses the default scope associated with the capability for initialising the CPU_HWCAPs and ELF_HWCAPs.

Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
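A sketch of the resulting interface; the scope values and the default-scope member follow the description above, with the details assumed rather than quoted:

    #include <stdbool.h>

    struct arm64_cpu_capabilities;  /* the real struct carries many more fields */

    enum {
        SCOPE_SYSTEM,       /* test against the sanitised system-wide value */
        SCOPE_LOCAL_CPU,    /* test against this CPU's own registers */
    };

    struct cap_entry_sketch {
        /* The same matches() routine can now be asked about either scope. */
        bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
        int def_scope;      /* default scope, used for HWCAP initialisation */
    };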