path: root/arch/arm64/include
Age  Commit message  Author
2024-02-21  Merge tag 'v4.19.264' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.264 stable release

2022-11-03  arm64: errata: Remove AES hwcap for COMPAT tasks  (James Morse)

    commit 44b3834b2eed595af07021b1c64e6f9bc396398b upstream.

    Cortex-A57 and Cortex-A72 have an erratum where an interrupt that
    occurs between a pair of AES instructions in aarch32 mode may corrupt
    the ELR. The task will subsequently produce the wrong AES result.

    The AES instructions are part of the cryptographic extensions, which
    are optional. User-space software detects support for these
    instructions from the hwcaps. If the platform doesn't support these
    instructions, a software implementation should be used.

    Remove the hwcap bits on affected parts to indicate that user-space
    should not use the AES instructions.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: James Morse <james.morse@arm.com>
    Link: https://lore.kernel.org/r/20220714161523.279570-3-james.morse@arm.com
    Signed-off-by: Will Deacon <will@kernel.org>
    [florian: resolved conflicts in arch/arm64/tools/cpucaps and cpu_errata.c]
    Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

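    The shape of such a fix is small; a hedged sketch (the capability name
    and fixup hook are illustrative, not necessarily the exact 4.19
    backport):

      /*
       * Sketch only: clear the AArch32 AES/PMULL hwcaps on parts with the
       * erratum so user-space falls back to a software implementation.
       * ARM64_WORKAROUND_AES_ERRATUM is a made-up capability name.
       */
      static void __init elf_hwcap_fixup(void)
      {
          if (cpus_have_const_cap(ARM64_WORKAROUND_AES_ERRATUM))
              compat_elf_hwcap2 &= ~(COMPAT_HWCAP2_AES | COMPAT_HWCAP2_PMULL);
      }
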
2022-09-10  Merge tag 'v4.19.257' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.257 stable release

2022-09-10  Merge tag 'v4.19.256' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.256 stable release

2022-09-09  Merge tag 'v4.19.238' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.238 stable release

2022-09-09  Merge tag 'v4.19.236' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.236 stable release

2022-09-05  arm64: map FDT as RW for early_init_dt_scan()  (Hsin-Yi Wang)

    commit e112b032a72c78f15d0c803c5dc6be444c2e6c66 upstream.

    Currently on arm64, the FDT is mapped read-only before it is passed to
    early_init_dt_scan(). However, some code (e.g. commit "fdt: add
    support for rng-seed") needs to modify the FDT during init. Map the
    FDT read-only only after the early fixups are done.

    Signed-off-by: Hsin-Yi Wang <hsinyi@chromium.org>
    Reviewed-by: Stephen Boyd <swboyd@chromium.org>
    Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
    Signed-off-by: Will Deacon <will@kernel.org>
    [mkbestas: fixed trivial conflicts for 4.19 backport]
    Signed-off-by: Michael Bestas <mkbestas@gmail.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

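    The resulting flow looks roughly like this (a sketch: the
    fixmap_remap_fdt() signature with a pgprot_t parameter follows the
    upstream change, but treat the surrounding details as assumptions):

      /* Sketch: map the FDT writable for early fixups, then seal it RO. */
      void __init setup_machine_fdt(phys_addr_t dt_phys)
      {
          int size;
          void *dt_virt = fixmap_remap_fdt(dt_phys, &size, PAGE_KERNEL);

          if (dt_virt)
              memblock_reserve(dt_phys, size);

          early_init_dt_scan(dt_virt);  /* may patch the FDT, e.g. rng-seed */

          /* early fixups are done: re-map the FDT read-only */
          fixmap_remap_fdt(dt_phys, &size, PAGE_KERNEL_RO);
      }
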
2022-08-25  arm64: Do not forget syscall when starting a new thread.  (Francis Laniel)

    [ Upstream commit de6921856f99c11d3986c6702d851e1328d4f7f6 ]

    Enable tracing of the execve*() system calls with the
    syscalls:sys_exit_execve tracepoint by removing the call to
    forget_syscall() when starting a new thread and preserving the value
    of regs->syscallno across exec.

    Signed-off-by: Francis Laniel <flaniel@linux.microsoft.com>
    Link: https://lore.kernel.org/r/20220608162447.666494-2-flaniel@linux.microsoft.com
    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

2022-04-15  KVM: arm64: Check arm64_get_bp_hardening_data() didn't return NULL  (James Morse)

    Will reports that with CONFIG_EXPERT=y and
    CONFIG_HARDEN_BRANCH_PREDICTOR=n, the kernel dereferences a NULL
    pointer during boot:

      [    2.384444] Internal error: Oops: 96000004 [#1] PREEMPT SMP
      [    2.384461] pstate: 20400085 (nzCv daIf +PAN -UAO)
      [    2.384472] pc : cpu_hyp_reinit+0x114/0x30c
      [    2.384476] lr : cpu_hyp_reinit+0x80/0x30c
      [    2.384529] Call trace:
      [    2.384533]  cpu_hyp_reinit+0x114/0x30c
      [    2.384537]  _kvm_arch_hardware_enable+0x30/0x54
      [    2.384541]  flush_smp_call_function_queue+0xe4/0x154
      [    2.384544]  generic_smp_call_function_single_interrupt+0x10/0x18
      [    2.384549]  ipi_handler+0x170/0x2b0
      [    2.384555]  handle_percpu_devid_fasteoi_ipi+0x120/0x1cc
      [    2.384560]  __handle_domain_irq+0x9c/0xf4
      [    2.384563]  gic_handle_irq+0x6c/0xe4
      [    2.384566]  el1_irq+0xf0/0x1c0
      [    2.384570]  arch_cpu_idle+0x28/0x44
      [    2.384574]  do_idle+0x100/0x2a8
      [    2.384577]  cpu_startup_entry+0x20/0x24
      [    2.384581]  secondary_start_kernel+0x1b0/0x1cc
      [    2.384589] Code: b9469d08 7100011f 540003ad 52800208 (f9400108)
      [    2.384600] ---[ end trace 266d08dbf96ff143 ]---
      [    2.385171] Kernel panic - not syncing: Fatal exception in interrupt

    In this configuration, arm64_get_bp_hardening_data() returns NULL.
    Add a check in kvm_get_hyp_vector().

    Cc: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/linux-arm-kernel/20220408120041.GB27685@willie-the-truck/
    Fixes: a68912a3ae3 ("KVM: arm64: Add templates for BHB mitigation sequences")
    Cc: stable@vger.kernel.org # 4.19
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

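    The fix amounts to a guard before the dereference; a sketch (helper
    and field names approximate the 4.19 KVM code and should be treated as
    assumptions):

      /* Sketch: fall back to the plain hyp vector when no bp-hardening
       * data was registered (CONFIG_HARDEN_BRANCH_PREDICTOR=n). */
      static inline void *kvm_get_hyp_vector(void)
      {
          struct bp_hardening_data *data = arm64_get_bp_hardening_data();
          void *vect = kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));

          /* data can be NULL in this configuration */
          if (data && data->fn)
              vect = __bp_harden_hyp_vecs_start +
                     data->hyp_vectors_slot * SZ_2K;

          return vect;
      }
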
2022-03-23  arm64: Use the clearbhb instruction in mitigations  (James Morse)

    commit 228a26b912287934789023b4132ba76065d9491c upstream.

    Future CPUs may implement a clearbhb instruction that is sufficient to
    mitigate Spectre-BHB. CPUs that implement this instruction, but not
    CSV2.3, must be affected by Spectre-BHB. Add support to use this
    instruction as the BHB mitigation on CPUs that support it. The
    instruction is in the hint space, so it will be treated as a NOP by
    older CPUs.

    Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    [ modified for stable: Use a KVM vector template instead of alternatives ]
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2022-03-23  arm64: add ID_AA64ISAR2_EL1 sys register  (Joey Gouly)

    commit 9e45365f1469ef2b934f9d035975dbc9ad352116 upstream.

    This is a new ID register, introduced in Armv8.7.

    Signed-off-by: Joey Gouly <joey.gouly@arm.com>
    Cc: Will Deacon <will@kernel.org>
    Cc: Marc Zyngier <maz@kernel.org>
    Cc: James Morse <james.morse@arm.com>
    Cc: Alexandru Elisei <alexandru.elisei@arm.com>
    Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
    Cc: Reiji Watanabe <reijiw@google.com>
    Acked-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20211210165432.8106-3-joey.gouly@arm.com
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2022-03-23  KVM: arm64: Allow SMCCC_ARCH_WORKAROUND_3 to be discovered and migrated  (James Morse)

    commit a5905d6af492ee6a4a2205f0d550b3f931b03d03 upstream.

    KVM allows the guest to discover whether the ARCH_WORKAROUND SMCCC
    calls are implemented, and to preserve that state during migration
    through its firmware register interface.

    Add the necessary boilerplate for SMCCC_ARCH_WORKAROUND_3.

    Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    [ kvm code moved to virt/kvm/arm, removed fw regs ABI. Added 32bit stub ]
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2022-03-23  arm64: Mitigate spectre style branch history side channels  (James Morse)

    commit 558c303c9734af5a813739cd284879227f7297d2 upstream.

    Speculation attacks against some high-performance processors can make
    use of branch history to influence future speculation. When taking an
    exception from user-space, a sequence of branches or a firmware call
    overwrites or invalidates the branch history.

    The sequence of branches is added to the vectors, and should appear
    before the first indirect branch. For systems using KPTI the sequence
    is added to the kpti trampoline, where it has a free register as the
    exit from the trampoline is via a 'ret'. For systems not using KPTI,
    the same register tricks are used to free up a register in the
    vectors.

    For the firmware call, arch-workaround-3 clobbers 4 registers, so
    there is no choice but to save them to the EL1 stack. This only
    happens for entry from EL0, so if we take an exception due to the
    stack access, it will not become re-entrant.

    For KVM, the existing branch-predictor-hardening vectors are used.
    When a spectre version of these vectors is in use, the firmware call
    is sufficient to mitigate against Spectre-BHB. For the non-spectre
    versions, the sequence of branches is added to the indirect vector.

    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Cc: <stable@kernel.org> # <v5.17.x 72bb9dcb6c33c arm64: Add Cortex-X2 CPU part definition
    Cc: <stable@kernel.org> # <v5.16.x 2d0d656700d67 arm64: Add Neoverse-N2, Cortex-A710 CPU part definition
    Cc: <stable@kernel.org> # <v5.10.x 8a6b88e66233f arm64: Add part number for Arm Cortex-A77
    [ modified for stable: moved code to cpu_errata.c, removed bitmap of
      mitigations, use kvm template infrastructure ]
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2022-03-23  KVM: arm64: Add templates for BHB mitigation sequences  (James Morse)

    KVM writes the Spectre-v2 mitigation template at the beginning of each
    vector when a CPU requires a specific sequence to run. Because the
    template is copied, it cannot be modified by the alternatives at
    runtime.

    As the KVM template code is intertwined with the bp-hardening
    callbacks, all templates must have a bp-hardening callback. Add
    templates for calling ARCH_WORKAROUND_3 and one for each value of K in
    the branchy loop. Identify these sequences by a new parameter,
    template_start, and add a copy of install_bp_hardening_cb() that is
    able to install them.

    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2022-03-23  arm64: proton-pack: Report Spectre-BHB vulnerabilities as part of Spectre-v2  (James Morse)

    commit dee435be76f4117410bbd90573a881fd33488f37 upstream.

    Speculation attacks against some high-performance processors can make
    use of branch history to influence future speculation as part of a
    spectre-v2 attack. This is not mitigated by CSV2, meaning CPUs that
    previously reported 'Not affected' on the strength of CSV2 may still
    be vulnerable to Spectre-BHB.

    Update the value in /sys/devices/system/cpu/vulnerabilities/spectre_v2
    to also show the state of the BHB mitigation.

    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    [ code move to cpu_errata.c for backport ]
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2022-03-23  arm64: Add percpu vectors for EL1  (James Morse)

    commit bd09128d16fac3c34b80bd6a29088ac632e8ce09 upstream.

    The Spectre-BHB workaround adds a firmware call to the vectors. This
    is needed on some CPUs, but not others. To avoid the unaffected CPU in
    a big/little pair from making the firmware call, create per cpu
    vectors.

    The per-cpu vectors only apply when returning from EL0.

    Systems using KPTI can use the canonical 'full-fat' vectors directly
    at EL1; the trampoline exit code will switch to this_cpu_vector on
    exit to EL0. Systems not using KPTI should always use
    this_cpu_vector.

    this_cpu_vector will point at a vector in tramp_vecs or
    __bp_harden_el1_vectors, depending on whether KPTI is in use.

    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2022-03-23  arm64: entry: Add vectors that have the bhb mitigation sequences  (James Morse)

    commit ba2689234be92024e5635d30fe744f4853ad97db upstream.

    Some CPUs affected by Spectre-BHB need a sequence of branches, or a
    firmware call, to be run before any indirect branch. This needs to go
    in the vectors. No CPU needs both.

    While this can be patched in, it would run on all CPUs as there is a
    single set of vectors. If only one part of a big/little combination is
    affected, the unaffected CPUs have to run the mitigation too.

    Create extra vectors that include the sequence. Subsequent patches
    will allow affected CPUs to select this set of vectors. Later patches
    will modify the loop count to match what the CPU requires.

    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2022-03-23  arm64: entry: Allow the trampoline text to occupy multiple pages  (James Morse)

    commit a9c406e6462ff14956d690de7bbe5131a5677dc9 upstream.

    Adding a second set of vectors to .entry.tramp.text will make it
    larger than a single 4K page. Allow the trampoline text to occupy up
    to three pages by adding two more fixmap slots. Previous changes to
    tramp_valias allowed it to reach beyond a single page.

    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2022-03-23  arm64: entry: Move the trampoline data page before the text page  (James Morse)

    commit c091fb6ae059cda563b2a4d93fdbc548ef34e1d6 upstream.

    The trampoline code has a data page that holds the address of the
    vectors, which is unmapped when running in user-space. This ensures
    that with CONFIG_RANDOMIZE_BASE, the randomised address of the kernel
    can't be discovered until after the kernel has been mapped.

    If the trampoline text page is extended to include multiple sets of
    vectors, it will be larger than a single page, making it tricky to
    find the data page without knowing the size of the trampoline text
    pages, which will vary with PAGE_SIZE.

    Move the data page to appear before the text page. This allows the
    data page to be found without knowing the size of the trampoline text
    pages. 'tramp_vectors' is used to refer to the beginning of the
    .entry.tramp.text section; make that explicit.

    Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2022-03-23  arm64: Add Cortex-X2 CPU part definition  (Anshuman Khandual)

    commit 72bb9dcb6c33cfac80282713c2b4f2b254cd24d1 upstream.

    Add the CPU part numbers for the new Arm designs.

    Cc: Will Deacon <will@kernel.org>
    Cc: Suzuki Poulose <suzuki.poulose@arm.com>
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Link: https://lore.kernel.org/r/1642994138-25887-2-git-send-email-anshuman.khandual@arm.com
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2022-03-23  arm64: Add Neoverse-N2, Cortex-A710 CPU part definition  (Suzuki K Poulose)

    commit 2d0d656700d67239a57afaf617439143d8dac9be upstream.

    Add the CPU part numbers for the new Arm designs.

    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Will Deacon <will@kernel.org>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Link: https://lore.kernel.org/r/20211019163153.3692640-2-suzuki.poulose@arm.com
    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2022-03-23  arm64: Add part number for Arm Cortex-A77  (Rob Herring)

    commit 8a6b88e66233f5f1779b0a1342aa9dc030dddcd5 upstream.

    Add the MIDR part number info for the Arm Cortex-A77.

    Signed-off-by: Rob Herring <robh@kernel.org>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20201028182839.166037-1-robh@kernel.org
    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: James Morse <james.morse@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2022-01-25  Merge tag 'v4.19.218' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.218 stable release

    Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com>

    Conflicts:
        arch/arm/Makefile

2022-01-25  Merge tag 'v4.19.207' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.207 stable release

2021-11-26  arm64: pgtable: make __pte_to_phys/__phys_to_pte_val inline functions  (Arnd Bergmann)

    [ Upstream commit c7c386fbc20262c1d911c615c65db6a58667d92c ]

    gcc warns about undefined behavior in the vmalloc code when building
    with CONFIG_ARM64_PA_BITS_52, because the 'idx++' in the argument to
    __phys_to_pte_val() is evaluated twice:

      mm/vmalloc.c: In function 'vmap_pfn_apply':
      mm/vmalloc.c:2800:58: error: operation on 'data->idx' may be undefined [-Werror=sequence-point]
       2800 |  *pte = pte_mkspecial(pfn_pte(data->pfns[data->idx++], data->prot));
            |                                          ~~~~~~~~~^~
      arch/arm64/include/asm/pgtable-types.h:25:37: note: in definition of macro '__pte'
         25 | #define __pte(x) ((pte_t) { (x) } )
            |                             ^
      arch/arm64/include/asm/pgtable.h:80:15: note: in expansion of macro '__phys_to_pte_val'
         80 |  __pte(__phys_to_pte_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))
            |        ^~~~~~~~~~~~~~~~~
      mm/vmalloc.c:2800:30: note: in expansion of macro 'pfn_pte'
       2800 |  *pte = pte_mkspecial(pfn_pte(data->pfns[data->idx++], data->prot));
            |                              ^~~~~~~

    I have no idea why this never showed up earlier, but the safest
    workaround appears to be changing those macros into inline functions
    so the arguments get evaluated only once.

    Cc: Matthew Wilcox <willy@infradead.org>
    Fixes: 75387b92635e ("arm64: handle 52-bit physical addresses in page table entries")
    Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    Link: https://lore.kernel.org/r/20211105075414.2553155-1-arnd@kernel.org
    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

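    The hazard and the fix are plain C; a minimal, self-contained
    illustration (simplified types and mask, not the kernel's exact
    definitions):

      #include <stdint.h>

      typedef uint64_t pteval_t;

      /* Macro form: 'phys' is pasted twice, so a side effect in the
       * argument (e.g. pfns[idx++]) runs twice -- undefined behaviour. */
      #define PHYS_TO_PTE_VAL_MACRO(phys) \
          ((((phys) | ((phys) >> 36)) & 0xfffffffff0000ULL))

      /* Inline-function form: the argument is evaluated exactly once. */
      static inline pteval_t phys_to_pte_val(uint64_t phys)
      {
          return (phys | (phys >> 36)) & 0xfffffffff0000ULL;
      }
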
2021-09-22  arm64: head: avoid over-mapping in map_memory  (Mark Rutland)

    commit 90268574a3e8a6b883bd802d702a2738577e1006 upstream.

    The `compute_indices` and `populate_entries` macros operate on
    inclusive bounds, and thus the `map_memory` macro which uses them also
    operates on inclusive bounds.

    We pass `_end` and `_idmap_text_end` to `map_memory`, but these are
    exclusive bounds, and if one of these is sufficiently aligned (as a
    result of kernel configuration, physical placement, and KASLR), then:

    * In `compute_indices`, the computed `iend` will be in the page/block
      *after* the final byte of the intended mapping.

    * In `populate_entries`, an unnecessary entry will be created at the
      end of each level of table. At the leaf level, this entry will map
      up to SWAPPER_BLOCK_SIZE bytes of physical addresses that we did not
      intend to map.

    As we may map up to SWAPPER_BLOCK_SIZE bytes more than intended, we
    may violate the boot protocol and map physical addresses past the
    2MiB-aligned end address we are permitted to map. As we map these with
    Normal memory attributes, this may result in further problems
    depending on what these physical addresses correspond to.

    The final entry at each level may require an additional table at that
    level. As EARLY_ENTRIES() calculates an inclusive bound, we allocate
    enough memory for this.

    Avoid the extraneous mapping by having map_memory convert the
    exclusive end address to an inclusive end address by subtracting one,
    and do likewise in EARLY_ENTRIES() when calculating the number of
    required tables. For clarity, comments are updated to more clearly
    document which boundaries the macros operate on. For consistency with
    the other macros, the comments in map_memory are also updated to
    describe `vstart` and `vend` as virtual addresses.

    Fixes: 0370b31e4845 ("arm64: Extend early page table code to allow for larger kernels")
    Cc: <stable@vger.kernel.org> # 4.16.x
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Cc: Anshuman Khandual <anshuman.khandual@arm.com>
    Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Cc: Steve Capper <steve.capper@arm.com>
    Cc: Will Deacon <will@kernel.org>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20210823101253.55567-1-mark.rutland@arm.com
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

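    The exclusive-to-inclusive conversion is just a subtraction; a sketch
    of the bound arithmetic in C (the real code lives in assembler macros
    in head.S; treat this rendering as illustrative):

      /* Number of (1 << shift)-sized entries needed to map the exclusive
       * range [vstart, vend): convert vend to an inclusive bound first. */
      #define SPAN_NR_ENTRIES(vstart, vend, shift) \
          (((((vend) - 1) >> (shift)) - ((vstart) >> (shift))) + 1)
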
2021-06-10  Merge tag 'v4.19.191' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.191 stable release

2021-06-10  Merge tag 'v4.19.189' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.189 stable release

2021-06-10  Merge tag 'v4.19.188' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.188 stable release

2021-06-10  Merge tag 'v4.19.182' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.182 stable release

2021-06-10  Merge tag 'v4.19.179' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.179 stable release

2021-05-22  KVM: arm64: Initialize VCPU mdcr_el2 before loading it  (Alexandru Elisei)

    commit 263d6287da1433aba11c5b4046388f2cdf49675c upstream.

    When a VCPU is created, the kvm_vcpu struct is initialized to zero in
    kvm_vm_ioctl_create_vcpu(). On VHE systems, the first time
    vcpu.arch.mdcr_el2 is loaded on hardware is in vcpu_load(), before it
    is set to a sensible value in kvm_arm_setup_debug() later in the run
    loop. The result is that KVM executes for a short time with MDCR_EL2
    set to zero. This has several unintended consequences:

    * Setting MDCR_EL2.HPMN to 0 is constrained unpredictable according to
      ARM DDI 0487G.a, page D13-3820. The behavior specified by the
      architecture in this case is for the PE to behave as if
      MDCR_EL2.HPMN is set to a value less than or equal to PMCR_EL0.N,
      which means that an unknown number of counters are now disabled by
      MDCR_EL2.HPME, which is zero.

    * The host configuration for the other debug features controlled by
      MDCR_EL2 is temporarily lost. This has been harmless so far, as
      Linux doesn't use the other fields, but that might change in the
      future.

    Let's avoid both issues by initializing the VCPU's mdcr_el2 field in
    kvm_vcpu_first_run_init(), thus making sure that the MDCR_EL2 register
    has a consistent value after each vcpu_load().

    Fixes: d5a21bcc2995 ("KVM: arm64: Move common VHE/non-VHE trap config in separate functions")
    Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20210407144857.199746-3-alexandru.elisei@arm.com
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

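    A sketch of the fix's shape (helper names follow the upstream commit
    but are assumptions here):

      /* Sketch: compute mdcr_el2 once, before the first vcpu_load() can
       * put an all-zero value on hardware. */
      static void kvm_arm_vcpu_init_debug(struct kvm_vcpu *vcpu)
      {
          preempt_disable();
          kvm_arm_setup_mdcr_el2(vcpu);  /* fills vcpu->arch.mdcr_el2 */
          preempt_enable();
      }
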
2021-04-28  arm64: alternatives: Move length validation in alternative_{insn, endif}  (Nathan Chancellor)

    commit 22315a2296f4c251fa92aec45fbbae37e9301b6c upstream.

    After commit 2decad92f473 ("arm64: mte: Ensure TIF_MTE_ASYNC_FAULT is
    set atomically"), LLVM's integrated assembler fails to build entry.S:

      <instantiation>:5:7: error: expected assembly-time absolute expression
       .org . - (664b-663b) + (662b-661b)
            ^
      <instantiation>:6:7: error: expected assembly-time absolute expression
       .org . - (662b-661b) + (664b-663b)
            ^

    The root cause is that LLVM's assembler has a one-pass design, meaning
    it cannot figure out these instruction lengths when the .org directive
    is outside of the subsection that they are in, which was changed by
    the .arch_extension directive added in the above commit.

    Apply the same fix from commit 966a0acce2fc ("arm64/alternatives: move
    length validation inside the subsection") to the alternative_endif
    macro, shuffling the .org directives so that the length validation
    will always happen in the same subsection. alternative_insn has not
    shown any issue yet, but it appears that it could have the same issue
    in the future, so just preemptively change it.

    Fixes: f7b93d42945c ("arm64/alternatives: use subsections for replacement sequences")
    Cc: <stable@vger.kernel.org> # 5.8.x
    Link: https://github.com/ClangBuiltLinux/linux/issues/1347
    Signed-off-by: Nathan Chancellor <nathan@kernel.org>
    Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
    Tested-by: Sami Tolvanen <samitolvanen@google.com>
    Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
    Tested-by: Nick Desaulniers <ndesaulniers@google.com>
    Link: https://lore.kernel.org/r/20210414000803.662534-1-nathan@kernel.org
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2021-04-28  arm64: fix inline asm in load_unaligned_zeropad()  (Peter Collingbourne)

    commit 185f2e5f51c2029efd9dd26cceb968a44fe053c6 upstream.

    The inline asm's addr operand is marked as input-only; however, in the
    case where an exception is taken, it may be modified by the BIC
    instruction on the exception path. Fix the problem by using a
    temporary register as the destination register for the BIC
    instruction.

    Signed-off-by: Peter Collingbourne <pcc@google.com>
    Cc: stable@vger.kernel.org
    Link: https://linux-review.googlesource.com/id/I84538c8a2307d567b4f45bb20b715451005f9617
    Link: https://lore.kernel.org/r/20210401165110.3952103-1-pcc@google.com
    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2021-04-16  KVM: arm64: Disable guest access to trace filter controls  (Suzuki K Poulose)

    [ Upstream commit a354a64d91eec3e0f8ef0eed575b480fd75b999c ]

    Disable guest access to the Trace Filter control registers. We do not
    advertise the Trace filter feature to the guest (ID_AA64DFR0_EL1:
    TRACE_FILT is cleared) already, but the guest can still access the
    TRFCR_EL1 unless we trap it.

    This will also make sure that the guest cannot fiddle with the
    filtering controls set by a nvhe host.

    Cc: Marc Zyngier <maz@kernel.org>
    Cc: Will Deacon <will@kernel.org>
    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20210323120647.454211-3-suzuki.poulose@arm.com
    Signed-off-by: Sasha Levin <sashal@kernel.org>

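    The trap itself is a single MDCR_EL2 bit; a sketch (MDCR_EL2_TTRF
    follows the architectural naming, the surrounding function is
    illustrative):

      /* Sketch: trap guest accesses to the Trace Filter controls
       * (TRFCR_EL1) by setting MDCR_EL2.TTRF. */
      static void kvm_arm_setup_debug_traps(struct kvm_vcpu *vcpu)
      {
          vcpu->arch.mdcr_el2 |= MDCR_EL2_TTRF;
      }
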
2021-03-20  KVM: arm64: nvhe: Save the SPE context early  (Suzuki K Poulose)

    commit b96b0c5de685df82019e16826a282d53d86d112c upstream

    The nVHE KVM hyp drains and disables the SPE buffer before entering
    the guest, as the EL1&0 translation regime is going to be loaded with
    that of the guest. But this operation is performed way too late,
    because:

      - The owning translation regime of the SPE buffer is transferred to
        EL2 (MDCR_EL2_E2PB == 0).
      - The guest Stage1 is loaded.

    Thus the flush could use the host EL1 virtual address, but use the EL2
    translations instead of host EL1, for writing out any cached data.

    Fix this by moving the SPE buffer handling early enough. The restore
    path is doing the right thing.

    Cc: stable@vger.kernel.org # v4.19
    Cc: Christoffer Dall <christoffer.dall@arm.com>
    Cc: Marc Zyngier <maz@kernel.org>
    Cc: Will Deacon <will@kernel.org>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: Alexandru Elisei <alexandru.elisei@arm.com>
    Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
    Acked-by: Marc Zyngier <maz@kernel.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

2021-03-07  arm64: Use correct ll/sc atomic constraints  (Andrew Murray)

    commit 580fa1b874711d633f9b145b7777b0e83ebf3787 upstream.

    The A64 ISA accepts distinct (but overlapping) ranges of immediates for:

    * add arithmetic instructions ('I' machine constraint)
    * sub arithmetic instructions ('J' machine constraint)
    * 32-bit logical instructions ('K' machine constraint)
    * 64-bit logical instructions ('L' machine constraint)

    ... but we currently use the 'I' constraint for many atomic operations
    using sub or logical instructions, which is not always valid.

    When CONFIG_ARM64_LSE_ATOMICS is not set, this allows invalid
    immediates to be passed to instructions, potentially resulting in a
    build failure. When CONFIG_ARM64_LSE_ATOMICS is selected the
    out-of-line ll/sc atomics always use a register as they have no
    visibility of the value passed by the caller.

    This patch adds a constraint parameter to the ATOMIC_xx and
    __CMPXCHG_CASE macros so that we can pass appropriate constraints for
    each case, with uses updated accordingly.

    Unfortunately, prior to GCC 8.1.0 the 'K' constraint erroneously
    accepted '4294967295', so we must instead force the use of a register.

    Signed-off-by: Andrew Murray <andrew.murray@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>
    [bwh: Backported to 4.19: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

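    A sketch of the parameterised macro (operands simplified; the real
    4.19 code also covers prefetch and the LSE out-of-line paths):

      #define ATOMIC_OP(op, asm_op, constraint)                         \
      static inline void atomic_##op(int i, atomic_t *v)                \
      {                                                                 \
          unsigned long tmp;                                            \
          int result;                                                   \
                                                                        \
          asm volatile("// atomic_" #op "\n"                            \
          "1:  ldxr    %w0, %2\n"                                       \
          "    " #asm_op "  %w0, %w0, %w3\n"                            \
          "    stxr    %w1, %w0, %2\n"                                  \
          "    cbnz    %w1, 1b"                                         \
          : "=&r" (result), "=&r" (tmp), "+Q" (v->counter)              \
          : __stringify(constraint) "r" (i));                           \
      }

      ATOMIC_OP(add, add, I)   /* add accepts the 'I' immediate range */
      ATOMIC_OP(sub, sub, J)   /* sub needs the 'J' range instead     */
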
2021-03-07  arm64: cmpxchg: Use "K" instead of "L" for ll/sc immediate constraint  (Will Deacon)

    commit 4230509978f2921182da4e9197964dccdbe463c3 upstream.

    The "L" AArch64 machine constraint, which we use for the "old" value
    in an LL/SC cmpxchg(), generates an immediate that is suitable for a
    64-bit logical instruction. However, for cmpxchg() operations on
    types smaller than 64 bits, this constraint can result in an invalid
    instruction which is correctly rejected by GAS, such as
    EOR W1, W1, #0xffffffff.

    Whilst we could special-case the constraint based on the cmpxchg
    size, it's far easier to change the constraint to "K" and put up with
    using a register for large 64-bit immediates. For out-of-line LL/SC
    atomics, this is all moot anyway.

    Reported-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2021-03-07  arm64: Avoid redundant type conversions in xchg() and cmpxchg()  (Will Deacon)

    commit 5ef3fe4cecdf82fdd71ce78988403963d01444d4 upstream.

    Our atomic instructions (either LSE atomics or LDXR/STXR sequences)
    natively support byte, half-word, word and double-word memory
    accesses, so there is no need to mask the data register prior to
    being stored.

    Signed-off-by: Will Deacon <will.deacon@arm.com>
    [bwh: Backported to 4.19: adjust context]
    Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2021-01-21  Merge tag 'v4.19.164' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.164 stable release

2021-01-21  Merge tag 'v4.19.161' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.161 stable release

2021-01-21  Merge tag 'v4.19.155' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.155 stable release

2020-12-30  KVM: arm64: Introduce handling of AArch32 TTBCR2 traps  (Marc Zyngier)

    commit ca4e514774930f30b66375a974b5edcbebaf0e7e upstream.

    ARMv8.2 introduced TTBCR2, which shares TCR_EL1 with TTBCR. Gracefully
    handle traps to this register when HCR_EL2.TVM is set.

    Cc: stable@vger.kernel.org
    Reported-by: James Morse <james.morse@arm.com>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2020-12-30  arm64: lse: Fix LSE atomics with LLVM  (Vincenzo Frascino)

    commit dd1f6308b28edf0452dd5dc7877992903ec61e69 upstream.

    Commit e0d5896bd356 ("arm64: lse: fix LSE atomics with LLVM's
    integrated assembler") broke the build when clang is used in
    conjunction with the binutils assembler ("-no-integrated-as"). This
    happens because __LSE_PREAMBLE is defined as ".arch armv8-a+lse",
    which overrides the version of the CPU architecture passed via the
    "-march" parameter to gas:

      $ aarch64-none-linux-gnu-as -EL -I ./arch/arm64/include \
          -I ./arch/arm64/include/generated -I ./include -I ./include \
          -I ./arch/arm64/include/uapi \
          -I ./arch/arm64/include/generated/uapi -I ./include/uapi \
          -I ./include/generated/uapi -I ./init -I ./init \
          -march=armv8.3-a -o init/do_mounts.o /tmp/do_mounts-d7992a.s
      /tmp/do_mounts-d7992a.s: Assembler messages:
      /tmp/do_mounts-d7992a.s:1959: Error: selected processor does not support `autiasp'
      /tmp/do_mounts-d7992a.s:2021: Error: selected processor does not support `paciasp'
      /tmp/do_mounts-d7992a.s:2157: Error: selected processor does not support `autiasp'
      /tmp/do_mounts-d7992a.s:2175: Error: selected processor does not support `paciasp'
      /tmp/do_mounts-d7992a.s:2494: Error: selected processor does not support `autiasp'

    Fix the issue by replacing ".arch armv8-a+lse" with ".arch_extension
    lse". Sami confirms that the clang integrated assembler does now
    support the '.arch_extension' directive, so this change will be fine
    even for LTO builds in future.

    Fixes: e0d5896bd356cd ("arm64: lse: fix LSE atomics with LLVM's integrated assembler")
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will@kernel.org>
    Reported-by: Amit Kachhap <Amit.Kachhap@arm.com>
    Tested-by: Sami Tolvanen <samitolvanen@google.com>
    Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2020-12-30  arm64: lse: fix LSE atomics with LLVM's integrated assembler  (Sami Tolvanen)

    commit e0d5896bd356cd577f9710a02d7a474cdf58426b upstream.

    Unlike gcc, clang considers each inline assembly block to be
    independent and therefore, when using the integrated assembler for
    inline assembly, any preambles that enable features must be repeated
    in each block.

    This change defines __LSE_PREAMBLE and adds it to each inline assembly
    block that has LSE instructions, which allows them to be compiled also
    with clang's assembler.

    Link: https://github.com/ClangBuiltLinux/linux/issues/671
    Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
    Tested-by: Andrew Murray <andrew.murray@arm.com>
    Tested-by: Kees Cook <keescook@chromium.org>
    Reviewed-by: Andrew Murray <andrew.murray@arm.com>
    Reviewed-by: Kees Cook <keescook@chromium.org>
    Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
    Signed-off-by: Will Deacon <will@kernel.org>
    [nd: backport adjusted due to missing:
     commit addfc38672c7 ("arm64: atomics: avoid out-of-line ll/sc atomics")]
    Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

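    The pattern looks roughly like this (operands simplified; the
    preamble shown uses the '.arch_extension lse' spelling from the
    follow-up fix above, while this commit originally used
    ".arch armv8-a+lse"):

      #define __LSE_PREAMBLE  ".arch_extension lse\n"

      /* Each inline asm block repeats the preamble, so clang's
       * integrated assembler sees LSE enabled in every block. */
      static inline void atomic_add_lse(int i, atomic_t *v)
      {
          asm volatile(
          __LSE_PREAMBLE
          "    stadd    %w[i], %[v]\n"
          : [v] "+Q" (v->counter)
          : [i] "r" (i));
      }
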
2020-12-02  arm64: pgtable: Ensure dirty bit is preserved across pte_wrprotect()  (Will Deacon)

    commit ff1712f953e27f0b0718762ec17d0adb15c9fd0b upstream.

    With hardware dirty bit management, calling pte_wrprotect() on a
    writable, dirty PTE will lose the dirty state and return a read-only,
    clean entry.

    Move the logic from ptep_set_wrprotect() into pte_wrprotect() to
    ensure that the dirty bit is preserved for writable entries, as this
    is required for soft-dirty bit management if we enable it in the
    future.

    Cc: <stable@vger.kernel.org>
    Fixes: 2f4b829c625e ("arm64: Add support for hardware updates of the access and dirty pte bits")
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Link: https://lore.kernel.org/r/20201120143557.6715-3-will@kernel.org
    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

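    A sketch of the resulting helper (bit helpers follow arch/arm64's
    pgtable.h naming; treat the exact body as an approximation):

      static inline pte_t pte_wrprotect(pte_t pte)
      {
          /* Latch a hardware-dirty state (DBM set, RDONLY clear) into
           * the software dirty bit before dropping write permission. */
          if (pte_hw_dirty(pte))
              pte = pte_mkdirty(pte);

          pte = clear_pte_bit(pte, __pgprot(PTE_WRITE));
          pte = set_pte_bit(pte, __pgprot(PTE_RDONLY));
          return pte;
      }
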
2020-12-02  arm64: pgtable: Fix pte_accessible()  (Will Deacon)

    commit 07509e10dcc77627f8b6a57381e878fe269958d3 upstream.

    pte_accessible() is used by ptep_clear_flush() to figure out whether
    TLB invalidation is necessary when unmapping pages for reclaim.
    Although our implementation is correct according to the architecture,
    returning true only for valid, young ptes in the absence of racing
    page-table modifications, this is in fact flawed due to lazy
    invalidation of old ptes in ptep_clear_flush_young(), where we elide
    the expensive DSB instruction for completing the TLB invalidation.

    Rather than penalise the aging path, adjust pte_accessible() to
    return true for any valid pte, even if the access flag is cleared.

    Cc: <stable@vger.kernel.org>
    Fixes: 76c714be0e5e ("arm64: pgtable: implement pte_accessible()")
    Reported-by: Yu Zhao <yuzhao@google.com>
    Acked-by: Yu Zhao <yuzhao@google.com>
    Reviewed-by: Minchan Kim <minchan@kernel.org>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Link: https://lore.kernel.org/r/20201120143557.6715-2-will@kernel.org
    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

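    The change is essentially one predicate; a sketch (macro names per
    arch/arm64, treated as assumptions):

      /* Return true for any valid pte, not just valid+young ones, so
       * reclaim still flushes ptes whose access flag was cleared
       * lazily in ptep_clear_flush_young(). */
      #define pte_accessible(mm, pte) \
          (mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid(pte))
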
2020-11-05  KVM: arm64: Fix AArch32 handling of DBGD{CCINT,SCRext} and DBGVCR  (Marc Zyngier)

    commit 4a1c2c7f63c52ccb11770b5ae25920a6b79d3548 upstream.

    The DBGD{CCINT,SCRext} and DBGVCR register entries in the cp14 array
    are missing their target register, resulting in all accesses being
    targeted at the guard sysreg (indexed by __INVALID_SYSREG__).

    Point the emulation code at the actual register entries.

    Fixes: bdfb4b389c8d ("arm64: KVM: add trap handlers for AArch32 debug registers")
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Cc: stable@vger.kernel.org
    Link: https://lore.kernel.org/r/20201029172409.2768336-1-maz@kernel.org
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

2020-11-05  arm64/mm: return cpu_all_mask when node is NUMA_NO_NODE  (Zhengyuan Liu)

    [ Upstream commit a194c5f2d2b3a05428805146afcabe5140b5d378 ]

    The @node passed to cpumask_of_node() can be NUMA_NO_NODE; in that
    case it will trigger the following WARN_ON(node >= nr_node_ids) due
    to mismatched data types of @node and @nr_node_ids. Actually we
    should return cpu_all_mask just like most other architectures do if
    passed NUMA_NO_NODE.

    Also add a similar check to the inline cpumask_of_node() in numa.h.

    Signed-off-by: Zhengyuan Liu <liuzhengyuan@tj.kylinos.cn>
    Reviewed-by: Gavin Shan <gshan@redhat.com>
    Link: https://lore.kernel.org/r/20200921023936.21846-1-liuzhengyuan@tj.kylinos.cn
    Signed-off-by: Will Deacon <will@kernel.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>

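    A sketch of the fixed helper (shape follows the upstream change to
    the arm64 NUMA code; treat surrounding details as assumptions):

      const struct cpumask *cpumask_of_node(int node)
      {
          /* NUMA_NO_NODE means "no affinity": every CPU qualifies. */
          if (node == NUMA_NO_NODE)
              return cpu_all_mask;

          if (WARN_ON(node < 0 || node >= nr_node_ids))
              return cpu_none_mask;

          if (WARN_ON(node_to_cpumask_map[node] == NULL))
              return cpu_online_mask;

          return node_to_cpumask_map[node];
      }
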
2020-10-06  Merge tag 'v4.19.149' into v4.19/standard/base  (Bruce Ashfield)

    This is the 4.19.149 stable release