path: root/arch/arm64/kernel/head.S
Age  Commit message  Author
2021-06-15  arm64: mm: drop unused __pa(__idmap_text_start)  (Dong Aisheng)
x5 is not used in the following map_memory. Instead, __pa(__idmap_text_start) is stored in x3 which is used later. Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20210518101405.1048860-4-aisheng.dong@nxp.com Signed-off-by: Will Deacon <will@kernel.org>
2021-06-15  arm64: mm: fix the count comments in compute_indices  (Dong Aisheng)
'count - 1' is confusing and does not match what the code actually does: 'count' represents the number of extra entries required, so there is no need to subtract 1. Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210518101405.1048860-3-aisheng.dong@nxp.com Signed-off-by: Will Deacon <will@kernel.org>
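A hedged C rendering of the arithmetic that comment describes (the real code is an assembly macro in head.S; the helper below is purely illustrative):

  /* Extra translation-table entries needed at one level to map
   * [vstart, vend], beyond the first entry. */
  static unsigned long extra_entries(unsigned long vstart, unsigned long vend,
                                     unsigned int shift, unsigned int ptrs)
  {
          unsigned long istart = (vstart >> shift) & (ptrs - 1);
          unsigned long iend   = (vend >> shift) & (ptrs - 1);

          return iend - istart;   /* 'count': extra entries, no "- 1" */
  }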
2021-05-27  arm64: scs: Drop unused 'tmp' argument to scs_{load, save} asm macros  (Will Deacon)
The scs_load and scs_save asm macros don't make use of the mandatory 'tmp' register argument, so drop it and fix up the callers. Cc: Sami Tolvanen <samitolvanen@google.com> Cc: Mark Rutland <mark.rutland@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Link: https://lore.kernel.org/r/20210527105529.21967-1-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-05-26  arm64: smp: initialize cpu offset earlier  (Mark Rutland)
Now that we have a consistent place to initialize CPU context registers early in the boot path, let's also initialize the per-cpu offset here. This makes the primary and secondary boot paths more consistent, and allows for the use of per-cpu operations earlier, which will be necessary for instrumentation with KCSAN. Note that smp_prepare_boot_cpu() still needs to re-initialize CPU0's offset as immediately prior to this the per-cpu areas may be reallocated, and hence the boot-time offset may be stale. A comment is added to make this clear. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Suzuki Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210520115031.18509-7-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-05-26  arm64: smp: unify task and sp setup  (Mark Rutland)
Once we enable the MMU, we have to initialize:

 * SP_EL0 to point at the active task
 * SP to point at the active task's stack
 * SCS_SP to point at the active task's shadow stack

For all tasks (including init_task), this information can be derived from the task's task_struct. Let's unify __primary_switched and __secondary_switched to consistently acquire this information from the relevant task_struct. At the same time, let's fold this together with initializing a task's final frame. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Suzuki Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210520115031.18509-6-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
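In C terms, the unified setup amounts to roughly the following (a hypothetical sketch; the real code is assembly, and the shadow-stack field name is an assumption):

  /* Everything is derived from the task_struct once the MMU is on. */
  static void init_cpu_task_state(struct task_struct *tsk)
  {
          u64 sp_el0 = (u64)tsk;                                /* active task   */
          u64 sp     = (u64)task_stack_page(tsk) + THREAD_SIZE; /* stack top     */
          u64 scs_sp = (u64)task_thread_info(tsk)->scs_sp;      /* assumed field */

          /* ...these values are written to SP_EL0, SP and x18 (SCS_SP). */
  }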
2021-05-26  arm64: smp: remove stack from secondary_data  (Mark Rutland)
When we boot a secondary CPU, we pass it a task and a stack to use. As the stack is always the task's stack, which can be derived from the task, let's have the secondary CPU derive this itself and avoid passing redundant information. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Suzuki Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210520115031.18509-5-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2021-05-25  arm64: Rename arm64-internal cache maintenance functions  (Fuad Tabba)
Although naming across the codebase isn't that consistent, it tends to follow certain patterns. Moreover, the term "flush" isn't defined in the Arm Architecture Reference Manual, and might be interpreted to mean clean, invalidate, or both for a cache. Rename arm64-internal functions to make the naming internally consistent, as well as making it consistent with the Arm ARM, by specifying whether it applies to the instruction, data, or both caches, and whether the operation is a clean, invalidate, or both. Also specify which point the operation applies to, i.e., to the point of unification (PoU), coherency (PoC), or persistence (PoP).

This commit applies the following sed transformation to all files under arch/arm64:

  "s/\b__flush_cache_range\b/caches_clean_inval_pou_macro/g;"\
  "s/\b__flush_icache_range\b/caches_clean_inval_pou/g;"\
  "s/\binvalidate_icache_range\b/icache_inval_pou/g;"\
  "s/\b__flush_dcache_area\b/dcache_clean_inval_poc/g;"\
  "s/\b__inval_dcache_area\b/dcache_inval_poc/g;"\
  "s/__clean_dcache_area_poc\b/dcache_clean_poc/g;"\
  "s/\b__clean_dcache_area_pop\b/dcache_clean_pop/g;"\
  "s/\b__clean_dcache_area_pou\b/dcache_clean_pou/g;"\
  "s/\b__flush_cache_user_range\b/caches_clean_inval_user_pou/g;"\
  "s/\b__flush_icache_all\b/icache_inval_all_pou/g;"

Note that __clean_dcache_area_poc is deliberately missing a word boundary check at the beginning in order to match the efistub symbols in image-vars.h. Also note that, despite its name, __flush_icache_range operates on both instruction and data caches. The name change here reflects that. No functional change intended. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Fuad Tabba <tabba@google.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210524083001.2586635-19-tabba@google.com Signed-off-by: Will Deacon <will@kernel.org>
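As a hedged example of how a call site changes under the new scheme (both the rename and the start/end prototypes come from this series):

  /* Old: "flush" (clean+invalidate) the D-cache for [ptr, ptr + size). */
  __flush_dcache_area(ptr, size);

  /* New: the name spells out operation (clean+inval), cache (D) and point (PoC). */
  dcache_clean_inval_poc(start, end);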
2021-05-25  arm64: __inval_dcache_area to take end parameter instead of size  (Fuad Tabba)
To be consistent with other functions with similar names and functionality in cacheflush.h, cache.S, and cachetlb.rst, change to specify the range in terms of start and end, as opposed to start and size. Because the code is shared with __dma_inv_area, it changes the parameters for that as well. However, __dma_inv_area is local to cache.S, so no other users are affected. No functional change intended. Reported-by: Will Deacon <will@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Fuad Tabba <tabba@google.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210524083001.2586635-11-tabba@google.com Signed-off-by: Will Deacon <will@kernel.org>
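Caller conversion then looks like this (hedged sketch; the function keeps its old name at this point in the series and only becomes dcache_inval_poc in a later patch):

  __inval_dcache_area(start, size);          /* old: (start, size) */
  __inval_dcache_area(start, start + size);  /* new: (start, end)  */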
2021-05-25  arm64: Implement stack trace termination record  (Madhavan T. Venkataraman)
Reliable stacktracing requires that we identify when a stacktrace is terminated early. We can do this by ensuring all tasks have a final frame record at a known location on their task stack, and checking that this is the final frame record in the chain. We'd like to use task_pt_regs(task)->stackframe as the final frame record, as this is already set up upon exception entry from EL0. For kernel tasks we need to consistently reserve the pt_regs and point x29 at this, which we can do with small changes to __primary_switched, __secondary_switched, and copy_process().

Since the final frame record must be at a specific location, we must create the final frame record in __primary_switched and __secondary_switched rather than leaving this to start_kernel and secondary_start_kernel. Thus, __primary_switched and __secondary_switched will now show up in stacktraces for the idle tasks. Since the final frame record is now identified by its location rather than by its contents, we identify it at the start of unwind_frame(), before we read any values from it.

External debuggers may terminate the stack trace when FP == 0. In the pt_regs->stackframe, the PC is 0 as well. So, stack traces taken in the debugger may print an extra record 0x0 at the end. While this is not pretty, it does not do any harm. This is a small price to pay for having reliable stack trace termination in the kernel. That said, gdb does not show the extra record, probably because it uses DWARF rather than frame pointers for stack traces. Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com> Reviewed-by: Mark Brown <broonie@kernel.org> [Mark: rebase, use ASM_BUG(), update comments, update commit message] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20210510110026.18061-1-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
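A hedged C sketch of the location-based check this adds at the top of unwind_frame(), before anything is read through the frame pointer:

  /* The final frame record lives at a known place on the task stack;
   * stop the unwind as soon as the FP points at it. */
  if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
          return -ENOENT;     /* stacktrace terminated cleanly */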
2021-04-08  arm64: Cope with CPUs stuck in VHE mode  (Marc Zyngier)
It seems that the CPUs part of the SoC known as Apple M1 have the terrible habit of being stuck with HCR_EL2.E2H==1, in violation of the architecture. Try and work around this deplorable state of affairs by detecting the stuck bit early and short-circuit the nVHE dance. Additional filtering code ensures that attempts at switching to nVHE from the command-line are also ignored. It is still unknown whether there are many more such nuggets to be found... Reported-by: Hector Martin <marcan@marcan.st> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-3-maz@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2021-03-11  arm64: mm: use a 48-bit ID map when possible on 52-bit VA builds  (Ard Biesheuvel)
52-bit VA kernels can run on hardware that is only 48-bit capable, but configure the ID map as 52-bit by default. This was not a problem until recently, because the special T0SZ value for a 52-bit VA space was never programmed into the TCR register anyway, and because a 52-bit ID map happens to use the same number of translation levels as a 48-bit one. This behavior was changed by commit 1401bef703a4 ("arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz()"), which causes the unsupported T0SZ value for a 52-bit VA to be programmed into TCR_EL1. While some hardware simply ignores this, Mark reports that Amberwing systems choke on it, resulting in a broken boot. But even before that commit, the unsupported idmap_t0sz value was exposed to KVM and used to program TCR_EL2 incorrectly as well. Given that we already have to deal with address spaces being either 48-bit or 52-bit in size, the cleanest approach seems to be to simply default to a 48-bit VA ID map, and only switch to a 52-bit one if the placement of the kernel in DRAM requires it. This is guaranteed not to happen unless the system is actually 52-bit VA capable. Fixes: 90ec95cda91a ("arm64: mm: Introduce VA_BITS_MIN") Reported-by: Mark Salter <msalter@redhat.com> Link: http://lore.kernel.org/r/20210310003216.410037-1-msalter@redhat.com Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20210310171515.416643-2-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
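The resulting policy, as a hedged C sketch (the real logic lives in head.S, and the helper names here are assumptions):

  /* Default to a 48-bit ID map; widen to 52 bits only when the kernel's
   * physical placement demands it, which implies 52-bit-capable HW. */
  if (__pa_symbol(_end) >> 48)
          idmap_t0sz = TCR_T0SZ(52);
  else
          idmap_t0sz = TCR_T0SZ(48);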
2021-03-10  arm64/mm: Fix __enable_mmu() for new TGRAN range values  (James Morse)
As per ARM ARM DDI 0487G.a, when FEAT_LPA2 is implemented, ID_AA64MMFR0_EL1 might contain a range of values to describe supported translation granules (4K and 16K page sizes in particular) instead of just enabled or disabled values. This changes the __enable_mmu() function to handle the complete acceptable range of values (depending on whether the field is signed or unsigned), now represented by the ID_AA64MMFR0_TGRAN_SUPPORTED_[MIN..MAX] pair. While here, also fix similar situations in the EFI stub and KVM. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: kvmarm@lists.cs.columbia.edu Cc: linux-efi@vger.kernel.org Cc: linux-kernel@vger.kernel.org Acked-by: Marc Zyngier <maz@kernel.org> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Link: https://lore.kernel.org/r/1615355590-21102-1-git-send-email-anshuman.khandual@arm.com Signed-off-by: Will Deacon <will@kernel.org>
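For an unsigned field, the old equality test becomes a range test, roughly as follows (a hedged sketch using the MIN/MAX names from the commit message):

  /* old: supported only if tgran == ID_AA64MMFR0_TGRAN_SUPPORTED */
  if (tgran < ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ||
      tgran > ID_AA64MMFR0_TGRAN_SUPPORTED_MAX)
          return -EINVAL;     /* granule size not supported */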
2021-02-24  arm64: Add missing ISB after invalidating TLB in __primary_switch  (Marc Zyngier)
Although there has been a bit of back and forth on the subject, it appears that invalidating TLBs requires an ISB instruction when FEAT_ETS is not implemented by the CPU. From the bible:

| In an implementation that does not implement FEAT_ETS, a TLB
| maintenance instruction executed by a PE, PEx, can complete at any
| time after it is issued, but is only guaranteed to be finished for a
| PE, PEx, after the execution of DSB by the PEx followed by a Context
| synchronization event

Add the missing ISB in __primary_switch, just in case. Fixes: 3c5e9f238bc4 ("arm64: head.S: move KASLR processing out of __enable_mmu()") Suggested-by: Will Deacon <will@kernel.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20210224093738.3629662-3-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
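The resulting sequence, shown as illustrative inline assembly (the actual change touches the hand-written assembly in head.S):

  asm volatile("tlbi vmalle1\n\t"   /* invalidate TLB entries        */
               "dsb  nsh\n\t"       /* wait for the TLBI to finish   */
               "isb"                /* context synchronization event */
               : : : "memory");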
2021-02-09  arm64: Defer enabling pointer authentication on boot core  (Srinivas Ramana)
Defer enabling pointer authentication on the boot core until after it's required to be enabled by the cpufeature framework. This will help in controlling the feature dynamically with a boot parameter. Signed-off-by: Ajay Patil <pajay@qti.qualcomm.com> Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org> Signed-off-by: Srinivas Ramana <sramana@codeaurora.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/1610152163-16554-2-git-send-email-sramana@codeaurora.org Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-22-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: cpufeature: Add an early command-line cpufeature override facility  (Marc Zyngier)
In order to be able to override CPU features at boot time, let's add a command line parser that matches options of the form "cpureg.feature=value", and store the corresponding value into the override val/mask pair. No features are currently defined, so no expected change in functionality. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: David Brazdil <dbrazdil@google.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-14-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: Extract early FDT mapping from kaslr_early_init()  (Marc Zyngier)
As we want to parse more options very early in the kernel lifetime, let's always map the FDT early. This is achieved by moving that code out of kaslr_early_init(). No functional change expected. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-13-maz@kernel.org [will: Ensure KASAN is enabled before running C code] Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: Move SCTLR_EL1 initialisation to EL-agnostic code  (Marc Zyngier)
We can now move the initial SCTLR_EL1 setup to be used for both EL1 and EL2 setup. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-10-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: Simplify init_el2_state to be non-VHE only  (Marc Zyngier)
As init_el2_state is now nVHE only, let's simplify it and drop the VHE setup. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: David Brazdil <dbrazdil@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-9-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2021-02-09  arm64: Initialise as nVHE before switching to VHE  (Marc Zyngier)
As we are aiming to be able to control whether we enable VHE or not, let's always drop down to EL1 first, and only then upgrade to VHE if at all possible. This means that if the kernel is booted at EL2, we always start with a nVHE init, drop to EL1 to initialise the kernel, and only then upgrade the kernel EL to EL2 if possible (the process is obviously shortened for secondary CPUs). The resume path is handled similarly to a secondary CPU boot. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: David Brazdil <dbrazdil@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-6-maz@kernel.org [will: Avoid calling switch_to_vhe twice on kaslr path] Signed-off-by: Will Deacon <will@kernel.org>
2021-02-08  arm64: Turn the MMU-on sequence into a macro  (Marc Zyngier)
Turning the MMU on is a popular sport in the arm64 kernel, and we do it more than once, or even twice. As we are about to add even more, let's turn it into a macro. No expected functional change. Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-4-maz@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-12-22  kasan, arm64: expand CONFIG_KASAN checks  (Andrey Konovalov)
Some #ifdef CONFIG_KASAN checks are only relevant for software KASAN modes (either related to shadow memory or compiler instrumentation). Expand those into CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS. Link: https://lkml.kernel.org/r/e6971e432dbd72bb897ff14134ebb7e169bdcf0c.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Alexander Potapenko <glider@google.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
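The transformation is mechanical; a hedged example of the pattern:

  /* before: applied to all KASAN modes */
  #ifdef CONFIG_KASAN

  /* after: only the software modes, which use shadow memory and
   * compiler instrumentation (hardware tag-based KASAN does not) */
  #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)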
2020-12-20  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
Pull KVM updates from Paolo Bonzini: "Much x86 work was pushed out to 5.12, but ARM more than made up for it.

ARM:
 - PSCI relay at EL2 when "protected KVM" is enabled
 - New exception injection code
 - Simplification of AArch32 system register handling
 - Fix PMU accesses when no PMU is enabled
 - Expose CSV3 on non-Meltdown hosts
 - Cache hierarchy discovery fixes
 - PV steal-time cleanups
 - Allow function pointers at EL2
 - Various host EL2 entry cleanups
 - Simplification of the EL2 vector allocation

s390:
 - memcg accounting for s390 specific parts of kvm and gmap
 - selftest for diag318
 - new kvm_stat for when async_pf falls back to sync

x86:
 - Tracepoints for the new pagetable code from 5.10
 - Catch VFIO and KVM irqfd events before userspace
 - Reporting dirty pages to userspace with a ring buffer
 - SEV-ES host support
 - Nested VMX support for wait-for-SIPI activity state
 - New feature flag (AVX512 FP16)
 - New system ioctl to report Hyper-V-compatible paravirtualization features

Generic:
 - Selftest improvements"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (171 commits)
  KVM: SVM: fix 32-bit compilation
  KVM: SVM: Add AP_JUMP_TABLE support in prep for AP booting
  KVM: SVM: Provide support to launch and run an SEV-ES guest
  KVM: SVM: Provide an updated VMRUN invocation for SEV-ES guests
  KVM: SVM: Provide support for SEV-ES vCPU loading
  KVM: SVM: Provide support for SEV-ES vCPU creation/loading
  KVM: SVM: Update ASID allocation to support SEV-ES guests
  KVM: SVM: Set the encryption mask for the SVM host save area
  KVM: SVM: Add NMI support for an SEV-ES guest
  KVM: SVM: Guest FPU state save/restore not needed for SEV-ES guest
  KVM: SVM: Do not report support for SMM for an SEV-ES guest
  KVM: x86: Update __get_sregs() / __set_sregs() to support SEV-ES
  KVM: SVM: Add support for CR8 write traps for an SEV-ES guest
  KVM: SVM: Add support for CR4 write traps for an SEV-ES guest
  KVM: SVM: Add support for CR0 write traps for an SEV-ES guest
  KVM: SVM: Add support for EFER write traps for an SEV-ES guest
  KVM: SVM: Support string IO operations for an SEV-ES guest
  KVM: SVM: Support MMIO for an SEV-ES guest
  KVM: SVM: Create trace events for VMGEXIT MSR protocol processing
  KVM: SVM: Create trace events for VMGEXIT processing
  ...
2020-12-09  Merge branch 'for-next/uaccess' into for-next/core  (Catalin Marinas)
* for-next/uaccess:
  : uaccess routines clean-up and set_fs() removal
  arm64: mark __system_matches_cap as __maybe_unused
  arm64: uaccess: remove vestigal UAO support
  arm64: uaccess: remove redundant PAN toggling
  arm64: uaccess: remove addr_limit_user_check()
  arm64: uaccess: remove set_fs()
  arm64: uaccess cleanup macro naming
  arm64: uaccess: split user/kernel routines
  arm64: uaccess: refactor __{get,put}_user
  arm64: uaccess: simplify __copy_user_flushcache()
  arm64: uaccess: rename privileged uaccess routines
  arm64: sdei: explicitly simulate PAN/UAO entry
  arm64: sdei: move uaccess logic to arch/arm64/
  arm64: head.S: always initialize PSTATE
  arm64: head.S: cleanup SCTLR_ELx initialization
  arm64: head.S: rename el2_setup -> init_kernel_el
  arm64: add C wrappers for SET_PSTATE_*()
  arm64: ensure ERET from kthread is illegal
2020-12-08  KVM: arm64: Fix nVHE boot on VHE systems  (Marc Zyngier)
Conflict resolution gone astray results in the kernel not booting on VHE-capable HW when VHE support is disabled. Thankfully spotted by David. Reported-by: David Brazdil <dbrazdil@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-12-04  arm64: Extract parts of el2_setup into a macro  (David Brazdil)
When a CPU is booted in EL2, the kernel checks for VHE support and initializes the CPU core accordingly. For nVHE it also installs the stub vectors and drops down to EL1. Once KVM gains the ability to boot cores without going through the kernel entry point, it will need to initialize the CPU the same way. Extract the relevant bits of el2_setup into an init_el2_state macro with an argument specifying whether to initialize for VHE or nVHE.

The following ifdefs are removed:

 * CONFIG_ARM_GIC_V3 - always selected on arm64
 * CONFIG_COMPAT - hstr_el2 can be set even without 32-bit support

No functional change intended. Size of el2_setup increased by 148 bytes due to duplication. Signed-off-by: David Brazdil <dbrazdil@google.com> [maz: reworked to fit the new PSTATE initial setup code] Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-9-dbrazdil@google.com
2020-12-02  arm64: head.S: always initialize PSTATE  (Mark Rutland)
As with SCTLR_ELx and other control registers, some PSTATE bits are UNKNOWN out-of-reset, and we may not be able to rely on hardware or firmware to initialize them to our liking prior to entry to the kernel, e.g. in the primary/secondary boot paths and return from idle/suspend. It would be more robust (and easier to reason about) if we consistently initialized PSTATE to a default value, as we do with control registers. This will ensure that the kernel is not adversely affected by bits it is not aware of, e.g. when support for a feature such as PAN/UAO is disabled. This patch ensures that PSTATE is consistently initialized at boot time via an ERET. This is not intended to relax the existing requirements (e.g. DAIF bits must still be set prior to entering the kernel). For features detected dynamically (which may require system-wide support), it is still necessary to subsequently modify PSTATE. As ERET is not always a Context Synchronization Event, an ISB is placed before each exception return to ensure updates to control registers have taken effect. This handles the kernel being entered with SCTLR_ELx.EOS clear (or any future control bits being in an UNKNOWN state). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201113124937.20574-6-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
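The default values introduced by this series look like the following (a hedged sketch; the DAIF exceptions are masked, and the 'h' suffix selects the SP_ELx stack pointer):

  #define INIT_PSTATE_EL1 \
          (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT | PSR_MODE_EL1h)
  #define INIT_PSTATE_EL2 \
          (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT | PSR_MODE_EL2h)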
2020-12-02  arm64: head.S: cleanup SCTLR_ELx initialization  (Mark Rutland)
Let's make SCTLR_ELx initialization a bit clearer by using meaningful names for the initialization values, following the same scheme for SCTLR_EL1 and SCTLR_EL2. These definitions will be used more widely in subsequent patches. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201113124937.20574-5-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-12-02  arm64: head.S: rename el2_setup -> init_kernel_el  (Mark Rutland)
For a while now el2_setup has performed some basic initialization of EL1 even when the kernel is booted at EL1, so the name is a little misleading. Further, some comments are stale as with VHE it doesn't drop the CPU to EL1. To clarify things, rename el2_setup to init_kernel_el, and update comments to be clearer as to the function's purpose. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201113124937.20574-4-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-11-17  arm64: head: tidy up the Image header definition  (Ard Biesheuvel)
Even though support for EFI boot remains entirely optional for arm64, it is unlikely that we will ever be able to repurpose the image header fields that the EFI loader relies on, i.e., the magic NOP at offset 0x0 and the PE header address at offset 0x3c. So let's factor out the differences into a 'efi_signature_nop' macro and a local symbol representing the PE header address, and move the conditional definitions into efi-header.S, taking into account whether CONFIG_EFI is enabled or not. While at it, switch to a signature NOP that behaves more like a NOP, i.e., one that only clobbers the flags. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201117124729.12642-4-ardb@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
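For context, the constraint is that the first two bytes of the image must be "MZ" (0x4d, 0x5a) for PE. Both the old and the new instruction encode to those little-endian bytes, but only the new one clobbers nothing except the flags (a hedged illustration):

  /* add  x13, x18, #0x16    => 0x91005a4d: old signature "NOP", writes x13 */
  /* ccmp x18, #0, #0xd, pl  => 0xfa405a4d: new signature "NOP", flags only */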
2020-10-09Revert "arm64: initialize per-cpu offsets earlier"Will Deacon
This reverts commit 353e228eb355be5a65a3c0996c774a0f46737fda. Qian Cai reports that TX2 no longer boots with his .config as it appears that task_cpu() gets instrumented and used before KASAN has been initialised. Although Mark has a proposed fix, let's take the safe option of reverting this for now and sorting it out properly later. Link: https://lore.kernel.org/r/711bc57a314d8d646b41307008db2845b7537b3d.camel@redhat.com Reported-by: Qian Cai <cai@redhat.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-10-07  Merge branch 'for-next/late-arrivals' into for-next/core  (Will Deacon)
Late patches for 5.10: MTE selftests, minor KCSAN preparation and removal of some unused prototypes. (Amit Daniel Kachhap and others)

* for-next/late-arrivals:
  arm64: random: Remove no longer needed prototypes
  arm64: initialize per-cpu offsets earlier
  kselftest/arm64: Check mte tagged user address in kernel
  kselftest/arm64: Verify KSM page merge for MTE pages
  kselftest/arm64: Verify all different mmap MTE options
  kselftest/arm64: Check forked child mte memory accessibility
  kselftest/arm64: Verify mte tag inclusion via prctl
  kselftest/arm64: Add utilities and a test to validate mte memory
2020-10-05  arm64: initialize per-cpu offsets earlier  (Mark Rutland)
The current initialization of the per-cpu offset register is difficult to follow and this initialization is not always early enough for upcoming instrumentation with KCSAN, where the instrumentation callbacks use the per-cpu offset. To make it possible to support KCSAN, and to simplify reasoning about early bringup code, let's initialize the per-cpu offset earlier, before we run any C code that may consume it. To do so, this patch adds a new init_this_cpu_offset() helper that's called before the usual primary/secondary start functions. For consistency, this is also used to re-initialize the per-cpu offset after the runtime per-cpu areas have been allocated (which can change CPU0's offset). So that init_this_cpu_offset() isn't subject to any instrumentation that might consume the per-cpu offset, it is marked with noinstr, preventing instrumentation. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201005164303.21389-1-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
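A hedged sketch of the helper described above, shaped like the code this commit adds:

  void noinstr init_this_cpu_offset(void)
  {
          unsigned int cpu = task_cpu(current);

          set_my_cpu_offset(per_cpu_offset(cpu));
  }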
2020-09-07  arm64: get rid of TEXT_OFFSET  (Ard Biesheuvel)
TEXT_OFFSET serves no purpose, and for this reason, it was redefined as 0x0 in the v5.8 timeframe. Since this does not appear to have caused any issues that require us to revisit that decision, let's get rid of the macro entirely, along with any references to it. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20200825135440.11288-1-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org>
2020-06-09  mm: reorder includes after introduction of linux/pgtable.h  (Mike Rapoport)
The replacement of <asm/pgtable.h> with <linux/pgtable.h> made the include of the latter in the middle of asm includes. Fix this up with the aid of the below script and manual adjustments here and there.

  import sys
  import re

  if len(sys.argv) is not 3:
      print "USAGE: %s <file> <header>" % (sys.argv[0])
      sys.exit(1)

  hdr_to_move="#include <linux/%s>" % sys.argv[2]
  moved = False
  in_hdrs = False

  with open(sys.argv[1], "r") as f:
      lines = f.readlines()
      for _line in lines:
          line = _line.rstrip('\n')
          if line == hdr_to_move:
              continue
          if line.startswith("#include <linux/"):
              in_hdrs = True
          elif not moved and in_hdrs:
              moved = True
              print hdr_to_move
          print line

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vincent Chen <deanbo422@gmail.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: http://lkml.kernel.org/r/20200514170327.31389-4-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-06-09  mm: introduce include/linux/pgtable.h  (Mike Rapoport)
The include/linux/pgtable.h is going to be the home of generic page table manipulation functions. Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and make the latter include asm/pgtable.h. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Cain <bcain@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Guo Ren <guoren@kernel.org> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Helge Deller <deller@gmx.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Matt Turner <mattst88@gmail.com> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Nick Hu <nickhu@andestech.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Stafford Horne <shorne@gmail.com> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vincent Chen <deanbo422@gmail.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will@kernel.org> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Link: http://lkml.kernel.org/r/20200514170327.31389-3-rppt@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-05-28  Merge branch 'for-next/scs' into for-next/core  (Will Deacon)
Support for Clang's Shadow Call Stack in the kernel (Sami Tolvanen and Will Deacon)

* for-next/scs:
  arm64: entry-ftrace.S: Update comment to indicate that x18 is live
  scs: Move DEFINE_SCS macro into core code
  scs: Remove references to asm/scs.h from core code
  scs: Move scs_overflow_check() out of architecture code
  arm64: scs: Use 'scs_sp' register alias for x18
  scs: Move accounting into alloc/free functions
  arm64: scs: Store absolute SCS stack pointer value in thread_info
  efi/libstub: Disable Shadow Call Stack
  arm64: scs: Add shadow stacks for SDEI
  arm64: Implement Shadow Call Stack
  arm64: Disable SCS for hypervisor code
  arm64: vdso: Disable Shadow Call Stack
  arm64: efi: Restore register x18 if it was corrupted
  arm64: Preserve register x18 when CPU is suspended
  arm64: Reserve register x18 from general allocation with SCS
  scs: Disable when function graph tracing is enabled
  scs: Add support for stack usage debugging
  scs: Add page accounting for shadow call stack allocations
  scs: Add support for Clang's Shadow Call Stack (SCS)
2020-05-28  Merge branches 'for-next/acpi', 'for-next/bpf', 'for-next/cpufeature', 'for-next/docs', 'for-next/kconfig', 'for-next/misc', 'for-next/perf', 'for-next/ptr-auth', 'for-next/sdei', 'for-next/smccc' and 'for-next/vdso' into for-next/core  (Will Deacon)
ACPI and IORT updates (Lorenzo Pieralisi)
* for-next/acpi:
  ACPI/IORT: Remove the unused __get_pci_rid()
  ACPI/IORT: Fix PMCG node single ID mapping handling
  ACPI: IORT: Add comments for not calling acpi_put_table()
  ACPI: GTDT: Put GTDT table after parsing
  ACPI: IORT: Add extra message "applying workaround" for off-by-1 issue
  ACPI/IORT: work around num_ids ambiguity
  Revert "ACPI/IORT: Fix 'Number of IDs' handling in iort_id_map()"
  ACPI/IORT: take _DMA methods into account for named components

BPF JIT optimisations for immediate value generation (Luke Nelson)
* for-next/bpf:
  bpf, arm64: Optimize ADD,SUB,JMP BPF_K using arm64 add/sub immediates
  bpf, arm64: Optimize AND,OR,XOR,JSET BPF_K using arm64 logical immediates
  arm64: insn: Fix two bugs in encoding 32-bit logical immediates

Addition of new CPU ID register fields and removal of some benign sanity checks (Anshuman Khandual and others)
* for-next/cpufeature: (27 commits)
  KVM: arm64: Check advertised Stage-2 page size capability
  arm64/cpufeature: Add get_arm64_ftr_reg_nowarn()
  arm64/cpuinfo: Add ID_MMFR4_EL1 into the cpuinfo_arm64 context
  arm64/cpufeature: Add remaining feature bits in ID_AA64PFR1 register
  arm64/cpufeature: Add remaining feature bits in ID_AA64PFR0 register
  arm64/cpufeature: Add remaining feature bits in ID_AA64ISAR0 register
  arm64/cpufeature: Add remaining feature bits in ID_MMFR4 register
  arm64/cpufeature: Add remaining feature bits in ID_PFR0 register
  arm64/cpufeature: Introduce ID_MMFR5 CPU register
  arm64/cpufeature: Introduce ID_DFR1 CPU register
  arm64/cpufeature: Introduce ID_PFR2 CPU register
  arm64/cpufeature: Make doublelock a signed feature in ID_AA64DFR0
  arm64/cpufeature: Drop TraceFilt feature exposure from ID_DFR0 register
  arm64/cpufeature: Add explicit ftr_id_isar0[] for ID_ISAR0 register
  arm64/cpufeature: Drop open encodings while extracting parange
  arm64/cpufeature: Validate hypervisor capabilities during CPU hotplug
  arm64: cpufeature: Group indexed system register definitions by name
  arm64: cpufeature: Extend comment to describe absence of field info
  arm64: drop duplicate definitions of ID_AA64MMFR0_TGRAN constants
  arm64: cpufeature: Add an overview comment for the cpufeature framework
  ...

Minor documentation tweaks for silicon errata and booting requirements (Rob Herring and Will Deacon)
* for-next/docs:
  arm64: silicon-errata.rst: Sort the Cortex-A55 entries
  arm64: docs: Mandate that the I-cache doesn't hold stale kernel text

Minor Kconfig cleanups (Geert Uytterhoeven)
* for-next/kconfig:
  arm64: cpufeature: Add "or" to mitigations for multiple errata
  arm64: Sort vendor-specific errata

Miscellaneous updates (Ard Biesheuvel and others)
* for-next/misc:
  arm64: mm: Add asid_gen_match() helper
  arm64: stacktrace: Factor out some common code into on_stack()
  arm64: Call debug_traps_init() from trap_init() to help early kgdb
  arm64: cacheflush: Fix KGDB trap detection
  arm64/cpuinfo: Move device_initcall() near cpuinfo_regs_init()
  arm64: kexec_file: print appropriate variable
  arm: mm: use __pfn_to_section() to get mem_section
  arm64: Reorder the macro arguments in the copy routines
  efi/libstub/arm64: align PE/COFF sections to segment alignment
  KVM: arm64: Drop PTE_S2_MEMATTR_MASK
  arm64/kernel: Fix range on invalidating dcache for boot page tables
  arm64: set TEXT_OFFSET to 0x0 in preparation for removing it entirely
  arm64: lib: Consistently enable crc32 extension
  arm64/mm: Use phys_to_page() to access pgtable memory
  arm64: smp: Make cpus_stuck_in_kernel static
  arm64: entry: remove unneeded semicolon in el1_sync_handler()
  arm64/kernel: vmlinux.lds: drop redundant discard/keep macros
  arm64: drop GZFLAGS definition and export
  arm64: kexec_file: Avoid temp buffer for RNG seed
  arm64: rename stext to primary_entry

Perf PMU driver updates (Tang Bin and others)
* for-next/perf:
  pmu/smmuv3: Clear IRQ affinity hint on device removal
  drivers/perf: hisi: Permit modular builds of HiSilicon uncore drivers
  drivers/perf: hisi: Fix typo in events attribute array
  drivers/perf: arm_spe_pmu: Avoid duplicate printouts
  drivers/perf: arm_dsu_pmu: Avoid duplicate printouts

Pointer authentication updates and support for vmcoreinfo (Amit Daniel Kachhap and Mark Rutland)
* for-next/ptr-auth:
  Documentation/vmcoreinfo: Add documentation for 'KERNELPACMASK'
  arm64/crash_core: Export KERNELPACMASK in vmcoreinfo
  arm64: simplify ptrauth initialization
  arm64: remove ptrauth_keys_install_kernel sync arg

SDEI cleanup and non-critical fixes (James Morse and others)
* for-next/sdei:
  firmware: arm_sdei: Document the motivation behind these set_fs() calls
  firmware: arm_sdei: remove unused interfaces
  firmware: arm_sdei: Put the SDEI table after using it
  firmware: arm_sdei: Drop check for /firmware/ node and always register driver

SMCCC updates and refactoring (Sudeep Holla)
* for-next/smccc:
  firmware: smccc: Fix missing prototype warning for arm_smccc_version_init
  firmware: smccc: Add function to fetch SMCCC version
  firmware: smccc: Refactor SMCCC specific bits into separate file
  firmware: smccc: Drop smccc_version enum and use ARM_SMCCC_VERSION_1_x instead
  firmware: smccc: Add the definition for SMCCCv1.2 version/error codes
  firmware: smccc: Update link to latest SMCCC specification
  firmware: smccc: Add HAVE_ARM_SMCCC_DISCOVERY to identify SMCCC v1.1 and above

vDSO cleanup and non-critical fixes (Mark Rutland and Vincenzo Frascino)
* for-next/vdso:
  arm64: vdso: Add --eh-frame-hdr to ldflags
  arm64: vdso: use consistent 'map' nomenclature
  arm64: vdso: use consistent 'abi' nomenclature
  arm64: vdso: simplify arch_vdso_type ifdeffery
  arm64: vdso: remove aarch32_vdso_pages[]
  arm64: vdso: Add '-Bsymbolic' to ldflags
2020-05-18  arm64: scs: Use 'scs_sp' register alias for x18  (Will Deacon)
x18 holds the SCS stack pointer value, so introduce a register alias to make this easier to read in assembly code. Tested-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org>
2020-05-15  arm64: Implement Shadow Call Stack  (Sami Tolvanen)
This change implements shadow stack switching, initial SCS set-up, and interrupt shadow stacks for arm64. Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28  arm64/kernel: Fix range on invalidating dcache for boot page tables  (Gavin Shan)
Prior to commit 8eb7e28d4c642c31 ("arm64/mm: move runtime pgds to rodata"), idmap_pgd_dir, tramp_pg_dir, reserved_ttbr0, swapper_pg_dir, and init_pg_dir were contiguous at the end of the kernel image. The maintenance at the end of __create_page_tables assumed these were contiguous, and affected everything from the start of idmap_pg_dir to the end of init_pg_dir. That commit moved all but init_pg_dir into the .rodata section, with other data placed between idmap_pg_dir and init_pg_dir, but did not update the maintenance. Hence the maintenance is performed on much more data than necessary (but as the bootloader previously made this clean to the PoC there is no functional problem). As we only alter idmap_pg_dir, and init_pg_dir, we only need to perform maintenance for these. As the other dirs are in .rodata, the bootloader will have initialised them as expected and cleaned them to the PoC. The kernel will initialize them as necessary after enabling the MMU. This patch reworks the maintenance to only cover the idmap_pg_dir and init_pg_dir to avoid this unnecessary work. Signed-off-by: Gavin Shan <gshan@redhat.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200427235700.112220-1-gshan@redhat.com Signed-off-by: Will Deacon <will@kernel.org>
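The reworked maintenance reduces to something like this (hedged; at the time __inval_dcache_area took (start, size), and the *_pg_end symbols are assumptions):

  /* Only the tables the early boot code itself writes need invalidating. */
  __inval_dcache_area(idmap_pg_dir, idmap_pg_end - idmap_pg_dir);
  __inval_dcache_area(init_pg_dir,  init_pg_end  - init_pg_dir);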
2020-04-28  arm64: rename stext to primary_entry  (Ard Biesheuvel)
For historical reasons, the primary entry routine living somewhere in the inittext section is called stext(), which is confusing, given that there is also a section marker called _stext which lives at a fixed offset in the image (either 64 or 4096 bytes, depending on whether CONFIG_EFI is enabled). Let's rename stext to primary_entry(), which is a better description and reflects the secondary_entry() routine that already exists for SMP boot. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200326171423.3080-1-ardb@kernel.org Reviewed-by: Mark Brown <broonie@kernel.org> Signed-off-by: Will Deacon <will@kernel.org>
2020-04-28  arm64: simplify ptrauth initialization  (Mark Rutland)
Currently __cpu_setup conditionally initializes the address authentication keys and enables them in SCTLR_EL1, doing so differently for the primary CPU and secondary CPUs, and skipping this work for CPUs returning from an idle state. For the latter case, cpu_do_resume restores the keys and SCTLR_EL1 value after the MMU has been enabled. This flow is rather difficult to follow, so instead let's move the primary and secondary CPU initialization into their respective boot paths. By following the example of cpu_do_resume and doing so once the MMU is enabled, we can always initialize the keys from the values in thread_struct, and avoid the machinery necessary to pass the keys in secondary_data or open-coding initialization for the boot CPU. This means we perform an additional RMW of SCTLR_EL1, but we already do this in the cpu_do_resume path, and for other features in cpufeature.c, so this isn't a major concern in a bringup path. Note that even while the enable bits are clear, the key registers are accessible. As this now renders the argument to __cpu_setup redundant, let's also remove that entirely. Future extensions can follow a similar approach to initialize values that differ for primary/secondary CPUs. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Cc: Amit Daniel Kachhap <amit.kachhap@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200423101606.37601-3-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org>
2020-03-25  Merge branch 'for-next/kernel-ptrauth' into for-next/core  (Catalin Marinas)
* for-next/kernel-ptrauth:
  : Return address signing - in-kernel support
  arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
  lkdtm: arm64: test kernel pointer authentication
  arm64: compile the kernel with ptrauth return address signing
  kconfig: Add support for 'as-option'
  arm64: suspend: restore the kernel ptrauth keys
  arm64: __show_regs: strip PAC from lr in printk
  arm64: unwind: strip PAC from kernel addresses
  arm64: mask PAC bits of __builtin_return_address
  arm64: initialize ptrauth keys for kernel booting task
  arm64: initialize and switch ptrauth kernel keys
  arm64: enable ptrauth earlier
  arm64: cpufeature: handle conflicts based on capability
  arm64: cpufeature: Move cpu capability helpers inside C file
  arm64: ptrauth: Add bootup/runtime flags for __cpu_setup
  arm64: install user ptrauth keys at kernel exit time
  arm64: rename ptrauth key structures to be user-specific
  arm64: cpufeature: add pointer auth meta-capabilities
  arm64: cpufeature: Fix meta-capability cpufeature check
2020-03-25  Merge branch 'for-next/asm-cleanups' into for-next/core  (Catalin Marinas)
* for-next/asm-cleanups:
  : Various asm clean-ups (alignment, mov_q vs ldr, .idmap)
  arm64: move kimage_vaddr to .rodata
  arm64: use mov_q instead of literal ldr
2020-03-25  Merge branch 'for-next/asm-annotations' into for-next/core  (Catalin Marinas)
* for-next/asm-annotations:
  : Modernise arm64 assembly annotations
  arm64: head: Convert install_el2_stub to SYM_INNER_LABEL
  arm64: Mark call_smc_arch_workaround_1 as __maybe_unused
  arm64: entry-ftrace.S: Fix missing argument for CONFIG_FUNCTION_GRAPH_TRACER=y
  arm64: vdso32: Convert to modern assembler annotations
  arm64: vdso: Convert to modern assembler annotations
  arm64: sdei: Annotate SDEI entry points using new style annotations
  arm64: kvm: Modernize __smccc_workaround_1_smc_start annotations
  arm64: kvm: Modernize annotation for __bp_harden_hyp_vecs
  arm64: kvm: Annotate assembly using modern annoations
  arm64: kernel: Convert to modern annotations for assembly data
  arm64: head: Annotate stext and preserve_boot_args as code
  arm64: head.S: Convert to modern annotations for assembly functions
  arm64: ftrace: Modernise annotation of return_to_handler
  arm64: ftrace: Correct annotation of ftrace_caller assembly
  arm64: entry-ftrace.S: Convert to modern annotations for assembly functions
  arm64: entry: Additional annotation conversions for entry.S
  arm64: entry: Annotate ret_from_fork as code
  arm64: entry: Annotate vector table and handlers as code
  arm64: crypto: Modernize names for AES function macros
  arm64: crypto: Modernize some extra assembly annotations
2020-03-25  arm64: head: Convert install_el2_stub to SYM_INNER_LABEL  (Mark Brown)
New assembly annotations have recently been introduced which aim to make the way we describe symbols in assembly more consistent. Recently the arm64 assembly was converted to use these, but install_el2_stub was missed. Signed-off-by: Mark Brown <broonie@kernel.org> [catalin.marinas@arm.com: changed to SYM_L_LOCAL] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-24  arm64: move kimage_vaddr to .rodata  (Remi Denis-Courmont)
This datum is not referenced from .idmap.text: it does not need to be mapped in the idmap. Let's move it to .rodata, as it is never written to after early boot of the primary CPU. (Maybe .data.ro_after_init would be cleaner, though?) Signed-off-by: Rémi Denis-Courmont <remi@remlab.net> Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-18  arm64: ptrauth: Add bootup/runtime flags for __cpu_setup  (Amit Daniel Kachhap)
This patch allows __cpu_setup to be invoked with one of these flags: ARM64_CPU_BOOT_PRIMARY, ARM64_CPU_BOOT_SECONDARY or ARM64_CPU_RUNTIME. This is required as some cpufeatures need different handling during different scenarios. The input parameter in x0 is preserved until the end so that it can be used inside this function. There should be no functional change with this patch; it is useful for the subsequent ptrauth patch, which utilizes it. Some upcoming arm cpufeatures can also utilize these flags. Suggested-by: James Morse <james.morse@arm.com> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com> Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
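The flags themselves are just small constants passed in x0 (a hedged sketch; the exact values are illustrative):

  #define ARM64_CPU_BOOT_PRIMARY      (1)     /* cold boot of the boot CPU */
  #define ARM64_CPU_BOOT_SECONDARY    (2)     /* cold boot of a secondary  */
  #define ARM64_CPU_RUNTIME           (3)     /* e.g. return from idle     */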
2020-03-09  arm64: kernel: Convert to modern annotations for assembly data  (Mark Brown)
In an effort to clarify and simplify the annotation of assembly functions in the kernel, new macros have been introduced. These include specific annotations for the start and end of data; update the symbols for data to use these. Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2020-03-09  arm64: head: Annotate stext and preserve_boot_args as code  (Mark Brown)
In an effort to clarify and simplify the annotation of assembly functions, new macros have been introduced. These replace ENTRY and ENDPROC with two different annotations for normal functions and those with unusual calling conventions. Neither stext nor preserve_boot_args is called with the usual AAPCS calling conventions, and they should therefore be annotated as code. Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>