path: root/arch/arm64/include
Age  Commit message  Author
2024-02-21  Merge tag 'v4.19.303' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.303 stable release # -----BEGIN PGP SIGNATURE----- # # iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmWC/FIACgkQONu9yGCS # aT4PGBAAmX42BpkC8qqWMrV0bmHf7KtjUyPBeMybVKBXaFkhUSbtwraAI0QWkIUM # mEqzUehaTxFhy+QFGRvA9982ChamygDZsWK+2EOigqTXFmVWrIESC5GJAHrCdc06 # /b+6oNoTFuRcbVIAxyEL9S+K1pJ11/6Da6tvUKiWizczpZnA3IXLT4nbTFr3Q7cS # wPv6ggk6rdyXlmMSiYrRJA4HjN/0akrUNcwoW00LCgKc8892Y7Q1YfnFiJVC75Fa # U2+97SSvboM6pJ28mvm3yR4dV02Q8Cs9hI9M1rzIV6ftU8KzEUP3ZCZGaUh4Bwqi # MOH8T5DaE+Velbp7ECyzQRHOzhu1dGjFOda1ZR9YpbcE/PBzZ4fOSIUf1glP3K6o # U9FpIJHMn+MQk1j4AA+GYJfhOAeaiBjs5y7z3hhNLV1lvshuZGxRKONUtBtOMhwS # HfYBW3/7Af7a2q2ITS27RRBCFv6Gqkza0vLzte0Om0XvCn6JFzzIsymFa0cjjg2B # G++HZDBQqHWF8EEYRA1XsoJQDEk9o2F7IaX+hOar24mAEsqXBYNzOFvqn2uf/a5c # 9mGBpbDGrq0P5EkwIQjgSovbvmplmmAGB74fBrCSrQVKuXjzloAjUbHQqFVVu49q # lAMTCtfLxQyioSlTXrYKX1ANKjlR1kqBIF2GP2oHE74MwR3hYSo= # =V5/I # -----END PGP SIGNATURE----- # gpg: Signature made Wed 20 Dec 2023 09:38:10 AM EST # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2024-02-21  Merge tag 'v4.19.301' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.301 stable release # -----BEGIN PGP SIGNATURE----- # # iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmVyySUACgkQONu9yGCS # aT6OEw/+NbDTTyd6qvilupBXQI0U2zNYYUAgYyI3b+f26bJcGaVS144bvGloFsEJ # F2kLGRzeHdskQbU8p91XClmkTZ5rne9MMqQjUosfED6On5NT54o51eGPMrfzF44o # zt63nFLuWYKhzbvMij8JuX8qzjn65t3maUcT0aNR81x2VuaBgCvFRshxMwOgZ7Gg # ImSJys2VXuQK89zvmKCvJgnhGzv3oLU1yF/AQl/vrcnhsR/75ClvVlGs/eAFsvno # 8qfwzKHVD3NYtBuR7j1JtjlO2d7ale42EgHukqzaw/vBq/FpUSBu7Q8EAmUQaxjp # ri6BoApBSc6wNa++owlkt3bbNzNzTtlbV5jyibSfRAEMl5aIHsgz9JQnJmO91UTh # VWGMTQVHy173ubm0FkpyrDLQ0rqLqKWigIGRGV2ZzfiGPKgME3zwgDnjq9IHAo23 # 8NNSSVBu7JV1eQqm3yG7rCSxWk99O9/yN3scS913CsrTdDCCu4v1NPOozHnNfGXO # O1N4KClar8zsYJ9ZVXM0P1cH7kOOXdjxYQxODaR/FbTfZQ2Jq/ayo7wvC4+ZF+cX # VxcJtoK94PcbyC/jub9m74Kq3Ujtz5lYzU9JmzjsuvjY6qeBe5fOiLk69sKa5set # exg9SSwvPSgA5JndW3eXB5uD+rUY45taUGpZzNmksE/dYANdd84= # =Xhd3 # -----END PGP SIGNATURE----- # gpg: Signature made Fri 08 Dec 2023 02:43:33 AM EST # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2024-02-21  Merge tag 'v4.19.288' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.288 stable release # -----BEGIN PGP SIGNATURE----- # # iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmSb7KoACgkQONu9yGCS # aT6s8A//U4Q/1LMrkXiew99gV76v0LlFNuXEVWkd0VqEdeK+UG86gLcfjZUaHmtX # Jx0nAZ7GWvn90ZaCNgN6PrKQ8LSeABOCGcpdx8cvLxoSGoB0ipjVcwddocAjOIzp # ly/+1HhC0UadE9NZ7vyaCiUZ3U+0Sj22J85JZz+A4y1FwpYbXHJclGmmmmUg4MCU # NwBUiu+2ad8D7vR7a0yiTlsdxBAwU2LoEdysteBv8vDHB+BXjNXC0jpBhXsvaaBd # VN0bav9XWvKHN73CMcWW8I8ABSirJRQhdGC43BMNjE2+I3KIHjOzgqALOvfd9eSJ # Jl9ztoqO+tI4wee0ZIQbobJ57vgqik+oX4eTGxaAfxD1BgqtuNVaDDw+3Wg3pgpP # mRdbbfUixFR4tP0VsuLN3b6Ff5q4nhq8h6ZJ0I4tiSRL6K9CNimBKhTh1ECexDPr # t+se4Zr58KkgrZCM/ERrwn5NvRcjF5PuBA1i3u1DWecHptZ6FNAwHSKMPFM7CoCH # FTyNikDe6FCtzA2gHkj85bC5W0QahU+SD65OIv7Ziz6SOLKu2HjLYxQbcW/1uCW0 # Nikd5nADhOpDAxLvb7Cjt7Gh1GxWOIVnZaAFXh+KCVT9p/Xt8JimXvRTdSN5PGkp # Mhg525BLTdXHQPr32IHY3gbeRbiAMCBK/pygPQ2DBKRVS53jkyM= # =RsRd # -----END PGP SIGNATURE----- # gpg: Signature made Wed 28 Jun 2023 04:17:46 AM EDT # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2024-02-21  Merge tag 'v4.19.283' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.283 stable release # -----BEGIN PGP SIGNATURE----- # # iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmRkmvoACgkQONu9yGCS # aT77dw/6A648P7TZgPEqBR5L4aG1u4GC4wE762PUb5YCK1XEWzgUdVPXrcRM6+r4 # ntoKlSJxveJh3TYKLcUAJWvvIt2lbOEdQTb9BS2ALoZv35q5J8Npw/CUP148Vy47 # 52PQwr4M76+WTx8bfckrBeVPHyhgNjFtFjuwg1TLfIvo6pGrDPnuNYo57K1/O38m # Sid+eFrGBkOIjUVlfaStMIP9RVZTUHpPWHWp+cmqGTDK3B0m8BkoTMXM0hLu/fJH # HPivMQFnyRNa0ZZAe+iQVmUjiruSPbgqNOAGSqTr5FxxSrZ3ZUjvtI0BYTA7eo7q # BnPbRHpuRQ+YOnDK0Q+Ps96DDNALCz2j8bXXEjJePpOrqv8IoxU8kGx+GVcbnQiJ # Bd6bqZwXU3uPN8VLTR0KtfypEH6ELbBrCXjeeSw+RQqAgsdEGSbVSgfBtISo7UMt # iL/VFwl03qdm4Y+Ww544kNMrtDV+Qmq2MWeP6uHzx54ZH6ic5rFhLGamHEuIUg54 # Ux/9dLoByzbVOEMS5SHaqaxcLd/Qx0FtUq02rhsHeV0IEFxviX4jPRet0kn2vVru # 8o+Vh92K+gfNW+zT47GPeTCBRIK+YuH2cwsXJRucGkE7IyDccgyA/v1cchZO9xoD # oetofMcWiZi3QNY26EVuYA8SlIwURWkhb3yTbFoOx2+jQ6JER6k= # =VSYH # -----END PGP SIGNATURE----- # gpg: Signature made Wed 17 May 2023 05:14:34 AM EDT # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2024-02-21  Merge tag 'v4.19.270' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.270 stable release # -----BEGIN PGP SIGNATURE----- # # iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmPHynkACgkQONu9yGCS # aT5AtBAAsdmYCYkmKZsRcS1EUTqdwKVN7FDILDcdMjfmrSp4ZDliaD1dUc0EmDRl # yy+aNGCrhbuYACk9WdQsSUrUIh1dK0H5VsioB1m0cjCgifbNjsYqjYWK5ewXKUyX # yjc+NmY1HVUFQDLnYHJxSbnB/o+nobWjts8nGuWHwQmoh7UmFe7lvMqZg753x6Bw # wCiaC1DrU3aKHYK7IirdWgOiDiGia8DX1nX6PmLi6JTsXj+Io0i8PXKkFzANDf/p # /rOyg7j8NOXIQPZGN0Zu88QiMWsNk7u2bOORZgtFbwo7r9BFbzXfWk/x8QxzDX1B # iH1p02XQvBwm44xGJZKiWEY2nZdw4mpyzLXZNOL8V7vn9xhT6HDksVAPnyIkU8Dh # wsij2r27x18VI9H7sstvAHvIyg6ihmq2E6WuC4W74tUcys7MXxCFc2DuJzMMocf0 # 7LMTmx3/oUHvuM1riJ9STo9mzXbTmfNd6hnqRnFgGKiGGhOE+pX//RHfupaXRieQ # Rq51ODFKcJdDIM7hxeyPdACYF/kso8sNEODCgQ5/+3opel1mzLdBJ1T2bV12DpQe # ZhTsESPCVSoUAjCnC9Jje3g0u3qztClYq1faHOXtnjykn9mHmmedVwvdfJL/sOsr # ec7NgqzM9xvMQVe4CNf0mouugaLpn2m6uQDTu+GWswRfEKuCx2Q= # =ksam # -----END PGP SIGNATURE----- # gpg: Signature made Wed 18 Jan 2023 05:31:21 AM EST # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2024-02-21  Merge tag 'v4.19.264' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.264 stable release # -----BEGIN PGP SIGNATURE----- # # iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmNj1bgACgkQONu9yGCS # aT4ABQ/8CCO0lyrHTLSG/hmhXOaLkEq3q4sO5xI2hAAwaIGngivQpjR+qGicUxm2 # MmX9pLihi5uEpayVFM0Gb5+6NRObgUULAetiDk34gSqnWjFQksrS8WZdNzXSIq85 # yn3+1o87Nr+gOTGd1xighmRKw87Kaacqtt80MXBeTQ7SZt7ES7Oxn/I4zw1vjiDz # 2nfUp+w2yWsUoQHOUGITe3+ae8QmTX1dwQhXf/z0EX8VqgGEKD7Sv6aSxZfmb4cL # srBYw0D3bi5+cSH/auiN+rxkGQ7CZ24xsqaFZN4L5lAhO+TTfanp6XJAMAKFYm5N # 1sjeBfNNDq8LEThqBG1RULlaMOkflW7fXAZuA6oGJTr1+V0MP8h79jniKSJ8kGnN # xpprlnm7hKy0OazbRIcRBPim3hv5fvy+U7eiWQqZCOdYc93hTflzamXeu+OZNpsB # flAJbUGDUniApKFhyXhEWr8jCz7oQvf2VzQmKRB6KpbEOgaEJ5S8Ls5pGzt3JqdW # AOQHC4t4/EcyVvOBUcIiXYtnE3VQ4RuOU6bCU5soqDiWTYk1yeRWKFiFLpkzpMAR # FjKBLFez2Dc+T09DXcXbJZ7V3t510hhLw9Pai1iXuZlkYuk6jJGZ/VMmi8txxyYt # wChGDUcnv2W/Ub7hLSk9GkxtaXQ3zdLYVk2GtloCkuqorks8EFs= # =7RS7 # -----END PGP SIGNATURE----- # gpg: Signature made Thu 03 Nov 2022 10:52:40 AM EDT # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2023-12-20  arm64: mm: Always make sw-dirty PTEs hw-dirty in pte_modify  (James Houghton)
commit 3c0696076aad60a2f04c019761921954579e1b0e upstream. It is currently possible for a userspace application to enter an infinite page fault loop when using HugeTLB pages implemented with contiguous PTEs when HAFDBS is not available. This happens because: 1. The kernel may sometimes write PTEs that are sw-dirty but hw-clean (PTE_DIRTY | PTE_RDONLY | PTE_WRITE). 2. If, during a write, the CPU uses a sw-dirty, hw-clean PTE in handling the memory access on a system without HAFDBS, we will get a page fault. 3. HugeTLB will check if it needs to update the dirty bits on the PTE. For contiguous PTEs, it will check to see if the pgprot bits need updating. In this case, HugeTLB wants to write a sequence of sw-dirty, hw-dirty PTEs, but it finds that all the PTEs it is about to overwrite are all pte_dirty() (pte_sw_dirty() => pte_dirty()), so it thinks no update is necessary. We can get the kernel to write a sw-dirty, hw-clean PTE with the following steps (showing the relevant VMA flags and pgprot bits): i. Create a valid, writable contiguous PTE. VMA vmflags: VM_SHARED | VM_READ | VM_WRITE VMA pgprot bits: PTE_RDONLY | PTE_WRITE PTE pgprot bits: PTE_DIRTY | PTE_WRITE ii. mprotect the VMA to PROT_NONE. VMA vmflags: VM_SHARED VMA pgprot bits: PTE_RDONLY PTE pgprot bits: PTE_DIRTY | PTE_RDONLY iii. mprotect the VMA back to PROT_READ | PROT_WRITE. VMA vmflags: VM_SHARED | VM_READ | VM_WRITE VMA pgprot bits: PTE_RDONLY | PTE_WRITE PTE pgprot bits: PTE_DIRTY | PTE_WRITE | PTE_RDONLY Make it impossible to create a writeable sw-dirty, hw-clean PTE with pte_modify(). Such a PTE should be impossible to create, and there may be places that assume that pte_dirty() implies pte_hw_dirty(). Signed-off-by: James Houghton <jthoughton@google.com> Fixes: 031e6e6b4e12 ("arm64: hugetlb: Avoid unnecessary clearing in huge_ptep_set_access_flags") Cc: <stable@vger.kernel.org> Acked-by: Will Deacon <will@kernel.org> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Link: https://lore.kernel.org/r/20231204172646.2541916-3-jthoughton@google.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
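A minimal sketch of the pte_modify() change described above, assuming the 4.19 helpers pte_sw_dirty()/pte_mkdirty() and the usual pgprot mask; the exact hunk in arch/arm64/include/asm/pgtable.h may differ:

/*
 * Sketch (not the verbatim patch): after applying the new pgprot bits,
 * re-assert hardware dirtiness whenever the software dirty bit is set,
 * so pte_modify() can never produce a writable sw-dirty, hw-clean PTE.
 */
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
	const pteval_t mask = PTE_USER | PTE_PXN | PTE_UXN | PTE_RDONLY |
			      PTE_PROT_NONE | PTE_VALID | PTE_WRITE;

	pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask);

	/* A sw-dirty PTE must also look dirty to the hardware. */
	if (pte_sw_dirty(pte))
		pte = pte_mkdirty(pte);

	return pte;
}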
2023-12-08  KVM: arm64: limit PMU version to PMUv3 for ARMv8.1  (Andrew Murray)
commit c854188ea01062f5a5fd7f05658feb1863774eaa upstream. We currently expose the PMU version of the host to the guest via emulation of the DFR0_EL1 and AA64DFR0_EL1 debug feature registers. However, many of the features offered beyond PMUv3 for 8.1 are not supported in KVM. Examples of this include support for the PMMIR registers (added in PMUv3 for ARMv8.4) and 64-bit event counters (added in PMUv3 for ARMv8.5). Let's trap the Debug Feature Registers in order to limit PMUVer/PerfMon in the Debug Feature Registers to PMUv3 for ARMv8.1 to avoid unexpected behaviour. Both ID_AA64DFR0.PMUVer and ID_DFR0.PerfMon follow the "Alternative ID scheme used for the Performance Monitors Extension version" where 0xF means an IMPLEMENTATION DEFINED PMU is implemented, and values 0x0-0xE are treated as an unsigned field (with 0x0 meaning no PMU is present). As we don't expect to expose an IMPLEMENTATION DEFINED PMU, and our cap is below 0xF, we can treat these fields as unsigned when applying the cap. Signed-off-by: Andrew Murray <andrew.murray@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> [Mark: make field names consistent, use perfmon cap] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org> [yuzenghui@huawei.com: adjust the context in read_id_reg()] Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
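A sketch of the capping in KVM's ID register emulation; the field names (ID_AA64DFR0_PMUVER_SHIFT, ID_DFR0_PERFMON_SHIFT and their 8.1 values) are taken from the upstream patch and the surrounding read_id_reg() context in this tree may differ:

static u64 read_id_reg(const struct sys_reg_desc *r, bool raz)
{
	u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
			 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
	u64 val = raz ? 0 : read_sanitised_ftr_reg(id);

	if (id == SYS_ID_AA64DFR0_EL1) {
		/* Limit guests to PMUv3 for ARMv8.1 */
		val = cpuid_feature_cap_perfmon_field(val,
				ID_AA64DFR0_PMUVER_SHIFT,
				ID_AA64DFR0_PMUVER_8_1);
	} else if (id == SYS_ID_DFR0_EL1) {
		/* Limit guests to PMUv3 for ARMv8.1 */
		val = cpuid_feature_cap_perfmon_field(val,
				ID_DFR0_PERFMON_SHIFT,
				ID_DFR0_PERFMON_8_1);
	}

	return val;
}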
2023-12-08  arm64: cpufeature: Extract capped perfmon fields  (Andrew Murray)
commit 8e35aa642ee4dab01b16cc4b2df59d1936f3b3c2 upstream. When emulating ID registers there is often a need to cap the version bits of a feature such that the guest will not use features that the host is not aware of. For example, when KVM mediates access to the PMU by emulating register accesses. Let's add a helper that extracts a performance monitors ID field and caps the version to a given value. Fields that identify the version of the Performance Monitors Extension do not follow the standard ID scheme, and instead follow the scheme described in ARM DDI 0487E.a page D13-2825 "Alternative ID scheme used for the Performance Monitors Extension version". The value 0xF means an IMPLEMENTATION DEFINED PMU is present, and values 0x0-0xE can be treated the same as an unsigned field with 0x0 meaning no PMU is present. Signed-off-by: Andrew Murray <andrew.murray@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> [Mark: rework to handle perfmon fields] Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
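The helper itself is small; a sketch based on the upstream patch, assuming the existing cpuid_feature_extract_unsigned_field() accessor in cpufeature.h:

static inline u64 cpuid_feature_cap_perfmon_field(u64 features, int field, u64 cap)
{
	u64 val = cpuid_feature_extract_unsigned_field(features, field);
	u64 mask = GENMASK_ULL(field + 3, field);	/* ID fields are 4 bits wide */

	/* Treat an IMPLEMENTATION DEFINED PMU (0xf) as if no PMU is present */
	if (val == 0xf)
		val = 0;

	if (val > cap) {
		features &= ~mask;
		features |= (cap << field) & mask;
	}

	return features;
}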
2023-06-28  arm64: Add missing Set/Way CMO encodings  (Marc Zyngier)
[ Upstream commit 8d0f019e4c4f2ee2de81efd9bf1c27e9fb3c0460 ] Add the missing Set/Way CMOs that apply to tagged memory. Signed-off-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Steven Price <steven.price@arm.com> Reviewed-by: Oliver Upton <oliver.upton@linux.dev> Link: https://lore.kernel.org/r/20230515204601.1270428-2-maz@kernel.org Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-05-17  arm64: kgdb: Set PSTATE.SS to 1 to re-enable single-step  (Sumit Garg)
[ Upstream commit af6c0bd59f4f3ad5daad2f7b777954b1954551d5 ] Currently only the first attempt to single-step has any effect. After that all further stepping remains "stuck" at the same program counter value. Refer to the ARM Architecture Reference Manual (ARM DDI 0487E.a) D2.12, PSTATE.SS=1 should be set at each step before transferring the PE to the 'Active-not-pending' state. The problem here is PSTATE.SS=1 is not set since the second single-step. After the first single-step, the PE transferes to the 'Inactive' state, with PSTATE.SS=0 and MDSCR.SS=1, thus PSTATE.SS won't be set to 1 due to kernel_active_single_step()=true. Then the PE transferes to the 'Active-pending' state when ERET and returns to the debugger by step exception. Before this patch: ================== Entering kdb (current=0xffff3376039f0000, pid 1) on processor 0 due to Keyboard Entry [0]kdb> [0]kdb> [0]kdb> bp write_sysrq_trigger Instruction(i) BP #0 at 0xffffa45c13d09290 (write_sysrq_trigger) is enabled addr at ffffa45c13d09290, hardtype=0 installed=0 [0]kdb> go $ echo h > /proc/sysrq-trigger Entering kdb (current=0xffff4f7e453f8000, pid 175) on processor 1 due to Breakpoint @ 0xffffad651a309290 [1]kdb> ss Entering kdb (current=0xffff4f7e453f8000, pid 175) on processor 1 due to SS trap @ 0xffffad651a309294 [1]kdb> ss Entering kdb (current=0xffff4f7e453f8000, pid 175) on processor 1 due to SS trap @ 0xffffad651a309294 [1]kdb> After this patch: ================= Entering kdb (current=0xffff6851c39f0000, pid 1) on processor 0 due to Keyboard Entry [0]kdb> bp write_sysrq_trigger Instruction(i) BP #0 at 0xffffc02d2dd09290 (write_sysrq_trigger) is enabled addr at ffffc02d2dd09290, hardtype=0 installed=0 [0]kdb> go $ echo h > /proc/sysrq-trigger Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to Breakpoint @ 0xffffc02d2dd09290 [1]kdb> ss Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to SS trap @ 0xffffc02d2dd09294 [1]kdb> ss Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to SS trap @ 0xffffc02d2dd09298 [1]kdb> ss Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to SS trap @ 0xffffc02d2dd0929c [1]kdb> Fixes: 44679a4f142b ("arm64: KGDB: Add step debugging support") Co-developed-by: Wei Li <liwei391@huawei.com> Signed-off-by: Wei Li <liwei391@huawei.com> Signed-off-by: Sumit Garg <sumit.garg@linaro.org> Tested-by: Douglas Anderson <dianders@chromium.org> Acked-by: Daniel Thompson <daniel.thompson@linaro.org> Tested-by: Daniel Thompson <daniel.thompson@linaro.org> Link: https://lore.kernel.org/r/20230202073148.657746-3-sumit.garg@linaro.org Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
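Conceptually, the fix re-arms PSTATE.SS on every step request rather than only the first one. A hedged sketch, assuming the existing set_regs_spsr_ss() helper and the kernel_active_single_step() check named above; the actual helper name and call site in this tree may differ:

/* New helper (sketch): set PSTATE.SS in the saved SPSR again. */
static inline void kernel_rewind_single_step(struct pt_regs *regs)
{
	set_regs_spsr_ss(regs);
}

/*
 * In the kgdb step path (conceptual): if single-step is already active,
 * rewind it so the next ERET really performs one more step.
 *
 *	if (!kernel_active_single_step())
 *		kernel_enable_single_step(linux_regs);
 *	else
 *		kernel_rewind_single_step(linux_regs);
 */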
2023-01-18  arm64: cmpxchg_double*: hazard against entire exchange variable  (Mark Rutland)
[ Upstream commit 031af50045ea97ed4386eb3751ca2c134d0fc911 ] The inline assembly for arm64's cmpxchg_double*() implementations use a +Q constraint to hazard against other accesses to the memory location being exchanged. However, the pointer passed to the constraint is a pointer to unsigned long, and thus the hazard only applies to the first 8 bytes of the location. GCC can take advantage of this, assuming that other portions of the location are unchanged, leading to a number of potential problems. This is similar to what we fixed back in commit: fee960bed5e857eb ("arm64: xchg: hazard against entire exchange variable") ... but we forgot to adjust cmpxchg_double*() similarly at the same time. The same problem applies, as demonstrated with the following test: | struct big { | u64 lo, hi; | } __aligned(128); | | unsigned long foo(struct big *b) | { | u64 hi_old, hi_new; | | hi_old = b->hi; | cmpxchg_double_local(&b->lo, &b->hi, 0x12, 0x34, 0x56, 0x78); | hi_new = b->hi; | | return hi_old ^ hi_new; | } ... which GCC 12.1.0 compiles as: | 0000000000000000 <foo>: | 0: d503233f paciasp | 4: aa0003e4 mov x4, x0 | 8: 1400000e b 40 <foo+0x40> | c: d2800240 mov x0, #0x12 // #18 | 10: d2800681 mov x1, #0x34 // #52 | 14: aa0003e5 mov x5, x0 | 18: aa0103e6 mov x6, x1 | 1c: d2800ac2 mov x2, #0x56 // #86 | 20: d2800f03 mov x3, #0x78 // #120 | 24: 48207c82 casp x0, x1, x2, x3, [x4] | 28: ca050000 eor x0, x0, x5 | 2c: ca060021 eor x1, x1, x6 | 30: aa010000 orr x0, x0, x1 | 34: d2800000 mov x0, #0x0 // #0 <--- BANG | 38: d50323bf autiasp | 3c: d65f03c0 ret | 40: d2800240 mov x0, #0x12 // #18 | 44: d2800681 mov x1, #0x34 // #52 | 48: d2800ac2 mov x2, #0x56 // #86 | 4c: d2800f03 mov x3, #0x78 // #120 | 50: f9800091 prfm pstl1strm, [x4] | 54: c87f1885 ldxp x5, x6, [x4] | 58: ca0000a5 eor x5, x5, x0 | 5c: ca0100c6 eor x6, x6, x1 | 60: aa0600a6 orr x6, x5, x6 | 64: b5000066 cbnz x6, 70 <foo+0x70> | 68: c8250c82 stxp w5, x2, x3, [x4] | 6c: 35ffff45 cbnz w5, 54 <foo+0x54> | 70: d2800000 mov x0, #0x0 // #0 <--- BANG | 74: d50323bf autiasp | 78: d65f03c0 ret Notice that at the lines with "BANG" comments, GCC has assumed that the higher 8 bytes are unchanged by the cmpxchg_double() call, and that `hi_old ^ hi_new` can be reduced to a constant zero, for both LSE and LL/SC versions of cmpxchg_double(). This patch fixes the issue by passing a pointer to __uint128_t into the +Q constraint, ensuring that the compiler hazards against the entire 16 bytes being modified. 
With this change, GCC 12.1.0 compiles the above test as: | 0000000000000000 <foo>: | 0: f9400407 ldr x7, [x0, #8] | 4: d503233f paciasp | 8: aa0003e4 mov x4, x0 | c: 1400000f b 48 <foo+0x48> | 10: d2800240 mov x0, #0x12 // #18 | 14: d2800681 mov x1, #0x34 // #52 | 18: aa0003e5 mov x5, x0 | 1c: aa0103e6 mov x6, x1 | 20: d2800ac2 mov x2, #0x56 // #86 | 24: d2800f03 mov x3, #0x78 // #120 | 28: 48207c82 casp x0, x1, x2, x3, [x4] | 2c: ca050000 eor x0, x0, x5 | 30: ca060021 eor x1, x1, x6 | 34: aa010000 orr x0, x0, x1 | 38: f9400480 ldr x0, [x4, #8] | 3c: d50323bf autiasp | 40: ca0000e0 eor x0, x7, x0 | 44: d65f03c0 ret | 48: d2800240 mov x0, #0x12 // #18 | 4c: d2800681 mov x1, #0x34 // #52 | 50: d2800ac2 mov x2, #0x56 // #86 | 54: d2800f03 mov x3, #0x78 // #120 | 58: f9800091 prfm pstl1strm, [x4] | 5c: c87f1885 ldxp x5, x6, [x4] | 60: ca0000a5 eor x5, x5, x0 | 64: ca0100c6 eor x6, x6, x1 | 68: aa0600a6 orr x6, x5, x6 | 6c: b5000066 cbnz x6, 78 <foo+0x78> | 70: c8250c82 stxp w5, x2, x3, [x4] | 74: 35ffff45 cbnz w5, 5c <foo+0x5c> | 78: f9400480 ldr x0, [x4, #8] | 7c: d50323bf autiasp | 80: ca0000e0 eor x0, x7, x0 | 84: d65f03c0 ret ... sampling the high 8 bytes before and after the cmpxchg, and performing an EOR, as we'd expect. For backporting, I've tested this atop linux-4.9.y with GCC 5.5.0. Note that linux-4.9.y is oldest currently supported stable release, and mandates GCC 5.1+. Unfortunately I couldn't get a GCC 5.1 binary to run on my machines due to library incompatibilities. I've also used a standalone test to check that we can use a __uint128_t pointer in a +Q constraint at least as far back as GCC 4.8.5 and LLVM 3.9.1. Fixes: 5284e1b4bc8a ("arm64: xchg: Implement cmpxchg_double") Fixes: e9a4b795652f ("arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU") Reported-by: Boqun Feng <boqun.feng@gmail.com> Link: https://lore.kernel.org/lkml/Y6DEfQXymYVgL3oJ@boqun-archlinux/ Reported-by: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/lkml/Y6GXoO4qmH9OIZ5Q@hirez.programming.kicks-ass.net/ Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: stable@vger.kernel.org Cc: Arnd Bergmann <arnd@arndb.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Steve Capper <steve.capper@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20230104151626.3262137-1-mark.rutland@arm.com Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
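The fix itself boils down to widening the memory operand; a minimal sketch only (the real __CMPXCHG_DBL macros carry the full LDXP/STXP or CASP sequence and several more operands):

/*
 * Sketch: casting the "+Q" operand through __uint128_t makes the compiler
 * treat all 16 bytes as potentially modified, instead of just the first 8.
 */
static inline void cmpxchg_double_hazard_sketch(unsigned long *ptr)
{
	asm volatile(
		"// ldxp/stxp or casp sequence elided in this sketch"
		: "+Q" (*(__uint128_t *)ptr)	/* was: *(unsigned long *)ptr */
		:
		: "memory");
}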
2022-11-03  arm64: errata: Remove AES hwcap for COMPAT tasks  (James Morse)
commit 44b3834b2eed595af07021b1c64e6f9bc396398b upstream. Cortex-A57 and Cortex-A72 have an erratum where an interrupt that occurs between a pair of AES instructions in aarch32 mode may corrupt the ELR. The task will subsequently produce the wrong AES result. The AES instructions are part of the cryptographic extensions, which are optional. User-space software will detect the support for these instructions from the hwcaps. If the platform doesn't support these instructions a software implementation should be used. Remove the hwcap bits on affected parts to indicate user-space should not use the AES instructions. Acked-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: James Morse <james.morse@arm.com> Link: https://lore.kernel.org/r/20220714161523.279570-3-james.morse@arm.com Signed-off-by: Will Deacon <will@kernel.org> [florian: resolved conflicts in arch/arm64/tools/cpucaps and cpu_errata.c] Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
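The user-visible part of the workaround is simply dropping the hwcap bit. A sketch using the cap and hwcap names from the upstream patch (ARM64_WORKAROUND_1742098, COMPAT_HWCAP2_AES), which may be spelled differently in this 4.19 backport:

static void __init elf_hwcap_fixup(void)
{
#ifdef CONFIG_ARM64_ERRATUM_1742098
	/* Affected parts must fall back to a software AES implementation. */
	if (cpus_have_const_cap(ARM64_WORKAROUND_1742098))
		compat_elf_hwcap2 &= ~COMPAT_HWCAP2_AES;
#endif
}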
2022-09-10  Merge tag 'v4.19.257' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.257 stable release # gpg: Signature made Mon 05 Sep 2022 04:26:42 AM EDT # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2022-09-10  Merge tag 'v4.19.256' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.256 stable release # gpg: Signature made Thu 25 Aug 2022 05:15:59 AM EDT # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2022-09-09  Merge tag 'v4.19.238' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.238 stable release # gpg: Signature made Fri 15 Apr 2022 08:15:13 AM EDT # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2022-09-09  Merge tag 'v4.19.236' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.236 stable release # gpg: Signature made Wed 23 Mar 2022 04:11:04 AM EDT # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2022-09-05  arm64: map FDT as RW for early_init_dt_scan()  (Hsin-Yi Wang)
commit e112b032a72c78f15d0c803c5dc6be444c2e6c66 upstream. Currently in arm64, the FDT is mapped RO before it's passed to early_init_dt_scan(). However, there might be some code (e.g. commit "fdt: add support for rng-seed") that needs to modify the FDT during init. Map the FDT read-only only after the early fixups are done. Signed-off-by: Hsin-Yi Wang <hsinyi@chromium.org> Reviewed-by: Stephen Boyd <swboyd@chromium.org> Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Will Deacon <will@kernel.org> [mkbestas: fixed trivial conflicts for 4.19 backport] Signed-off-by: Michael Bestas <mkbestas@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
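A sketch of the resulting flow, with the prot argument added to fixmap_remap_fdt() as in the upstream patch (the 4.19 backport resolves conflicts, so details may vary):

static void __init fdt_map_sketch(phys_addr_t dt_phys)
{
	int size;

	/* Map the FDT writable for the early scan... */
	void *dt_virt = fixmap_remap_fdt(dt_phys, &size, PAGE_KERNEL);

	if (dt_virt)
		early_init_dt_scan(dt_virt);	/* may patch the FDT, e.g. rng-seed */

	/* ...and remap it read-only once the early fixups are done. */
	fixmap_remap_fdt(dt_phys, &size, PAGE_KERNEL_RO);
}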
2022-08-25  arm64: Do not forget syscall when starting a new thread.  (Francis Laniel)
[ Upstream commit de6921856f99c11d3986c6702d851e1328d4f7f6 ] Enable tracing of the execve*() system calls with the syscalls:sys_exit_execve tracepoint by removing the call to forget_syscall() when starting a new thread and preserving the value of regs->syscallno across exec. Signed-off-by: Francis Laniel <flaniel@linux.microsoft.com> Link: https://lore.kernel.org/r/20220608162447.666494-2-flaniel@linux.microsoft.com Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
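Per the upstream patch, the mechanism is simply to carry regs->syscallno across the pt_regs reset in start_thread_common(); a sketch:

static inline void start_thread_common(struct pt_regs *regs, unsigned long pc)
{
	/* Preserve the syscall number so syscalls:sys_exit_execve can still fire. */
	s32 previous_syscall = regs->syscallno;

	memset(regs, 0, sizeof(*regs));
	regs->syscallno = previous_syscall;
	regs->pc = pc;
}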
2022-04-15  KVM: arm64: Check arm64_get_bp_hardening_data() didn't return NULL  (James Morse)
Will reports that with CONFIG_EXPERT=y and CONFIG_HARDEN_BRANCH_PREDICTOR=n, the kernel dereferences a NULL pointer during boot: [ 2.384444] Internal error: Oops: 96000004 [#1] PREEMPT SMP [ 2.384461] pstate: 20400085 (nzCv daIf +PAN -UAO) [ 2.384472] pc : cpu_hyp_reinit+0x114/0x30c [ 2.384476] lr : cpu_hyp_reinit+0x80/0x30c [ 2.384529] Call trace: [ 2.384533] cpu_hyp_reinit+0x114/0x30c [ 2.384537] _kvm_arch_hardware_enable+0x30/0x54 [ 2.384541] flush_smp_call_function_queue+0xe4/0x154 [ 2.384544] generic_smp_call_function_single_interrupt+0x10/0x18 [ 2.384549] ipi_handler+0x170/0x2b0 [ 2.384555] handle_percpu_devid_fasteoi_ipi+0x120/0x1cc [ 2.384560] __handle_domain_irq+0x9c/0xf4 [ 2.384563] gic_handle_irq+0x6c/0xe4 [ 2.384566] el1_irq+0xf0/0x1c0 [ 2.384570] arch_cpu_idle+0x28/0x44 [ 2.384574] do_idle+0x100/0x2a8 [ 2.384577] cpu_startup_entry+0x20/0x24 [ 2.384581] secondary_start_kernel+0x1b0/0x1cc [ 2.384589] Code: b9469d08 7100011f 540003ad 52800208 (f9400108) [ 2.384600] ---[ end trace 266d08dbf96ff143 ]--- [ 2.385171] Kernel panic - not syncing: Fatal exception in interrupt In this configuration arm64_get_bp_hardening_data() returns NULL. Add a check in kvm_get_hyp_vector(). Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/linux-arm-kernel/20220408120041.GB27685@willie-the-truck/ Fixes: a68912a3ae3 ("KVM: arm64: Add templates for BHB mitigation sequences") Cc: stable@vger.kernel.org # 4.19 Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
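The fix amounts to checking the pointer before dereferencing it in kvm_get_hyp_vector(); a sketch, with the field name (template_start) assumed from the 4.19 BHB backport rather than taken verbatim from this tree:

static inline void *kvm_get_hyp_vector_sketch(void)
{
	struct bp_hardening_data *data = arm64_get_bp_hardening_data();
	void *vect = kern_hyp_va(kvm_ksym_ref(__kvm_hyp_vector));

	/* data is NULL with CONFIG_HARDEN_BRANCH_PREDICTOR=n: check it first. */
	if (data && data->template_start) {
		/* ... pick the per-CPU hardened/templated vector instead ... */
	}

	return vect;
}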
2022-03-23  arm64: Use the clearbhb instruction in mitigations  (James Morse)
commit 228a26b912287934789023b4132ba76065d9491c upstream. Future CPUs may implement a clearbhb instruction that is sufficient to mitigate SpectreBHB. CPUs that implement this instruction, but not CSV2.3, must be affected by Spectre-BHB. Add support to use this instruction as the BHB mitigation on CPUs that support it. The instruction is in the hint space, so it will be treated as a NOP by older CPUs. Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> [ modified for stable: Use a KVM vector template instead of alternatives ] Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-23  arm64: add ID_AA64ISAR2_EL1 sys register  (Joey Gouly)
commit 9e45365f1469ef2b934f9d035975dbc9ad352116 upstream. This is a new ID register, introduced in 8.7. Signed-off-by: Joey Gouly <joey.gouly@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Alexandru Elisei <alexandru.elisei@arm.com> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Reiji Watanabe <reijiw@google.com> Acked-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20211210165432.8106-3-joey.gouly@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
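For reference, the new definition is a single sysreg encoding (op0=3, op1=0, CRn=0, CRm=6, op2=2); the accompanying ID_AA64ISAR2 field macros are omitted in this sketch:

#define SYS_ID_AA64ISAR2_EL1		sys_reg(3, 0, 0, 6, 2)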
2022-03-23  KVM: arm64: Allow SMCCC_ARCH_WORKAROUND_3 to be discovered and migrated  (James Morse)
commit a5905d6af492ee6a4a2205f0d550b3f931b03d03 upstream. KVM allows the guest to discover whether the ARCH_WORKAROUND SMCCC are implemented, and to preserve that state during migration through its firmware register interface. Add the necessary boiler plate for SMCCC_ARCH_WORKAROUND_3. Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> [ kvm code moved to virt/kvm/arm, removed fw regs ABI. Added 32bit stub ] Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-23  arm64: Mitigate spectre style branch history side channels  (James Morse)
commit 558c303c9734af5a813739cd284879227f7297d2 upstream. Speculation attacks against some high-performance processors can make use of branch history to influence future speculation. When taking an exception from user-space, a sequence of branches or a firmware call overwrites or invalidates the branch history. The sequence of branches is added to the vectors, and should appear before the first indirect branch. For systems using KPTI the sequence is added to the kpti trampoline where it has a free register as the exit from the trampoline is via a 'ret'. For systems not using KPTI, the same register tricks are used to free up a register in the vectors. For the firmware call, arch-workaround-3 clobbers 4 registers, so there is no choice but to save them to the EL1 stack. This only happens for entry from EL0, so if we take an exception due to the stack access, it will not become re-entrant. For KVM, the existing branch-predictor-hardening vectors are used. When a spectre version of these vectors is in use, the firmware call is sufficient to mitigate against Spectre-BHB. For the non-spectre versions, the sequence of branches is added to the indirect vector. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: <stable@kernel.org> # <v5.17.x 72bb9dcb6c33c arm64: Add Cortex-X2 CPU part definition Cc: <stable@kernel.org> # <v5.16.x 2d0d656700d67 arm64: Add Neoverse-N2, Cortex-A710 CPU part definition Cc: <stable@kernel.org> # <v5.10.x 8a6b88e66233f arm64: Add part number for Arm Cortex-A77 [ modified for stable, moved code to cpu_errata.c removed bitmap of mitigations, use kvm template infrastructure ] Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-23  KVM: arm64: Add templates for BHB mitigation sequences  (James Morse)
KVM writes the Spectre-v2 mitigation template at the beginning of each vector when a CPU requires a specific sequence to run. Because the template is copied, it cannot be modified by the alternatives at runtime. As the KVM template code is intertwined with the bp-hardening callbacks, all templates must have a bp-hardening callback. Add templates for calling ARCH_WORKAROUND_3 and one for each value of K in the branchy loop. Identify these sequences by a new parameter template_start, and add a copy of install_bp_hardening_cb() that is able to install them. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-23  arm64: proton-pack: Report Spectre-BHB vulnerabilities as part of Spectre-v2  (James Morse)
commit dee435be76f4117410bbd90573a881fd33488f37 upstream. Speculation attacks against some high-performance processors can make use of branch history to influence future speculation as part of a spectre-v2 attack. This is not mitigated by CSV2, meaning CPUs that previously reported 'Not affected' are now moderately mitigated by CSV2. Update the value in /sys/devices/system/cpu/vulnerabilities/spectre_v2 to also show the state of the BHB mitigation. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> [ code move to cpu_errata.c for backport ] Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-23  arm64: Add percpu vectors for EL1  (James Morse)
commit bd09128d16fac3c34b80bd6a29088ac632e8ce09 upstream. The Spectre-BHB workaround adds a firmware call to the vectors. This is needed on some CPUs, but not others. To prevent the unaffected CPU in a big/little pair from making the firmware call, create per-cpu vectors. The per-cpu vectors only apply when returning from EL0. Systems using KPTI can use the canonical 'full-fat' vectors directly at EL1; the trampoline exit code will switch to this_cpu_vector on exit to EL0. Systems not using KPTI should always use this_cpu_vector. this_cpu_vector will point at a vector in tramp_vecs or __bp_harden_el1_vectors, depending on whether KPTI is in use. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-23  arm64: entry: Add vectors that have the bhb mitigation sequences  (James Morse)
commit ba2689234be92024e5635d30fe744f4853ad97db upstream. Some CPUs affected by Spectre-BHB need a sequence of branches, or a firmware call to be run before any indirect branch. This needs to go in the vectors. No CPU needs both. While this can be patched in, it would run on all CPUs as there is a single set of vectors. If only one part of a big/little combination is affected, the unaffected CPUs have to run the mitigation too. Create extra vectors that include the sequence. Subsequent patches will allow affected CPUs to select this set of vectors. Later patches will modify the loop count to match what the CPU requires. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-23  arm64: entry: Allow the trampoline text to occupy multiple pages  (James Morse)
commit a9c406e6462ff14956d690de7bbe5131a5677dc9 upstream. Adding a second set of vectors to .entry.tramp.text will make it larger than a single 4K page. Allow the trampoline text to occupy up to three pages by adding two more fixmap slots. Previous changes to tramp_valias allowed it to reach beyond a single page. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-23  arm64: entry: Move the trampoline data page before the text page  (James Morse)
commit c091fb6ae059cda563b2a4d93fdbc548ef34e1d6 upstream. The trampoline code has a data page that holds the address of the vectors, which is unmapped when running in user-space. This ensures that with CONFIG_RANDOMIZE_BASE, the randomised address of the kernel can't be discovered until after the kernel has been mapped. If the trampoline text page is extended to include multiple sets of vectors, it will be larger than a single page, making it tricky to find the data page without knowing the size of the trampoline text pages, which will vary with PAGE_SIZE. Move the data page to appear before the text page. This allows the data page to be found without knowing the size of the trampoline text pages. 'tramp_vectors' is used to refer to the beginning of the .entry.tramp.text section, do that explicitly. Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-23  arm64: Add Cortex-X2 CPU part definition  (Anshuman Khandual)
commit 72bb9dcb6c33cfac80282713c2b4f2b254cd24d1 upstream. Add the CPU Partnumbers for the new Arm designs. Cc: Will Deacon <will@kernel.org> Cc: Suzuki Poulose <suzuki.poulose@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/1642994138-25887-2-git-send-email-anshuman.khandual@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-23  arm64: Add Neoverse-N2, Cortex-A710 CPU part definition  (Suzuki K Poulose)
commit 2d0d656700d67239a57afaf617439143d8dac9be upstream. Add the CPU Partnumbers for the new Arm designs. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20211019163153.3692640-2-suzuki.poulose@arm.com Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-03-23  arm64: Add part number for Arm Cortex-A77  (Rob Herring)
commit 8a6b88e66233f5f1779b0a1342aa9dc030dddcd5 upstream. Add the MIDR part number info for the Arm Cortex-A77. Signed-off-by: Rob Herring <robh@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201028182839.166037-1-robh@kernel.org Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
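Taken together, the three part-number commits above (Cortex-A77, Neoverse-N2/Cortex-A710, Cortex-X2) boil down to cputype.h additions along these lines; the MIDR values are as given in the upstream patches and are worth double-checking against this tree:

#define ARM_CPU_PART_CORTEX_A77		0xD0D
#define ARM_CPU_PART_NEOVERSE_N2	0xD49
#define ARM_CPU_PART_CORTEX_A710	0xD47
#define ARM_CPU_PART_CORTEX_X2		0xD48

#define MIDR_CORTEX_A77		MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
#define MIDR_NEOVERSE_N2	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N2)
#define MIDR_CORTEX_A710	MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A710)
#define MIDR_CORTEX_X2		MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_X2)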
2022-01-25  Merge tag 'v4.19.218' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.218 stable release Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com> # gpg: Signature made Fri 26 Nov 2021 05:36:32 AM EST # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key # Conflicts: # arch/arm/Makefile
2022-01-25  Merge tag 'v4.19.207' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.207 stable release # gpg: Signature made Wed 22 Sep 2021 05:48:26 AM EDT # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2021-11-26  arm64: pgtable: make __pte_to_phys/__phys_to_pte_val inline functions  (Arnd Bergmann)
[ Upstream commit c7c386fbc20262c1d911c615c65db6a58667d92c ] gcc warns about undefined behavior in the vmalloc code when building with CONFIG_ARM64_PA_BITS_52, when the 'idx++' in the argument to __phys_to_pte_val() is evaluated twice: mm/vmalloc.c: In function 'vmap_pfn_apply': mm/vmalloc.c:2800:58: error: operation on 'data->idx' may be undefined [-Werror=sequence-point] 2800 | *pte = pte_mkspecial(pfn_pte(data->pfns[data->idx++], data->prot)); | ~~~~~~~~~^~ arch/arm64/include/asm/pgtable-types.h:25:37: note: in definition of macro '__pte' 25 | #define __pte(x) ((pte_t) { (x) } ) | ^ arch/arm64/include/asm/pgtable.h:80:15: note: in expansion of macro '__phys_to_pte_val' 80 | __pte(__phys_to_pte_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot)) | ^~~~~~~~~~~~~~~~~ mm/vmalloc.c:2800:30: note: in expansion of macro 'pfn_pte' 2800 | *pte = pte_mkspecial(pfn_pte(data->pfns[data->idx++], data->prot)); | ^~~~~~~ I have no idea why this never showed up earlier, but the safest workaround appears to be changing those macros into inline functions so the arguments get evaluated only once. Cc: Matthew Wilcox <willy@infradead.org> Fixes: 75387b92635e ("arm64: handle 52-bit physical addresses in page table entries") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Link: https://lore.kernel.org/r/20211105075414.2553155-1-arnd@kernel.org Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
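The shape of the fix for the 52-bit PA configuration, as a sketch using the existing PTE_ADDR_LOW/PTE_ADDR_HIGH/PTE_ADDR_MASK definitions: the former macros become inline functions, so their argument is evaluated exactly once.

static inline phys_addr_t __pte_to_phys(pte_t pte)
{
	/* Recombine the low and high (bits 51:48) parts of the address. */
	return (pte_val(pte) & PTE_ADDR_LOW) |
	       ((pte_val(pte) & PTE_ADDR_HIGH) << 36);
}

static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
{
	/* Fold the high address bits down into the PTE's high field. */
	return (phys | (phys >> 36)) & PTE_ADDR_MASK;
}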
2021-09-22  arm64: head: avoid over-mapping in map_memory  (Mark Rutland)
commit 90268574a3e8a6b883bd802d702a2738577e1006 upstream. The `compute_indices` and `populate_entries` macros operate on inclusive bounds, and thus the `map_memory` macro which uses them also operates on inclusive bounds. We pass `_end` and `_idmap_text_end` to `map_memory`, but these are exclusive bounds, and if one of these is sufficiently aligned (as a result of kernel configuration, physical placement, and KASLR), then: * In `compute_indices`, the computed `iend` will be in the page/block *after* the final byte of the intended mapping. * In `populate_entries`, an unnecessary entry will be created at the end of each level of table. At the leaf level, this entry will map up to SWAPPER_BLOCK_SIZE bytes of physical addresses that we did not intend to map. As we may map up to SWAPPER_BLOCK_SIZE bytes more than intended, we may violate the boot protocol and map physical address past the 2MiB-aligned end address we are permitted to map. As we map these with Normal memory attributes, this may result in further problems depending on what these physical addresses correspond to. The final entry at each level may require an additional table at that level. As EARLY_ENTRIES() calculates an inclusive bound, we allocate enough memory for this. Avoid the extraneous mapping by having map_memory convert the exclusive end address to an inclusive end address by subtracting one, and do likewise in EARLY_ENTRIES() when calculating the number of required tables. For clarity, comments are updated to more clearly document which boundaries the macros operate on. For consistency with the other macros, the comments in map_memory are also updated to describe `vstart` and `vend` as virtual addresses. Fixes: 0370b31e4845 ("arm64: Extend early page table code to allow for larger kernels") Cc: <stable@vger.kernel.org> # 4.16.x Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Steve Capper <steve.capper@arm.com> Cc: Will Deacon <will@kernel.org> Acked-by: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210823101253.55567-1-mark.rutland@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
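The counting side of the fix can be illustrated with the EARLY_ENTRIES() helper from kernel-pgtable.h, which now converts the exclusive end to an inclusive one (sketch based on the upstream patch):

/* vend is exclusive: subtract 1 so a block-aligned end doesn't add a spurious entry. */
#define EARLY_ENTRIES(vstart, vend, shift) \
	((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1)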
2021-06-10  Merge tag 'v4.19.191' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.191 stable release # gpg: Signature made Sat 22 May 2021 05:07:03 AM EDT # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2021-06-10  Merge tag 'v4.19.189' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.189 stable release # gpg: Signature made Wed 28 Apr 2021 07:18:17 AM EDT # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2021-06-10  Merge tag 'v4.19.188' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.188 stable release # gpg: Signature made Fri 16 Apr 2021 05:50:28 AM EDT # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2021-06-10  Merge tag 'v4.19.182' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.182 stable release # gpg: Signature made Sat 20 Mar 2021 05:38:58 AM EDT # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2021-06-10  Merge tag 'v4.19.179' into v4.19/standard/base  (Bruce Ashfield)
This is the 4.19.179 stable release # gpg: Signature made Sun 07 Mar 2021 06:19:26 AM EST # gpg: using RSA key 647F28654894E3BD457199BE38DBBDC86092693E # gpg: Can't check signature: No public key
2021-05-22  KVM: arm64: Initialize VCPU mdcr_el2 before loading it  (Alexandru Elisei)
commit 263d6287da1433aba11c5b4046388f2cdf49675c upstream. When a VCPU is created, the kvm_vcpu struct is initialized to zero in kvm_vm_ioctl_create_vcpu(). On VHE systems, the first time vcpu.arch.mdcr_el2 is loaded on hardware is in vcpu_load(), before it is set to a sensible value in kvm_arm_setup_debug() later in the run loop. The result is that KVM executes for a short time with MDCR_EL2 set to zero. This has several unintended consequences: * Setting MDCR_EL2.HPMN to 0 is constrained unpredictable according to ARM DDI 0487G.a, page D13-3820. The behavior specified by the architecture in this case is for the PE to behave as if MDCR_EL2.HPMN is set to a value less than or equal to PMCR_EL0.N, which means that an unknown number of counters are now disabled by MDCR_EL2.HPME, which is zero. * The host configuration for the other debug features controlled by MDCR_EL2 is temporarily lost. This has been harmless so far, as Linux doesn't use the other fields, but that might change in the future. Let's avoid both issues by initializing the VCPU's mdcr_el2 field in kvm_vcpu_vcpu_first_run_init(), thus making sure that the MDCR_EL2 register has a consistent value after each vcpu_load(). Fixes: d5a21bcc2995 ("KVM: arm64: Move common VHE/non-VHE trap config in separate functions") Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210407144857.199746-3-alexandru.elisei@arm.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-28  arm64: alternatives: Move length validation in alternative_{insn, endif}  (Nathan Chancellor)
commit 22315a2296f4c251fa92aec45fbbae37e9301b6c upstream. After commit 2decad92f473 ("arm64: mte: Ensure TIF_MTE_ASYNC_FAULT is set atomically"), LLVM's integrated assembler fails to build entry.S: <instantiation>:5:7: error: expected assembly-time absolute expression .org . - (664b-663b) + (662b-661b) ^ <instantiation>:6:7: error: expected assembly-time absolute expression .org . - (662b-661b) + (664b-663b) ^ The root cause is that LLVM's assembler has a one-pass design, meaning it cannot figure out these instruction lengths when the .org directive is outside of the subsection that they are in, which was changed by the .arch_extension directive added in the above commit. Apply the same fix from commit 966a0acce2fc ("arm64/alternatives: move length validation inside the subsection") to the alternative_endif macro, shuffling the .org directives so that the length validation will always happen in the same subsections. alternative_insn has not shown any issue yet, but it appears that it could have the same issue in the future, so just preemptively change it. Fixes: f7b93d42945c ("arm64/alternatives: use subsections for replacement sequences") Cc: <stable@vger.kernel.org> # 5.8.x Link: https://github.com/ClangBuiltLinux/linux/issues/1347 Signed-off-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Tested-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lore.kernel.org/r/20210414000803.662534-1-nathan@kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-28  arm64: fix inline asm in load_unaligned_zeropad()  (Peter Collingbourne)
commit 185f2e5f51c2029efd9dd26cceb968a44fe053c6 upstream. The inline asm's addr operand is marked as input-only, however in the case where an exception is taken it may be modified by the BIC instruction on the exception path. Fix the problem by using a temporary register as the destination register for the BIC instruction. Signed-off-by: Peter Collingbourne <pcc@google.com> Cc: stable@vger.kernel.org Link: https://linux-review.googlesource.com/id/I84538c8a2307d567b4f45bb20b715451005f9617 Link: https://lore.kernel.org/r/20210401165110.3952103-1-pcc@google.com Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-04-16  KVM: arm64: Disable guest access to trace filter controls  (Suzuki K Poulose)
[ Upstream commit a354a64d91eec3e0f8ef0eed575b480fd75b999c ] Disable guest access to the Trace Filter control registers. We do not advertise the Trace filter feature to the guest (ID_AA64DFR0_EL1: TRACE_FILT is cleared) already, but the guest can still access the TRFCR_EL1 unless we trap it. This will also make sure that the guest cannot fiddle with the filtering controls set by a nvhe host. Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210323120647.454211-3-suzuki.poulose@arm.com Signed-off-by: Sasha Levin <sashal@kernel.org>
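The trap itself is one extra MDCR_EL2 bit; a sketch using the TTRF bit (bit 19) named in the upstream patch, with the setup function name assumed for illustration:

#define MDCR_EL2_TTRF		(1 << 19)	/* trap TRFCR_EL1 accesses */

/* Sketch: in the vcpu debug setup path, add TTRF to the trap bits. */
static void kvm_arm_setup_mdcr_el2_sketch(struct kvm_vcpu *vcpu)
{
	vcpu->arch.mdcr_el2 |= MDCR_EL2_TTRF;
}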
2021-03-20  KVM: arm64: nvhe: Save the SPE context early  (Suzuki K Poulose)
commit b96b0c5de685df82019e16826a282d53d86d112c upstream The nVHE KVM hyp drains and disables the SPE buffer, before entering the guest, as the EL1&0 translation regime is going to be loaded with that of the guest. But this operation is performed way too late, because : - The owning translation regime of the SPE buffer is transferred to EL2. (MDCR_EL2_E2PB == 0) - The guest Stage1 is loaded. Thus the flush could use the host EL1 virtual address, but use the EL2 translations instead of host EL1, for writing out any cached data. Fix this by moving the SPE buffer handling early enough. The restore path is doing the right thing. Cc: stable@vger.kernel.org # v4.19 Cc: Christoffer Dall <christoffer.dall@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by: Marc Zyngier <maz@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-03-07  arm64: Use correct ll/sc atomic constraints  (Andrew Murray)
commit 580fa1b874711d633f9b145b7777b0e83ebf3787 upstream. The A64 ISA accepts distinct (but overlapping) ranges of immediates for: * add arithmetic instructions ('I' machine constraint) * sub arithmetic instructions ('J' machine constraint) * 32-bit logical instructions ('K' machine constraint) * 64-bit logical instructions ('L' machine constraint) ... but we currently use the 'I' constraint for many atomic operations using sub or logical instructions, which is not always valid. When CONFIG_ARM64_LSE_ATOMICS is not set, this allows invalid immediates to be passed to instructions, potentially resulting in a build failure. When CONFIG_ARM64_LSE_ATOMICS is selected the out-of-line ll/sc atomics always use a register as they have no visibility of the value passed by the caller. This patch adds a constraint parameter to the ATOMIC_xx and __CMPXCHG_CASE macros so that we can pass appropriate constraints for each case, with uses updated accordingly. Unfortunately prior to GCC 8.1.0 the 'K' constraint erroneously accepted '4294967295', so we must instead force the use of a register. Signed-off-by: Andrew Murray <andrew.murray@arm.com> Signed-off-by: Will Deacon <will@kernel.org> [bwh: Backported to 4.19: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
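A heavily abridged sketch of the parameterised LL/SC macro, showing how each operation can pass the machine constraint that matches its instruction's immediate range (the real atomic_ll_sc.h macros include prefetching, the release/acquire variants and the __LL_SC_* wrappers):

#define ATOMIC_OP(op, asm_op, constraint)				\
static inline void atomic_##op##_sketch(int i, atomic_t *v)		\
{									\
	unsigned long tmp;						\
	int result;							\
									\
	asm volatile("// atomic_" #op "\n"				\
	"1:	ldxr	%w0, %2\n"					\
	"	" #asm_op "	%w0, %w0, %w3\n"			\
	"	stxr	%w1, %w0, %2\n"					\
	"	cbnz	%w1, 1b"					\
	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
	: #constraint "r" (i));						\
}

ATOMIC_OP(add, add, I)	/* add immediates use the 'I' constraint   */
ATOMIC_OP(sub, sub, J)	/* sub immediates use the 'J' constraint   */
ATOMIC_OP(or,  orr, K)	/* 32-bit logical ops use 'K'              */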
2021-03-07  arm64: cmpxchg: Use "K" instead of "L" for ll/sc immediate constraint  (Will Deacon)
commit 4230509978f2921182da4e9197964dccdbe463c3 upstream. The "L" AArch64 machine constraint, which we use for the "old" value in an LL/SC cmpxchg(), generates an immediate that is suitable for a 64-bit logical instruction. However, for cmpxchg() operations on types smaller than 64 bits, this constraint can result in an invalid instruction which is correctly rejected by GAS, such as EOR W1, W1, #0xffffffff. Whilst we could special-case the constraint based on the cmpxchg size, it's far easier to change the constraint to "K" and put up with using a register for large 64-bit immediates. For out-of-line LL/SC atomics, this is all moot anyway. Reported-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-03-07  arm64: Avoid redundant type conversions in xchg() and cmpxchg()  (Will Deacon)
commit 5ef3fe4cecdf82fdd71ce78988403963d01444d4 upstream. Our atomic instructions (either LSE atomics or LDXR/STXR sequences) natively support byte, half-word, word and double-word memory accesses, so there is no need to mask the data register prior to being stored. Signed-off-by: Will Deacon <will.deacon@arm.com> [bwh: Backported to 4.19: adjust context] Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>