path: root/arch/x86
Age / Commit message / Author
2023-09-23x86/virt: Drop unnecessary check on extended CPUID level in cpu_has_svm()Sean Christopherson
[ Upstream commit 5df8ecfe3632d5879d1f154f7aa8de441b5d1c89 ] Drop the explicit check on the extended CPUID level in cpu_has_svm(), the kernel's cached CPUID info will leave the entire SVM leaf unset if said leaf is not supported by hardware. Prior to using cached information, the check was needed to avoid false positives due to Intel's rather crazy CPUID behavior of returning the values of the maximum supported leaf if the specified leaf is unsupported. Fixes: 682a8108872f ("x86/kvm/svm: Simplify cpu_has_svm()") Link: https://lore.kernel.org/r/20230721201859.2307736-13-seanjc@google.com Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
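As an illustration of the trade-off described above, the explicit level check versus the cached lookup can be sketched as follows. This is a simplified, hypothetical sketch in kernel style, not the actual cpu_has_svm() code:

  /* Old style: guard the leaf read against Intel's out-of-range CPUID
   * behavior of returning data for the highest supported basic leaf. */
  static bool svm_supported_explicit(void)
  {
          if (cpuid_eax(0x80000000) < 0x80000001)
                  return false;                   /* no extended leaves */
          return cpuid_ecx(0x80000001) & BIT(2);  /* CPUID.80000001H:ECX.SVM */
  }

  /* New style: the cached CPUID info leaves the SVM bit clear when the
   * leaf is unsupported, so no explicit level check is needed. */
  static bool svm_supported_cached(void)
  {
          return cpu_feature_enabled(X86_FEATURE_SVM);
  }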
2023-09-23x86/speculation: Mark all Skylake CPUs as vulnerable to GDSDave Hansen
[ Upstream commit c9f4c45c8ec3f07f4f083f9750032a1ec3eab6b2 ]

The Gather Data Sampling (GDS) vulnerability is common to all Skylake
processors.  However, the "client" Skylakes* are now in this list:

  https://www.intel.com/content/www/us/en/support/articles/000022396/processors.html

which means they are no longer included for new vulnerabilities here:

  https://www.intel.com/content/www/us/en/developer/topic-technology/software-security-guidance/processors-affected-consolidated-product-cpu-model.html

or in other GDS documentation.  Thus, they were not included in the
original GDS mitigation patches.

Mark SKYLAKE and SKYLAKE_L as vulnerable to GDS to match all the other
Skylake CPUs (which include Kaby Lake).  Also group the CPUs so that the
ones that share the exact same vulnerabilities are next to each other.

Last, move SRBDS to the end of each line.  This makes it clear at a
glance that SKYLAKE_X is unique.  Of the five Skylakes, it is the only
"server" CPU and has a different implementation from the clients of the
"special register" hardware, making it immune to SRBDS.

This makes the diff much harder to read, but the resulting table is
worth it.

I very much appreciate the report from Michael Zhivich about this issue.
Despite what level of support a hardware vendor is providing, the kernel
very much needs an accurate and up-to-date list of vulnerable CPUs.
More reports like this are very welcome.

* Client Skylakes are CPUID 406E3/506E3 which is family 6, models 0x4E
  and 0x5E, aka INTEL_FAM6_SKYLAKE and INTEL_FAM6_SKYLAKE_L.

Reported-by: Michael Zhivich <mzhivich@akamai.com>
Fixes: 8974eb588283 ("x86/speculation: Add Gather Data Sampling mitigation")
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-09-23x86/APM: drop the duplicate APM_MINOR_DEV macroRandy Dunlap
[ Upstream commit 4ba2909638a29630a346d6c4907a3105409bee7d ] This source file already includes <linux/miscdevice.h>, which contains the same macro. It doesn't need to be defined here again. Fixes: 874bcd00f520 ("apm-emulation: move APM_MINOR_DEV to include/linux/miscdevice.h") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Jiri Kosina <jikos@kernel.org> Cc: x86@kernel.org Cc: Sohil Mehta <sohil.mehta@intel.com> Cc: Corentin Labbe <clabbe.montjoie@gmail.com> Reviewed-by: Sohil Mehta <sohil.mehta@intel.com> Link: https://lore.kernel.org/r/20230728011120.759-1-rdunlap@infradead.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-09-23x86/decompressor: Don't rely on upper 32 bits of GPRs being preservedArd Biesheuvel
[ Upstream commit 264b82fdb4989cf6a44a2bcd0c6ea05e8026b2ac ]

The 4-to-5 level mode switch trampoline disables long mode and paging in
order to be able to flick the LA57 bit.  According to section 3.4.1.1 of
the x86 architecture manual [0], 64-bit GPRs might not retain the upper
32 bits of their contents across such a mode switch.

Given that RBP, RBX and RSI are live at this point, preserve them on the
stack, along with the return address that might be above 4G as well.

[0] Intel® 64 and IA-32 Architectures Software Developer’s Manual,
    Volume 1: Basic Architecture

    "Because the upper 32 bits of 64-bit general-purpose registers are
    undefined in 32-bit modes, the upper 32 bits of any general-purpose
    register are not preserved when switching from 64-bit mode to a
    32-bit mode (to protected mode or compatibility mode). Software must
    not depend on these bits to maintain a value after a 64-bit to
    32-bit mode switch."

Fixes: 194a9749c73d650c ("x86/boot/compressed/64: Handle 5-level paging boot if kernel is above 4G")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20230807162720.545787-2-ardb@kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-09-23x86/boot: Annotate local functionsJiri Slaby
[ Upstream commit deff8a24e1021fb39dddf5f6bc5832e0e3a632ea ] .Lrelocated, .Lpaging_enabled, .Lno_longmode, and .Lin_pm32 are self-standing local functions, annotate them as such and preserve "no alignment". The annotations do not generate anything yet. Signed-off-by: Jiri Slaby <jslaby@suse.cz> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Cao jin <caoj.fnst@cn.fujitsu.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Kate Stewart <kstewart@linuxfoundation.org> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: linux-arch@vger.kernel.org Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wei Huang <wei@redhat.com> Cc: x86-ml <x86@kernel.org> Cc: Xiaoyao Li <xiaoyao.li@linux.intel.com> Link: https://lkml.kernel.org/r/20191011115108.12392-8-jslaby@suse.cz Stable-dep-of: 264b82fdb498 ("x86/decompressor: Don't rely on upper 32 bits of GPRs being preserved") Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-09-23x86/asm: Make more symbols localJiri Slaby
[ Upstream commit 30a2441cae7b149ff484a697bf9eb8de53240a4f ]

During the assembly cleanup patchset review, I found more symbols which
are used only locally.  So make them really local by prepending ".L" to
them.  Namely:

 - wakeup_idt is used only in realmode/rm/wakeup_asm.S.

 - in_pm32 is used only in boot/pmjump.S.

 - retint_user is used only in entry/entry_64.S, perhaps since commit
   2ec67971facc ("x86/entry/64/compat: Remove most of the fast system
   call machinery"), where entry_64_compat's caller was removed.

Drop GLOBAL from all of them too.  I do not see more candidates in the
series.

Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Acked-by: Borislav Petkov <bp@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bp@alien8.de
Cc: hpa@zytor.com
Link: https://lkml.kernel.org/r/20191011092213.31470-1-jslaby@suse.cz
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Stable-dep-of: 264b82fdb498 ("x86/decompressor: Don't rely on upper 32 bits of GPRs being preserved")
Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-09-04Merge tag 'v5.4.255' into v5.4/standard/baseBruce Ashfield
This is the 5.4.255 stable release
2023-08-30x86/fpu: Set X86_FEATURE_OSXSAVE feature after enabling OSXSAVE in CR4Feng Tang
commit 2c66ca3949dc701da7f4c9407f2140ae425683a5 upstream.

0-Day found a 34.6% regression in stress-ng's 'af-alg' test case, and
bisected it to commit b81fac906a8f ("x86/fpu: Move FPU initialization
into arch_cpu_finalize_init()"), which optimizes the FPU init order and
moves the CR4_OSXSAVE enabling into a later place:

  arch_cpu_finalize_init
      identify_boot_cpu
          identify_cpu
              generic_identify
                  get_cpu_cap --> setup cpu capability
      ...
      fpu__init_cpu
          fpu__init_cpu_xstate
              cr4_set_bits(X86_CR4_OSXSAVE);

As the FPU is not yet initialized, the CPU capability setup fails to set
X86_FEATURE_OSXSAVE.  Many security modules like
'camellia_aesni_avx_x86_64' depend on this feature and therefore fail to
load, causing the regression.

Cure this by setting the X86_FEATURE_OSXSAVE feature right after OSXSAVE
enabling.

[ tglx: Moved it into the actual BSP FPU initialization code and added a comment ]

Fixes: b81fac906a8f ("x86/fpu: Move FPU initialization into arch_cpu_finalize_init()")
Reported-by: kernel test robot <oliver.sang@intel.com>
Signed-off-by: Feng Tang <feng.tang@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/lkml/202307192135.203ac24e-oliver.sang@intel.com
Link: https://lore.kernel.org/lkml/20230823065747.92257-1-feng.tang@intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
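The shape of the cure can be sketched as follows. This is a hedged, minimal sketch, not the upstream hunk; the real change lives in the BSP FPU/xstate initialization code and carries an explanatory comment:

  static void __init fpu__init_cpu_xstate_sketch(void)
  {
          /* Enable XSAVE/XRSTOR and the XCR0 register. */
          cr4_set_bits(X86_CR4_OSXSAVE);

          /*
           * The cached capability bits were collected before the FPU was
           * initialized, so mirror the freshly enabled OSXSAVE state into
           * them right away instead of waiting for another CPUID pass.
           */
          setup_force_cpu_cap(X86_FEATURE_OSXSAVE);
  }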
2023-08-21Merge tag 'v5.4.254' into v5.4/standard/baseBruce Ashfield
This is the 5.4.254 stable release
2023-08-16x86: Move gds_ucode_mitigated() declaration to headerArnd Bergmann
commit eb3515dc99c7c85f4170b50838136b2a193f8012 upstream. The declaration got placed in the .c file of the caller, but that causes a warning for the definition: arch/x86/kernel/cpu/bugs.c:682:6: error: no previous prototype for 'gds_ucode_mitigated' [-Werror=missing-prototypes] Move it to a header where both sides can observe it instead. Fixes: 81ac7e5d74174 ("KVM: Add GDS_NO support to KVM") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Tested-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Cc: stable@kernel.org Link: https://lore.kernel.org/all/20230809130530.1913368-2-arnd%40kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-16x86/mm: Fix VDSO and VVAR placement on 5-level paging machinesKirill A. Shutemov
commit 1b8b1aa90c9c0e825b181b98b8d9e249dc395470 upstream.

Yingcong has noticed that on the 5-level paging machine, VDSO and VVAR
VMAs are placed above the 47-bit border:

  8000001a9000-8000001ad000 r--p 00000000 00:00 0    [vvar]
  8000001ad000-8000001af000 r-xp 00000000 00:00 0    [vdso]

This might confuse users who are not aware of 5-level paging and expect
all userspace addresses to be under the 47-bit border.

So far the problem has only been triggered with ASLR disabled, although
it may also occur with ASLR enabled if the layout is randomized in a
just right way.

The problem happens due to custom placement for the VMAs in the VDSO
code: vdso_addr() tries to place them above the stack and checks the
result against TASK_SIZE_MAX, which is wrong.  TASK_SIZE_MAX is set to
the 56-bit border on 5-level paging machines.  Use DEFAULT_MAP_WINDOW
instead.

Fixes: b569bab78d8d ("x86/mm: Prepare to expose larger address space to userspace")
Reported-by: Yingcong Wu <yingcong.wu@intel.com>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/20230803151609.22141-1-kirill.shutemov%40linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
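The placement check can be illustrated with a short sketch (hypothetical helper, simplified from the real vdso_addr(); the stack_top/len parameters and the gap calculation are assumptions):

  static unsigned long vdso_addr_sketch(unsigned long stack_top, unsigned long len)
  {
          unsigned long addr = PAGE_ALIGN(stack_top + PAGE_SIZE);

          /*
           * Clamp against DEFAULT_MAP_WINDOW (the 47-bit border) rather
           * than TASK_SIZE_MAX, which is the 56-bit border on 5-level
           * paging machines.
           */
          if (addr >= DEFAULT_MAP_WINDOW - len)
                  return 0;       /* fall back to the normal mmap search */

          return addr;
  }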
2023-08-16x86/cpu/amd: Enable Zenbleed fix for AMD Custom APU 0405Cristian Ciocaltea
commit 6dbef74aeb090d6bee7d64ef3fa82ae6fa53f271 upstream. Commit 522b1d69219d ("x86/cpu/amd: Add a Zenbleed fix") provided a fix for the Zen2 VZEROUPPER data corruption bug affecting a range of CPU models, but the AMD Custom APU 0405 found on SteamDeck was not listed, although it is clearly affected by the vulnerability. Add this CPU variant to the Zenbleed erratum list, in order to unconditionally enable the fallback fix until a proper microcode update is available. Fixes: 522b1d69219d ("x86/cpu/amd: Add a Zenbleed fix") Signed-off-by: Cristian Ciocaltea <cristian.ciocaltea@collabora.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230811203705.1699914-1-cristian.ciocaltea@collabora.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-16x86/pkeys: Revert a5eff7259790 ("x86/pkeys: Add PKRU value to init_fpstate")Thomas Gleixner
commit b3607269ff57fd3c9690cb25962c5e4b91a0fd3b upstream. This cannot work and it's unclear how that ever made a difference. init_fpstate.xsave.header.xfeatures is always 0 so get_xsave_addr() will always return a NULL pointer, which will prevent storing the default PKRU value in init_fpstate. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210623121451.451391598@linutronix.de Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08Merge tag 'v5.4.252' into v5.4/standard/baseBruce Ashfield
This is the 5.4.252 stable release
2023-08-08x86: fix backwards merge of GDS/SRSO bitGreg Kroah-Hartman
Stable-tree-only change.

Due to the way the GDS and SRSO patches flowed into the stable tree, it
was a 50/50 chance which bit values GDS and SRSO would end up with after
the merge.  Of course, I lost that bet, and chose the opposite of what
Linus chose in commit 64094e7e3118 ("Merge tag
'gds-for-linus-2023-08-01' of
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip")

Fix this up by switching the values to match what is now in Linus's
tree, as that is the correct value to mirror.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08x86/cpu, kvm: Add support for CPUID_80000021_EAXKim Phillips
commit 8415a74852d7c24795007ee9862d25feb519007c upstream. Add support for CPUID leaf 80000021, EAX. The majority of the features will be used in the kernel and thus a separate leaf is appropriate. Include KVM's reverse_cpuid entry because features are used by VM guests, too. [ bp: Massage commit message. ] Signed-off-by: Kim Phillips <kim.phillips@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20230124163319.2277355-2-kim.phillips@amd.com [bwh: Backported to 6.1: adjust context] Signed-off-by: Ben Hutchings <benh@debian.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08x86/bugs: Increase the x86 bugs vector size to two u32sBorislav Petkov (AMD)
Upstream commit: 0e52740ffd10c6c316837c6c128f460f1aaba1ea There was never a doubt in my mind that they would not fit into a single u32 eventually. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08x86/cpufeatures: Assign dedicated feature word for CPUID_0x8000001F[EAX]Sean Christopherson
commit fb35d30fe5b06cc24444f0405da8fbe0be5330d1 upstream. Collect the scattered SME/SEV related feature flags into a dedicated word. There are now five recognized features in CPUID.0x8000001F.EAX, with at least one more on the horizon (SEV-SNP). Using a dedicated word allows KVM to use its automagic CPUID adjustment logic when reporting the set of supported features to userspace. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Link: https://lkml.kernel.org/r/20210122204047.2860075-2-seanjc@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08x86/cpu: Add VM page flush MSR availablility as a CPUID featureTom Lendacky
commit 69372cf01290b9587d2cee8fbe161d75d55c3adc upstream. On systems that do not have hardware enforced cache coherency between encrypted and unencrypted mappings of the same physical page, the hypervisor can use the VM page flush MSR (0xc001011e) to flush the cache contents of an SEV guest page. When a small number of pages are being flushed, this can be used in place of issuing a WBINVD across all CPUs. CPUID 0x8000001f_eax[2] is used to determine if the VM page flush MSR is available. Add a CPUID feature to indicate it is supported and define the MSR. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Message-Id: <f1966379e31f9b208db5257509c4a089a87d33d0.1607620209.git.thomas.lendacky@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08x86/cpufeatures: Add SEV-ES CPU featureTom Lendacky
commit 360e7c5c4ca4fd8e627781ed42f95d58bc3bb732 upstream.

Add CPU feature detection for Secure Encrypted Virtualization with
Encrypted State.  This feature enhances SEV by also encrypting the guest
register state, making it inaccessible to the hypervisor.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20200907131613.12703-6-joro@8bytes.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08x86/mm: Use mm_alloc() in poking_init()Peter Zijlstra
commit 3f4c8211d982099be693be9aa7d6fc4607dff290 upstream. Instead of duplicating init_mm, allocate a fresh mm. The advantage is that mm_alloc() has much simpler dependencies. Additionally it makes more conceptual sense, init_mm has no (and must not have) user state to duplicate. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221025201057.816175235@infradead.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08x86/mm: fix poking_init() for Xen PV guestsJuergen Gross
commit 26ce6ec364f18d2915923bc05784084e54a5c4cc upstream.

Commit 3f4c8211d982 ("x86/mm: Use mm_alloc() in poking_init()") broke
the kernel when running as a Xen PV guest.  It seems the new address
space is never activated before being used, resulting in Xen refusing to
accept the new CR3 value (the PGD isn't pinned).

Fix that by adding the now missing call of paravirt_arch_dup_mmap() to
poking_init().  That call was previously done by dup_mm()->dup_mmap()
and it is a NOP for all cases but Xen PV, where it does the pinning of
the PGD.

Fixes: 3f4c8211d982 ("x86/mm: Use mm_alloc() in poking_init()")
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230109150922.10578-1-jgross@suse.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08x86/xen: Fix secondary processors' FPU initializationJuergen Gross
commit fe3e0a13e597c1c8617814bf9b42ab732db5c26e upstream. Moving the call of fpu__init_cpu() from cpu_init() to start_secondary() broke Xen PV guests, as those don't call start_secondary() for APs. Call fpu__init_cpu() in Xen's cpu_bringup(), which is the Xen PV replacement of start_secondary(). Fixes: b81fac906a8f ("x86/fpu: Move FPU initialization into arch_cpu_finalize_init()") Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230703130032.22916-1-jgross@suse.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08KVM: Add GDS_NO support to KVMDaniel Sneddon
commit 81ac7e5d741742d650b4ed6186c4826c1a0631a7 upstream Gather Data Sampling (GDS) is a transient execution attack using gather instructions from the AVX2 and AVX512 extensions. This attack allows malicious code to infer data that was previously stored in vector registers. Systems that are not vulnerable to GDS will set the GDS_NO bit of the IA32_ARCH_CAPABILITIES MSR. This is useful for VM guests that may think they are on vulnerable systems that are, in fact, not affected. Guests that are running on affected hosts where the mitigation is enabled are protected as if they were running on an unaffected system. On all hosts that are not affected or that are mitigated, set the GDS_NO bit. Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
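The rule above boils down to one conditional when building the capability value reported to guests. A hedged sketch, modeled loosely on the host capability computation (the wrapper function is hypothetical):

  static u64 guest_arch_caps_sketch(u64 host_caps)
  {
          /*
           * Unaffected host, or affected host with the microcode
           * mitigation active: tell the guest it needs no GDS
           * mitigation of its own.
           */
          if (!boot_cpu_has_bug(X86_BUG_GDS) || gds_ucode_mitigated())
                  host_caps |= ARCH_CAP_GDS_NO;

          return host_caps;
  }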
2023-08-08x86/speculation: Add Kconfig option for GDSDaniel Sneddon
commit 53cf5797f114ba2bd86d23a862302119848eff19 upstream Gather Data Sampling (GDS) is mitigated in microcode. However, on systems that haven't received the updated microcode, disabling AVX can act as a mitigation. Add a Kconfig option that uses the microcode mitigation if available and disables AVX otherwise. Setting this option has no effect on systems not affected by GDS. This is the equivalent of setting gather_data_sampling=force. Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08x86/speculation: Add force option to GDS mitigationDaniel Sneddon
commit 553a5c03e90a6087e88f8ff878335ef0621536fb upstream

The Gather Data Sampling (GDS) vulnerability allows malicious software
to infer stale data previously stored in vector registers.  This may
include sensitive data such as cryptographic keys.

GDS is mitigated in microcode, and systems with up-to-date microcode are
protected by default.  However, any affected system that is running with
older microcode will still be vulnerable to GDS attacks.

Since the gather instructions used by the attacker are part of the AVX2
and AVX512 extensions, disabling these extensions prevents gather
instructions from being executed, thereby protecting the system from
GDS.  Disabling AVX2 alone would be sufficient, but there is no way to
do that with such granularity: clearing XCR0[2] disables all of AVX,
with no option to disable only AVX2.

Add a kernel parameter gather_data_sampling=force that will enable the
microcode mitigation if available, otherwise it will disable AVX on
affected systems.  This option will be ignored if cmdline
mitigations=off.

This is a *big* hammer.  It is known to break buggy userspace that uses
incomplete, buggy AVX enumeration.  Unfortunately, such userspace does
exist in the wild:

  https://www.mail-archive.com/bug-coreutils@gnu.org/msg33046.html

[ dhansen: add some more ominous warnings about disabling AVX ]

Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
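A rough sketch of how such a command-line switch is typically wired up with early_param(). The enum and handler below approximate the upstream code rather than quote it, and omit the mitigations=off interaction:

  enum gds_mitigations {
          GDS_MITIGATION_OFF,
          GDS_MITIGATION_FULL,    /* microcode mitigation (default) */
          GDS_MITIGATION_FORCE,   /* disable AVX if microcode is missing */
  };

  static enum gds_mitigations gds_mitigation __ro_after_init = GDS_MITIGATION_FULL;

  static int __init gds_parse_cmdline(char *str)
  {
          if (!str)
                  return -EINVAL;

          if (!strcmp(str, "off"))
                  gds_mitigation = GDS_MITIGATION_OFF;
          else if (!strcmp(str, "force"))
                  gds_mitigation = GDS_MITIGATION_FORCE;

          return 0;
  }
  early_param("gather_data_sampling", gds_parse_cmdline);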
2023-08-08x86/speculation: Add Gather Data Sampling mitigationDaniel Sneddon
commit 8974eb588283b7d44a7c91fa09fcbaf380339f3a upstream Gather Data Sampling (GDS) is a hardware vulnerability which allows unprivileged speculative access to data which was previously stored in vector registers. Intel processors that support AVX2 and AVX512 have gather instructions that fetch non-contiguous data elements from memory. On vulnerable hardware, when a gather instruction is transiently executed and encounters a fault, stale data from architectural or internal vector registers may get transiently stored to the destination vector register allowing an attacker to infer the stale data using typical side channel techniques like cache timing attacks. This mitigation is different from many earlier ones for two reasons. First, it is enabled by default and a bit must be set to *DISABLE* it. This is the opposite of normal mitigation polarity. This means GDS can be mitigated simply by updating microcode and leaving the new control bit alone. Second, GDS has a "lock" bit. This lock bit is there because the mitigation affects the hardware security features KeyLocker and SGX. It needs to be enabled and *STAY* enabled for these features to be mitigated against GDS. The mitigation is enabled in the microcode by default. Disable it by setting gather_data_sampling=off or by disabling all mitigations with mitigations=off. The mitigation status can be checked by reading: /sys/devices/system/cpu/vulnerabilities/gather_data_sampling Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08x86/fpu: Move FPU initialization into arch_cpu_finalize_init()Thomas Gleixner
commit b81fac906a8f9e682e513ddd95697ec7a20878d4 upstream Initializing the FPU during the early boot process is a pointless exercise. Early boot is convoluted and fragile enough. Nothing requires that the FPU is set up early. It has to be initialized before fork_init() because the task_struct size depends on the FPU register buffer size. Move the initialization to arch_cpu_finalize_init() which is the perfect place to do so. No functional change. This allows to remove quite some of the custom early command line parsing, but that's subject to the next installment. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230613224545.902376621@linutronix.de Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08x86/fpu: Mark init functions __initThomas Gleixner
commit 1703db2b90c91b2eb2d699519fc505fe431dde0e upstream No point in keeping them around. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230613224545.841685728@linutronix.de Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08x86/fpu: Remove cpuinfo argument from init functionsThomas Gleixner
commit 1f34bb2a24643e0087652d81078e4f616562738d upstream Nothing in the call chain requires it Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230613224545.783704297@linutronix.de Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08init, x86: Move mem_encrypt_init() into arch_cpu_finalize_init()Thomas Gleixner
commit 439e17576eb47f26b78c5bbc72e344d4206d2327 upstream Invoke the X86ism mem_encrypt_init() from X86 arch_cpu_finalize_init() and remove the weak fallback from the core code. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230613224545.670360645@linutronix.de Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08x86/cpu: Switch to arch_cpu_finalize_init()Thomas Gleixner
commit 7c7077a72674402654f3291354720cd73cdf649e upstream check_bugs() is a dumping ground for finalizing the CPU bringup. Only parts of it has to do with actual CPU bugs. Split it apart into arch_cpu_finalize_init() and cpu_select_mitigations(). Fixup the bogus 32bit comments while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230613224545.019583869@linutronix.de Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-27Merge tag 'v5.4.251' into v5.4/standard/baseBruce Ashfield
This is the 5.4.251 stable release
2023-07-27x86/resctrl: Only show tasks' pid in current pid namespaceShawn Wang
[ Upstream commit 2997d94b5dd0e8b10076f5e0b6f18410c73e28bd ] When writing a task id to the "tasks" file in an rdtgroup, rdtgroup_tasks_write() treats the pid as a number in the current pid namespace. But when reading the "tasks" file, rdtgroup_tasks_show() shows the list of global pids from the init namespace, which is confusing and incorrect. To be more robust, let the "tasks" file only show pids in the current pid namespace. Fixes: e02737d5b826 ("x86/intel_rdt: Add tasks files") Signed-off-by: Shawn Wang <shawnwang@linux.alibaba.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Reinette Chatre <reinette.chatre@intel.com> Acked-by: Fenghua Yu <fenghua.yu@intel.com> Tested-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/all/20230116071246.97717-1-shawnwang@linux.alibaba.com/ Signed-off-by: Sasha Levin <sashal@kernel.org>
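The namespace-aware direction can be sketched like this (hypothetical helper, not the actual resctrl code):

  static void show_one_rdt_task(struct seq_file *s, struct task_struct *t)
  {
          /* 0 if the task is not visible in the reader's pid namespace. */
          pid_t pid = task_pid_vnr(t);

          if (pid)
                  seq_printf(s, "%d\n", pid);
  }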
2023-07-27x86/resctrl: Use is_closid_match() in more placesJames Morse
[ Upstream commit e6b2fac36fcc0b73cbef063d700a9841850e37a0 ] rdtgroup_tasks_assigned() and show_rdt_tasks() loop over threads testing for a CTRL/MON group match by closid/rmid with the provided rdtgrp. Further down the file are helpers to do this, move these further up and make use of them here. These helpers additionally check for alloc/mon capable. This is harmless as rdtgroup_mkdir() tests these capable flags before allowing the config directories to be created. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lkml.kernel.org/r/20200708163929.2783-7-james.morse@arm.com Stable-dep-of: 2997d94b5dd0 ("x86/resctrl: Only show tasks' pid in current pid namespace") Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-07-27x86/smp: Use dedicated cache-line for mwait_play_dead()Thomas Gleixner
commit f9c9987bf52f4e42e940ae217333ebb5a4c3b506 upstream. Monitoring idletask::thread_info::flags in mwait_play_dead() has been an obvious choice as all what is needed is a cache line which is not written by other CPUs. But there is a use case where a "dead" CPU needs to be brought out of MWAIT: kexec(). This is required as kexec() can overwrite text, pagetables, stacks and the monitored cacheline of the original kernel. The latter causes MWAIT to resume execution which obviously causes havoc on the kexec kernel which results usually in triple faults. Use a dedicated per CPU storage to prepare for that. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230615193330.434553750@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
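The idea is a per-CPU, cacheline-aligned monitor target that a kexec'd kernel can later poke. A minimal sketch, simplified from the actual implementation:

  struct mwait_cpu_dead {
          unsigned int    control;
          unsigned int    status;
  };

  static DEFINE_PER_CPU_ALIGNED(struct mwait_cpu_dead, mwait_cpu_dead);

  static void mwait_play_dead_sketch(void)
  {
          struct mwait_cpu_dead *md = this_cpu_ptr(&mwait_cpu_dead);

          while (1) {
                  /* Arm MONITOR on storage the new kernel knows about. */
                  __monitor(md, 0, 0);
                  mb();
                  __mwait(0, 0);
          }
  }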
2023-07-26Merge tag 'v5.4.250' into v5.4/standard/baseBruce Ashfield
This is the 5.4.250 stable release
2023-07-24x86/cpu/amd: Add a Zenbleed fixBorislav Petkov (AMD)
Upstream commit: 522b1d69219d8f083173819fde04f994aa051a98 Add a fix for the Zen2 VZEROUPPER data corruption bug where under certain circumstances executing VZEROUPPER can cause register corruption or leak data. The optimal fix is through microcode but in the case the proper microcode revision has not been applied, enable a fallback fix using a chicken bit. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
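The fallback amounts to setting a chicken bit when fixed microcode is not loaded. A rough sketch; the constant names are as recalled from the upstream fix and the microcode-revision helper is purely hypothetical:

  static void zenbleed_check_sketch(struct cpuinfo_x86 *c)
  {
          if (cpu_has(c, X86_FEATURE_HYPERVISOR))
                  return;         /* the host has to apply the fix */

          if (!cpu_has(c, X86_FEATURE_AVX))
                  return;         /* no AVX, no VZEROUPPER exposure */

          /* No fixed microcode: fall back to the chicken bit. */
          if (!zen2_has_fixed_microcode(c))       /* hypothetical helper */
                  msr_set_bit(MSR_AMD64_DE_CFG,
                              MSR_AMD64_DE_CFG_ZEN2_FP_BACKUP_FIX_BIT);
  }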
2023-07-24x86/cpu/amd: Move the errata checking functionality upBorislav Petkov (AMD)
Upstream commit: 8b6f687743dacce83dbb0c7cfacf88bab00f808a Avoid new and remove old forward declarations. No functional changes. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-24x86/microcode/AMD: Load late on both threads tooBorislav Petkov (AMD)
commit a32b0f0db3f396f1c9be2fe621e77c09ec3d8e7d upstream. Do the same as early loading - load on both threads. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: <stable@kernel.org> Link: https://lore.kernel.org/r/20230605141332.25948-1-bp@alien8.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-03Merge tag 'v5.4.249' into v5.4/standard/baseBruce Ashfield
This is the 5.4.249 stable release
2023-06-28x86/apic: Fix kernel panic when booting with intremap=off and x2apic_physDheeraj Kumar Srivastava
[ Upstream commit 85d38d5810e285d5aec7fb5283107d1da70c12a9 ]

When booting with "intremap=off" and "x2apic_phys" on the kernel command
line, the physical x2APIC driver ends up being used even when x2APIC
mode is disabled ("intremap=off" disables x2APIC mode).

This happens because the first compound condition check in
x2apic_phys_probe() is false due to x2apic_mode == 0 and so the
following one returns true after default_acpi_madt_oem_check() having
already selected the physical x2APIC driver.

This results in the following panic:

  kernel BUG at arch/x86/kernel/apic/io_apic.c:2409!
  invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.4.0-rc2-ver4.1rc2 #2
  Hardware name: Dell Inc. PowerEdge R6515/07PXPY, BIOS 2.3.6 07/06/2021
  RIP: 0010:setup_IO_APIC+0x9c/0xaf0
  Call Trace:
   <TASK>
   ? native_read_msr
   apic_intr_mode_init
   x86_late_time_init
   start_kernel
   x86_64_start_reservations
   x86_64_start_kernel
   secondary_startup_64_no_verify
   </TASK>

which is:

  setup_IO_APIC:
    apic_printk(APIC_VERBOSE, "ENABLING IO-APIC IRQs\n");
    for_each_ioapic(ioapic)
      BUG_ON(mp_irqdomain_create(ioapic));

Return 0 to denote that x2APIC has not been enabled when probing the
physical x2APIC driver.

[ bp: Massage commit message heavily. ]

Fixes: 9ebd680bd029 ("x86, apic: Use probe routines to simplify apic selection")
Signed-off-by: Dheeraj Kumar Srivastava <dheerajkumar.srivastava@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>
Reviewed-by: Cyrill Gorcunov <gorcunov@gmail.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230616212236.1389-1-dheerajkumar.srivastava@amd.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
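The corrected probe logic described above can be sketched as follows (simplified; close in spirit to, but not a quote of, the upstream function):

  static int x2apic_phys_probe(void)
  {
          /* x2APIC mode is off (e.g. intremap=off): never claim this driver. */
          if (!x2apic_mode)
                  return 0;

          if (x2apic_phys || x2apic_fadt_phys())
                  return 1;

          return apic == &apic_x2apic_phys;
  }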
2023-06-28x86/mm: Avoid using set_pgd() outside of real PGD pagesLee Jones
commit d082d48737c75d2b3cc1f972b8c8674c25131534 upstream. KPTI keeps around two PGDs: one for userspace and another for the kernel. Among other things, set_pgd() contains infrastructure to ensure that updates to the kernel PGD are reflected in the user PGD as well. One side-effect of this is that set_pgd() expects to be passed whole pages. Unfortunately, init_trampoline_kaslr() passes in a single entry: 'trampoline_pgd_entry'. When KPTI is on, set_pgd() will update 'trampoline_pgd_entry' (an 8-Byte globally stored [.bss] variable) and will then proceed to replicate that value into the non-existent neighboring user page (located +4k away), leading to the corruption of other global [.bss] stored variables. Fix it by directly assigning 'trampoline_pgd_entry' and avoiding set_pgd(). [ dhansen: tweak subject and changelog ] Fixes: 0925dda5962e ("x86/mm/KASLR: Use only one PUD entry for real mode trampoline") Suggested-by: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Lee Jones <lee@kernel.org> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/all/20230614163859.924309-1-lee@kernel.org/g Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
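A hedged sketch of the fix's direction (simplified; assumes a pud_page_tramp table prepared by the caller, and elides the p4d-level variant):

  static void __init init_trampoline_kaslr_sketch(pud_t *pud_page_tramp)
  {
          /*
           * trampoline_pgd_entry is a lone pgd_t in .bss, not a whole PGD
           * page, so set_pgd(), which mirrors kernel entries into the KPTI
           * user PGD one page away, must not be used on it.
           */
          trampoline_pgd_entry = __pgd(_KERNPG_TABLE | __pa(pud_page_tramp));
  }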
2023-06-28x86/purgatory: remove PGO flagsRicardo Ribalda
commit 97b6b9cbba40a21c1d9a344d5c1991f8cfbf136e upstream. If profile-guided optimization is enabled, the purgatory ends up with multiple .text sections. This is not supported by kexec and crashes the system. Link: https://lkml.kernel.org/r/20230321-kexec_clang16-v7-2-b05c520b7296@chromium.org Fixes: 930457057abe ("kernel/kexec_file.c: split up __kexec_load_puragory") Signed-off-by: Ricardo Ribalda <ribalda@chromium.org> Cc: <stable@vger.kernel.org> Cc: Albert Ou <aou@eecs.berkeley.edu> Cc: Baoquan He <bhe@redhat.com> Cc: Borislav Petkov (AMD) <bp@alien8.de> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Dave Young <dyoung@redhat.com> Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Palmer Dabbelt <palmer@rivosinc.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Philipp Rudo <prudo@redhat.com> Cc: Ross Zwisler <zwisler@google.com> Cc: Simon Horman <horms@kernel.org> Cc: Steven Rostedt (Google) <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tom Rix <trix@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ricardo Ribalda Delgado <ribalda@chromium.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-06-12Merge tag 'v5.4.246' into v5.4/standard/baseBruce Ashfield
This is the 5.4.246 stable release
2023-06-09treewide: Remove uninitialized_var() usageKees Cook
commit 3f649ab728cda8038259d8f14492fe400fbab911 upstream.

Using uninitialized_var() is dangerous as it papers over real bugs[1]
(or can in the future), and suppresses unrelated compiler warnings
(e.g. "unused variable").  If the compiler thinks it is uninitialized,
either simply initialize the variable or make compiler changes.

In preparation for removing[2] the[3] macro[4], remove all remaining
needless uses with the following script:

  git grep '\buninitialized_var\b' | cut -d: -f1 | sort -u | \
          xargs perl -pi -e \
                  's/\buninitialized_var\(([^\)]+)\)/\1/g;
                   s:\s*/\* (GCC be quiet|to make compiler happy) \*/$::g;'

drivers/video/fbdev/riva/riva_hw.c was manually tweaked to avoid
pathological white-space.

No outstanding warnings were found building allmodconfig with GCC 9.3.0
for x86_64, i386, arm64, arm, powerpc, powerpc64le, s390x, mips,
sparc64, alpha, and m68k.

[1] https://lore.kernel.org/lkml/20200603174714.192027-1-glider@google.com/
[2] https://lore.kernel.org/lkml/CA+55aFw+Vbj0i=1TGqCR5vQkCzWJ0QxK6CernOU6eedsudAixw@mail.gmail.com/
[3] https://lore.kernel.org/lkml/CA+55aFwgbgqhbp1fkxvRKEpzyR5J8n1vKT1VZdz9knmPuXhOeg@mail.gmail.com/
[4] https://lore.kernel.org/lkml/CA+55aFz2500WfbKXAx8s67wrm9=yVJu65TpLgN_ybYNv0VEOKA@mail.gmail.com/

Reviewed-by: Leon Romanovsky <leonro@mellanox.com> # drivers/infiniband and mlx4/mlx5
Acked-by: Jason Gunthorpe <jgg@mellanox.com> # IB
Acked-by: Kalle Valo <kvalo@codeaurora.org> # wireless drivers
Reviewed-by: Chao Yu <yuchao0@huawei.com> # erofs
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-06-09x86/boot: Wrap literal addresses in absolute_pointer()Kees Cook
commit aeb84412037b89e06f45e382f044da6f200e12f8 upstream.

GCC 11 (incorrectly[1]) assumes that literal values cast to (void *)
should be treated like a NULL pointer with an offset, and raises
diagnostics when doing bounds checking under -Warray-bounds.  GCC 12 got
"smarter" about finding these:

  In function 'rdfs8',
      inlined from 'vga_recalc_vertical' at /srv/code/arch/x86/boot/video-mode.c:124:29,
      inlined from 'set_mode' at /srv/code/arch/x86/boot/video-mode.c:163:3:
  /srv/code/arch/x86/boot/boot.h:114:9: warning: array subscript 0 is outside array bounds of 'u8[0]' {aka 'unsigned char[]'} [-Warray-bounds]
    114 |         asm volatile("movb %%fs:%1,%0" : "=q" (v) : "m" (*(u8 *)addr));
        |         ^~~

This has been solved in other places[2] already by using the recently
added absolute_pointer() macro.  Do the same here.

[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99578
[2] https://lore.kernel.org/all/20210912160149.2227137-1-linux@roeck-us.net/

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Link: https://lore.kernel.org/r/20220227195918.705219-1-keescook@chromium.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
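The resulting accessor looks roughly like this (a sketch of the pattern; the real helpers live in arch/x86/boot/boot.h):

  static inline u8 rdfs8(addr_t addr)
  {
          u8 v;

          /* absolute_pointer() hides the literal address from GCC's bounds checks. */
          asm volatile("movb %%fs:%1,%0"
                       : "=q" (v)
                       : "m" (*(u8 *)absolute_pointer(addr)));
          return v;
  }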
2023-06-05Merge tag 'v5.4.244' into v5.4/standard/baseBruce Ashfield
This is the 5.4.244 stable release
2023-05-30x86/show_trace_log_lvl: Ensure stack pointer is aligned, againVernon Lovejoy
commit 2e4be0d011f21593c6b316806779ba1eba2cd7e0 upstream.

The commit e335bb51cc15 ("x86/unwind: Ensure stack pointer is aligned")
tried to align the stack pointer in show_trace_log_lvl(), otherwise the
"stack < stack_info.end" check can't guarantee that the last read does
not go past the end of the stack.

However, we have the same problem with the initial value of the stack
pointer, it can also be unaligned.  So without this patch this trivial
kernel module

  #include <linux/module.h>

  static int init(void)
  {
          asm volatile("sub $0x4,%rsp");
          dump_stack();
          asm volatile("add $0x4,%rsp");

          return -EAGAIN;
  }

  module_init(init);
  MODULE_LICENSE("GPL");

crashes the kernel.

Fixes: e335bb51cc15 ("x86/unwind: Ensure stack pointer is aligned")
Signed-off-by: Vernon Lovejoy <vlovejoy@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Link: https://lore.kernel.org/r/20230512104232.GA10227@redhat.com
Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-05-30x86/topology: Fix erroneous smp_num_siblings on Intel Hybrid platformsZhang Rui
commit edc0a2b5957652f4685ef3516f519f84807087db upstream.

Traditionally, all CPUs in a system have identical numbers of SMT
siblings.  That changes with hybrid processors where some logical CPUs
have a sibling and others have none.

Today, the CPU boot code sets the global variable smp_num_siblings when
every CPU thread is brought up.  The last thread to boot will overwrite
it with the number of siblings of *that* thread.  That last thread to
boot will "win".  If the thread is a Pcore, smp_num_siblings == 2.  If
it is an Ecore, smp_num_siblings == 1.

smp_num_siblings describes if the *system* supports SMT.  It should
specify the maximum number of SMT threads among all cores.

Ensure that smp_num_siblings represents the system-wide maximum number
of siblings by always increasing its value.  Never allow it to decrease.

On MeteorLake-P platform, this fixes a problem that the Ecore CPUs are
not updated in any cpu sibling map because the system is treated as an
UP system when probing Ecore CPUs.

Below shows part of the CPU topology information before and after the
fix, for both Pcore and Ecore CPU (cpu0 is Pcore, cpu 12 is Ecore).

  ...
  -/sys/devices/system/cpu/cpu0/topology/package_cpus:000fff
  -/sys/devices/system/cpu/cpu0/topology/package_cpus_list:0-11
  +/sys/devices/system/cpu/cpu0/topology/package_cpus:3fffff
  +/sys/devices/system/cpu/cpu0/topology/package_cpus_list:0-21
  ...
  -/sys/devices/system/cpu/cpu12/topology/package_cpus:001000
  -/sys/devices/system/cpu/cpu12/topology/package_cpus_list:12
  +/sys/devices/system/cpu/cpu12/topology/package_cpus:3fffff
  +/sys/devices/system/cpu/cpu12/topology/package_cpus_list:0-21

Notice that the "before" 'package_cpus_list' has only one CPU.  This
means that userspace tools like lscpu will see a little laptop like an
11-socket system:

  -Core(s) per socket:  1
  -Socket(s):           11
  +Core(s) per socket:  16
  +Socket(s):           1

This is also expected to make the scheduler do rather wonky things too.

[ dhansen: remove CPUID detail from changelog, add end user effects ]

CC: stable@kernel.org
Fixes: bbb65d2d365e ("x86: use cpuid vector 0xb when available for detecting cpu topology")
Fixes: 95f3d39ccf7a ("x86/cpu/topology: Provide detect_extended_topology_early()")
Suggested-by: Len Brown <len.brown@intel.com>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/all/20230323015640.27906-1-rui.zhang%40intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
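The "only ever grow" rule is a one-liner in spirit; a hypothetical wrapper, not the actual topology-detection hunk:

  static void update_smp_num_siblings(unsigned int siblings_of_this_cpu)
  {
          /* Keep the system-wide maximum; a sibling-less E-core must not shrink it. */
          smp_num_siblings = max_t(unsigned int, smp_num_siblings,
                                   siblings_of_this_cpu);
  }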