path: root/arch/x86/kernel
Age | Commit message | Author
2023-08-08 | x86/cpu, kvm: Add support for CPUID_80000021_EAX | Kim Phillips
commit 8415a74852d7c24795007ee9862d25feb519007c upstream. Add support for CPUID leaf 80000021, EAX. The majority of the features will be used in the kernel and thus a separate leaf is appropriate. Include KVM's reverse_cpuid entry because features are used by VM guests, too. [ bp: Massage commit message. ] Signed-off-by: Kim Phillips <kim.phillips@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Sean Christopherson <seanjc@google.com> Link: https://lore.kernel.org/r/20230124163319.2277355-2-kim.phillips@amd.com [bwh: Backported to 6.1: adjust context] Signed-off-by: Ben Hutchings <benh@debian.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
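For orientation (not part of the patch), a minimal user-space sketch of reading the new leaf's EAX word; the leaf number 0x80000021 comes from the changelog above, everything else is illustrative:

```c
#include <stdint.h>
#include <stdio.h>

/* Read EAX of CPUID leaf 0x80000021 (sub-leaf 0). Real code should first
 * check that CPUID 0x80000000 reports a maximum extended leaf >= 0x80000021. */
static uint32_t cpuid_80000021_eax(void)
{
	uint32_t eax, ebx, ecx, edx;

	__asm__ volatile("cpuid"
			 : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
			 : "a"(0x80000021u), "c"(0u));
	(void)ebx; (void)ecx; (void)edx;
	return eax;
}

int main(void)
{
	/* Each set bit corresponds to one flag in the kernel's new feature word. */
	printf("CPUID 0x80000021 EAX = 0x%08x\n", cpuid_80000021_eax());
	return 0;
}
```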
2023-08-08 | x86/cpufeatures: Assign dedicated feature word for CPUID_0x8000001F[EAX] | Sean Christopherson
commit fb35d30fe5b06cc24444f0405da8fbe0be5330d1 upstream. Collect the scattered SME/SEV related feature flags into a dedicated word. There are now five recognized features in CPUID.0x8000001F.EAX, with at least one more on the horizon (SEV-SNP). Using a dedicated word allows KVM to use its automagic CPUID adjustment logic when reporting the set of supported features to userspace. No functional change intended. Signed-off-by: Sean Christopherson <seanjc@google.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Link: https://lkml.kernel.org/r/20210122204047.2860075-2-seanjc@google.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08 | x86/cpu: Add VM page flush MSR availablility as a CPUID feature | Tom Lendacky
commit 69372cf01290b9587d2cee8fbe161d75d55c3adc upstream. On systems that do not have hardware enforced cache coherency between encrypted and unencrypted mappings of the same physical page, the hypervisor can use the VM page flush MSR (0xc001011e) to flush the cache contents of an SEV guest page. When a small number of pages are being flushed, this can be used in place of issuing a WBINVD across all CPUs. CPUID 0x8000001f_eax[2] is used to determine if the VM page flush MSR is available. Add a CPUID feature to indicate it is supported and define the MSR. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Message-Id: <f1966379e31f9b208db5257509c4a089a87d33d0.1607620209.git.thomas.lendacky@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
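A minimal sketch of the availability check the changelog describes: CPUID 0x8000001f EAX bit 2 advertises the MSR, whose number 0xc001011e is quoted above. The macro name mirrors the kernel's MSR_AMD64_* convention but is only illustrative here; actually issuing WRMSR to that MSR is privileged, hypervisor-side code.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* MSR number quoted in the changelog; macro name is illustrative. */
#define MSR_AMD64_VM_PAGE_FLUSH 0xc001011e

/* CPUID 0x8000001f EAX bit 2 advertises the VM page flush MSR. */
static bool vm_page_flush_msr_supported(void)
{
	uint32_t eax, ebx, ecx, edx;

	__asm__ volatile("cpuid"
			 : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
			 : "a"(0x8000001fu), "c"(0u));
	(void)ebx; (void)ecx; (void)edx;
	return eax & (1u << 2);
}

int main(void)
{
	printf("VM page flush MSR (0x%x): %s\n", MSR_AMD64_VM_PAGE_FLUSH,
	       vm_page_flush_msr_supported() ? "available" : "not available");
	return 0;
}
```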
2023-08-08 | x86/cpufeatures: Add SEV-ES CPU feature | Tom Lendacky
commit 360e7c5c4ca4fd8e627781ed42f95d58bc3bb732 upstream. Add CPU feature detection for Secure Encrypted Virtualization with Encrypted State. This feature enhances SEV by also encrypting the guest register state, making it in-accessible to the hypervisor. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-6-joro@8bytes.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08 | KVM: Add GDS_NO support to KVM | Daniel Sneddon
commit 81ac7e5d741742d650b4ed6186c4826c1a0631a7 upstream Gather Data Sampling (GDS) is a transient execution attack using gather instructions from the AVX2 and AVX512 extensions. This attack allows malicious code to infer data that was previously stored in vector registers. Systems that are not vulnerable to GDS will set the GDS_NO bit of the IA32_ARCH_CAPABILITIES MSR. This is useful for VM guests that may think they are on vulnerable systems that are, in fact, not affected. Guests that are running on affected hosts where the mitigation is enabled are protected as if they were running on an unaffected system. On all hosts that are not affected or that are mitigated, set the GDS_NO bit. Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
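A hedged sketch of how the reported bit could be inspected from inside a guest via the msr driver. The MSR index 0x10a is IA32_ARCH_CAPABILITIES; the GDS_NO bit position used below (bit 26) is an assumption, not something stated in the changelog, so verify it against the SDM or kernel headers.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define MSR_IA32_ARCH_CAPABILITIES 0x10a
/* Assumed bit position for GDS_NO; check the SDM / kernel headers. */
#define ARCH_CAP_GDS_NO (1ULL << 26)

int main(void)
{
	uint64_t val;
	/* Needs root and the msr driver (CONFIG_X86_MSR, 'modprobe msr'). */
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	if (fd < 0 || pread(fd, &val, sizeof(val),
			    MSR_IA32_ARCH_CAPABILITIES) != (ssize_t)sizeof(val)) {
		perror("IA32_ARCH_CAPABILITIES");
		return 1;
	}
	printf("GDS_NO: %s\n", (val & ARCH_CAP_GDS_NO) ? "set (not affected)" : "clear");
	close(fd);
	return 0;
}
```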
2023-08-08 | x86/speculation: Add Kconfig option for GDS | Daniel Sneddon
commit 53cf5797f114ba2bd86d23a862302119848eff19 upstream Gather Data Sampling (GDS) is mitigated in microcode. However, on systems that haven't received the updated microcode, disabling AVX can act as a mitigation. Add a Kconfig option that uses the microcode mitigation if available and disables AVX otherwise. Setting this option has no effect on systems not affected by GDS. This is the equivalent of setting gather_data_sampling=force. Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08 | x86/speculation: Add force option to GDS mitigation | Daniel Sneddon
commit 553a5c03e90a6087e88f8ff878335ef0621536fb upstream The Gather Data Sampling (GDS) vulnerability allows malicious software to infer stale data previously stored in vector registers. This may include sensitive data such as cryptographic keys. GDS is mitigated in microcode, and systems with up-to-date microcode are protected by default. However, any affected system that is running with older microcode will still be vulnerable to GDS attacks. Since the gather instructions used by the attacker are part of the AVX2 and AVX512 extensions, disabling these extensions prevents gather instructions from being executed, thereby mitigating the system from GDS. Disabling AVX2 is sufficient, but we don't have the granularity to do this. The XCR0[2] disables AVX, with no option to just disable AVX2. Add a kernel parameter gather_data_sampling=force that will enable the microcode mitigation if available, otherwise it will disable AVX on affected systems. This option will be ignored if cmdline mitigations=off. This is a *big* hammer. It is known to break buggy userspace that uses incomplete, buggy AVX enumeration. Unfortunately, such userspace does exist in the wild: https://www.mail-archive.com/bug-coreutils@gnu.org/msg33046.html [ dhansen: add some more ominous warnings about disabling AVX ] Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08 | x86/speculation: Add Gather Data Sampling mitigation | Daniel Sneddon
commit 8974eb588283b7d44a7c91fa09fcbaf380339f3a upstream Gather Data Sampling (GDS) is a hardware vulnerability which allows unprivileged speculative access to data which was previously stored in vector registers. Intel processors that support AVX2 and AVX512 have gather instructions that fetch non-contiguous data elements from memory. On vulnerable hardware, when a gather instruction is transiently executed and encounters a fault, stale data from architectural or internal vector registers may get transiently stored to the destination vector register allowing an attacker to infer the stale data using typical side channel techniques like cache timing attacks. This mitigation is different from many earlier ones for two reasons. First, it is enabled by default and a bit must be set to *DISABLE* it. This is the opposite of normal mitigation polarity. This means GDS can be mitigated simply by updating microcode and leaving the new control bit alone. Second, GDS has a "lock" bit. This lock bit is there because the mitigation affects the hardware security features KeyLocker and SGX. It needs to be enabled and *STAY* enabled for these features to be mitigated against GDS. The mitigation is enabled in the microcode by default. Disable it by setting gather_data_sampling=off or by disabling all mitigations with mitigations=off. The mitigation status can be checked by reading: /sys/devices/system/cpu/vulnerabilities/gather_data_sampling Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
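A minimal user-space check of the sysfs file named above:

```c
#include <stdio.h>

/* Print the GDS mitigation status from the sysfs file named in the changelog. */
int main(void)
{
	char line[256];
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/gather_data_sampling", "r");

	if (!f) {
		perror("gather_data_sampling");
		return 1;
	}
	if (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}
```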
2023-08-08 | x86/fpu: Move FPU initialization into arch_cpu_finalize_init() | Thomas Gleixner
commit b81fac906a8f9e682e513ddd95697ec7a20878d4 upstream Initializing the FPU during the early boot process is a pointless exercise. Early boot is convoluted and fragile enough. Nothing requires that the FPU is set up early. It has to be initialized before fork_init() because the task_struct size depends on the FPU register buffer size. Move the initialization to arch_cpu_finalize_init() which is the perfect place to do so. No functional change. This allows to remove quite some of the custom early command line parsing, but that's subject to the next installment. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230613224545.902376621@linutronix.de Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08 | x86/fpu: Mark init functions __init | Thomas Gleixner
commit 1703db2b90c91b2eb2d699519fc505fe431dde0e upstream No point in keeping them around. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230613224545.841685728@linutronix.de Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08 | x86/fpu: Remove cpuinfo argument from init functions | Thomas Gleixner
commit 1f34bb2a24643e0087652d81078e4f616562738d upstream Nothing in the call chain requires it Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230613224545.783704297@linutronix.de Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08 | init, x86: Move mem_encrypt_init() into arch_cpu_finalize_init() | Thomas Gleixner
commit 439e17576eb47f26b78c5bbc72e344d4206d2327 upstream Invoke the X86ism mem_encrypt_init() from X86 arch_cpu_finalize_init() and remove the weak fallback from the core code. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230613224545.670360645@linutronix.de Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-08-08 | x86/cpu: Switch to arch_cpu_finalize_init() | Thomas Gleixner
commit 7c7077a72674402654f3291354720cd73cdf649e upstream check_bugs() is a dumping ground for finalizing the CPU bringup. Only parts of it has to do with actual CPU bugs. Split it apart into arch_cpu_finalize_init() and cpu_select_mitigations(). Fixup the bogus 32bit comments while at it. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230613224545.019583869@linutronix.de Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-27 | Merge branch 'v5.4/standard/base' into v5.4/standard/preempt-rt/base | Bruce Ashfield
2023-07-27 | Merge tag 'v5.4.251' into v5.4/standard/base | Bruce Ashfield
This is the 5.4.251 stable release. [PGP-signed tag: signature made Thu 27 Jul 2023 with RSA key 647F28654894E3BD457199BE38DBBDC86092693E; gpg: Can't check signature: No public key]
2023-07-27 | x86/resctrl: Only show tasks' pid in current pid namespace | Shawn Wang
[ Upstream commit 2997d94b5dd0e8b10076f5e0b6f18410c73e28bd ] When writing a task id to the "tasks" file in an rdtgroup, rdtgroup_tasks_write() treats the pid as a number in the current pid namespace. But when reading the "tasks" file, rdtgroup_tasks_show() shows the list of global pids from the init namespace, which is confusing and incorrect. To be more robust, let the "tasks" file only show pids in the current pid namespace. Fixes: e02737d5b826 ("x86/intel_rdt: Add tasks files") Signed-off-by: Shawn Wang <shawnwang@linux.alibaba.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Acked-by: Reinette Chatre <reinette.chatre@intel.com> Acked-by: Fenghua Yu <fenghua.yu@intel.com> Tested-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lore.kernel.org/all/20230116071246.97717-1-shawnwang@linux.alibaba.com/ Signed-off-by: Sasha Levin <sashal@kernel.org>
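A kernel-context sketch of the direction of the fix, not the literal diff: translate each task's pid into the reader's pid namespace and skip tasks that are not visible there. is_closid_match()/is_rmid_match() are the internal resctrl helpers referenced in the next entry; struct rdtgroup is resctrl-internal.

```c
#include <linux/pid.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>
#include <linux/sched/signal.h>
#include <linux/seq_file.h>

/* Emit each matching task's pid as seen from the current pid namespace,
 * skipping tasks that have no mapping there (task_pid_vnr() returns 0). */
static void show_rdt_tasks_in_current_ns(struct rdtgroup *r, struct seq_file *s)
{
	struct task_struct *p, *t;
	pid_t pid;

	rcu_read_lock();
	for_each_process_thread(p, t) {
		if (!is_closid_match(t, r) && !is_rmid_match(t, r))
			continue;
		pid = task_pid_vnr(t);	/* 0 => not visible in this namespace */
		if (pid)
			seq_printf(s, "%d\n", pid);
	}
	rcu_read_unlock();
}
```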
2023-07-27 | x86/resctrl: Use is_closid_match() in more places | James Morse
[ Upstream commit e6b2fac36fcc0b73cbef063d700a9841850e37a0 ] rdtgroup_tasks_assigned() and show_rdt_tasks() loop over threads testing for a CTRL/MON group match by closid/rmid with the provided rdtgrp. Further down the file are helpers to do this, move these further up and make use of them here. These helpers additionally check for alloc/mon capable. This is harmless as rdtgroup_mkdir() tests these capable flags before allowing the config directories to be created. Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Reinette Chatre <reinette.chatre@intel.com> Link: https://lkml.kernel.org/r/20200708163929.2783-7-james.morse@arm.com Stable-dep-of: 2997d94b5dd0 ("x86/resctrl: Only show tasks' pid in current pid namespace") Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-07-27 | x86/smp: Use dedicated cache-line for mwait_play_dead() | Thomas Gleixner
commit f9c9987bf52f4e42e940ae217333ebb5a4c3b506 upstream. Monitoring idletask::thread_info::flags in mwait_play_dead() has been an obvious choice as all what is needed is a cache line which is not written by other CPUs. But there is a use case where a "dead" CPU needs to be brought out of MWAIT: kexec(). This is required as kexec() can overwrite text, pagetables, stacks and the monitored cacheline of the original kernel. The latter causes MWAIT to resume execution which obviously causes havoc on the kexec kernel which results usually in triple faults. Use a dedicated per CPU storage to prepare for that. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230615193330.434553750@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
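A kernel-context sketch of the idea, with an illustrative field layout rather than the exact upstream structure: a dedicated, cache-line-aligned per-CPU object is monitored instead of the idle task's thread_info::flags, so a later kexec kernel can own and poke that line to wake "dead" CPUs.

```c
#include <linux/percpu.h>
#include <asm/barrier.h>
#include <asm/mwait.h>

/* Per-CPU, cache-line-aligned storage dedicated to the "play dead" monitor. */
struct mwait_cpu_dead {
	unsigned int	control;
	unsigned int	status;
};

static DEFINE_PER_CPU_ALIGNED(struct mwait_cpu_dead, mwait_cpu_dead);

static void mwait_play_dead_sketch(unsigned int eax_hint)
{
	struct mwait_cpu_dead *md = this_cpu_ptr(&mwait_cpu_dead);

	while (1) {
		/* Arm the monitor on this CPU's private line, then idle in MWAIT. */
		__monitor(md, 0, 0);
		mb();
		__mwait(eax_hint, 0);
	}
}
```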
2023-07-26 | Merge branch 'v5.4/standard/base' into v5.4/standard/preempt-rt/base | Bruce Ashfield
2023-07-26 | Merge tag 'v5.4.250' into v5.4/standard/base | Bruce Ashfield
This is the 5.4.250 stable release. [PGP-signed tag: signature made Mon 24 Jul 2023 with RSA key 647F28654894E3BD457199BE38DBBDC86092693E; gpg: Can't check signature: No public key]
2023-07-24 | x86/cpu/amd: Add a Zenbleed fix | Borislav Petkov (AMD)
Upstream commit: 522b1d69219d8f083173819fde04f994aa051a98 Add a fix for the Zen2 VZEROUPPER data corruption bug where under certain circumstances executing VZEROUPPER can cause register corruption or leak data. The optimal fix is through microcode but in the case the proper microcode revision has not been applied, enable a fallback fix using a chicken bit. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-24 | x86/cpu/amd: Move the errata checking functionality up | Borislav Petkov (AMD)
Upstream commit: 8b6f687743dacce83dbb0c7cfacf88bab00f808a Avoid new and remove old forward declarations. No functional changes. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-24 | x86/microcode/AMD: Load late on both threads too | Borislav Petkov (AMD)
commit a32b0f0db3f396f1c9be2fe621e77c09ec3d8e7d upstream. Do the same as early loading - load on both threads. Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: <stable@kernel.org> Link: https://lore.kernel.org/r/20230605141332.25948-1-bp@alien8.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-07-03 | Merge branch 'v5.4/standard/base' into v5.4/standard/preempt-rt/base | Bruce Ashfield
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com> # Conflicts: # include/linux/rcupdate.h
2023-07-03 | Merge tag 'v5.4.249' into v5.4/standard/base | Bruce Ashfield
This is the 5.4.249 stable release. [PGP-signed tag: signature made Wed 28 Jun 2023 with RSA key 647F28654894E3BD457199BE38DBBDC86092693E; gpg: Can't check signature: No public key]
2023-06-28 | x86/apic: Fix kernel panic when booting with intremap=off and x2apic_phys | Dheeraj Kumar Srivastava
[ Upstream commit 85d38d5810e285d5aec7fb5283107d1da70c12a9 ] When booting with "intremap=off" and "x2apic_phys" on the kernel command line, the physical x2APIC driver ends up being used even when x2APIC mode is disabled ("intremap=off" disables x2APIC mode). This happens because the first compound condition check in x2apic_phys_probe() is false due to x2apic_mode == 0 and so the following one returns true after default_acpi_madt_oem_check() having already selected the physical x2APIC driver. This results in the following panic:

  kernel BUG at arch/x86/kernel/apic/io_apic.c:2409!
  invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.4.0-rc2-ver4.1rc2 #2
  Hardware name: Dell Inc. PowerEdge R6515/07PXPY, BIOS 2.3.6 07/06/2021
  RIP: 0010:setup_IO_APIC+0x9c/0xaf0
  Call Trace:
   <TASK>
   ? native_read_msr
   apic_intr_mode_init
   x86_late_time_init
   start_kernel
   x86_64_start_reservations
   x86_64_start_kernel
   secondary_startup_64_no_verify
   </TASK>

which is:

  setup_IO_APIC:
    apic_printk(APIC_VERBOSE, "ENABLING IO-APIC IRQs\n");
    for_each_ioapic(ioapic)
      BUG_ON(mp_irqdomain_create(ioapic));

Return 0 to denote that x2APIC has not been enabled when probing the physical x2APIC driver. [ bp: Massage commit message heavily. ] Fixes: 9ebd680bd029 ("x86, apic: Use probe routines to simplify apic selection") Signed-off-by: Dheeraj Kumar Srivastava <dheerajkumar.srivastava@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Kishon Vijay Abraham I <kvijayab@amd.com> Reviewed-by: Vasant Hegde <vasant.hegde@amd.com> Reviewed-by: Cyrill Gorcunov <gorcunov@gmail.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20230616212236.1389-1-dheerajkumar.srivastava@amd.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-06-12 | Merge branch 'v5.4/standard/base' into v5.4/standard/preempt-rt/base | Bruce Ashfield
2023-06-12 | Merge tag 'v5.4.246' into v5.4/standard/base | Bruce Ashfield
This is the 5.4.246 stable release. [PGP-signed tag: signature made Fri 09 Jun 2023 with RSA key 647F28654894E3BD457199BE38DBBDC86092693E; gpg: Can't check signature: No public key]
2023-06-09 | treewide: Remove uninitialized_var() usage | Kees Cook
commit 3f649ab728cda8038259d8f14492fe400fbab911 upstream. Using uninitialized_var() is dangerous as it papers over real bugs[1] (or can in the future), and suppresses unrelated compiler warnings (e.g. "unused variable"). If the compiler thinks it is uninitialized, either simply initialize the variable or make compiler changes. In preparation for removing[2] the[3] macro[4], remove all remaining needless uses with the following script: git grep '\buninitialized_var\b' | cut -d: -f1 | sort -u | \ xargs perl -pi -e \ 's/\buninitialized_var\(([^\)]+)\)/\1/g; s:\s*/\* (GCC be quiet|to make compiler happy) \*/$::g;' drivers/video/fbdev/riva/riva_hw.c was manually tweaked to avoid pathological white-space. No outstanding warnings were found building allmodconfig with GCC 9.3.0 for x86_64, i386, arm64, arm, powerpc, powerpc64le, s390x, mips, sparc64, alpha, and m68k. [1] https://lore.kernel.org/lkml/20200603174714.192027-1-glider@google.com/ [2] https://lore.kernel.org/lkml/CA+55aFw+Vbj0i=1TGqCR5vQkCzWJ0QxK6CernOU6eedsudAixw@mail.gmail.com/ [3] https://lore.kernel.org/lkml/CA+55aFwgbgqhbp1fkxvRKEpzyR5J8n1vKT1VZdz9knmPuXhOeg@mail.gmail.com/ [4] https://lore.kernel.org/lkml/CA+55aFz2500WfbKXAx8s67wrm9=yVJu65TpLgN_ybYNv0VEOKA@mail.gmail.com/ Reviewed-by: Leon Romanovsky <leonro@mellanox.com> # drivers/infiniband and mlx4/mlx5 Acked-by: Jason Gunthorpe <jgg@mellanox.com> # IB Acked-by: Kalle Valo <kvalo@codeaurora.org> # wireless drivers Reviewed-by: Chao Yu <yuchao0@huawei.com> # erofs Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-06-05 | Merge branch 'v5.4/standard/base' into v5.4/standard/preempt-rt/base | Bruce Ashfield
2023-06-05 | Merge tag 'v5.4.244' into v5.4/standard/base | Bruce Ashfield
This is the 5.4.244 stable release. [PGP-signed tag: signature made Tue 30 May 2023 with RSA key 647F28654894E3BD457199BE38DBBDC86092693E; gpg: Can't check signature: No public key]
2023-05-30 | x86/show_trace_log_lvl: Ensure stack pointer is aligned, again | Vernon Lovejoy
commit 2e4be0d011f21593c6b316806779ba1eba2cd7e0 upstream. The commit e335bb51cc15 ("x86/unwind: Ensure stack pointer is aligned") tried to align the stack pointer in show_trace_log_lvl(), otherwise the "stack < stack_info.end" check can't guarantee that the last read does not go past the end of the stack. However, we have the same problem with the initial value of the stack pointer, it can also be unaligned. So without this patch this trivial kernel module

  #include <linux/module.h>
  static int init(void)
  {
          asm volatile("sub $0x4,%rsp");
          dump_stack();
          asm volatile("add $0x4,%rsp");
          return -EAGAIN;
  }
  module_init(init);
  MODULE_LICENSE("GPL");

crashes the kernel. Fixes: e335bb51cc15 ("x86/unwind: Ensure stack pointer is aligned") Signed-off-by: Vernon Lovejoy <vlovejoy@redhat.com> Signed-off-by: Oleg Nesterov <oleg@redhat.com> Link: https://lore.kernel.org/r/20230512104232.GA10227@redhat.com Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-05-30 | x86/topology: Fix erroneous smp_num_siblings on Intel Hybrid platforms | Zhang Rui
commit edc0a2b5957652f4685ef3516f519f84807087db upstream. Traditionally, all CPUs in a system have identical numbers of SMT siblings. That changes with hybrid processors where some logical CPUs have a sibling and others have none. Today, the CPU boot code sets the global variable smp_num_siblings when every CPU thread is brought up. The last thread to boot will overwrite it with the number of siblings of *that* thread. That last thread to boot will "win". If the thread is a Pcore, smp_num_siblings == 2. If it is an Ecore, smp_num_siblings == 1. smp_num_siblings describes if the *system* supports SMT. It should specify the maximum number of SMT threads among all cores. Ensure that smp_num_siblings represents the system-wide maximum number of siblings by always increasing its value. Never allow it to decrease. On MeteorLake-P platform, this fixes a problem that the Ecore CPUs are not updated in any cpu sibling map because the system is treated as an UP system when probing Ecore CPUs. Below shows part of the CPU topology information before and after the fix, for both Pcore and Ecore CPU (cpu0 is Pcore, cpu 12 is Ecore).

  ...
  -/sys/devices/system/cpu/cpu0/topology/package_cpus:000fff
  -/sys/devices/system/cpu/cpu0/topology/package_cpus_list:0-11
  +/sys/devices/system/cpu/cpu0/topology/package_cpus:3fffff
  +/sys/devices/system/cpu/cpu0/topology/package_cpus_list:0-21
  ...
  -/sys/devices/system/cpu/cpu12/topology/package_cpus:001000
  -/sys/devices/system/cpu/cpu12/topology/package_cpus_list:12
  +/sys/devices/system/cpu/cpu12/topology/package_cpus:3fffff
  +/sys/devices/system/cpu/cpu12/topology/package_cpus_list:0-21

Notice that the "before" 'package_cpus_list' has only one CPU. This means that userspace tools like lscpu will see a little laptop like an 11-socket system:

  -Core(s) per socket: 1
  -Socket(s): 11
  +Core(s) per socket: 16
  +Socket(s): 1

This is also expected to make the scheduler do rather wonky things too. [ dhansen: remove CPUID detail from changelog, add end user effects ] CC: stable@kernel.org Fixes: bbb65d2d365e ("x86: use cpuid vector 0xb when available for detecting cpu topology") Fixes: 95f3d39ccf7a ("x86/cpu/topology: Provide detect_extended_topology_early()") Suggested-by: Len Brown <len.brown@intel.com> Signed-off-by: Zhang Rui <rui.zhang@intel.com> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/20230323015640.27906-1-rui.zhang%40intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
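The gist of the fix, as a kernel-context fragment; 'this_cpu_siblings' is a stand-in name for whatever the topology detection code just computed for the booting CPU:

```c
#include <linux/kernel.h>
#include <asm/smp.h>

/* During per-CPU topology detection, only ever raise the global value; a
 * late-booting Ecore with no SMT sibling must not drag it back down to 1. */
static void update_smp_num_siblings(unsigned int this_cpu_siblings)
{
	smp_num_siblings = max_t(unsigned int, smp_num_siblings, this_cpu_siblings);
}
```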
2023-05-19 | Merge branch 'v5.4/standard/base' into v5.4/standard/preempt-rt/base | Bruce Ashfield
Signed-off-by: Bruce Ashfield <bruce.ashfield@gmail.com> # Conflicts: # kernel/time/tick-sched.c # lib/debugobjects.c
2023-05-19 | Merge tag 'v5.4.243' into v5.4/standard/base | Bruce Ashfield
This is the 5.4.243 stable release. [PGP-signed tag: signature made Wed 17 May 2023 with RSA key 647F28654894E3BD457199BE38DBBDC86092693E; gpg: Can't check signature: No public key]
2023-05-17 | x86/ioapic: Don't return 0 from arch_dynirq_lower_bound() | Saurabh Sengar
[ Upstream commit 5af507bef93c09a94fb8f058213b489178f4cbe5 ] arch_dynirq_lower_bound() is invoked by the core interrupt code to retrieve the lowest possible Linux interrupt number for dynamically allocated interrupts like MSI. The x86 implementation uses this to exclude the IO/APIC GSI space. This works correctly as long as there is an IO/APIC registered, but returns 0 if not. This has been observed in VMs where the BIOS does not advertise an IO/APIC. 0 is an invalid interrupt number except for the legacy timer interrupt on x86. The return value is unchecked in the core code, so it ends up to allocate interrupt number 0 which is subsequently considered to be invalid by the caller, e.g. the MSI allocation code. The function has already a check for 0 in the case that an IO/APIC is registered, as ioapic_dynirq_base is 0 in case of device tree setups. Consolidate this and zero check for both ioapic_dynirq_base and gsi_top, which is used in the case that no IO/APIC is registered. Fixes: 3e5bedc2c258 ("x86/apic: Fix arch_dynirq_lower_bound() bug for DT enabled machines") Signed-off-by: Saurabh Sengar <ssengar@linux.microsoft.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/1679988604-20308-1-git-send-email-ssengar@linux.microsoft.com Signed-off-by: Sasha Levin <sashal@kernel.org>
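One way to express the consolidated zero check the changelog describes, shown as a hedged sketch rather than the literal upstream diff; only ioapic_dynirq_base and gsi_top, both named above, are referenced:

```c
/* Never hand back 0 as the lower bound for dynamically allocated interrupts.
 * ioapic_dynirq_base is 0 on DT setups; gsi_top is 0 when no IO/APIC is
 * registered (e.g. some VMs); 0 is only valid for the legacy timer interrupt.
 */
unsigned int arch_dynirq_lower_bound(unsigned int from)
{
	if (ioapic_dynirq_base)
		return ioapic_dynirq_base;

	return gsi_top ? gsi_top : from;
}
```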
2023-05-17 | x86/apic: Fix atomic update of offset in reserve_eilvt_offset() | Uros Bizjak
[ Upstream commit f96fb2df3eb31ede1b34b0521560967310267750 ] The detection of atomic update failure in reserve_eilvt_offset() is not correct. The value returned by atomic_cmpxchg() should be compared to the old value from the location to be updated. If these two are the same, then atomic update succeeded and "eilvt_offsets[offset]" location is updated to "new" in an atomic way. Otherwise, the atomic update failed and it should be retried with the value from "eilvt_offsets[offset]" - exactly what atomic_try_cmpxchg() does in a correct and more optimal way. Fixes: a68c439b1966c ("apic, x86: Check if EILVT APIC registers are available (AMD only)") Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20230227160917.107820-1-ubizjak@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
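The retry pattern the fix moves to, shown with C11 atomics as a runnable stand-in for the kernel's atomic_try_cmpxchg(): on failure the compare-exchange refreshes the expected value, so the loop always retries against current data. The slot semantics below are simplified, not the exact EILVT logic.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Reserve a slot for 'new', succeeding if it is free or already holds 'new'.
 * atomic_compare_exchange_weak() updates 'expected' with the value it found
 * on failure, so each retry compares against fresh data. */
static bool reserve_slot(atomic_int *slot, int new)
{
	int expected = atomic_load(slot);

	do {
		if (expected == new)
			return true;	/* already reserved with the same value */
		if (expected != 0)
			return false;	/* taken with a conflicting value */
	} while (!atomic_compare_exchange_weak(slot, &expected, new));

	return true;
}

int main(void)
{
	atomic_int slot = 0;

	/* First reservation succeeds, a conflicting one fails. */
	return (reserve_slot(&slot, 42) && !reserve_slot(&slot, 7)) ? 0 : 1;
}
```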
2023-04-21 | Merge branch 'v5.4/standard/base' into v5.4/standard/preempt-rt/base | Bruce Ashfield
2023-04-21 | Merge tag 'v5.4.241' into v5.4/standard/base | Bruce Ashfield
This is the 5.4.241 stable release. [PGP-signed tag: signature made Thu 20 Apr 2023 with RSA key 647F28654894E3BD457199BE38DBBDC86092693E; gpg: Can't check signature: No public key]
2023-04-20 | efi: sysfb_efi: Add quirk for Lenovo Yoga Book X91F/L | Hans de Goede
[ Upstream commit 5ed213dd64681f84a01ceaa82fb336cf7d59ddcf ] Another Lenovo convertable which reports a landscape resolution of 1920x1200 with a pitch of (1920 * 4) bytes, while the actual framebuffer has a resolution of 1200x1920 with a pitch of (1200 * 4) bytes. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Reviewed-by: Javier Martinez Canillas <javierm@redhat.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-03-17 | Merge branch 'v5.4/standard/base' into v5.4/standard/preempt-rt/base | Bruce Ashfield
2023-03-17 | Merge tag 'v5.4.237' into v5.4/standard/base | Bruce Ashfield
This is the 5.4.237 stable release. [PGP-signed tag: signature made Fri 17 Mar 2023 with RSA key 647F28654894E3BD457199BE38DBBDC86092693E; gpg: Can't check signature: No public key]
2023-03-17 | Merge tag 'v5.4.235' into v5.4/standard/base | Bruce Ashfield
This is the 5.4.235 stable release. [PGP-signed tag: signature made Sat 11 Mar 2023 with RSA key 647F28654894E3BD457199BE38DBBDC86092693E; gpg: Can't check signature: No public key]
2023-03-17 | x86, vmlinux.lds: Add RUNTIME_DISCARD_EXIT to generic DISCARDS | H.J. Lu
commit 84d5f77fc2ee4e010c2c037750e32f06e55224b0 upstream. In the x86 kernel, .exit.text and .exit.data sections are discarded at runtime, not by the linker. Add RUNTIME_DISCARD_EXIT to generic DISCARDS and define it in the x86 kernel linker script to keep them. The sections are added before the DISCARD directive so document here only the situation explicitly as this change doesn't have any effect on the generated kernel. Also, other architectures like ARM64 will use it too so generalize the approach with the RUNTIME_DISCARD_EXIT define. [ bp: Massage and extend commit message. ] Signed-off-by: H.J. Lu <hjl.tools@gmail.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20200326193021.255002-1-hjl.tools@gmail.com Signed-off-by: Tom Saeger <tom.saeger@oracle.com>
2023-03-17 | x86/CPU/AMD: Disable XSAVES on AMD family 0x17 | Andrew Cooper
commit b0563468eeac88ebc70559d52a0b66efc37e4e9d upstream. AMD Erratum 1386 is summarised as: XSAVES Instruction May Fail to Save XMM Registers to the Provided State Save Area This piece of accidental chronomancy causes the %xmm registers to occasionally reset back to an older value. Ignore the XSAVES feature on all AMD Zen1/2 hardware. The XSAVEC instruction (which works fine) is equivalent on affected parts. [ bp: Typos, move it into the F17h-specific function. ] Reported-by: Tavis Ormandy <taviso@gmail.com> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: <stable@kernel.org> Link: https://lore.kernel.org/r/20230307174643.1240184-1-andrew.cooper3@citrix.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
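A kernel-context sketch of the shape of the workaround; the real change lives in the family-0x17 init path in arch/x86/kernel/cpu/amd.c and may gate on more than just the family number:

```c
#include <asm/cpufeature.h>
#include <asm/processor.h>

/* Ignore XSAVES on affected Zen1/Zen2 parts; the functionally equivalent
 * XSAVEC path is used instead. */
static void zen_erratum_1386_workaround(struct cpuinfo_x86 *c)
{
	if (c->x86 == 0x17)
		clear_cpu_cap(c, X86_FEATURE_XSAVES);
}
```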
2023-03-11 | x86/resctl: fix scheduler confusion with 'current' | Linus Torvalds
commit 7fef099702527c3b2c5234a2ea6a24411485a13a upstream. The implementation of 'current' on x86 is very intentionally special: it is a very common thing to look up, and it uses 'this_cpu_read_stable()' to get the current thread pointer efficiently from per-cpu storage. And the keyword in there is 'stable': the current thread pointer never changes as far as a single thread is concerned. Even if when a thread is preempted, or moved to another CPU, or even across an explicit call 'schedule()' that thread will still have the same value for 'current'. It is, after all, the kernel base pointer to thread-local storage. That's why it's stable to begin with, but it's also why it's important enough that we have that special 'this_cpu_read_stable()' access for it. So this is all done very intentionally to allow the compiler to treat 'current' as a value that never visibly changes, so that the compiler can do CSE and combine multiple different 'current' accesses into one. However, there is obviously one very special situation when the currently running thread does actually change: inside the scheduler itself. So the scheduler code paths are special, and do not have a 'current' thread at all. Instead there are _two_ threads: the previous and the next thread - typically called 'prev' and 'next' (or prev_p/next_p) internally. So this is all actually quite straightforward and simple, and not all that complicated. Except for when you then have special code that is run in scheduler context, that code then has to be aware that 'current' isn't really a valid thing. Did you mean 'prev'? Did you mean 'next'? In fact, even if then look at the code, and you use 'current' after the new value has been assigned to the percpu variable, we have explicitly told the compiler that 'current' is magical and always stable. So the compiler is quite free to use an older (or newer) value of 'current', and the actual assignment to the percpu storage is not relevant even if it might look that way. Which is exactly what happened in the resctl code, that blithely used 'current' in '__resctrl_sched_in()' when it really wanted the new process state (as implied by the name: we're scheduling 'into' that new resctl state). And clang would end up just using the old thread pointer value at least in some configurations. This could have happened with gcc too, and purely depends on random compiler details. Clang just seems to have been more aggressive about moving the read of the per-cpu current_task pointer around. The fix is trivial: just make the resctl code adhere to the scheduler rules of using the prev/next thread pointer explicitly, instead of using 'current' in a situation where it just wasn't valid. That same code is then also used outside of the scheduler context (when a thread resctl state is explicitly changed), and then we will just pass in 'current' as that pointer, of course. There is no ambiguity in that case. The fix may be trivial, but noticing and figuring out what went wrong was not. The credit for that goes to Stephane Eranian. 
Reported-by: Stephane Eranian <eranian@google.com> Link: https://lore.kernel.org/lkml/20230303231133.1486085-1-eranian@google.com/ Link: https://lore.kernel.org/lkml/alpine.LFD.2.01.0908011214330.3304@localhost.localdomain/ Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Tony Luck <tony.luck@intel.com> Tested-by: Stephane Eranian <eranian@google.com> Tested-by: Babu Moger <babu.moger@amd.com> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
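A kernel-context sketch of the rule the fix enforces, with simplified names: scheduler-context code is handed the incoming task explicitly instead of reading 'current', which the compiler may legally resolve to a stale "stable" value.

```c
#include <linux/sched.h>

/* The task being switched IN is passed explicitly. */
static void resctrl_sched_in_sketch(struct task_struct *tsk)
{
	/* Use tsk->closid / tsk->rmid here to program PQR_ASSOC; in this
	 * context 'current' is not guaranteed to be tsk. */
}

/* From __switch_to():              resctrl_sched_in_sketch(next_p);
 * From an explicit state change:   resctrl_sched_in_sketch(current);  */
```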
2023-03-11 | x86/resctrl: Apply READ_ONCE/WRITE_ONCE to task_struct.{rmid,closid} | Valentin Schneider
commit 6d3b47ddffed70006cf4ba360eef61e9ce097d8f upstream. A CPU's current task can have its {closid, rmid} fields read locally while they are being concurrently written to from another CPU. This can happen anytime __resctrl_sched_in() races with either __rdtgroup_move_task() or rdt_move_group_tasks(). Prevent load / store tearing for those accesses by giving them the READ_ONCE() / WRITE_ONCE() treatment. Signed-off-by: Valentin Schneider <valentin.schneider@arm.com> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/9921fda88ad81afb9885b517fbe864a2bc7c35a9.1608243147.git.reinette.chatre@intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
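A kernel-context sketch of the pairing described above; 'grp_closid'/'grp_rmid' stand in for the values taken from the destination rdtgroup:

```c
#include <linux/compiler.h>
#include <linux/sched.h>

/* Writer (group move) and cross-CPU reader (context switch) both go through
 * the ONCE accessors so the u32 closid/rmid fields cannot be torn. */
static void move_task_to_group_sketch(struct task_struct *tsk,
				      u32 grp_closid, u32 grp_rmid)
{
	WRITE_ONCE(tsk->closid, grp_closid);
	WRITE_ONCE(tsk->rmid, grp_rmid);
}

static void sched_in_reader_sketch(struct task_struct *tsk)
{
	u32 closid = READ_ONCE(tsk->closid);
	u32 rmid = READ_ONCE(tsk->rmid);

	/* ... program the PQR_ASSOC MSR from closid/rmid ... */
	(void)closid;
	(void)rmid;
}
```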
2023-03-11 | firmware/efi sysfb_efi: Add quirk for Lenovo IdeaPad Duet 3 | Darrell Kavanagh
[ Upstream commit e1d447157f232c650e6f32c9fb89ff3d0207c69a ] Another Lenovo convertable which reports a landscape resolution of 1920x1200 with a pitch of (1920 * 4) bytes, while the actual framebuffer has a resolution of 1200x1920 with a pitch of (1200 * 4) bytes. Signed-off-by: Darrell Kavanagh <darrell.kavanagh@gmail.com> Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-03-11 | x86/speculation: Allow enabling STIBP with legacy IBRS | KP Singh
commit 6921ed9049bc7457f66c1596c5b78aec0dae4a9d upstream. When plain IBRS is enabled (not enhanced IBRS), the logic in spectre_v2_user_select_mitigation() determines that STIBP is not needed. The IBRS bit implicitly protects against cross-thread branch target injection. However, with legacy IBRS, the IBRS bit is cleared on returning to userspace for performance reasons which leaves userspace threads vulnerable to cross-thread branch target injection against which STIBP protects. Exclude IBRS from the spectre_v2_in_ibrs_mode() check to allow for enabling STIBP (through seccomp/prctl() by default or always-on, if selected by spectre_v2_user kernel cmdline parameter). [ bp: Massage. ] Fixes: 7c693f54c873 ("x86/speculation: Add spectre_v2=ibrs option to support Kernel IBRS") Reported-by: José Oliveira <joseloliveira11@gmail.com> Reported-by: Rodrigo Branco <rodrigo@kernelhacking.com> Signed-off-by: KP Singh <kpsingh@kernel.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20230220120127.1975241-1-kpsingh@kernel.org Link: https://lore.kernel.org/r/20230221184908.2349578-1-kpsingh@kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2023-03-11 | x86/microcode/AMD: Fix mixed steppings support | Borislav Petkov (AMD)
commit 7ff6edf4fef38ab404ee7861f257e28eaaeed35f upstream. The AMD side of the loader has always claimed to support mixed steppings. But somewhere along the way, it broke that by assuming that the cached patch blob is a single one instead of it being one per *node*. So turn it into a per-node one so that each node can stash the blob relevant for it. [ NB: Fixes tag is not really the exactly correct one but it is good enough. ] Fixes: fe055896c040 ("x86/microcode: Merge the early microcode loader") Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: <stable@kernel.org> # 2355370cd941 ("x86/microcode/amd: Remove load_microcode_amd()'s bsp parameter") Cc: <stable@kernel.org> # a5ad92134bd1 ("x86/microcode/AMD: Add a @cpu parameter to the reloading functions") Link: https://lore.kernel.org/r/20230130161709.11615-4-bp@alien8.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>