path: root/arch
Age  Commit message  Author
2019-09-23  Merge tag 'v4.18.45' into v4.18/standard/base  (Bruce Ashfield)
This is the 4.18.45 stable release

    # gpg: Signature made Sat 21 Sep 2019 12:20:15 PM EDT
    # gpg: using RSA key EBCE84042C07D1D6
    # gpg: Can't check signature: No public key
2019-09-23  Merge tag 'v4.18.44' into v4.18/standard/base  (Bruce Ashfield)
This is the 4.18.44 stable release

    # gpg: Signature made Tue 17 Sep 2019 10:35:59 AM EDT
    # gpg: using RSA key EBCE84042C07D1D6
    # gpg: Can't check signature: No public key
2019-09-21  powerpc/tm: Fix restoring FP/VMX facility incorrectly on interrupts  (Gustavo Romero)
commit a8318c13e79badb92bc6640704a64cc022a6eb97 upstream.

When in userspace and MSR FP=0 the hardware FP state is unrelated to the current process. This is extended for transactions where if tbegin is run with FP=0, the hardware checkpoint FP state will also be unrelated to the current process. Due to this, we need to ensure this hardware checkpoint is updated with the correct state before we enable FP for this process.

Unfortunately we get this wrong when returning to a process from a hardware interrupt. A process that starts a transaction with FP=0 can take an interrupt. When the kernel returns back to that process, we change to FP=1 but with hardware checkpoint FP state not updated. If this transaction is then rolled back, the FP registers now contain the wrong state.

The process looks like this:

    Userspace:                          Kernel

                                        Start userspace
                                         with MSR FP=0 TM=1
                          < -----
    ...
    tbegin
    bne
    Hardware interrupt ---- >           <do_IRQ...>
                                        ....
                                        ret_from_except
                                          restore_math()
                                            /* sees FP=0 */
                                            restore_fp()
                                              tm_active_with_fp()
                                                /* sees FP=1 (Incorrect) */
                                              load_fp_state()
                                            FP = 0 -> 1
                          < -----
    Return to userspace
      with MSR TM=1 FP=1
      with junk in the FP TM checkpoint

    TM rollback
    reads FP junk

When returning from the hardware exception, tm_active_with_fp() is incorrectly making restore_fp() call load_fp_state() which is setting FP=1.

The fix is to remove tm_active_with_fp().

tm_active_with_fp() is attempting to handle the case where FP state has been changed inside a transaction. In this case the checkpointed and transactional FP state is different and hence we must restore the FP state (ie. we can't do lazy FP restore inside a transaction that's used FP). It's safe to remove tm_active_with_fp() as this case is handled by restore_tm_state(). restore_tm_state() detects if FP has been used inside a transaction and will set load_fp and call restore_math() to ensure the FP state (checkpoint and transaction) is restored.

This is a data integrity problem for the current process as the FP registers are corrupted. It's also a security problem as the FP registers from one process may be leaked to another.

Similarly for VMX.

A simple testcase to replicate this will be posted to tools/testing/selftests/powerpc/tm/tm-poison.c

This fixes CVE-2019-15031.

Fixes: a7771176b439 ("powerpc: Don't enable FP/Altivec if not checkpointed") Cc: stable@vger.kernel.org # 4.15+ Signed-off-by: Gustavo Romero <gromero@linux.ibm.com> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190904045529.23002-2-gromero@linux.vnet.ibm.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
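For reference, a minimal sketch of the shape of this fix (not the verbatim upstream diff): tm_active_with_fp() is simply dropped from the restore_fp() condition, so lazy FP restore no longer loads live FP state over an unrelated TM checkpoint; the in-transaction case remains handled by restore_tm_state().

    static int restore_fp(struct task_struct *tsk)
    {
    	if (tsk->thread.load_fp) {	/* was: load_fp || tm_active_with_fp(tsk) */
    		load_fp_state(&current->thread.fp_state);
    		current->thread.load_fp++;
    		return 1;
    	}
    	return 0;
    }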
2019-09-21  powerpc/tm: Fix FP/VMX unavailable exceptions inside a transaction  (Gustavo Romero)
commit 8205d5d98ef7f155de211f5e2eb6ca03d95a5a60 upstream.

When we take an FP unavailable exception in a transaction we have to account for the hardware FP TM checkpointed registers being incorrect. In this case for this process we know the current and checkpointed FP registers must be the same (since FP wasn't used inside the transaction) hence in the thread_struct we copy the current FP registers to the checkpointed ones.

This copy is done in tm_reclaim_thread(). We use thread->ckpt_regs.msr to determine if FP was on when in userspace. thread->ckpt_regs.msr represents the state of the MSR when exiting userspace. This is setup by check_if_tm_restore_required().

Unfortunately there is an optimisation in giveup_all() which returns early if tsk->thread.regs->msr (via local variable `usermsr`) has FP=VEC=VSX=SPE=0. This optimisation means that check_if_tm_restore_required() is not called and hence thread->ckpt_regs.msr is not updated and will contain an old value.

This can happen if due to load_fp=255 we start a userspace process with MSR FP=1 and then we are context switched out. In this case thread->ckpt_regs.msr will contain FP=1. If that same process is then context switched in and load_fp overflows, MSR will have FP=0. If that process now enters a transaction and does an FP instruction, the FP unavailable will not update thread->ckpt_regs.msr (the bug) and MSR FP=1 will be retained in thread->ckpt_regs.msr. tm_reclaim_thread() will then not perform the required memcpy and the checkpointed FP regs in the thread struct will contain the wrong values.

The code path for this happening is:

    Userspace:                          Kernel

                                        Start userspace
                                         with MSR FP/VEC/VSX/SPE=0 TM=1
                          < -----
    ...
    tbegin
    bne
    fp instruction
    FP unavailable ---- >               fp_unavailable_tm()
                                          tm_reclaim_current()
                                            tm_reclaim_thread()
                                              giveup_all()
                                                return early since FP/VMX/VSX=0
                                                /* ckpt MSR not updated (Incorrect) */
                                              tm_reclaim()
                                                /* thread_struct ckpt FP regs contain junk (OK) */
                                              /* Sees ckpt MSR FP=1 (Incorrect) */
                                              no memcpy() performed
                                              /* thread_struct ckpt FP regs not fixed (Incorrect) */
                                          tm_recheckpoint()
                                            /* Put junk in hardware checkpoint FP regs */
                                          ....
                          < -----
    Return to userspace
      with MSR TM=1 FP=1
      with junk in the FP TM checkpoint

    TM rollback
    reads FP junk

This is a data integrity problem for the current process as the FP registers are corrupted. It's also a security problem as the FP registers from one process may be leaked to another.

This patch moves up check_if_tm_restore_required() in giveup_all() to ensure thread->ckpt_regs.msr is updated correctly.

A simple testcase to replicate this will be posted to tools/testing/selftests/powerpc/tm/tm-poison.c

Similarly for VMX.

This fixes CVE-2019-15030.

Fixes: f48e91e87e67 ("powerpc/tm: Fix FP and VMX register corruption") Cc: stable@vger.kernel.org # 4.12+ Signed-off-by: Gustavo Romero <gromero@linux.vnet.ibm.com> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20190904045529.23002-1-gromero@linux.vnet.ibm.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
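A rough sketch of the reordering the commit describes in giveup_all() (surrounding details elided; names taken from the message, exact placement per the upstream diff may differ slightly):

    void giveup_all(struct task_struct *tsk)
    {
    	unsigned long usermsr;

    	if (!tsk->thread.regs)
    		return;

    	/* Moved before the early return below, so thread->ckpt_regs.msr is
    	 * refreshed even when FP/VEC/VSX/SPE are all off. */
    	check_if_tm_restore_required(tsk);

    	usermsr = tsk->thread.regs->msr;

    	if ((usermsr & msr_all_available) == 0)
    		return;

    	/* give up FP/VMX/VSX state as before */
    }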
2019-09-21  x86/speculation/swapgs: Exclude ATOMs from speculation through SWAPGS  (Thomas Gleixner)
commit f36cf386e3fec258a341d446915862eded3e13d8 upstream. Intel provided the following information: On all current Atom processors, instructions that use a segment register value (e.g. a load or store) will not speculatively execute before the last writer of that segment retires. Thus they will not use a speculatively written segment value. That means on ATOMs there is no speculation through SWAPGS, so the SWAPGS entry paths can be excluded from the extra LFENCE if PTI is disabled. Create a separate bug flag for the through SWAPGS speculation and mark all out-of-order ATOMs and AMD/HYGON CPUs as not affected. The in-order ATOMs are excluded from the whole mitigation mess anyway. Reported-by: Andrew Cooper <andrew.cooper3@citrix.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Tyler Hicks <tyhicks@canonical.com> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> [PG: drop HYGON chunk - doesn't exist in 4.18.x kernels.] Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-21  x86/entry/64: Use JMP instead of JMPQ  (Josh Poimboeuf)
commit 64dbc122b20f75183d8822618c24f85144a5a94d upstream. Somehow the swapgs mitigation entry code patch ended up with a JMPQ instruction instead of JMP, where only the short jump is needed. Some assembler versions apparently fail to optimize JMPQ into a two-byte JMP when possible, instead always using a 7-byte JMP with relocation. For some reason that makes the entry code explode with a #GP during boot. Change it back to "JMP" as originally intended. Fixes: 18ec54fdd6d1 ("x86/speculation: Prepare entry code for Spectre v1 swapgs mitigations") Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-21  x86/speculation: Enable Spectre v1 swapgs mitigations  (Josh Poimboeuf)
commit a2059825986a1c8143fd6698774fa9d83733bb11 upstream.

The previous commit added macro calls in the entry code which mitigate the Spectre v1 swapgs issue if the X86_FEATURE_FENCE_SWAPGS_* features are enabled. Enable those features where applicable.

The mitigations may be disabled with "nospectre_v1" or "mitigations=off".

There are different features which can affect the risk of attack:

- When FSGSBASE is enabled, unprivileged users are able to place any value in GS, using the wrgsbase instruction. This means they can write a GS value which points to any value in kernel space, which can be useful with the following gadget in an interrupt/exception/NMI handler:

    	if (coming from user space)
    		swapgs
    	mov %gs:<percpu_offset>, %reg1
    	// dependent load or store based on the value of %reg
    	// for example: mov %(reg1), %reg2

  If an interrupt is coming from user space, and the entry code speculatively skips the swapgs (due to user branch mistraining), it may speculatively execute the GS-based load and a subsequent dependent load or store, exposing the kernel data to an L1 side channel leak.

  Note that, on Intel, a similar attack exists in the above gadget when coming from kernel space, if the swapgs gets speculatively executed to switch back to the user GS. On AMD, this variant isn't possible because swapgs is serializing with respect to future GS-based accesses.

  NOTE: The FSGSBASE patch set hasn't been merged yet, so the above case doesn't exist quite yet.

- When FSGSBASE is disabled, the issue is mitigated somewhat because unprivileged users must use prctl(ARCH_SET_GS) to set GS, which restricts GS values to user space addresses only. That means the gadget would need an additional step, since the target kernel address needs to be read from user space first. Something like:

    	if (coming from user space)
    		swapgs
    	mov %gs:<percpu_offset>, %reg1
    	mov (%reg1), %reg2
    	// dependent load or store based on the value of %reg2
    	// for example: mov %(reg2), %reg3

  It's difficult to audit for this gadget in all the handlers, so while there are no known instances of it, it's entirely possible that it exists somewhere (or could be introduced in the future). Without tooling to analyze all such code paths, consider it vulnerable.

  Effects of SMAP on the !FSGSBASE case:

  - If SMAP is enabled, and the CPU reports RDCL_NO (i.e., not susceptible to Meltdown), the kernel is prevented from speculatively reading user space memory, even L1 cached values. This effectively disables the !FSGSBASE attack vector.

  - If SMAP is enabled, but the CPU *is* susceptible to Meltdown, SMAP still prevents the kernel from speculatively reading user space memory. But it does *not* prevent the kernel from reading the user value from L1, if it has already been cached. This is probably only a small hurdle for an attacker to overcome.

Thanks to Dave Hansen for contributing the speculative_smap() function.

Thanks to Andrew Cooper for providing the inside scoop on whether swapgs is serializing on AMD.

[ tglx: Fixed the USER fence decision and polished the comment as suggested by Dave Hansen ]

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Dave Hansen <dave.hansen@intel.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-21  x86/speculation: Prepare entry code for Spectre v1 swapgs mitigations  (Josh Poimboeuf)
commit 18ec54fdd6d18d92025af097cd042a75cf0ea24c upstream.

Spectre v1 isn't only about array bounds checks. It can affect any conditional checks. The kernel entry code interrupt, exception, and NMI handlers all have conditional swapgs checks. Those may be problematic in the context of Spectre v1, as kernel code can speculatively run with a user GS. For example:

    	if (coming from user space)
    		swapgs
    	mov %gs:<percpu_offset>, %reg
    	mov (%reg), %reg1

When coming from user space, the CPU can speculatively skip the swapgs, and then do a speculative percpu load using the user GS value. So the user can speculatively force a read of any kernel value. If a gadget exists which uses the percpu value as an address in another load/store, then the contents of the kernel value may become visible via an L1 side channel attack.

A similar attack exists when coming from kernel space. The CPU can speculatively do the swapgs, causing the user GS to get used for the rest of the speculative window.

The mitigation is similar to a traditional Spectre v1 mitigation, except:

a) index masking isn't possible, because the index (percpu offset) isn't user-controlled; and

b) an lfence is needed in both the "from user" swapgs path and the "from kernel" non-swapgs path (because of the two attacks described above).

The user entry swapgs paths already have SWITCH_TO_KERNEL_CR3, which has a CR3 write when PTI is enabled. Since CR3 writes are serializing, the lfences can be skipped in those cases. On the other hand, the kernel entry swapgs paths don't depend on PTI.

To avoid unnecessary lfences for the user entry case, create two separate features for alternative patching:

    	X86_FEATURE_FENCE_SWAPGS_USER
    	X86_FEATURE_FENCE_SWAPGS_KERNEL

Use these features in entry code to patch in lfences where needed. The features aren't enabled yet, so there's no functional change.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Dave Hansen <dave.hansen@intel.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-21  x86/cpufeatures: Combine word 11 and 12 into a new scattered features word  (Fenghua Yu)
commit acec0ce081de0c36459eea91647faf99296445a3 upstream. It's a waste for the four X86_FEATURE_CQM_* feature bits to occupy two whole feature bits words. To better utilize feature words, re-define word 11 to host scattered features and move the four X86_FEATURE_CQM_* features into Linux defined word 11. More scattered features can be added in word 11 in the future. Rename leaf 11 in cpuid_leafs to CPUID_LNX_4 to reflect it's a Linux-defined leaf. Rename leaf 12 as CPUID_DUMMY which will be replaced by a meaningful name in the next patch when CPUID.7.1:EAX occupies word 12. Maximum number of RMID and cache occupancy scale are retrieved from CPUID.0xf.1 after scattered CQM features are enumerated. Carve out the code into a separate function. KVM doesn't support resctrl now. So it's safe to move the X86_FEATURE_CQM_* features to scattered features word 11 for KVM. Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Aaron Lewis <aaronlewis@google.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Babu Moger <babu.moger@amd.com> Cc: "Chang S. Bae" <chang.seok.bae@intel.com> Cc: "Sean J Christopherson" <sean.j.christopherson@intel.com> Cc: Frederic Weisbecker <frederic@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jann Horn <jannh@google.com> Cc: Juergen Gross <jgross@suse.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: kvm ML <kvm@vger.kernel.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Nadav Amit <namit@vmware.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Pavel Tatashin <pasha.tatashin@oracle.com> Cc: Peter Feiner <pfeiner@google.com> Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org> Cc: "Radim Krčmář" <rkrcmar@redhat.com> Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com> Cc: Ravi V Shankar <ravi.v.shankar@intel.com> Cc: Sherry Hurwitz <sherry.hurwitz@amd.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Thomas Lendacky <Thomas.Lendacky@amd.com> Cc: x86 <x86@kernel.org> Link: https://lkml.kernel.org/r/1560794416-217638-2-git-send-email-fenghua.yu@intel.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-21  x86/cpufeatures: Carve out CQM features retrieval  (Borislav Petkov)
commit 45fc56e629caa451467e7664fbd4c797c434a6c4 upstream. ... into a separate function for better readability. Split out from a patch from Fenghua Yu <fenghua.yu@intel.com> to keep the mechanical, sole code movement separate for easy review. No functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: x86@kernel.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  x86/microcode: Fix the microcode load on CPU hotplug for real  (Thomas Gleixner)
commit 5423f5ce5ca410b3646f355279e4e937d452e622 upstream. A recent change moved the microcode loader hotplug callback into the early startup phase which is running with interrupts disabled. It missed that the callbacks invoke sysfs functions which might sleep causing nice 'might sleep' splats with proper debugging enabled. Split the callbacks and only load the microcode in the early startup phase and move the sysfs handling back into the later threaded and preemptible bringup phase where it was before. Fixes: 78f4e932f776 ("x86/microcode, cpuhotplug: Add a microcode loader CPU hotplug callback") Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: stable@vger.kernel.org Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1906182228350.1766@nanos.tec.linutronix.de
2019-09-16  x86/ftrace: Fix warning and consolidate ftrace_jmp_replace() and ftrace_call_replace()  (Steven Rostedt (VMware))
commit 745cfeaac09ce359130a5451d90cb0bd4094c290 upstream.

Arnd reported the following compiler warning:

    arch/x86/kernel/ftrace.c:669:23: error: 'ftrace_jmp_replace' defined but not used [-Werror=unused-function]

The ftrace_jmp_replace() function now only has a single user and should be simply moved by that user. But looking at the code, it shows that ftrace_jmp_replace() is similar to ftrace_call_replace() except that instead of using the opcode of 0xe8 it uses 0xe9. It makes more sense to consolidate that function into one implementation that both ftrace_jmp_replace() and ftrace_call_replace() use by passing in the op code separately.

The structure in ftrace_code_union is also modified to replace the "e8" field with the more appropriate name "op".

Cc: stable@vger.kernel.org Reported-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Arnd Bergmann <arnd@arndb.de> Link: http://lkml.kernel.org/r/20190304200748.1418790-1-arnd@arndb.de Fixes: d2a68c4effd8 ("x86/ftrace: Do not call function graph from dynamic trampolines") Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
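A sketch of the consolidated shape the commit describes (one helper parameterized by the opcode, 0xe8 for call and 0xe9 for jmp, with the union field renamed from "e8" to "op"); details are abbreviated from the upstream change:

    static unsigned char *
    ftrace_text_replace(unsigned char op, unsigned long ip, unsigned long addr)
    {
    	static union ftrace_code_union calc;

    	calc.op     = op;
    	calc.offset = ftrace_calc_offset(ip + MCOUNT_INSN_SIZE, addr);

    	return calc.code;
    }

    static unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr)
    {
    	return ftrace_text_replace(0xe8, ip, addr);	/* call rel32 */
    }

    static unsigned char *ftrace_jmp_replace(unsigned long ip, unsigned long addr)
    {
    	return ftrace_text_replace(0xe9, ip, addr);	/* jmp rel32 */
    }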
2019-09-16  MIPS: Fix bounds check virt_addr_valid  (Hauke Mehrtens)
commit d6ed083f5cc621e15c15b56c3b585fd524dbcb0f upstream. The bounds check used the uninitialized variable vaddr; it should use the given parameter kaddr instead. When using the uninitialized value the compiler assumed it to be 0 and optimized this function to just return 0 in all cases. This should make the function check the range of the given address and only do the page map check in case it is in the expected range of virtual addresses. Fixes: 074a1e1167af ("MIPS: Bounds check virt_addr_valid") Cc: stable@vger.kernel.org # v4.12+ Cc: Paul Burton <paul.burton@mips.com> Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: ralf@linux-mips.org Cc: jhogan@kernel.org Cc: f4bug@amsat.org Cc: linux-mips@vger.kernel.org Cc: ysu@wavecomp.com Cc: jcristau@debian.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
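With this fix applied, the check described here (and introduced by the "MIPS: Bounds check virt_addr_valid" entry further down) looks roughly like the following sketch; the bug was that vaddr was initialised from itself rather than from the kaddr parameter:

    int __virt_addr_valid(const volatile void *kaddr)
    {
    	unsigned long vaddr = (unsigned long)kaddr;	/* was: (unsigned long)vaddr */

    	/* only unmapped kernel addresses have a struct page */
    	if ((vaddr < PAGE_OFFSET) || (vaddr >= MAP_BASE))
    		return 0;

    	return pfn_valid(PFN_DOWN(virt_to_phys(kaddr)));
    }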
2019-09-16  KVM: PPC: Book3S HV: Don't take kvm->lock around kvm_for_each_vcpu  (Paul Mackerras)
commit 5a3f49364c3ffa1107bd88f8292406e98c5d206c upstream. Currently the HV KVM code takes the kvm->lock around calls to kvm_for_each_vcpu() and kvm_get_vcpu_by_id() (which can call kvm_for_each_vcpu() internally). However, that leads to a lock order inversion problem, because these are called in contexts where the vcpu mutex is held, but the vcpu mutexes nest within kvm->lock according to Documentation/virtual/kvm/locking.txt. Hence there is a possibility of deadlock. To fix this, we simply don't take the kvm->lock mutex around these calls. This is safe because the implementations of kvm_for_each_vcpu() and kvm_get_vcpu_by_id() have been designed to be able to be called locklessly. Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Reviewed-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  KVM: PPC: Book3S: Use new mutex to synchronize access to rtas token list  (Paul Mackerras)
commit 1659e27d2bc1ef47b6d031abe01b467f18cb72d9 upstream. Currently the Book 3S KVM code uses kvm->lock to synchronize access to the kvm->arch.rtas_tokens list. Because this list is scanned inside kvmppc_rtas_hcall(), which is called with the vcpu mutex held, taking kvm->lock causes a lock inversion problem, which could lead to a deadlock. To fix this, we add a new mutex, kvm->arch.rtas_token_lock, which nests inside the vcpu mutexes, and use that instead of kvm->lock when accessing the rtas token list. This removes the lockdep_assert_held() in kvmppc_rtas_tokens_free(). At this point we don't hold the new mutex, but that is OK because kvmppc_rtas_tokens_free() is only called when the whole VM is being destroyed, and at that point nothing can be looking up a token in the list. Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Reviewed-by: Cédric Le Goater <clg@kaod.org> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
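In code terms the change amounts to something like the following sketch (the lookup helper name is hypothetical; only the lock choice is taken from the commit message):

    /* kvmppc_rtas_hcall() already runs under the vcpu mutex, so walk the
     * token list under the narrower rtas_token_lock instead of kvm->lock. */
    mutex_lock(&vcpu->kvm->arch.rtas_token_lock);
    rc = rtas_token_lookup(vcpu->kvm, token);	/* hypothetical helper */
    mutex_unlock(&vcpu->kvm->arch.rtas_token_lock);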
2019-09-16  ia64: fix build errors by exporting paddr_to_nid()  (Randy Dunlap)
commit 9a626c4a6326da4433a0d4d4a8a7d1571caf1ed3 upstream.

Fix build errors on ia64 when DISCONTIGMEM=y and NUMA=y by exporting paddr_to_nid(). Fixes these build errors:

    ERROR: "paddr_to_nid" [sound/core/snd-pcm.ko] undefined!
    ERROR: "paddr_to_nid" [net/sunrpc/sunrpc.ko] undefined!
    ERROR: "paddr_to_nid" [fs/cifs/cifs.ko] undefined!
    ERROR: "paddr_to_nid" [drivers/video/fbdev/core/fb.ko] undefined!
    ERROR: "paddr_to_nid" [drivers/usb/mon/usbmon.ko] undefined!
    ERROR: "paddr_to_nid" [drivers/usb/core/usbcore.ko] undefined!
    ERROR: "paddr_to_nid" [drivers/md/raid1.ko] undefined!
    ERROR: "paddr_to_nid" [drivers/md/dm-mod.ko] undefined!
    ERROR: "paddr_to_nid" [drivers/md/dm-crypt.ko] undefined!
    ERROR: "paddr_to_nid" [drivers/md/dm-bufio.ko] undefined!
    ERROR: "paddr_to_nid" [drivers/ide/ide-core.ko] undefined!
    ERROR: "paddr_to_nid" [drivers/ide/ide-cd_mod.ko] undefined!
    ERROR: "paddr_to_nid" [drivers/gpu/drm/drm.ko] undefined!
    ERROR: "paddr_to_nid" [drivers/char/agp/agpgart.ko] undefined!
    ERROR: "paddr_to_nid" [drivers/block/nbd.ko] undefined!
    ERROR: "paddr_to_nid" [drivers/block/loop.ko] undefined!
    ERROR: "paddr_to_nid" [drivers/block/brd.ko] undefined!
    ERROR: "paddr_to_nid" [crypto/ccm.ko] undefined!

Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: linux-ia64@vger.kernel.org Signed-off-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
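The fix itself is essentially a one-line export next to the existing function in arch/ia64 (sketch, per the subject line; the function body is unchanged):

    /* allow the modular users listed above to link against paddr_to_nid() */
    EXPORT_SYMBOL(paddr_to_nid);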
2019-09-16  x86/CPU/AMD: Don't force the CPB cap when running under a hypervisor  (Frank van der Linden)
commit 2ac44ab608705948564791ce1d15d43ba81a1e38 upstream.

For F17h AMD CPUs, the CPB capability ('Core Performance Boost') is forcibly set, because some versions of that chip incorrectly report that they do not have it. However, a hypervisor may filter out the CPB capability, for good reasons. For example, KVM currently does not emulate setting the CPB bit in MSR_K7_HWCR, and unchecked MSR access errors will be thrown when trying to set it as a guest:

    unchecked MSR access error: WRMSR to 0xc0010015 (tried to write 0x0000000001000011) at rIP: 0xffffffff890638f4 (native_write_msr+0x4/0x20)
    Call Trace:
     boost_set_msr+0x50/0x80 [acpi_cpufreq]
     cpuhp_invoke_callback+0x86/0x560
     sort_range+0x20/0x20
     cpuhp_thread_fun+0xb0/0x110
     smpboot_thread_fn+0xef/0x160
     kthread+0x113/0x130
     kthread_create_worker_on_cpu+0x70/0x70
     ret_from_fork+0x35/0x40

To avoid this issue, don't forcibly set the CPB capability for a CPU when running under a hypervisor.

Signed-off-by: Frank van der Linden <fllinden@amazon.com> Acked-by: Borislav Petkov <bp@suse.de> Cc: Andy Lutomirski <luto@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bp@alien8.de Cc: jiaxun.yang@flygoat.com Fixes: 0237199186e7 ("x86/CPU/AMD: Set the CPB bit unconditionally on F17h") Link: http://lkml.kernel.org/r/20190522221745.GA15789@dev-dsk-fllinden-2c-c1893d73.us-west-2.amazon.com [ Minor edits to the changelog. ] Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
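A minimal sketch of the guard described above, in the AMD F17h setup path (the upstream hunk may carry additional conditions):

    /* Only force-set CPB on bare metal; a hypervisor may filter the cap
     * and not emulate the MSR_K7_HWCR bit behind it. */
    if (!cpu_has(c, X86_FEATURE_HYPERVISOR))
    	set_cpu_cap(c, X86_FEATURE_CPB);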
2019-09-16  powerpc/powernv: Return for invalid IMC domain  (Anju T Sudhakar)
commit b59bd3527fe3c1939340df558d7f9d568fc9f882 upstream.

Currently init_imc_pmu() can fail either because we try to register an IMC unit with an invalid domain (i.e an IMC node not supported by the kernel) or something went wrong while registering a valid IMC unit. In both the cases kernel provides a 'Register failed' error message. For example when trace-imc node is not supported by the kernel, but skiboot advertises a trace-imc node we print:

    IMC Unknown Device type
    IMC PMU (null) Register failed

To avoid confusion just print the unknown device type message, before attempting PMU registration, so the second message isn't printed.

Fixes: 8f95faaac56c ("powerpc/powernv: Detect and create IMC device") Reported-by: Pavaman Subramaniyam <pavsubra@in.ibm.com> Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com> Reviewed-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> [mpe: Reword change log a bit] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  perf/x86/intel/ds: Fix EVENT vs. UEVENT PEBS constraints  (Stephane Eranian)
commit 23e3983a466cd540ffdd2bbc6e0c51e31934f941 upstream.

This patch fixes a bug revealed by the following commit: 6b89d4c1ae85 ("perf/x86/intel: Fix INTEL_FLAGS_EVENT_CONSTRAINT* masking"). That patch modified INTEL_FLAGS_EVENT_CONSTRAINT() to only look at the event code when matching a constraint. If code+umask were needed, then the INTEL_FLAGS_UEVENT_CONSTRAINT() macro was needed instead.

This broke with some of the constraints for PEBS events. Several of them, including the one used for cycles:p, cycles:pp, cycles:ppp fell in that category and caused the event to be rejected in PEBS mode. In other words, on some platforms a cmdline such as:

    $ perf top -e cycles:pp

would fail with -EINVAL.

This patch fixes this bug by properly using INTEL_FLAGS_UEVENT_CONSTRAINT() when needed in the PEBS constraint tables.

Reported-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Stephane Eranian <eranian@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: kan.liang@intel.com Link: http://lkml.kernel.org/r/20190521005246.423-1-eranian@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  x86/resctrl: Prevent NULL pointer dereference when local MBM is disabled  (Prarit Bhargava)
commit c7563e62a6d720aa3b068e26ddffab5f0df29263 upstream.

Booting with kernel parameter "rdt=cmt,mbmtotal,memlocal,l3cat,mba" and executing "mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl" results in a NULL pointer dereference on systems which do not have local MBM support enabled:

    BUG: kernel NULL pointer dereference, address: 0000000000000020
    PGD 0 P4D 0
    Oops: 0000 [#1] SMP PTI
    CPU: 0 PID: 722 Comm: kworker/0:3 Not tainted 5.2.0-0.rc3.git0.1.el7_UNSUPPORTED.x86_64 #2
    Workqueue: events mbm_handle_overflow
    RIP: 0010:mbm_handle_overflow+0x150/0x2b0

Only enter the bandwidth update loop if the system has local MBM enabled.

Fixes: de73f38f7680 ("x86/intel_rdt/mba_sc: Feedback loop to dynamically update mem bandwidth") Signed-off-by: Prarit Bhargava <prarit@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Reinette Chatre <reinette.chatre@intel.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20190610171544.13474-1-prarit@redhat.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
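A hedged sketch of the guard the commit describes; is_mbm_local_enabled() and update_mba_bw() exist in this code, but the exact placement inside mbm_handle_overflow() is abbreviated here:

    /* Skip the mba_MBps feedback-loop work unless local MBM monitoring is
     * actually available on this system. */
    if (is_mbm_local_enabled())
    	update_mba_bw(prgrp, d);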
2019-09-16  x86/mm/KASLR: Compute the size of the vmemmap section properly  (Baoquan He)
commit 00e5a2bbcc31d5fea853f8daeba0f06c1c88c3ff upstream. The size of the vmemmap section is hardcoded to 1 TB to support the maximum amount of system RAM in 4-level paging mode - 64 TB. However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming the size of struct page is 64 Bytes, to support 4 PB system RAM in 5-level, 64 TB of vmemmap area is needed: 4 * 1000^5 PB / 4096 bytes page size * 64 bytes per page struct / 1000^4 TB = 62.5 TB. This hardcoding may cause vmemmap to corrupt the following cpu_entry_area section, if KASLR puts vmemmap very close to it and the actual vmemmap size is bigger than 1 TB. So calculate the actual size of the vmemmap region needed and then align it up to 1 TB boundary. In 4-level paging mode it is always 1 TB. In 5-level it's adjusted on demand. The current code reserves 0.5 PB for vmemmap on 5-level. With this change, the space can be saved and thus used to increase entropy for the randomization. [ bp: Spell out how the 64 TB needed for vmemmap is computed and massage commit message. ] Fixes: eedb92abb9bb ("x86/mm: Make virtual memory layout dynamic for CONFIG_X86_5LEVEL=y") Signed-off-by: Baoquan He <bhe@redhat.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Kirill A. Shutemov <kirill@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: kirill.shutemov@linux.intel.com Cc: Peter Zijlstra <peterz@infradead.org> Cc: stable <stable@vger.kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190523025744.3756-1-bhe@redhat.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  x86/kasan: Fix boot with 5-level paging and KASAN  (Andrey Ryabinin)
commit f3176ec9420de0c385023afa3e4970129444ac2f upstream. Since commit d52888aa2753 ("x86/mm: Move LDT remap out of KASLR region on 5-level paging") kernel doesn't boot with KASAN on 5-level paging machines. The bug is actually in early_p4d_offset() and introduced by commit 12a8cc7fcf54 ("x86/kasan: Use the same shadow offset for 4- and 5-level paging") early_p4d_offset() tries to convert pgd_val(*pgd) value to a physical address. This doesn't make sense because pgd_val() already contains the physical address. It did work prior to commit d52888aa2753 because the result of "__pa_nodebug(pgd_val(*pgd)) & PTE_PFN_MASK" was the same as "pgd_val(*pgd) & PTE_PFN_MASK". __pa_nodebug() just set some high bits which were masked out by applying PTE_PFN_MASK. After the change of the PAGE_OFFSET offset in commit d52888aa2753 __pa_nodebug(pgd_val(*pgd)) started to return a value with more high bits set and PTE_PFN_MASK wasn't enough to mask out all of them. So it returns a wrong not even canonical address and crashes on the attempt to dereference it. Switch back to pgd_val() & PTE_PFN_MASK to cure the issue. Fixes: 12a8cc7fcf54 ("x86/kasan: Use the same shadow offset for 4- and 5-level paging") Reported-by: Kirill A. Shutemov <kirill@shutemov.name> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: kasan-dev@googlegroups.com Cc: stable@vger.kernel.org Cc: <stable@vger.kernel.org> Link: https://lkml.kernel.org/r/20190614143149.2227-1-aryabinin@virtuozzo.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
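An illustration of the one-line change the commit describes (variable name is illustrative; pgd_val() already yields a physical address here, so the extra __pa_nodebug() translation corrupts the high bits after the PAGE_OFFSET change):

    unsigned long p4d_pa;

    p4d_pa = __pa_nodebug(pgd_val(*pgd)) & PTE_PFN_MASK;	/* before: wrong */
    p4d_pa = pgd_val(*pgd) & PTE_PFN_MASK;			/* after: correct */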
2019-09-16  x86/microcode, cpuhotplug: Add a microcode loader CPU hotplug callback  (Borislav Petkov)
commit 78f4e932f7760d965fb1569025d1576ab77557c5 upstream.

Adric Blake reported the following warning during suspend-resume:

    Enabling non-boot CPUs ...
    x86: Booting SMP configuration:
    smpboot: Booting Node 0 Processor 1 APIC 0x2
    unchecked MSR access error: WRMSR to 0x10f (tried to write 0x0000000000000000) \
      at rIP: 0xffffffff8d267924 (native_write_msr+0x4/0x20)
    Call Trace:
     intel_set_tfa
     intel_pmu_cpu_starting
     ? x86_pmu_dead_cpu
     x86_pmu_starting_cpu
     cpuhp_invoke_callback
     ? _raw_spin_lock_irqsave
     notify_cpu_starting
     start_secondary
     secondary_startup_64
    microcode: sig=0x806ea, pf=0x80, revision=0x96
    microcode: updated to revision 0xb4, date = 2019-04-01
    CPU1 is up

The MSR in question is MSR_TFA_RTM_FORCE_ABORT and that MSR is emulated by microcode. The log above shows that the microcode loader callback happens after the PMU restoration, leading to the conjecture that because the microcode hasn't been updated yet, that MSR is not present yet, leading to the #GP.

Add a microcode loader-specific hotplug vector which comes before the PERF vectors and thus executes earlier and makes sure the MSR is present.

Fixes: 400816f60c54 ("perf/x86/intel: Implement support for TSX Force Abort") Reported-by: Adric Blake <promarbler14@gmail.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: <stable@vger.kernel.org> Cc: x86@kernel.org Link: https://bugzilla.kernel.org/show_bug.cgi?id=203637 Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  KVM: s390: fix memory slot handling for KVM_SET_USER_MEMORY_REGION  (Christian Borntraeger)
commit 19ec166c3f39fe1d3789888a74cc95544ac266d4 upstream. kselftests exposed a problem in the s390 handling for memory slots. Right now we only do proper memory slot handling for creation of new memory slots. Neither MOVE, nor DELETION are handled properly. Let us implement those. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  arm64/mm: Inhibit huge-vmap with ptdump  (Mark Rutland)
commit 7ba36eccb3f83983a651efd570b4f933ecad1b5c upstream.

The arm64 ptdump code can race with concurrent modification of the kernel page tables. At the time this was added, this was sound as:

* Modifications to leaf entries could result in stale information being logged, but would not result in a functional problem.

* Boot time modifications to non-leaf entries (e.g. freeing of initmem) were performed when the ptdump code cannot be invoked.

* At runtime, modifications to non-leaf entries only occurred in the vmalloc region, and these were strictly additive, as intermediate entries were never freed.

However, since commit 324420bf91f6 ("arm64: add support for ioremap() block mappings") it has been possible to create huge mappings in the vmalloc area at runtime, and as part of this existing intermediate levels of table may be removed and freed.

It's possible for the ptdump code to race with this, and continue to walk tables which have been freed (and potentially poisoned or reallocated). As a result of this, the ptdump code may dereference bogus addresses, which could be fatal.

Since huge-vmap is a TLB and memory optimization, we can disable it when the runtime ptdump code is in use to avoid this problem.

Cc: Catalin Marinas <catalin.marinas@arm.com> Fixes: 324420bf91f60582 ("arm64: add support for ioremap() block mappings") Acked-by: Ard Biesheuvel <ard.biesheuvel@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
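A sketch of the idea: stop advertising huge-vmap support when the runtime ptdump walker is built in. The real patch applies the same test to the PMD variant and also factors in the page size; only the ptdump condition is taken from the message above.

    int __init arch_ioremap_pud_supported(void)
    {
    	/* huge-vmap is only an optimization; trade it away when the
    	 * debugfs ptdump walker could race with table teardown. */
    	return !IS_ENABLED(CONFIG_ARM64_PTDUMP_DEBUGFS);
    }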
2019-09-16  s390/kasan: fix strncpy_from_user kasan checks  (Vasily Gorbik)
commit 01eb42afb45719cb41bb32c278e068073738899d upstream. arch/s390/lib/uaccess.c is built without kasan instrumentation. Kasan checks are performed explicitly in copy_from_user/copy_to_user functions. But since those functions could be inlined, calls from files like uaccess.c with instrumentation disabled won't generate kasan reports. This is currently the case with strncpy_from_user function which was revealed by newly added kasan test. Avoid inlining of copy_from_user/copy_to_user when the kernel is built with kasan support to make sure kasan checks are fully functional. Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
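A minimal sketch of the idea in arch/s390/include/asm/uaccess.h (assuming the usual INLINE_COPY_* convention; see the upstream patch for the exact hunk): only request inline copy routines when KASAN is off, so calls from uninstrumented files such as lib/uaccess.c still reach the out-of-line, checked copies.

    #ifndef CONFIG_KASAN
    #define INLINE_COPY_FROM_USER
    #define INLINE_COPY_TO_USER
    #endif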
2019-09-16  ARM: exynos: Fix undefined instruction during Exynos5422 resume  (Marek Szyprowski)
commit 4d8e3e951a856777720272ce27f2c738a3eeef8c upstream.

During early system resume on Exynos5422 with performance counters enabled the following kernel oops happens:

    Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP ARM
    Modules linked in:
    CPU: 0 PID: 1433 Comm: bash Tainted: G        W  5.0.0-rc5-next-20190208-00023-gd5fb5a8a13e6-dirty #5480
    Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
    ...
    Flags: nZCv  IRQs off  FIQs off  Mode SVC_32  ISA ARM  Segment none
    Control: 10c5387d  Table: 4451006a  DAC: 00000051
    Process bash (pid: 1433, stack limit = 0xb7e0e22f)
    ...
    (reset_ctrl_regs) from [<c0112ad0>] (dbg_cpu_pm_notify+0x1c/0x24)
    (dbg_cpu_pm_notify) from [<c014c840>] (notifier_call_chain+0x44/0x84)
    (notifier_call_chain) from [<c014cbc0>] (__atomic_notifier_call_chain+0x7c/0x128)
    (__atomic_notifier_call_chain) from [<c01ffaac>] (cpu_pm_notify+0x30/0x54)
    (cpu_pm_notify) from [<c055116c>] (syscore_resume+0x98/0x3f4)
    (syscore_resume) from [<c0189350>] (suspend_devices_and_enter+0x97c/0xe74)
    (suspend_devices_and_enter) from [<c0189fb8>] (pm_suspend+0x770/0xc04)
    (pm_suspend) from [<c0187740>] (state_store+0x6c/0xcc)
    (state_store) from [<c09fa698>] (kobj_attr_store+0x14/0x20)
    (kobj_attr_store) from [<c030159c>] (sysfs_kf_write+0x4c/0x50)
    (sysfs_kf_write) from [<c0300620>] (kernfs_fop_write+0xfc/0x1e0)
    (kernfs_fop_write) from [<c0282be8>] (__vfs_write+0x2c/0x160)
    (__vfs_write) from [<c0282ea4>] (vfs_write+0xa4/0x16c)
    (vfs_write) from [<c0283080>] (ksys_write+0x40/0x8c)
    (ksys_write) from [<c0101000>] (ret_fast_syscall+0x0/0x28)

Undefined instruction is triggered during CP14 reset, because bits: #16 (Secure privileged invasive debug disabled) and #17 (Secure privileged noninvasive debug disable) are set in DSCR. Those bits depend on SPNIDEN and SPIDEN lines, which are provided by the Secure JTAG hardware block. That block in turn is powered from cluster 0 (big/Eagle), but the Exynos5422 boots on cluster 1 (LITTLE/KFC).

To fix this issue it is enough to turn on the power on cluster 0 for a while. This lets the Secure JTAG block propagate the needed signals to the LITTLE/KFC cores and change their DSCR.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  ARM: dts: exynos: Always enable necessary APIO_1V8 and ABB_1V8 regulators on Arndale Octa  (Krzysztof Kozlowski)
commit 5ab99cf7d5e96e3b727c30e7a8524c976bd3723d upstream.

The PVDD_APIO_1V8 (LDO2) and PVDD_ABB_1V8 (LDO8) regulators were turned off by the Linux kernel as unused. However they supply critical parts of the SoC, so they should be always on:

1. PVDD_APIO_1V8 supplies SYS pins (gpx[0-3], PSHOLD), HDMI level shift, RTC, VDD1_12 (DRAM internal 1.8 V logic), pull-up for PMIC interrupt lines, TTL/UARTR level shift, reset pins and the SW-TACT1 button. It also supplies unused blocks like VDDQ_SRAM (for the SROM controller) and VDDQ_GPIO (gpm7, gpy7). The LDO2 cannot be turned off (S2MPS11 keeps it on anyway) so marking it "always-on" only reflects its real status.

2. PVDD_ABB_1V8 supplies the Adaptive Body Bias Generator for the ARM cores, memory and Mali (G3D).

Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  ARM: OMAP2+: pm33xx-core: Do not Turn OFF CEFUSE as PPA may be using it  (Kabir Sahane)
commit 72aff4ecf1cb85a3c6e6b42ccbda0bc631b090b3 upstream. This area is used to store keys by HSPPA in case of AM438x SOC. Leave it active. Signed-off-by: Kabir Sahane <x0153567@ti.com> Signed-off-by: Andrew F. Davis <afd@ti.com> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  ARM: dts: imx6qdl: Specify IMX6QDL_CLK_IPG as "ipg" clock to SDMA  (Andrey Smirnov)
commit b14c872eebc501b9640b04f4a152df51d6eaf2fc upstream. Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX6QDL_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality (this at least breaks RAVE SP serdev driver on RDU2). Fix the code to specify IMX6QDL_CLK_IPG as "ipg" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Reviewed-by: Lucas Stach <l.stach@pengutronix.de> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Tested-by: Adam Ford <aford173@gmail.com> Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
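For context (this applies to all of the SDMA dts fixes that follow), a sketch of the driver-side ratio detection added by 25aaa75df1e6 in the imx-sdma driver; the point is that a dts handing the SDMA gate clock in for "ipg" always looks like a 1:1 ratio:

    /* Sketch, abbreviated from the probe path: derive the ACR setting from
     * the two clock rates supplied by the device tree. */
    if (clk_get_rate(sdma->clk_ahb) == clk_get_rate(sdma->clk_ipg))
    	sdma->clk_ratio = 1;	/* later programmed as the ACR bit in SDMAARM_CONFIG */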
2019-09-16ARM: dts: imx6sx: Specify IMX6SX_CLK_IPG as "ipg" clock to SDMAAndrey Smirnov
commit 8979117765c19edc3b01cc0ef853537bf93eea4b upstream. Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX6SX_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA funtionality. Fix the code to specify IMX6SX_CLK_IPG as "ipg" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  ARM: dts: imx6ul: Specify IMX6UL_CLK_IPG as "ipg" clock to SDMA  (Andrey Smirnov)
commit 7b3132ecefdd1fcdf6b86e62021d0e55ea8034db upstream. Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX6UL_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality. Fix the code to specify IMX6UL_CLK_IPG as "ipg" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  ARM: dts: imx7d: Specify IMX7D_CLK_IPG as "ipg" clock to SDMA  (Andrey Smirnov)
commit 412b032a1dc72fc9d1c258800355efa6671b6315 upstream. Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX7D_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality. Fix the code to specify IMX7D_CLK_IPG as "ipg" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16ARM: dts: imx6sx: Specify IMX6SX_CLK_IPG as "ahb" clock to SDMAAndrey Smirnov
commit cc839d0f8c284fcb7591780b568f13415bbb737c upstream. Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX6SL_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA funtionality. Fix the code to specify IMX6SL_CLK_AHB as "ahb" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  ARM: dts: imx53: Specify IMX5_CLK_IPG as "ahb" clock to SDMA  (Andrey Smirnov)
commit 28c168018e0902c67eb9c60d0fc4c8aa166c4efe upstream. Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX5_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality. Fix the code to specify IMX5_CLK_AHB as "ahb" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  ARM: dts: imx50: Specify IMX5_CLK_IPG as "ahb" clock to SDMA  (Andrey Smirnov)
commit b7b4fda2636296471e29b78c2aa9535d7bedb7a0 upstream. Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX5_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality. Fix the code to specify IMX5_CLK_AHB as "ahb" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  ARM: dts: imx51: Specify IMX5_CLK_IPG as "ahb" clock to SDMA  (Andrey Smirnov)
commit 918bbde8085ae147a43dcb491953e0dd8f3e9d6a upstream. Since 25aaa75df1e6 SDMA driver uses clock rates of "ipg" and "ahb" clock to determine if it needs to configure the IP block as operating at 1:1 or 1:2 clock ratio (ACR bit in SDMAARM_CONFIG). Specifying both clocks as IMX5_CLK_SDMA results in driver incorrectly thinking that ratio is 1:1 which results in broken SDMA functionality. Fix the code to specify IMX5_CLK_AHB as "ahb" clock for SDMA, to avoid detecting incorrect clock ratio. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Angus Ainslie (Purism) <angus@akkea.ca> Cc: Chris Healy <cphealy@gmail.com> Cc: Lucas Stach <l.stach@pengutronix.de> Cc: Fabio Estevam <fabio.estevam@nxp.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  x86/PCI: Fix PCI IRQ routing table memory leak  (Wenwen Wang)
commit ea094d53580f40c2124cef3d072b73b2425e7bfd upstream. In pcibios_irq_init(), the PCI IRQ routing table 'pirq_table' is first found through pirq_find_routing_table(). If the table is not found and CONFIG_PCI_BIOS is defined, the table is then allocated in pcibios_get_irq_routing_table() using kmalloc(). Later, if the I/O APIC is used, this table is actually not used. In that case, the allocated table is not freed, which is a memory leak. Free the allocated table if it is not used. Signed-off-by: Wenwen Wang <wang6495@umn.edu> [bhelgaas: added Ingo's reviewed-by, since the only change since v1 was to use the irq_routing_table local variable name he suggested] Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Ingo Molnar <mingo@kernel.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
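A hedged sketch of the shape of the fix in pcibios_irq_init() (abbreviated; variable placement follows the description above): remember the table that may have been kmalloc'ed by pcibios_get_irq_routing_table() and free it on the I/O APIC path, where it goes unused.

    struct irq_routing_table *rtable = NULL;

    pirq_table = pirq_find_routing_table();
    #ifdef CONFIG_PCI_BIOS
    if (!pirq_table && (pci_probe & PCI_PROBE_BIOS)) {
    	pirq_table = pcibios_get_irq_routing_table();	/* kmalloc'ed copy */
    	rtable = pirq_table;
    }
    #endif
    /* ... */
    if (io_apic_assign_pci_irqs)
    	kfree(rtable);	/* table not used on the I/O APIC path */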
2019-09-16  mips: Make sure dt memory regions are valid  (Serge Semin)
commit 93fa5b280761a4dbb14c5330f260380385ab2b49 upstream. There are situations when memory regions coming from dts may be too big for the platform physical address space. This especially concerns XPA-capable systems. Bootloader may determine more than 4GB memory available and pass it to the kernel over dts memory node, while kernel is built without XPA/64BIT support. In this case the region may either simply be truncated by add_memory_region() method or by u64->phys_addr_t type casting. But in worst case the method can even drop the memory region if it exceeds PHYS_ADDR_MAX size. So lets make sure the retrieved from dts memory regions are valid, and if some of them aren't, just manually truncate them with a warning printed out. Signed-off-by: Serge Semin <fancer.lancer@gmail.com> Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: Mike Rapoport <rppt@linux.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Thomas Bogendoerfer <tbogendoerfer@suse.de> Cc: Huacai Chen <chenhc@lemote.com> Cc: Stefan Agner <stefan@agner.ch> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Alexandre Belloni <alexandre.belloni@bootlin.com> Cc: Juergen Gross <jgross@suse.com> Cc: Serge Semin <Sergey.Semin@t-platforms.ru> Cc: linux-mips@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
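An illustrative sketch of the truncation described above (not the verbatim patch): clamp a DT-supplied region to what phys_addr_t can represent before registering it, rather than silently dropping it.

    /* region (start, size) comes from the dts memory node as u64 values */
    if (start + size > PHYS_ADDR_MAX) {
    	pr_warn("Truncating memory region to fit the physical address space\n");
    	size = PHYS_ADDR_MAX - start;
    }
    add_memory_region(start, size, BOOT_MEM_RAM);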
2019-09-16  uml: fix a boot splat wrt use of cpu_all_mask  (Maciej Żenczykowski)
commit 689a58605b63173acb0a8cf954af6a8f60440c93 upstream.

    Memory: 509108K/542612K available (3835K kernel code, 919K rwdata, 1028K rodata, 129K init, 211K bss, 33504K reserved, 0K cma-reserved)
    NR_IRQS: 15
    clocksource: timer: mask: 0xffffffffffffffff max_cycles: 0x1cd42e205, max_idle_ns: 881590404426 ns
    ------------[ cut here ]------------
    WARNING: CPU: 0 PID: 0 at kernel/time/clockevents.c:458 clockevents_register_device+0x72/0x140
    posix-timer cpumask == cpu_all_mask, using cpu_possible_mask instead
    Modules linked in:
    CPU: 0 PID: 0 Comm: swapper Not tainted 5.1.0-rc4-00048-ged79cc87302b #4
    Stack:
     604ebda0 603c5370 604ebe20 6046fd17 00000000 6006fcbb 604ebdb0 603c53b5 604ebe10 6003bfc4 604ebdd0 9000001ca
    Call Trace:
     [<6006fcbb>] ? printk+0x0/0x94
     [<60083160>] ? clockevents_register_device+0x72/0x140
     [<6001f16e>] show_stack+0x13b/0x155
     [<603c5370>] ? dump_stack_print_info+0xe2/0xeb
     [<6006fcbb>] ? printk+0x0/0x94
     [<603c53b5>] dump_stack+0x2a/0x2c
     [<6003bfc4>] __warn+0x10e/0x13e
     [<60070320>] ? vprintk_func+0xc8/0xcf
     [<60030fd6>] ? block_signals+0x0/0x16
     [<6006fcbb>] ? printk+0x0/0x94
     [<6003c08b>] warn_slowpath_fmt+0x97/0x99
     [<600311a1>] ? set_signals+0x0/0x3f
     [<6003bff4>] ? warn_slowpath_fmt+0x0/0x99
     [<600842cb>] ? tick_oneshot_mode_active+0x44/0x4f
     [<60030fd6>] ? block_signals+0x0/0x16
     [<6006fcbb>] ? printk+0x0/0x94
     [<6007d2d5>] ? __clocksource_select+0x20/0x1b1
     [<60030fd6>] ? block_signals+0x0/0x16
     [<6006fcbb>] ? printk+0x0/0x94
     [<60083160>] clockevents_register_device+0x72/0x140
     [<60031192>] ? get_signals+0x0/0xf
     [<60030fd6>] ? block_signals+0x0/0x16
     [<6006fcbb>] ? printk+0x0/0x94
     [<60002eec>] um_timer_setup+0xc8/0xca
     [<60001b59>] start_kernel+0x47f/0x57e
     [<600035bc>] start_kernel_proc+0x49/0x4d
     [<6006c483>] ? kmsg_dump_register+0x82/0x8a
     [<6001de62>] new_thread_handler+0x81/0xb2
     [<60003571>] ? kmsg_dumper_stdout_init+0x1a/0x1c
     [<60020c75>] uml_finishsetup+0x54/0x59
    random: get_random_bytes called from init_oops_id+0x27/0x34 with crng_init=0
    ---[ end trace 00173d0117a88acb ]---
    Calibrating delay loop... 6941.90 BogoMIPS (lpj=34709504)

Signed-off-by: Maciej Żenczykowski <maze@google.com> Cc: Jeff Dike <jdike@addtoit.com> Cc: Richard Weinberger <richard@nod.at> Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com> Cc: linux-um@lists.infradead.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Richard Weinberger <richard@nod.at> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-16  ARM: prevent tracing IPI_CPU_BACKTRACE  (Arnd Bergmann)
commit be167862ae7dd85c56d385209a4890678e1b0488 upstream.

Patch series "compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING", v3.

This patch (of 11):

When function tracing for IPIs is enabled, we get a warning for an overflow of the ipi_types array with the IPI_CPU_BACKTRACE type as triggered by raise_nmi():

    arch/arm/kernel/smp.c: In function 'raise_nmi':
    arch/arm/kernel/smp.c:489:2: error: array subscript is above array bounds [-Werror=array-bounds]
      trace_ipi_raise(target, ipi_types[ipinr]);

This is a correct warning as we actually overflow the array here.

This patch changes raise_nmi() to call __smp_cross_call() instead of smp_cross_call(), to avoid calling into ftrace. For clarification, I'm also adding two new code comments describing how this one is special.

The warning appears to have shown up after commit e7273ff49acf ("ARM: 8488/1: Make IPI_CPU_BACKTRACE a "non-secure" SGI"), which changed the number assignment from '15' to '8', but as far as I can tell has existed since the IPI tracepoints were first introduced. If we decide to backport this patch to stable kernels, we probably need to backport e7273ff49acf as well.

[yamada.masahiro@socionext.com: rebase on v5.1-rc1]

Link: http://lkml.kernel.org/r/20190423034959.13525-2-yamada.masahiro@socionext.com Fixes: e7273ff49acf ("ARM: 8488/1: Make IPI_CPU_BACKTRACE a "non-secure" SGI") Fixes: 365ec7b17327 ("ARM: add IPI tracepoints") # v3.17 Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Christophe Leroy <christophe.leroy@c-s.fr> Cc: Mathieu Malaterre <malat@debian.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Stefan Agner <stefan@agner.ch> Cc: Boris Brezillon <bbrezillon@kernel.org> Cc: Miquel Raynal <miquel.raynal@bootlin.com> Cc: Richard Weinberger <richard@nod.at> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Brian Norris <computersforpeace@gmail.com> Cc: Marek Vasut <marek.vasut@gmail.com> Cc: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Borislav Petkov <bp@suse.de> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-15Merge tag 'v4.18.43' into v4.18/standard/baseBruce Ashfield
This is the 4.18.43 stable release
2019-09-12MIPS: pistachio: Build uImage.gz by defaultPaul Burton
commit e4f2d1af7163becb181419af9dece9206001e0a6 upstream.

The pistachio platform uses the U-Boot bootloader & generally boots a kernel in the uImage format. As such it's useful to build one when building the kernel, but to do so currently requires the user to manually specify a uImage target on the make command line.

Make uImage.gz the pistachio platform's default build target, so that the default is to build a kernel image that we can actually boot on a board such as the MIPS Creator Ci40.

Marked for stable backport as far as v4.1 where pistachio support was introduced. This is primarily useful for CI systems such as kernelci.org which will benefit from us building a suitable image which can then be booted as part of automated testing, extending our test coverage to the affected stable branches.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Kevin Hilman <khilman@baylibre.com>
Tested-by: Kevin Hilman <khilman@baylibre.com>
URL: https://groups.io/g/kernelci/message/388
Cc: stable@vger.kernel.org # v4.1+
Cc: linux-mips@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-12MIPS: Bounds check virt_addr_validPaul Burton
commit 074a1e1167afd82c26f6d03a9a8b997d564bb241 upstream.

The virt_addr_valid() function is meant to return true iff virt_to_page() will return a valid struct page reference. This is true iff the address provided is found within the unmapped address range between PAGE_OFFSET & MAP_BASE, but we don't currently check for that condition. Instead we simply mask the address to obtain what will be a physical address if the virtual address is indeed in the desired range, shift it to form a PFN & then call pfn_valid(). This can incorrectly return true if called with a virtual address which, after masking, happens to form a physical address corresponding to a valid PFN.

For example we may vmalloc an address in the kernel mapped region starting at MAP_BASE & obtain the virtual address:

  addr = 0xc000000000002000

When masked by virt_to_phys(), which uses __pa() & in turn CPHYSADDR(), we obtain the following (bogus) physical address:

  addr = 0x2000

In a common system with PHYS_OFFSET=0 this will correspond to a valid struct page which should really be accessed by virtual address PAGE_OFFSET+0x2000, causing virt_addr_valid() to incorrectly return 1 indicating that the original address corresponds to a struct page.

This is equivalent to the ARM64 change made in commit ca219452c6b8 ("arm64: Correctly bounds check virt_addr_valid").

This fixes fallout when hardened usercopy is enabled caused by the related commit 517e1fbeb65f ("mm/usercopy: Drop extra is_vmalloc_or_module() check") which removed a check for the vmalloc range that was present from the introduction of the hardened usercopy feature.

Signed-off-by: Paul Burton <paul.burton@mips.com>
References: ca219452c6b8 ("arm64: Correctly bounds check virt_addr_valid")
References: 517e1fbeb65f ("mm/usercopy: Drop extra is_vmalloc_or_module() check")
Reported-by: Julien Cristau <jcristau@debian.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: YunQiang Su <ysu@wavecomp.com>
URL: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=929366
Cc: stable@vger.kernel.org # v4.12+
Cc: linux-mips@vger.kernel.org
Cc: Yunqiang Su <ysu@wavecomp.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
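The fix therefore amounts to bounds-checking the address before it is turned into a PFN, roughly along these lines (a sketch of the logic described above, not a verbatim copy of the MIPS implementation):

  /* Only addresses in the unmapped kernel window between PAGE_OFFSET and
   * MAP_BASE translate directly to a physical address; everything else
   * (vmalloc addresses at or above MAP_BASE, user addresses below
   * PAGE_OFFSET) must fail the check instead of aliasing a valid PFN. */
  int __virt_addr_valid(const volatile void *kaddr)
  {
          unsigned long vaddr = (unsigned long)kaddr;

          if ((vaddr < PAGE_OFFSET) || (vaddr >= MAP_BASE))
                  return 0;

          return pfn_valid(PFN_DOWN(virt_to_phys(kaddr)));
  }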
2019-09-12s390/mm: fix address space detection in exception handlingGerald Schaefer
commit 962f0af83c239c0aef05639631e871c874b00f99 upstream.

Commit 0aaba41b58bc ("s390: remove all code using the access register mode") removed access register mode from the kernel, and also from the address space detection logic. However, user space could still switch to access register mode (trans_exc_code == 1), and exceptions in that mode would not be correctly assigned.

Fix this by adding a check for trans_exc_code == 1 to get_fault_type(), and remove the wrong comment line before that function.

Fixes: 0aaba41b58bc ("s390: remove all code using the access register mode")
Reviewed-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: <stable@vger.kernel.org> # v4.15+
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
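Conceptually, the address space detection keys off the low bits of the translation-exception identification, and the fix adds the access-register case as an explicit user-space classification. A hand-written sketch of that logic follows (simplified; the real get_fault_type() in arch/s390/mm/fault.c has additional guest/GMAP handling, and is_user_context() is a hypothetical stand-in for that per-context decision):

  static enum fault_type get_fault_type_sketch(struct pt_regs *regs)
  {
          unsigned long trans_exc_code = regs->int_parm_long & 3;

          if (trans_exc_code == 0)
                  /* primary space: user or kernel, depending on context */
                  return is_user_context(regs) ? USER_FAULT : KERNEL_FAULT;
          if (trans_exc_code == 2)
                  return USER_FAULT;      /* secondary space */
          if (trans_exc_code == 1)
                  return USER_FAULT;      /* access register mode: user space only */
          return KERNEL_FAULT;            /* home space */
  }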
2019-09-12x86/insn-eval: Fix use-after-free access to LDT entryJann Horn
commit de9f869616dd95e95c00bdd6b0fcd3421e8a4323 upstream.

get_desc() computes a pointer into the LDT while holding a lock that protects the LDT from being freed, but then drops the lock and returns the (now potentially dangling) pointer to its caller.

Fix it by giving the caller a copy of the LDT entry instead.

Fixes: 670f928ba09b ("x86/insn-eval: Add utility function to get segment descriptor")
Cc: stable@vger.kernel.org
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
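The shape of the fix is the standard "copy out while the lock is held" pattern: rather than handing back a pointer into a structure that may be freed once the lock is dropped, the helper fills in a caller-provided copy. A generic sketch of that pattern (placeholder struct and lock names, not the literal insn-eval code):

  struct desc_entry { unsigned long raw[2]; };

  struct ldt_table {
          struct mutex lock;
          struct desc_entry *entries;
          unsigned int nr_entries;
  };

  static bool get_desc_copy(struct ldt_table *ldt, unsigned int idx,
                            struct desc_entry *out)
  {
          bool ok = false;

          mutex_lock(&ldt->lock);
          if (idx < ldt->nr_entries) {
                  *out = ldt->entries[idx];  /* copy; never return &entries[idx] */
                  ok = true;
          }
          mutex_unlock(&ldt->lock);
          return ok;
  }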
2019-09-12x86/kprobes: Set instruction page as executableNadav Amit
commit 7298e24f904224fa79eb8fd7e0fbd78950ccf2db upstream.

Set the page as executable after allocation. This patch is a preparatory patch for a following patch that makes module allocated pages non-executable.

While at it, do some small cleanup of what appears to be unnecessary masking.

Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <akpm@linux-foundation.org>
Cc: <ard.biesheuvel@linaro.org>
Cc: <deneen.t.dock@intel.com>
Cc: <kernel-hardening@lists.openwall.com>
Cc: <kristen@linux.intel.com>
Cc: <linux_dti@icloud.com>
Cc: <will.deacon@arm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190426001143.4983-11-namit@vmware.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
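The change described above boils down to granting execute permission right after the instruction page comes back from the allocator, e.g. (a sketch under that assumption; the real alloc_insn_page() may order further permission changes around this call):

  void *alloc_insn_page_sketch(void)
  {
          void *page;

          page = module_alloc(PAGE_SIZE);
          if (!page)
                  return NULL;

          /* Module allocations will soon no longer be executable by
           * default, so mark the kprobe instruction page executable here. */
          set_memory_x((unsigned long)page, 1);

          return page;
  }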
2019-09-12x86/ftrace: Set trampoline pages as executableNadav Amit
commit 3c0dab44e22782359a0a706cbce72de99a22aa75 upstream.

Since alloc_module() will not set the pages as executable soon, set ftrace trampoline pages as executable after they are allocated.

For the time being, do not change ftrace to use the text_poke() interface. As a result, ftrace still breaks W^X.

Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: <akpm@linux-foundation.org>
Cc: <ard.biesheuvel@linaro.org>
Cc: <deneen.t.dock@intel.com>
Cc: <kernel-hardening@lists.openwall.com>
Cc: <kristen@linux.intel.com>
Cc: <linux_dti@icloud.com>
Cc: <will.deacon@arm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190426001143.4983-10-namit@vmware.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-12x86/ftrace: Do not call function graph from dynamic trampolinesSteven Rostedt (VMware)
commit d2a68c4effd821f0871d20368f76b609349c8a3b upstream.

Since commit 79922b8009c07 ("ftrace: Optimize function graph to be called directly"), dynamic trampolines should not be calling the function graph tracer at the end. If they do, it could cause the function graph tracer to trace functions that it filtered out.

Right now it does not cause a problem because there's a test to check if the function graph tracer is attached to the same function as the function tracer, which for now is true. But the function graph tracer is undergoing changes that can make this no longer true which will cause the function graph tracer to trace other functions.

For example:

 # cd /sys/kernel/tracing/
 # echo do_IRQ > set_ftrace_filter
 # mkdir instances/foo
 # echo ip_rcv > instances/foo/set_ftrace_filter
 # echo function_graph > current_tracer
 # echo function > instances/foo/current_tracer

Would cause the function graph tracer to trace both do_IRQ and ip_rcv, if the current tests change.

As the current tests prevent this from being a problem, this code does not need to be backported. But it does make the code cleaner.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-09-12arm64: Fix the arm64_personality() syscall wrapper redirectionCatalin Marinas
commit 00377277166bac6939d8f72b429301369acaf2d8 upstream.

Following commit 4378a7d4be30 ("arm64: implement syscall wrappers"), the syscall function names gained the '__arm64_' prefix. Ensure that we have the correct #define for redirecting a default syscall through a wrapper.

Fixes: 4378a7d4be30 ("arm64: implement syscall wrappers")
Cc: <stable@vger.kernel.org> # 4.19.x-
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
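With the '__arm64_' prefixing in place, the redirect has to use the prefixed names on both sides; the fix is essentially a one-line #define along these lines (a sketch based on the wrapper naming scheme described above, not a verbatim quote from the patch):

  /* Old redirect, which no longer matches any generated symbol:        */
  /*   #define sys_personality        sys_arm64_personality             */

  /* Corrected redirect for the wrapper-generated name:                 */
  #define __arm64_sys_personality   __arm64_sys_arm64_personality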