path: root/arch
Age | Commit message | Author
2020-08-13 | powerpc: Fix circular dependency between percpu.h and mmu.h | Michael Ellerman
commit 0c83b277ada72b585e6a3e52b067669df15bcedb upstream. Recently random.h started including percpu.h (see commit f227e3ec3b5c ("random32: update the net random state on interrupt and activity")), which broke corenet64_smp_defconfig:

    In file included from /linux/arch/powerpc/include/asm/paca.h:18,
                     from /linux/arch/powerpc/include/asm/percpu.h:13,
                     from /linux/include/linux/random.h:14,
                     from /linux/lib/uuid.c:14:
    /linux/arch/powerpc/include/asm/mmu.h:139:22: error: unknown type name 'next_tlbcam_idx'
      139 | DECLARE_PER_CPU(int, next_tlbcam_idx);

This is due to a circular header dependency: asm/mmu.h includes asm/percpu.h, which includes asm/paca.h, which includes asm/mmu.h. Which means DECLARE_PER_CPU() isn't defined when mmu.h needs it. We can fix it by moving the include of paca.h below the include of asm-generic/percpu.h. This moves the include of paca.h out of the #ifdef __powerpc64__, but that is OK because paca.h is almost entirely inside #ifdef CONFIG_PPC64 anyway. It also moves the include of paca.h out of the #ifdef CONFIG_SMP, which could possibly break something, but seems to have no ill effects. Fixes: f227e3ec3b5c ("random32: update the net random state on interrupt and activity") Cc: stable@vger.kernel.org # v5.8 Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200804130558.292328-1-mpe@ellerman.id.au Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
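For illustration, a minimal sketch of the include ordering the patch describes, assuming the usual contents of powerpc's asm/percpu.h (not the verbatim upstream diff):

    /* arch/powerpc/include/asm/percpu.h, after the reordering described above */
    #ifdef __powerpc64__
    #ifdef CONFIG_SMP
    #define __my_cpu_offset local_paca->data_offset
    #endif
    #endif

    #include <asm-generic/percpu.h>

    /* paca.h now comes after the generic per-cpu machinery, so the
     * DECLARE_PER_CPU() it indirectly needs (via asm/mmu.h) is defined. */
    #include <asm/paca.h>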
2020-08-13 | genirq/affinity: Make affinity setting if activated opt-in | Thomas Gleixner
commit f0c7baca180046824e07fc5f1326e83a8fd150c7 upstream. John reported that on a RK3288 system the perf per CPU interrupts are all affine to CPU0 and provided the analysis: "It looks like what happens is that because the interrupts are not per-CPU in the hardware, armpmu_request_irq() calls irq_force_affinity() while the interrupt is deactivated and then request_irq() with IRQF_PERCPU | IRQF_NOBALANCING. Now when irq_startup() runs with IRQ_STARTUP_NORMAL, it calls irq_setup_affinity() which returns early because IRQF_PERCPU and IRQF_NOBALANCING are set, leaving the interrupt on its original CPU." This was broken by the recent commit which blocked interrupt affinity setting in hardware before activation of the interrupt. While this works in general, it does not work for this particular case. As contrary to the initial analysis not all interrupt chip drivers implement an activate callback, the safe cure is to make the deferred interrupt affinity setting at activation time opt-in. Implement the necessary core logic and make the two irqchip implementations for which this is required opt-in. In hindsight this would have been the right thing to do, but ... Fixes: baedb87d1b53 ("genirq/affinity: Handle affinity setting on inactive interrupts correctly") Reported-by: John Keeping <john@metanate.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Marc Zyngier <maz@kernel.org> Acked-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/87blk4tzgm.fsf@nanos.tec.linutronix.de Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | x86/apic/vector: Force interrupt handler invocation to irq context | Thomas Gleixner
commit 008f1d60fe25810d4554916744b0975d76601b64 upstream. Sathyanarayanan reported that the PCI-E AER error injection mechanism can result in a NULL pointer dereference in apic_ack_edge():

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000078
    RIP: 0010:apic_ack_edge+0x1e/0x40
    Call Trace:
     handle_edge_irq+0x7d/0x1e0
     generic_handle_irq+0x27/0x30
     aer_inject_write+0x53a/0x720

It crashes in irq_complete_move(), which dereferences get_irq_regs(), which is obviously NULL when this is called from non-interrupt context. Of course the pointer could be checked, but that just papers over the real issue. Invoking the low level interrupt handling mechanism from random code can wreck the fragile interrupt affinity mechanism of x86, as interrupts can only be moved in interrupt context or with special care when a CPU goes offline and the move has to be enforced. In the best case this triggers the warning in the MSI affinity setter, but if the call happens on the correct CPU it just corrupts state and might prevent further interrupt delivery for the affected device. Mark the APIC interrupts as unsuitable for being invoked in random contexts. This prevents the AER injection from proliferating the wreckage, but that's less broken than the current state of affairs and more correct than just papering over the problem by sprinkling random checks all over the place and silently corrupting state. Reported-by: sathyanarayanan.kuppuswamy@linux.intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200306130623.684591280@linutronix.de Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | arm64/alternatives: move length validation inside the subsection | Sami Tolvanen
commit 966a0acce2fca776391823381dba95c40e03c339 upstream. Commit f7b93d42945c ("arm64/alternatives: use subsections for replacement sequences") breaks LLVM's integrated assembler, because due to its one-pass design, it cannot compute instruction sequence lengths before the layout for the subsection has been finalized. This change fixes the build by moving the .org directives inside the subsection, so they are processed after the subsection layout is known. Fixes: f7b93d42945c ("arm64/alternatives: use subsections for replacement sequences") Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Link: https://github.com/ClangBuiltLinux/linux/issues/1078 Link: https://lore.kernel.org/r/20200730153701.3892953-1-samitolvanen@google.com Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | genirq/affinity: Handle affinity setting on inactive interrupts correctly | Thomas Gleixner
commit baedb87d1b53532f81b4bd0387f83b05d4f7eb9a upstream. Setting interrupt affinity on inactive interrupts is inconsistent when hierarchical irq domains are enabled. The core code should just store the affinity and not call into the irq chip driver for inactive interrupts because the chip drivers may not be in a state to handle such requests. X86 has a hacky workaround for that but all other irq chips have not, which causes problems e.g. on GIC V3 ITS. Instead of adding more ugly hacks all over the place, solve the problem in the core code. If the affinity is set on an inactive interrupt then:

    - Store it in the irq descriptor's affinity mask
    - Update the effective affinity to reflect that so user space has a consistent view
    - Don't call into the irq chip driver

This is the core equivalent of the X86 workaround and works correctly because the affinity setting is established in the irq chip when the interrupt is activated later on. Note that this is only effective when hierarchical irq domains are enabled by the architecture. Doing it unconditionally would break legacy irq chip implementations. For hierarchical irq domains this works correctly as none of the drivers can have a dependency on affinity setting in inactive state by design. Remove the X86 workaround as it is no longer required. Fixes: 02edee152d6e ("x86/apic/vector: Ignore set_affinity call for inactive interrupts") Reported-by: Ali Saidi <alisaidi@amazon.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Ali Saidi <alisaidi@amazon.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200529015501.15771-1-alisaidi@amazon.com Link: https://lkml.kernel.org/r/877dv2rv25.fsf@nanos.tec.linutronix.de Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
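As a rough illustration of the core behaviour described above (the wrapper name is made up for this sketch; the real logic lives in the generic irq affinity code, and the exact API details may differ by kernel version):

    #include <linux/irq.h>
    #include <linux/cpumask.h>

    /* For an inactive interrupt in a hierarchical domain: record the
     * requested affinity and report it as the effective one, but do not
     * call into the irqchip driver; the chip picks it up at activation. */
    static int irq_set_affinity_inactive(struct irq_data *data,
                                         const struct cpumask *mask)
    {
            cpumask_copy(irq_data_get_affinity_mask(data), mask);
            irq_data_update_effective_affinity(data, mask);
            return IRQ_SET_MASK_OK_NOCOPY;
    }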
2020-08-13 | arm64: compat: Ensure upper 32 bits of x0 are zero on syscall return | Will Deacon
commit 15956689a0e60aa0c795174f3c310b60d8794235 upstream. Although we zero the upper bits of x0 on entry to the kernel from an AArch32 task, we do not clear them on the exception return path and can therefore expose 64-bit sign extended syscall return values to userspace via interfaces such as the 'perf_regs' ABI, which deal exclusively with 64-bit registers. Explicitly clear the upper 32 bits of x0 on return from a compat system call. Cc: <stable@vger.kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Keno Fischer <keno@juliacomputing.com> Cc: Luis Machado <luis.machado@linaro.org> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
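A minimal sketch of the idea (the helper name here is illustrative, not the actual arm64 syscall-return code):

    #include <linux/kernel.h>
    #include <asm/ptrace.h>

    /* On the compat (AArch32) syscall return path, keep only the low 32
     * bits of the saved x0 so 64-bit consumers such as perf_regs never
     * see a sign-extended return value. */
    static inline void compat_sanitise_syscall_retval(struct pt_regs *regs)
    {
            regs->regs[0] = lower_32_bits(regs->regs[0]);
    }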
2020-08-13 | arm64: ptrace: Consistently use pseudo-singlestep exceptions | Will Deacon
commit ac2081cdc4d99c57f219c1a6171526e0fa0a6fff upstream. Although the arm64 single-step state machine can be fast-forwarded in cases where we wish to generate a SIGTRAP without actually executing an instruction, this has two major limitations outside of simply skipping an instruction due to emulation. 1. Stepping out of a ptrace signal stop into a signal handler where SIGTRAP is blocked. Fast-forwarding the stepping state machine in this case will result in a forced SIGTRAP, with the handler reset to SIG_DFL. 2. The hardware implicitly fast-forwards the state machine when executing an SVC instruction for issuing a system call. This can interact badly with subsequent ptrace stops signalled during the execution of the system call (e.g. SYSCALL_EXIT or seccomp traps), as they may corrupt the stepping state by updating the PSTATE for the tracee. Resolve both of these issues by injecting a pseudo-singlestep exception on entry to a signal handler and also on return to userspace following a system call. Cc: <stable@vger.kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Tested-by: Luis Machado <luis.machado@linaro.org> Reported-by: Keno Fischer <keno@juliacomputing.com> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | arm64: ptrace: Override SPSR.SS when single-stepping is enabled | Will Deacon
commit 3a5a4366cecc25daa300b9a9174f7fdd352b9068 upstream. Luis reports that, when reverse debugging with GDB, single-step does not function as expected on arm64: | I've noticed, under very specific conditions, that a PTRACE_SINGLESTEP | request by GDB won't execute the underlying instruction. As a consequence, | the PC doesn't move, but we return a SIGTRAP just like we would for a | regular successful PTRACE_SINGLESTEP request. The underlying problem is that when the CPU register state is restored as part of a reverse step, the SPSR.SS bit is cleared and so the hardware single-step state can transition to the "active-pending" state, causing an unexpected step exception to be taken immediately if a step operation is attempted. In hindsight, we probably shouldn't have exposed SPSR.SS in the pstate accessible by the GPR regset, but it's a bit late for that now. Instead, simply prevent userspace from configuring the bit to a value which is inconsistent with the TIF_SINGLESTEP state for the task being traced. Cc: <stable@vger.kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Keno Fischer <keno@juliacomputing.com> Link: https://lore.kernel.org/r/1eed6d69-d53d-9657-1fc9-c089be07f98c@linaro.org Reported-by: Luis Machado <luis.machado@linaro.org> Tested-by: Luis Machado <luis.machado@linaro.org> Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | powerpc/book3s64/pkeys: Fix pkey_access_permitted() for execute disable pkey | Aneesh Kumar K.V
commit 192b6a780598976feb7321ff007754f8511a4129 upstream. Even if the IAMR value denies execute access, the current code returns true from pkey_access_permitted() for an execute permission check, if the AMR read pkey bit is cleared. This results in a repeated page fault loop with a test like below:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <signal.h>
    #include <inttypes.h>
    #include <assert.h>
    #include <malloc.h>
    #include <unistd.h>
    #include <pthread.h>
    #include <sys/mman.h>

    #ifdef SYS_pkey_mprotect
    #undef SYS_pkey_mprotect
    #endif

    #ifdef SYS_pkey_alloc
    #undef SYS_pkey_alloc
    #endif

    #ifdef SYS_pkey_free
    #undef SYS_pkey_free
    #endif

    #undef PKEY_DISABLE_EXECUTE
    #define PKEY_DISABLE_EXECUTE 0x4

    #define SYS_pkey_mprotect 386
    #define SYS_pkey_alloc    384
    #define SYS_pkey_free     385

    #define PPC_INST_NOP 0x60000000
    #define PPC_INST_BLR 0x4e800020
    #define PROT_RWX (PROT_READ | PROT_WRITE | PROT_EXEC)

    static int sys_pkey_mprotect(void *addr, size_t len, int prot, int pkey)
    {
        return syscall(SYS_pkey_mprotect, addr, len, prot, pkey);
    }

    static int sys_pkey_alloc(unsigned long flags, unsigned long access_rights)
    {
        return syscall(SYS_pkey_alloc, flags, access_rights);
    }

    static int sys_pkey_free(int pkey)
    {
        return syscall(SYS_pkey_free, pkey);
    }

    static void do_execute(void *region)
    {
        /* jump to region */
        asm volatile(
            "mtctr %0;"
            "bctrl"
            : : "r"(region) : "ctr", "lr");
    }

    static void do_protect(void *region)
    {
        size_t pgsize;
        int i, pkey;

        pgsize = getpagesize();
        pkey = sys_pkey_alloc(0, PKEY_DISABLE_EXECUTE);
        assert(pkey > 0);

        /* perform mprotect */
        assert(!sys_pkey_mprotect(region, pgsize, PROT_RWX, pkey));
        do_execute(region);

        /* free pkey */
        assert(!sys_pkey_free(pkey));
    }

    int main(int argc, char **argv)
    {
        size_t pgsize, numinsns;
        unsigned int *region;
        int i;

        /* allocate memory region to protect */
        pgsize = getpagesize();
        region = memalign(pgsize, pgsize);
        assert(region != NULL);
        assert(!mprotect(region, pgsize, PROT_RWX));

        /* fill page with NOPs with a BLR at the end */
        numinsns = pgsize / sizeof(region[0]);
        for (i = 0; i < numinsns - 1; i++)
            region[i] = PPC_INST_NOP;
        region[i] = PPC_INST_BLR;

        do_protect(region);

        return EXIT_SUCCESS;
    }

The fix is to only check the IAMR for an execute check; the AMR value is not relevant. Fixes: f2407ef3ba22 ("powerpc: helper to validate key-access permissions of a pte") Cc: stable@vger.kernel.org # v4.16+ Reported-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> [mpe: Add detail to change log, tweak wording & formatting] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200712132047.1038594-1-aneesh.kumar@linux.ibm.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
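A hedged sketch of the permission-check logic the fix describes (argument names are illustrative; the real kernel code derives the per-key IAMR mask from the pkey number):

    #include <stdbool.h>
    #include <stdint.h>

    /* Execute permission must be decided by the IAMR bits for the key
     * alone; the AMR read/write-disable bits are irrelevant here. */
    static bool pkey_allows_execute(uint64_t iamr, uint64_t key_iamr_mask)
    {
            return !(iamr & key_iamr_mask);
    }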
2020-08-13 | riscv: use 16KB kernel stack on 64-bit | Andreas Schwab
commit 0cac21b02ba5f3095fd2dcc77c26a25a0b2432ed upstream. With the current 8KB stack size there are frequent overflows in a 64-bit configuration. We may split IRQ stacks off in the future, but this fixes a number of issues right now. Signed-off-by: Andreas Schwab <schwab@suse.de> Reviewed-by: Anup Patel <anup@brainfault.org> [Palmer: mention irqstack in the commit text] Fixes: 7db91e57a0ac ("RISC-V: Task implementation") Cc: stable@vger.kernel.org Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
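For illustration, the kind of change described, sketched against the usual riscv thread_info.h layout (constants assumed, not copied from the patch):

    #ifdef CONFIG_64BIT
    #define THREAD_SIZE_ORDER   (2)     /* 4 pages = 16KB stack */
    #else
    #define THREAD_SIZE_ORDER   (1)     /* 2 pages = 8KB stack  */
    #endif
    #define THREAD_SIZE         (PAGE_SIZE << THREAD_SIZE_ORDER)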
2020-08-13 | ARM: dts: socfpga: Align L2 cache-controller nodename with dtschema | Krzysztof Kozlowski
commit d7adfe5ffed9faa05f8926223086b101e14f700d upstream. Fix dtschema validator warnings like:

    l2-cache@fffff000: $nodename:0: 'l2-cache@fffff000' does not match '^(cache-controller|cpu)(@[0-9a-f,]+)*$'

Fixes: 475dc86d08de ("arm: dts: socfpga: Add a base DTSI for Altera's Arria10 SOC") Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Dinh Nguyen <dinguyen@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | ARM: dts: Fix dcan driver probe failed on am437x platform | dillon min
commit 2a4117df9b436a0e4c79d211284ab2097bcd00dc upstream. Got the following d_can probe errors with kernel 5.8-rc1 on am437x:

    [ 10.730822] CAN device driver interface
    Starting Wait for Network to be Configured...
    [ OK ] Reached target Network.
    [ 10.787363] c_can_platform 481cc000.can: probe failed
    [ 10.792484] c_can_platform: probe of 481cc000.can failed with error -2
    [ 10.799457] c_can_platform 481d0000.can: probe failed
    [ 10.804617] c_can_platform: probe of 481d0000.can failed with error -2

Actually, Tony has fixed this issue on am335x with the patch [3]. Since am437x has the same clock structure as am335x [1][2], reuse the code from Tony Lindgren's patch [3] to fix it.

    [1]: https://www.ti.com/lit/pdf/spruh73 Chapter-23, Figure 23-1. DCAN Integration
    [2]: https://www.ti.com/lit/pdf/spruhl7 Chapter-25, Figure 25-1. DCAN Integration
    [3]: commit 516f1117d0fb ("ARM: dts: Configure osc clock for d_can on am335x")

Fixes: 1a5cd7c23cc5 ("bus: ti-sysc: Enable all clocks directly during init to read revision") Signed-off-by: dillon min <dillon.minfei@gmail.com> [tony@atomide.com: aligned commit message a bit for readability] Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | arm64: dts: meson-gxl-s805x: reduce initial Mali450 core frequency | Neil Armstrong
commit b2037dafcf082cd24b88ae9283af628235df36e1 upstream. When starting at 744MHz, the Mali 450 core crashes on S805X based boards:

    lima d00c0000.gpu: IRQ ppmmu3 not found
    lima d00c0000.gpu: IRQ ppmmu4 not found
    lima d00c0000.gpu: IRQ ppmmu5 not found
    lima d00c0000.gpu: IRQ ppmmu6 not found
    lima d00c0000.gpu: IRQ ppmmu7 not found
    Internal error: synchronous external abort: 96000210 [#1] PREEMPT SMP
    Modules linked in:
    CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.7.2+ #492
    Hardware name: Libre Computer AML-S805X-AC (DT)
    pstate: 40000005 (nZcv daif -PAN -UAO)
    pc : lima_gp_init+0x28/0x188
    ...
    Call trace:
     lima_gp_init+0x28/0x188
     lima_device_init+0x334/0x534
     lima_pdev_probe+0xa4/0xe4
    ...
    Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b

Reverting to a safer 666Mhz frequency on the S805X that doesn't use the GP0 PLL makes it more stable. Fixes: fd47716479f5 ("ARM64: dts: add S805X based P241 board") Fixes: 0449b8e371ac ("arm64: dts: meson: add libretech aml-s805x-ac board") Signed-off-by: Neil Armstrong <narmstrong@baylibre.com> Signed-off-by: Kevin Hilman <khilman@baylibre.com> Link: https://lore.kernel.org/r/20200618132737.14243-1-narmstrong@baylibre.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | arm64: dts: meson: add missing gxl rng clock | Jerome Brunet
commit 95ca6f06dd4827ff63be5154120c7a8511cd9a41 upstream. The peripheral clock of the RNG is missing for gxl while it is present for gxbb. Fixes: 1b3f6d148692 ("ARM64: dts: meson-gx: add clock CLKID_RNG0 to hwrng node") Signed-off-by: Jerome Brunet <jbrunet@baylibre.com> Signed-off-by: Kevin Hilman <khilman@baylibre.com> Reviewed-by: Neil Armstrong <narmstrong@baylibre.com> Link: https://lore.kernel.org/r/20200617125346.1163527-1-jbrunet@baylibre.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | arm64: dts: g12-common: add parkmode_disable_ss_quirk on DWC3 controller | Neil Armstrong
commit a81bcfb6ac20cdd2e8dec3da14c8bbe1d18f6321 upstream. Under high load on the DWC3 SuperSpeed port, the controller crashes with:

    [ 221.141621] xhci-hcd xhci-hcd.0.auto: xHCI host not responding to stop endpoint command.
    [ 221.157631] xhci-hcd xhci-hcd.0.auto: Host halt failed, -110
    [ 221.157635] xhci-hcd xhci-hcd.0.auto: xHCI host controller not responding, assume dead
    [ 221.159901] xhci-hcd xhci-hcd.0.auto: xHCI host not responding to stop endpoint command.
    [ 221.159961] hub 2-1.1:1.0: hub_ext_port_status failed (err = -22)
    [ 221.160076] xhci-hcd xhci-hcd.0.auto: HC died; cleaning up
    [ 221.165946] usb 2-1.1-port1: cannot reset (err = -22)

Setting the parkmode_disable_ss_quirk quirk fixes the issue. Reported-by: Tim <elatllat@gmail.com> Signed-off-by: Neil Armstrong <narmstrong@baylibre.com> Signed-off-by: Kevin Hilman <khilman@baylibre.com> Cc: Jianxin Pan <jianxin.pan@amlogic.com> CC: Dongjin Kim <tobetter@gmail.com> Link: https://lore.kernel.org/r/20200221091532.8142-4-narmstrong@baylibre.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | ARM: at91: pm: add quirk for sam9x60's ulp1 | Claudiu Beznea
commit bb1a0e87e1c54cd884e9b92b1cec06b186edc7a0 upstream. On SAM9X60, two nop operations have to be introduced after setting the WAITMODE bit in CKGR_MOR. Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com> Link: https://lore.kernel.org/r/1579522208-19523-9-git-send-email-claudiu.beznea@microchip.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | arm64/alternatives: don't patch up internal branches | Ard Biesheuvel
commit 5679b28142193a62f6af93249c0477be9f0c669b upstream. Commit f7b93d42945c ("arm64/alternatives: use subsections for replacement sequences") moved the alternatives replacement sequences into subsections, in order to keep them as close as possible to the code that they replace. Unfortunately, this broke the logic in branch_insn_requires_update, which assumed that any branch into kernel executable code was a branch that required updating, which is no longer the case now that the code sequences that are patched in are in the same section as the patch site itself. So the only way to discriminate branches that require updating from ones that don't is to check whether the branch targets the replacement sequence itself, and so we can drop the call to kernel_text_address() entirely. Fixes: f7b93d42945c ("arm64/alternatives: use subsections for replacement sequences") Reported-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Alexandru Elisei <alexandru.elisei@arm.com> Link: https://lore.kernel.org/r/20200709125953.30918-1-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | arm64/alternatives: use subsections for replacement sequences | Ard Biesheuvel
commit f7b93d42945cc71e1346dd5ae07c59061d56745e upstream. When building very large kernels, the logic that emits replacement sequences for alternatives fails when relative branches are present in the code that is emitted into the .altinstr_replacement section and patched in at the original site and fixed up. The reason is that the linker will insert veneers if relative branches go out of range, and due to the relative distance of the .altinstr_replacement from the .text section where its branch targets usually live, veneers may be emitted at the end of the .altinstr_replacement section, with the relative branches in the sequence pointed at the veneers instead of the actual target. The alternatives patching logic will attempt to fix up the branch to point to its original target, which will be the veneer in this case, but given that the patch site is likely to be far away as well, it will be out of range and so patching will fail. There are other cases where these veneers are problematic, e.g., when the target of the branch is in .text while the patch site is in .init.text, in which case putting the replacement sequence inside .text may not help either. So let's use subsections to emit the replacement code as closely as possible to the patch site, to ensure that veneers are only likely to be emitted if they are required at the patch site as well, in which case they will be in range for the replacement sequence both before and after it is transported to the patch site. This will prevent alternative sequences in non-init code from being released from memory after boot, but this is tolerable given that the entire section is only 512 KB on an allyesconfig build (which weighs in at 500+ MB for the entire Image). Also, note that modules today carry the replacement sequences in non-init sections as well, and any of those that target init code will be emitted into init sections after this change. This fixes an early crash when booting an allyesconfig kernel on a system where any of the alternatives sequences containing relative branches are activated at boot (e.g., ARM64_HAS_PAN on TX2) Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Andre Przywara <andre.przywara@arm.com> Cc: Dave P Martin <dave.martin@arm.com> Link: https://lore.kernel.org/r/20200630081921.13443-1-ardb@kernel.org Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | m68k: mm: fix node memblock init | Angelo Dureghello
commit c43e55796dd4d13f4855971a4d7970ce2cd94db4 upstream. After pulling 5.7.0 (linux-next merge), mcf5441x mmu boot was hanging silently. memblock_add() seems not appropriate, since using MAX_NUMNODES as node id, while memblock_add_node() sets up memory for node id 0. Signed-off-by: Angelo Dureghello <angelo.dureghello@timesys.com> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Greg Ungerer <gerg@linux-m68k.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | m68k: nommu: register start of the memory with memblock | Mike Rapoport
commit d63bd8c81d8ab64db506ffde569cc8ff197516e2 upstream. The m68k nommu setup code didn't register the beginning of the physical memory with memblock because it was anyway occupied by the kernel. However, commit fa3354e4ea39 ("mm: free_area_init: use maximal zone PFNs rather than zone sizes") changed zones initialization to use memblock.memory to detect the zone extents and this caused inconsistency between zone PFNs and the actual PFNs: BUG: Bad page state in process swapper pfn:20165 page:41fe0ca0 refcount:0 mapcount:1 mapping:00000000 index:0x0 flags: 0x0() raw: 00000000 00000100 00000122 00000000 00000000 00000000 00000000 00000000 page dumped because: nonzero mapcount CPU: 0 PID: 1 Comm: swapper Not tainted 5.8.0-rc1-00001-g3a38f8a60c65-dirty #1 Stack from 404c9ebc: 404c9ebc 4029ab28 4029ab28 40088470 41fe0ca0 40299e21 40299df1 404ba2a4 00020165 00000000 41fd2c10 402c7ba0 41fd2c04 40088504 41fe0ca0 40299e21 00000000 40088a12 41fe0ca0 41fe0ca4 0000020a 00000000 00000001 402ca000 00000000 41fe0ca0 41fd2c10 41fd2c10 00000000 00000000 402b2388 00000001 400a0934 40091056 404c9f44 404c9f44 40088db4 402c7ba0 00000001 41fd2c04 41fe0ca0 41fd2000 41fe0ca0 40089e02 4026ecf4 40089e4e 41fe0ca0 ffffffff Call Trace: [<40088470>] 0x40088470 [<40088504>] 0x40088504 [<40088a12>] 0x40088a12 [<402ca000>] 0x402ca000 [<400a0934>] 0x400a0934 Adjust the memory registration with memblock to include the beginning of the physical memory and make sure that the area occupied by the kernel is marked as reserved. Signed-off-by: Mike Rapoport <rppt@linux.ibm.com> Signed-off-by: Greg Ungerer <gerg@linux-m68k.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-13 | x86/fpu: Reset MXCSR to default in kernel_fpu_begin() | Petteri Aimonen
commit 7ad816762f9bf89e940e618ea40c43138b479e10 upstream. Previously, kernel floating point code would run with the MXCSR control register value last set by userland code by the thread that was active on the CPU core just before kernel call. This could affect calculation results if rounding mode was changed, or a crash if a FPU/SIMD exception was unmasked. Restore MXCSR to the kernel's default value. [ bp: Carve out from a bigger patch by Petteri, add feature check, add FNINIT call too (amluto). ] Signed-off-by: Petteri Aimonen <jpa@git.mail.kapsi.fi> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://bugzilla.kernel.org/show_bug.cgi?id=207979 Link: https://lkml.kernel.org/r/20200624114646.28953-2-bp@alien8.de Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
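A self-contained sketch of the behaviour described (0x1f80 is the architectural MXCSR reset value; the helpers are written out here rather than relying on a particular kernel header, and the wrapper name is illustrative):

    #define MXCSR_DEFAULT 0x1f80

    static inline void ldmxcsr(unsigned int mxcsr)
    {
            asm volatile("ldmxcsr %0" : : "m" (mxcsr));
    }

    /* Done at kernel_fpu_begin() time: put SSE and x87 state into a known
     * default instead of inheriting whatever userspace last set. */
    static inline void kernel_fpu_reset_state(void)
    {
            ldmxcsr(MXCSR_DEFAULT);     /* mask all SSE exceptions, round-to-nearest */
            asm volatile("fninit");     /* reset x87 control/status state */
    }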
2020-08-13 | Revert "powerpc/kasan: Fix shadow pages allocation failure" | Christophe Leroy
commit b506923ee44ae87fc9f4de16b53feb313623e146 upstream. This reverts commit d2a91cef9bbdeb87b7449fdab1a6be6000930210. This commit moved too much work in kasan_init(). The allocation of shadow pages has to be moved for the reason explained in that patch, but the allocation of page tables still need to be done before switching to the final hash table. First revert the incorrect commit, following patch redoes it properly. Fixes: d2a91cef9bbd ("powerpc/kasan: Fix shadow pages allocation failure") Cc: stable@vger.kernel.org Reported-by: Erhard F. <erhard_f@mailbox.org> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://bugzilla.kernel.org/show_bug.cgi?id=208181 Link: https://lore.kernel.org/r/3667deb0911affbf999b99f87c31c77d5e870cd2.1593690707.git.christophe.leroy@csgroup.eu Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03 | ARM: percpu.h: fix build error | Grygorii Strashko
commit aa54ea903abb02303bf55855fb51e3fcee135d70 upstream. Fix build error for the case: defined(CONFIG_SMP) && !defined(CONFIG_CPU_V6), config: keystone_defconfig

      CC      arch/arm/kernel/signal.o
    In file included from ../include/linux/random.h:14,
                     from ../arch/arm/kernel/signal.c:8:
    ../arch/arm/include/asm/percpu.h: In function ‘__my_cpu_offset’:
    ../arch/arm/include/asm/percpu.h:29:34: error: ‘current_stack_pointer’ undeclared (first use in this function); did you mean ‘user_stack_pointer’?
         : "Q" (*(const unsigned long *)current_stack_pointer));
                                        ^~~~~~~~~~~~~~~~~~~~~
                                        user_stack_pointer

Fixes: f227e3ec3b5c ("random32: update the net random state on interrupt and activity") Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03 | s390/mm: fix huge pte soft dirty copying | Janosch Frank
commit 528a9539348a0234375dfaa1ca5dbbb2f8f8e8d2 upstream. If the pmd is soft dirty we must mark the pte as soft dirty (and not dirty). This fixes some cases for guest migration with huge page backings. Cc: <stable@vger.kernel.org> # 4.8 Fixes: bc29b7ac1d9f ("s390/mm: clean up pte/pmd encoding") Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03 | ARC: elf: use right ELF_ARCH | Vineet Gupta
commit b7faf971081a4e56147f082234bfff55135305cb upstream. Cc: <stable@vger.kernel.org> Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03 | ARC: entry: fix potential EFA clobber when TIF_SYSCALL_TRACE | Vineet Gupta
commit 00fdec98d9881bf5173af09aebd353ab3b9ac729 upstream. Trap handler for syscall tracing reads EFA (Exception Fault Address), in case strace wants PC of trap instruction (EFA is not part of pt_regs as of current code). However this EFA read is racy as it happens after dropping to pure kernel mode (re-enabling interrupts). A taken interrupt could context-switch, trigger a different task's trap, clobbering EFA for this execution context. Fix this by reading EFA early, before re-enabling interrupts. A slight side benefit is de-duplication of FAKE_RET_FROM_EXCPN in trap handler. The trap handler is common to both ARCompact and ARCv2 builds too. This just came out of code rework/review and no real problem was reported but is clearly a potential problem specially for strace. Cc: <stable@vger.kernel.org> Signed-off-by: Vineet Gupta <vgupta@synopsys.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03 | KVM: arm64: Fix kvm_reset_vcpu() return code being incorrect with SVE | Steven Price
commit 66b7e05dc0239c5817859f261098ba9cc2efbd2b upstream. If SVE is enabled then 'ret' can be assigned the return value of kvm_vcpu_enable_sve() which may be 0 causing future "goto out" sites to erroneously return 0 on failure rather than -EINVAL as expected. Remove the initialisation of 'ret' and make setting the return value explicit to avoid this situation in the future. Fixes: 9a3cdf26e336 ("KVM: arm64/sve: Allow userspace to enable SVE for vcpus") Cc: stable@vger.kernel.org Reported-by: James Morse <james.morse@arm.com> Signed-off-by: Steven Price <steven.price@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200617105456.28245-1-steven.price@arm.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03 | KVM: x86: Mark CR4.TSD as being possibly owned by the guest | Sean Christopherson
commit 7c83d096aed055a7763a03384f92115363448b71 upstream. Mark CR4.TSD as being possibly owned by the guest as that is indeed the case on VMX. Without TSD being tagged as possibly owned by the guest, a targeted read of CR4 to get TSD could observe a stale value. This bug is benign in the current code base as the sole consumer of TSD is the emulator (for RDTSC) and the emulator always "reads" the entirety of CR4 when grabbing bits. Add a build-time assertion in to ensure VMX doesn't hand over more CR4 bits without also updating x86. Fixes: 52ce3c21aec3 ("x86,kvm,vmx: Don't trap writes to CR4.TSD") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200703040422.31536-2-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
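Sketch of the idea (the bit list here is abbreviated and illustrative, not the full kernel definition):

    #include <asm/processor-flags.h>

    /* CR4.TSD joins the set of CR4 bits KVM treats as possibly guest-owned,
     * so a targeted read re-fetches the bit instead of using a stale cache. */
    #define KVM_POSSIBLE_CR4_GUEST_BITS  (X86_CR4_PGE | X86_CR4_PCE | X86_CR4_TSD /* ... */)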
2020-08-03 | KVM: x86: Inject #GP if guest attempts to toggle CR4.LA57 in 64-bit mode | Sean Christopherson
commit d74fcfc1f0ff4b6c26ecef1f9e48d8089ab4eaac upstream. Inject a #GP on MOV CR4 if CR4.LA57 is toggled in 64-bit mode, which is illegal per Intel's SDM:

    CR4.LA57
        57-bit linear addresses (bit 12 of CR4) ... blah blah blah ...
        This bit cannot be modified in IA-32e mode.

Note, the pseudocode for MOV CR doesn't call out the fault condition, which is likely why the check was missed during initial development. This is arguably an SDM bug and will hopefully be fixed in a future release of the SDM. Fixes: fd8cb433734ee ("KVM: MMU: Expose the LA57 feature to VM.") Cc: stable@vger.kernel.org Reported-by: Sebastien Boeuf <sebastien.boeuf@intel.com> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200703021714.5549-1-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
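A minimal sketch of the check described (the wrapper is illustrative; is_long_mode() and kvm_read_cr4() are the usual KVM helpers):

    static bool cr4_la57_toggle_is_invalid(struct kvm_vcpu *vcpu, unsigned long new_cr4)
    {
            /* CR4.LA57 cannot be modified while in IA-32e (64-bit) mode */
            return is_long_mode(vcpu) &&
                   ((new_cr4 ^ kvm_read_cr4(vcpu)) & X86_CR4_LA57);
    }

When this returns true, the MOV-to-CR4 emulation fails the write and the caller injects #GP.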
2020-08-03 | KVM: x86: bit 8 of non-leaf PDPEs is not reserved | Paolo Bonzini
commit 5ecad245de2ae23dc4e2dbece92f8ccfbaed2fa7 upstream. Bit 8 would be the "global" bit, which does not quite make sense for non-leaf page table entries. Intel ignores it; AMD ignores it in PDEs and PDPEs, but reserves it in PML4Es. Probably, earlier versions of the AMD manual documented it as reserved in PDPEs as well, and that behavior made it into KVM as well as kvm-unit-tests; fix it. Cc: stable@vger.kernel.org Reported-by: Nadav Amit <namit@vmware.com> Fixes: a0c0feb57992 ("KVM: x86: reserve bit 8 of non-leaf PDPEs and PML4Es in 64-bit mode on AMD", 2014-09-03) Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03 | KVM: arm64: Annotate hyp NMI-related functions as __always_inline | Alexandru Elisei
commit 7733306bd593c737c63110175da6c35b4b8bb32c upstream. The "inline" keyword is a hint for the compiler to inline a function. The functions system_uses_irq_prio_masking() and gic_write_pmr() are used by the code running at EL2 on a non-VHE system, so mark them as __always_inline to make sure they'll always be part of the .hyp.text section. This fixes the following splat when trying to run a VM: [ 47.625273] Kernel panic - not syncing: HYP panic: [ 47.625273] PS:a00003c9 PC:0000ca0b42049fc4 ESR:86000006 [ 47.625273] FAR:0000ca0b42049fc4 HPFAR:0000000010001000 PAR:0000000000000000 [ 47.625273] VCPU:0000000000000000 [ 47.647261] CPU: 1 PID: 217 Comm: kvm-vcpu-0 Not tainted 5.8.0-rc1-ARCH+ #61 [ 47.654508] Hardware name: Globalscale Marvell ESPRESSOBin Board (DT) [ 47.661139] Call trace: [ 47.663659] dump_backtrace+0x0/0x1cc [ 47.667413] show_stack+0x18/0x24 [ 47.670822] dump_stack+0xb8/0x108 [ 47.674312] panic+0x124/0x2f4 [ 47.677446] panic+0x0/0x2f4 [ 47.680407] SMP: stopping secondary CPUs [ 47.684439] Kernel Offset: disabled [ 47.688018] CPU features: 0x240402,20002008 [ 47.692318] Memory Limit: none [ 47.695465] ---[ end Kernel panic - not syncing: HYP panic: [ 47.695465] PS:a00003c9 PC:0000ca0b42049fc4 ESR:86000006 [ 47.695465] FAR:0000ca0b42049fc4 HPFAR:0000000010001000 PAR:0000000000000000 [ 47.695465] VCPU:0000000000000000 ]--- The instruction abort was caused by the code running at EL2 trying to fetch an instruction which wasn't mapped in the EL2 translation tables. Using objdump showed the two functions as separate symbols in the .text section. Fixes: 85738e05dc38 ("arm64: kvm: Unmask PMR before entering guest") Cc: stable@vger.kernel.org Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Acked-by: James Morse <james.morse@arm.com> Link: https://lore.kernel.org/r/20200618171254.1596055-1-alexandru.elisei@arm.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03 | KVM: arm64: Stop clobbering x0 for HVC_SOFT_RESTART | Andrew Scull
commit b9e10d4a6c9f5cbe6369ce2c17ebc67d2e5a4be5 upstream. HVC_SOFT_RESTART is given values for x0-2 that it should install before exiting to the new address, so it should not set x0 to the stub HVC success or failure code. Fixes: af42f20480bf1 ("arm64: hyp-stub: Zero x0 on successful stub handling") Cc: stable@vger.kernel.org Signed-off-by: Andrew Scull <ascull@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200706095259.1338221-1-ascull@google.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03 | KVM: arm64: Fix definition of PAGE_HYP_DEVICE | Will Deacon
commit 68cf617309b5f6f3a651165f49f20af1494753ae upstream. PAGE_HYP_DEVICE is intended to encode attribute bits for an EL2 stage-1 pte mapping a device. Unfortunately, it includes PROT_DEVICE_nGnRE which encodes attributes for EL1 stage-1 mappings such as UXN and nG, which are RES0 for EL2, and DBM which is meaningless as TCR_EL2.HD is not set. Fix the definition of PAGE_HYP_DEVICE so that it doesn't set RES0 bits at EL2. Acked-by: Marc Zyngier <maz@kernel.org> Cc: Marc Zyngier <maz@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20200708162546.26176-1-will@kernel.org Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
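For reference, a sketch of a PAGE_HYP_DEVICE built only from EL2-valid attributes, along the lines the commit describes (illustrative, not necessarily the verbatim arm64 definition):

    #define PAGE_HYP_DEVICE  __pgprot(_PROT_DEFAULT | PTE_ATTRINDX(MT_DEVICE_nGnRE) | \
                                      PTE_HYP | PTE_HYP_XN)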
2020-08-03 | arm64: kgdb: Fix single-step exception handling oops | Wei Li
commit 8523c006264df65aac7d77284cc69aac46a6f842 upstream. After entering kdb due to breakpoint, when we execute 'ss' or 'go' (will delay installing breakpoints, do single-step first), it won't work correctly, and it will enter kdb due to oops. It's because the reason gotten in kdb_stub() is not as expected, and it seems that the ex_vector for single-step should be 0, like what arch powerpc/sh/parisc has implemented. Before the patch: Entering kdb (current=0xffff8000119e2dc0, pid 0) on processor 0 due to Keyboard Entry [0]kdb> bp printk Instruction(i) BP #0 at 0xffff8000101486cc (printk) is enabled addr at ffff8000101486cc, hardtype=0 installed=0 [0]kdb> g / # echo h > /proc/sysrq-trigger Entering kdb (current=0xffff0000fa878040, pid 266) on processor 3 due to Breakpoint @ 0xffff8000101486cc [3]kdb> ss Entering kdb (current=0xffff0000fa878040, pid 266) on processor 3 Oops: (null) due to oops @ 0xffff800010082ab8 CPU: 3 PID: 266 Comm: sh Not tainted 5.7.0-rc4-13839-gf0e5ad491718 #6 Hardware name: linux,dummy-virt (DT) pstate: 00000085 (nzcv daIf -PAN -UAO) pc : el1_irq+0x78/0x180 lr : __handle_sysrq+0x80/0x190 sp : ffff800015003bf0 x29: ffff800015003d20 x28: ffff0000fa878040 x27: 0000000000000000 x26: ffff80001126b1f0 x25: ffff800011b6a0d8 x24: 0000000000000000 x23: 0000000080200005 x22: ffff8000101486cc x21: ffff800015003d30 x20: 0000ffffffffffff x19: ffff8000119f2000 x18: 0000000000000000 x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000 x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000 x11: 0000000000000000 x10: 0000000000000000 x9 : 0000000000000000 x8 : ffff800015003e50 x7 : 0000000000000002 x6 : 00000000380b9990 x5 : ffff8000106e99e8 x4 : ffff0000fadd83c0 x3 : 0000ffffffffffff x2 : ffff800011b6a0d8 x1 : ffff800011b6a000 x0 : ffff80001130c9d8 Call trace: el1_irq+0x78/0x180 printk+0x0/0x84 write_sysrq_trigger+0xb0/0x118 proc_reg_write+0xb4/0xe0 __vfs_write+0x18/0x40 vfs_write+0xb0/0x1b8 ksys_write+0x64/0xf0 __arm64_sys_write+0x14/0x20 el0_svc_common.constprop.2+0xb0/0x168 do_el0_svc+0x20/0x98 el0_sync_handler+0xec/0x1a8 el0_sync+0x140/0x180 [3]kdb> After the patch: Entering kdb (current=0xffff8000119e2dc0, pid 0) on processor 0 due to Keyboard Entry [0]kdb> bp printk Instruction(i) BP #0 at 0xffff8000101486cc (printk) is enabled addr at ffff8000101486cc, hardtype=0 installed=0 [0]kdb> g / # echo h > /proc/sysrq-trigger Entering kdb (current=0xffff0000fa852bc0, pid 268) on processor 0 due to Breakpoint @ 0xffff8000101486cc [0]kdb> g Entering kdb (current=0xffff0000fa852bc0, pid 268) on processor 0 due to Breakpoint @ 0xffff8000101486cc [0]kdb> ss Entering kdb (current=0xffff0000fa852bc0, pid 268) on processor 0 due to SS trap @ 0xffff800010082ab8 [0]kdb> Fixes: 44679a4f142b ("arm64: KGDB: Add step debugging support") Signed-off-by: Wei Li <liwei391@huawei.com> Tested-by: Douglas Anderson <dianders@chromium.org> Reviewed-by: Douglas Anderson <dianders@chromium.org> Link: https://lore.kernel.org/r/20200509214159.19680-2-liwei391@huawei.com Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03 | x86/entry: Increase entry_stack size to a full page | Peter Zijlstra
commit c7aadc09321d8f9a1d3bd1e6d8a47222ecddf6c5 upstream. Marco crashed in bad_iret with a Clang11/KCSAN build due to overflowing the stack. Now that we run C code on it, expand it to a full page. Suggested-by: Andy Lutomirski <luto@amacapital.net> Reported-by: Marco Elver <elver@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com> Tested-by: Marco Elver <elver@google.com> Link: https://lkml.kernel.org/r/20200618144801.819246178@infradead.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
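Sketch of the change described, assuming the usual x86 entry_stack layout:

    /* was a small fixed-size array of longs; now a full page */
    struct entry_stack {
            char stack[PAGE_SIZE];
    };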
2020-08-03 | ARM: imx6: add missing put_device() call in imx6q_suspend_init() | yu kuai
commit 4845446036fc9c13f43b54a65c9b757c14f5141b upstream. If of_find_device_by_node() succeeds, imx6q_suspend_init() doesn't have a corresponding put_device(). Thus add a jump target to fix the exception handling for this function implementation. Signed-off-by: yu kuai <yukuai3@huawei.com> Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03 | s390/kasan: fix early pgm check handler execution | Vasily Gorbik
commit 998f5bbe3dbdab81c1cfb1aef7c3892f5d24f6c7 upstream. Currently if early_pgm_check_handler is called it ends up in pgm check loop. The problem is that early_pgm_check_handler is instrumented by KASAN but executed without DAT flag enabled which leads to addressing exception when KASAN checks try to access shadow memory. Fix that by executing early handlers with DAT flag on under KASAN as expected. Reported-and-tested-by: Alexander Egorenkov <egorenar@linux.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03 | powerpc/kvm/book3s64: Fix kernel crash with nested kvm & DEBUG_VIRTUAL | Aneesh Kumar K.V
commit c1ed1754f271f6b7acb1bfdc8cfb62220fbed423 upstream. With CONFIG_DEBUG_VIRTUAL=y, __pa() checks the addr value and if it's less than PAGE_OFFSET it leads to a BUG().

    #define __pa(x) ({ VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET); (unsigned long)(x) & 0x0fffffffffffffffUL; })

    kernel BUG at arch/powerpc/kvm/book3s_64_mmu_radix.c:43!
    cpu 0x70: Vector: 700 (Program Check) at [c0000018a2187360]
        pc: c000000000161b30: __kvmhv_copy_tofrom_guest_radix+0x130/0x1f0
        lr: c000000000161d5c: kvmhv_copy_from_guest_radix+0x3c/0x80
    ...
    kvmhv_copy_from_guest_radix+0x3c/0x80
    kvmhv_load_from_eaddr+0x48/0xc0
    kvmppc_ld+0x98/0x1e0
    kvmppc_load_last_inst+0x50/0x90
    kvmppc_hv_emulate_mmio+0x288/0x2b0
    kvmppc_book3s_radix_page_fault+0xd8/0x2b0
    kvmppc_book3s_hv_page_fault+0x37c/0x1050
    kvmppc_vcpu_run_hv+0xbb8/0x1080
    kvmppc_vcpu_run+0x34/0x50
    kvm_arch_vcpu_ioctl_run+0x2fc/0x410
    kvm_vcpu_ioctl+0x2b4/0x8f0
    ksys_ioctl+0xf4/0x150
    sys_ioctl+0x28/0x80
    system_call_exception+0x104/0x1d0
    system_call_common+0xe8/0x214

kvmhv_copy_tofrom_guest_radix() uses a NULL value for to/from to indicate the direction of copy. Avoid calling __pa() if the value is NULL to avoid the BUG(). Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> [mpe: Massage change log a bit to mention CONFIG_DEBUG_VIRTUAL] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200611120159.680284-1-aneesh.kumar@linux.ibm.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03 | ARM: dts: omap4-droid4: Fix spi configuration and increase rate | Tony Lindgren
commit 0df12a01f4857495816b05f048c4c31439446e35 upstream. We can currently sometimes get "RXS timed out" errors and "EOT timed out" errors with spi transfers. These errors can be made easy to reproduce by reading the cpcap iio values in a loop while keeping the CPUs busy by also reading /dev/urandom. The "RXS timed out" errors we can fix by adding spi-cpol and spi-cpha in addition to the spi-cs-high property we already have. The "EOT timed out" errors we can fix by increasing the spi clock rate to 9.6 MHz. The similar MC13783 PMIC is documented to work at spi clock rates up to 20 MHz, so let's assume we can pick any rate up to 20 MHz also for cpcap. Cc: maemo-leste@lists.dyne.org Cc: Merlijn Wajer <merlijn@wizzup.org> Cc: Pavel Machek <pavel@ucw.cz> Cc: Sebastian Reichel <sre@kernel.org> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03 | KVM: s390: reduce number of IO pins to 1 | Christian Borntraeger
commit 774911290c589e98e3638e73b24b0a4d4530e97c upstream. The current number of KVM_IRQCHIP_NUM_PINS results in an order 3 allocation (32kb) for each guest start/restart. This can result in OOM killer activity even with free swap when the memory is fragmented enough: kernel: qemu-system-s39 invoked oom-killer: gfp_mask=0x440dc0(GFP_KERNEL_ACCOUNT|__GFP_COMP|__GFP_ZERO), order=3, oom_score_adj=0 kernel: CPU: 1 PID: 357274 Comm: qemu-system-s39 Kdump: loaded Not tainted 5.4.0-29-generic #33-Ubuntu kernel: Hardware name: IBM 8562 T02 Z06 (LPAR) kernel: Call Trace: kernel: ([<00000001f848fe2a>] show_stack+0x7a/0xc0) kernel: [<00000001f8d3437a>] dump_stack+0x8a/0xc0 kernel: [<00000001f8687032>] dump_header+0x62/0x258 kernel: [<00000001f8686122>] oom_kill_process+0x172/0x180 kernel: [<00000001f8686abe>] out_of_memory+0xee/0x580 kernel: [<00000001f86e66b8>] __alloc_pages_slowpath+0xd18/0xe90 kernel: [<00000001f86e6ad4>] __alloc_pages_nodemask+0x2a4/0x320 kernel: [<00000001f86b1ab4>] kmalloc_order+0x34/0xb0 kernel: [<00000001f86b1b62>] kmalloc_order_trace+0x32/0xe0 kernel: [<00000001f84bb806>] kvm_set_irq_routing+0xa6/0x2e0 kernel: [<00000001f84c99a4>] kvm_arch_vm_ioctl+0x544/0x9e0 kernel: [<00000001f84b8936>] kvm_vm_ioctl+0x396/0x760 kernel: [<00000001f875df66>] do_vfs_ioctl+0x376/0x690 kernel: [<00000001f875e304>] ksys_ioctl+0x84/0xb0 kernel: [<00000001f875e39a>] __s390x_sys_ioctl+0x2a/0x40 kernel: [<00000001f8d55424>] system_call+0xd8/0x2c8 As far as I can tell s390x does not use the iopins as we bail our for anything other than KVM_IRQ_ROUTING_S390_ADAPTER and the chip/pin is only used for KVM_IRQ_ROUTING_IRQCHIP. So let us use a small number to reduce the memory footprint. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Link: https://lore.kernel.org/r/20200617083620.5409-1-borntraeger@de.ibm.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-07-24 | copy_xstate_to_kernel: Fix typo which caused GDB regression | Kevin Buettner
commit 5714ee50bb4375bd586858ad800b1d9772847452 upstream. This fixes a regression encountered while running the gdb.base/corefile.exp test in GDB's test suite. In my testing, the typo prevented the sw_reserved field of struct fxregs_state from being output to the kernel XSAVES area. Thus the correct mask corresponding to XCR0 was not present in the core file for GDB to interrogate, resulting in the following behavior:

    [kev@f32-1 gdb]$ ./gdb -q testsuite/outputs/gdb.base/corefile/corefile testsuite/outputs/gdb.base/corefile/corefile.core
    Reading symbols from testsuite/outputs/gdb.base/corefile/corefile...
    [New LWP 232880]
    warning: Unexpected size of section `.reg-xstate/232880' in core file.

With the typo fixed, the test works again as expected. Signed-off-by: Kevin Buettner <kevinb@redhat.com> Fixes: 9e4636545933 ("copy_xstate_to_kernel(): don't leave parts of destination uninitialized") Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Dave Airlie <airlied@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-07-24 | ARM: imx: Remove imx_add_imx_dma() unused irq_err argument | Bjorn Helgaas
commit f8951dce10c092075e39ef12c29022548e4c63db upstream. No callers of imx_add_imx_dma() need an error IRQ, so they supply 0 as "irq_err", which means we register a resource of IRQ 0, which is invalid and causes a warning if used. Remove the "irq_err" argument altogether so there's no chance of trying to use the invalid IRQ 0. Fixes: a85a6c86c25be ("driver core: platform: Clarify that IRQ 0 is invalid") Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Shawn Guo <shawnguo@kernel.org> Cc: Sascha Hauer <s.hauer@pengutronix.de> Cc: kernel@pengutronix.de Cc: Fabio Estevam <festevam@gmail.com> Cc: linux-imx@nxp.com Cc: linux-arm-kernel@lists.infradead.org Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-07-24 | ARM: imx: Provide correct number of resources when registering gpio devices | Guenter Roeck
commit 2a83544007aba792167615c393e6154824f3a175 upstream. Since commit a85a6c86c25be ("driver core: platform: Clarify that IRQ 0 is invalid"), the kernel is a bit touchy when it encounters interrupt 0. As a result, there are lots of warnings such as the following when booting systems such as 'kzm'. WARNING: CPU: 0 PID: 1 at drivers/base/platform.c:224 platform_get_irq_optional+0x118/0x128 0 is an invalid IRQ number Modules linked in: CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.8.0-rc3 #1 Hardware name: Kyoto Microcomputer Co., Ltd. KZM-ARM11-01 [<c01127d4>] (unwind_backtrace) from [<c010c620>] (show_stack+0x10/0x14) [<c010c620>] (show_stack) from [<c06f5f54>] (dump_stack+0xe8/0x120) [<c06f5f54>] (dump_stack) from [<c0128878>] (__warn+0xe4/0x108) [<c0128878>] (__warn) from [<c0128910>] (warn_slowpath_fmt+0x74/0xbc) [<c0128910>] (warn_slowpath_fmt) from [<c08b8e84>] (platform_get_irq_optional+0x118/0x128) [<c08b8e84>] (platform_get_irq_optional) from [<c08b8eb4>] (platform_irq_count+0x20/0x3c) [<c08b8eb4>] (platform_irq_count) from [<c0728660>] (mxc_gpio_probe+0x8c/0x494) [<c0728660>] (mxc_gpio_probe) from [<c08b93cc>] (platform_drv_probe+0x48/0x98) [<c08b93cc>] (platform_drv_probe) from [<c08b703c>] (really_probe+0x214/0x344) [<c08b703c>] (really_probe) from [<c08b7274>] (driver_probe_device+0x58/0xb4) [<c08b7274>] (driver_probe_device) from [<c08b7478>] (device_driver_attach+0x58/0x60) [<c08b7478>] (device_driver_attach) from [<c08b7504>] (__driver_attach+0x84/0xc0) [<c08b7504>] (__driver_attach) from [<c08b50f8>] (bus_for_each_dev+0x78/0xb8) [<c08b50f8>] (bus_for_each_dev) from [<c08b62cc>] (bus_add_driver+0x154/0x1e0) [<c08b62cc>] (bus_add_driver) from [<c08b82b8>] (driver_register+0x74/0x108) [<c08b82b8>] (driver_register) from [<c0102320>] (do_one_initcall+0x80/0x3b4) [<c0102320>] (do_one_initcall) from [<c1501008>] (kernel_init_freeable+0x170/0x208) [<c1501008>] (kernel_init_freeable) from [<c0e178d4>] (kernel_init+0x8/0x11c) [<c0e178d4>] (kernel_init) from [<c0100134>] (ret_from_fork+0x14/0x20) As it turns out, mxc_register_gpio() is a bit lax when setting the number of resources: it registers a resource with interrupt 0 when in reality there is no such interrupt. Fix the problem by not declaring the second interrupt resource if there is no second interrupt. Fixes: a85a6c86c25be ("driver core: platform: Clarify that IRQ 0 is invalid") Cc: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Shawn Guo <shawnguo@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-07-24 | MIPS: Add missing EHB in mtc0 -> mfc0 sequence for DSPen | Hauke Mehrtens
commit fcec538ef8cca0ad0b84432235dccd9059c8e6f8 upstream. This resolves the hazard between the mtc0 in the change_c0_status() and the mfc0 in configure_exception_vector(). Without resolving this hazard configure_exception_vector() could read an old value and would restore this old value again. This would revert the changes change_c0_status() did. I checked this by printing out the read_c0_status() at the end of per_cpu_trap_init() and the ST0_MX is not set without this patch. The hazard is documented in the MIPS Architecture Reference Manual Vol. III: MIPS32/microMIPS32 Privileged Resource Architecture (MD00088), rev 6.03 table 8.1 which includes:

    Producer | Consumer | Hazard
    ---------|----------|----------------------------
    mtc0     | mfc0     | any coprocessor 0 register

I saw this hazard on an Atheros AR9344 rev 2 SoC with a MIPS 74Kc CPU. There the change_c0_status() function would activate the DSPen by setting ST0_MX in the c0_status register. This was reverted and then the system got a DSP exception when the DSP registers were saved in save_dsp() in the first process switch. The crash looks like this:

    [    0.089999] Mount-cache hash table entries: 1024 (order: 0, 4096 bytes, linear)
    [    0.097796] Mountpoint-cache hash table entries: 1024 (order: 0, 4096 bytes, linear)
    [    0.107070] Kernel panic - not syncing: Unexpected DSP exception
    [    0.113470] Rebooting in 1 seconds..

We saw this problem in OpenWrt only on the MIPS 74Kc based Atheros SoCs, not on the 24Kc based SoCs. We only saw it with kernel 5.4 not with kernel 4.19, in addition we had to use GCC 8.4 or 9.X, with GCC 8.3 it did not happen. In the kernel I bisected this problem to commit 9012d011660e ("compiler: allow all arches to enable CONFIG_OPTIMIZE_INLINING"), but when this was reverted it also happened after commit 172dcd935c34b ("MIPS: Always allocate exception vector for MIPSr2+"). Commit 0b24cae4d535 ("MIPS: Add missing EHB in mtc0 -> mfc0 sequence.") does similar changes to a different file. I am not sure if there are more places affected by this problem. Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de> Cc: <stable@vger.kernel.org> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
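A sketch of the pattern the fix describes (the wrapper name is made up; change_c0_status(), back_to_back_c0_hazard() and ST0_MX are the standard MIPS helpers and definitions):

    #include <asm/mipsregs.h>
    #include <asm/hazards.h>

    static void enable_dsp_with_hazard_barrier(void)
    {
            change_c0_status(ST0_MX, ST0_MX);   /* mtc0: set Status.MX (DSPen) */
            back_to_back_c0_hazard();           /* EHB: make it visible to the next mfc0 */
    }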
2020-07-24 | MIPS: lantiq: xway: sysctrl: fix the GPHY clock alias names | Martin Blumenstingl
commit 03e62fd67d3ab33f39573fc8787d89dc9b4d7255 upstream. The dt-bindings for the GSWIP describe that the node should be named "switch". Use the same name in sysctrl.c so the GSWIP driver can actually find the "gphy0" and "gphy1" clocks. Fixes: 14fceff4771e51 ("net: dsa: Add Lantiq / Intel DSA driver for vrx200") Cc: stable@vger.kernel.org Signed-off-by: Martin Blumenstingl <martin.blumenstingl@googlemail.com> Acked-by: Hauke Mehrtens <hauke@hauke-m.de> Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-07-24 | s390/debug: avoid kernel warning on too large number of pages | Christian Borntraeger
commit 827c4913923e0b441ba07ba4cc41e01181102303 upstream. When specifying insanely large debug buffers a kernel warning is printed. The debug code does handle the error gracefully, though. Instead of duplicating the check let us silence the warning to avoid crashes when panic_on_warn is used. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-07-16 | arm64: perf: Report the PC value in REGS_ABI_32 mode | Jiping Ma
commit 8dfe804a4031ca6ba3a3efb2048534249b64f3a5 upstream. A 32-bit perf querying the registers of a compat task using REGS_ABI_32 will receive zeroes from w15, when it expects to find the PC. Return the PC value for register dwarf register 15 when returning register values for a compat task to perf. Cc: <stable@vger.kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Jiping Ma <jiping.ma2@windriver.com> Link: https://lore.kernel.org/r/1589165527-188401-1-git-send-email-jiping.ma2@windriver.com [will: Shuffled code and added a comment] Signed-off-by: Will Deacon <will@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-07-16 | x86/asm/64: Align start of __clear_user() loop to 16-bytes | Matt Fleming
commit bb5570ad3b54e7930997aec76ab68256d5236d94 upstream. x86 CPUs can suffer severe performance drops if a tight loop, such as the ones in __clear_user(), straddles a 16-byte instruction fetch window, or worse, a 64-byte cacheline. This issue was discovered in the SUSE kernel with the following commit, 1153933703d9 ("x86/asm/64: Micro-optimize __clear_user() - Use immediate constants"), which increased the code object size from 10 bytes to 15 bytes and caused the 8-byte copy loop in __clear_user() to be split across a 64-byte cacheline. Aligning the start of the loop to 16-bytes makes this fit neatly inside a single instruction fetch window again and restores the performance of __clear_user(), which is used heavily when reading from /dev/zero. Here are some numbers from running libmicro's read_z* and pread_z* microbenchmarks which read from /dev/zero:

    Zen 1 (Naples)

    libmicro-file
                                  5.7.0-rc6             5.7.0-rc6             5.7.0-rc6
                                            revert-1153933703d9+              align16+
    Time mean95-pread_z100k    9.9195 (  0.00%)      5.9856 ( 39.66%)      5.9938 ( 39.58%)
    Time mean95-pread_z10k     1.1378 (  0.00%)      0.7450 ( 34.52%)      0.7467 ( 34.38%)
    Time mean95-pread_z1k      0.2623 (  0.00%)      0.2251 ( 14.18%)      0.2252 ( 14.15%)
    Time mean95-pread_zw100k   9.9974 (  0.00%)      6.0648 ( 39.34%)      6.0756 ( 39.23%)
    Time mean95-read_z100k     9.8940 (  0.00%)      5.9885 ( 39.47%)      5.9994 ( 39.36%)
    Time mean95-read_z10k      1.1394 (  0.00%)      0.7483 ( 34.33%)      0.7482 ( 34.33%)

Note that this doesn't affect Haswell or Broadwell microarchitectures which seem to avoid the alignment issue by executing the loop straight out of the Loop Stream Detector (verified using perf events). Fixes: 1153933703d9 ("x86/asm/64: Micro-optimize __clear_user() - Use immediate constants") Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: <stable@vger.kernel.org> # v4.19+ Link: https://lkml.kernel.org/r/20200618102002.30034-1-matt@codeblueprint.co.uk Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
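As a toy illustration of the fix's idea, not the kernel's __clear_user() itself, here is a loop whose head is forced onto a 16-byte boundary with an alignment directive:

    /* zero n 8-byte words starting at dst (x86-64, GCC inline asm) */
    static void zero_words(unsigned long *dst, unsigned long n)
    {
            if (!n)
                    return;
            asm volatile(
                    "       .p2align 4\n"           /* align loop start to 16 bytes */
                    "1:     movq    $0, (%0)\n"
                    "       addq    $8, %0\n"
                    "       decq    %1\n"
                    "       jnz     1b\n"
                    : "+r" (dst), "+r" (n)
                    :
                    : "memory");
    }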
2020-07-16 | x86/cpu: Use pinning mask for CR4 bits needing to be 0 | Kees Cook
commit a13b9d0b97211579ea63b96c606de79b963c0f47 upstream. The X86_CR4_FSGSBASE bit of CR4 should not change after boot[1]. Older kernels should enforce this bit to zero, and newer kernels need to enforce it depending on boot-time configuration (e.g. "nofsgsbase"). To support a pinned bit being either 1 or 0, use an explicit mask in combination with the expected pinned bit values. [1] https://lore.kernel.org/lkml/20200527103147.GI325280@hirez.programming.kicks-ass.net Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/202006082013.71E29A42@keescook Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
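A sketch of the masked-pinning scheme described (names follow the commit's intent but should be read as illustrative):

    #include <asm/processor-flags.h>

    static const unsigned long cr4_pinned_mask =
            X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP | X86_CR4_FSGSBASE;
    static unsigned long cr4_pinned_bits;   /* expected values of the pinned bits, latched at boot */

    static unsigned long cr4_enforce_pinning(unsigned long val)
    {
            /* compare only the pinned bits; a bit may be pinned to 0 or to 1 */
            if ((val & cr4_pinned_mask) != cr4_pinned_bits)
                    val = (val & ~cr4_pinned_mask) | cr4_pinned_bits;
            return val;
    }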
2020-07-16 | KVM: nVMX: Plumb L2 GPA through to PML emulation | Sean Christopherson
commit 2dbebf7ae1ed9a420d954305e2c9d5ed39ec57c3 upstream. Explicitly pass the L2 GPA to kvm_arch_write_log_dirty(), which for all intents and purposes is vmx_write_pml_buffer(), instead of having the latter pull the GPA from vmcs.GUEST_PHYSICAL_ADDRESS. If the dirty bit update is the result of KVM emulation (rare for L2), then the GPA in the VMCS may be stale and/or hold a completely unrelated GPA. Fixes: c5f983f6e8455 ("nVMX: Implement emulated Page Modification Logging") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200622215832.22090-2-sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>