path: root/lib
2020-08-13  random32: remove net_rand_state from the latent entropy gcc plugin  (Linus Torvalds)
commit 83bdc7275e6206f560d247be856bceba3e1ed8f2 upstream. It turns out that the plugin right now ends up being really unhappy about the change from 'static' to 'extern' storage that happened in commit f227e3ec3b5c ("random32: update the net random state on interrupt and activity"). This is probably a trivial fix for the latent_entropy plugin, but for now, just remove net_rand_state from the list of things the plugin worries about. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Emese Revfy <re.emese@gmail.com> Cc: Kees Cook <keescook@chromium.org> Cc: Willy Tarreau <w@1wt.eu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-08-03  random32: update the net random state on interrupt and activity  (Willy Tarreau)
commit f227e3ec3b5cad859ad15666874405e8c1bbc1d4 upstream. This modifies the first 32 bits out of the 128 bits of a random CPU's net_rand_state on interrupt or CPU activity to complicate remote observations that could lead to guessing the network RNG's internal state. Note that depending on some network devices' interrupt rate moderation or binding, this re-seeding might happen on every packet or even almost never. In addition, with NOHZ some CPUs might not even get timer interrupts, leaving their local state rarely updated, while they are running networked processes making use of the random state. For this reason, we also perform this update in update_process_times() in order to at least update the state when there is user or system activity, since it's the only case we care about. Reported-by: Amit Klein <aksecurity@gmail.com> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Eric Dumazet <edumazet@google.com> Cc: "Jason A. Donenfeld" <Jason@zx2c4.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Kees Cook <keescook@chromium.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: <stable@vger.kernel.org> Signed-off-by: Willy Tarreau <w@1wt.eu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
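As a concrete reference, the activity-based update described above boils down to one line in the timer tick path; a minimal sketch in kernel C (the exact mixing expression is an assumption reconstructed from the commit text, not a quote of the patch):

  /* kernel/time/timer.c -- sketch of the activity-based reseed */
  void update_process_times(int user_tick)
  {
      /* ... existing tick accounting and timer processing ... */

      /* Fold a non-constant value into the first 32 bits of this CPU's
       * net_rand_state so that user/system activity perturbs the network
       * PRNG even on CPUs that see few or no device interrupts. */
      this_cpu_add(net_rand_state.s1, rol32(jiffies, 24) + user_tick);
  }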
2020-07-16  test_objagg: Fix potential memory leak in error handling  (Aditya Pakki)
commit a6379f0ad6375a707e915518ecd5c2270afcd395 upstream. In case of failure of check_expect_hints_stats(), the resources allocated by objagg_hints_get should be freed. The patch fixes this issue. Signed-off-by: Aditya Pakki <pakki001@umn.edu> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-07-13  lib/zlib: remove outdated and incorrect pre-increment optimization  (Jann Horn)
commit acaab7335bd6f0c0b54ce3a00bd7f18222ce0f5f upstream. The zlib inflate code has an old micro-optimization based on the assumption that for pre-increment memory accesses, the compiler will generate code that fits better into the processor's pipeline than what would be generated for post-increment memory accesses. This optimization was already removed in upstream zlib in 2016: https://github.com/madler/zlib/commit/9aaec95e8211 This optimization causes UB according to C99, which says in section 6.5.6 "Additive operators": "If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined". This UB is not only a theoretical concern, but can also cause trouble for future work on compiler-based sanitizers. According to the zlib commit, this optimization also is not optimal anymore with modern compilers. Replace uses of OFF, PUP and UP_UNALIGNED with their definitions in the POSTINC case, and remove the macro definitions, just like in the upstream patch. Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Mikhail Zaslonko <zaslonko@linux.ibm.com> Link: http://lkml.kernel.org/r/20200507123112.252723-1-jannh@google.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
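To see why the removed idiom was undefined: the pre-increment form only yields element 0 if the working pointer starts one element *before* the buffer, and forming that "one before the start" pointer is precisely the C99 6.5.6 violation. A stand-alone sketch of the two macro styles (the macro names mirror zlib's PUP; the demo scaffolding is hypothetical):

  #include <stdio.h>

  #define PUP_PRE(p)  *++(p)  /* removed: needs p = buf - 1, which is UB   */
  #define PUP_POST(p) *(p)++  /* kept: p = buf, fully defined by C99 6.5.6 */

  int main(void)
  {
      unsigned char buf[4] = { 'z', 'l', 'i', 'b' };
      unsigned char *p = buf;             /* POSTINC: start at the data */

      for (int i = 0; i < 4; i++)
          putchar(PUP_POST(p));           /* reads buf[0]..buf[3]       */
      putchar('\n');
      return 0;
  }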
2020-07-11  lib/test_kasan: add bitops tests  (Marco Elver)
commit 19a33ca6c209f5e22c73b6beb4b1974153e93050 upstream. Patch series "Bitops instrumentation for KASAN", v5. This patch (of 3): This adds bitops tests to the test_kasan module. In a follow-up patch, support for bitops instrumentation will be added. Link: http://lkml.kernel.org/r/20190613125950.197667-2-elver@google.com Signed-off-by: Marco Elver <elver@google.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@google.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-07-11  lib/mpi: Fix 64-bit MIPS build with Clang  (Nathan Chancellor)
commit 18f1ca46858eac22437819937ae44aa9a8f9f2fa upstream. When building 64r6_defconfig with CONFIG_MIPS32_O32 disabled and CONFIG_CRYPTO_RSA enabled:

  lib/mpi/generic_mpih-mul1.c:37:24: error: invalid use of a cast in a inline asm context requiring an l-value: remove the cast or build with -fheinous-gnu-extensions
          umul_ppmm(prod_high, prod_low, s1_ptr[j], s2_limb);
          ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  lib/mpi/longlong.h:664:22: note: expanded from macro 'umul_ppmm'
                  : "=d" ((UDItype)(w0))
                         ~~~~~~~~~~^~~
  lib/mpi/generic_mpih-mul1.c:37:13: error: invalid use of a cast in a inline asm context requiring an l-value: remove the cast or build with -fheinous-gnu-extensions
          umul_ppmm(prod_high, prod_low, s1_ptr[j], s2_limb);
          ~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  lib/mpi/longlong.h:668:22: note: expanded from macro 'umul_ppmm'
                  : "=d" ((UDItype)(w1))
                         ~~~~~~~~~~^~~
  2 errors generated.

This special case for umul_ppmm for MIPS64r6 was added in commit bbc25bee37d2b ("lib/mpi: Fix umul_ppmm() for MIPS64r6"), due to GCC being inefficient and emitting a __multi3 intrinsic. There is no such issue with clang; with this patch applied, I can build this configuration without any problems and there are no link errors like those mentioned in the commit above (which I can still reproduce with GCC 9.3.0 when that commit is reverted). Only use this definition when GCC is being used. This really should have been caught by commit b0c091ae04f67 ("lib/mpi: Eliminate unused umul_ppmm definitions for MIPS") when I was messing around in this area, but I was not testing 64-bit MIPS at the time. Link: https://github.com/ClangBuiltLinux/linux/issues/885 Reported-by: Dmitry Golovin <dima@golovin.in> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
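The clang error is specifically about the cast applied to an inline asm *output* operand. A minimal stand-alone illustration (a generic "=r" constraint stands in for the MIPS-specific "=d"; this is not the kernel code itself):

  typedef unsigned long long UDItype;

  void demo(void)
  {
      UDItype w0;

      /* clang rejects this ("invalid use of a cast in a inline asm
       * context requiring an l-value") unless -fheinous-gnu-extensions
       * is passed; GCC tolerates the cast on an output operand. */
      /* asm("" : "=r" ((UDItype)(w0))); */

      asm("" : "=r" (w0));  /* the cast-free form both compilers accept */
      (void)w0;
  }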
2020-07-07  vsprintf: Prevent crash when dereferencing invalid pointers for %pD  (Jia He)
commit 36594b317c656bec8f968db93701d2cb9bc9155c upstream. Commit 3e5903eb9cff ("vsprintf: Prevent crash when dereferencing invalid pointers") prevents most crashes, except for %pD. There is an additional pointer dereference before dentry_name(). At least, vma->file can be NULL and be passed to printk %pD in print_bad_pte, which can cause a crash. This patch fixes it by introducing a new file_dentry_name(). Link: http://lkml.kernel.org/r/20190809012457.56685-1-justin.he@arm.com Fixes: 3e5903eb9cff ("vsprintf: Prevent crash when dereferencing invalid pointers") To: Geert Uytterhoeven <geert+renesas@glider.be> To: Thomas Gleixner <tglx@linutronix.de> To: Andy Shevchenko <andriy.shevchenko@linux.intel.com> To: linux-kernel@vger.kernel.org Cc: Kees Cook <keescook@chromium.org> Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org> Cc: Shuah Khan <shuah@kernel.org> Cc: "Tobin C. Harding" <tobin@kernel.org> Signed-off-by: Jia He <justin.he@arm.com> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Signed-off-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-07-07  lib/lzo: fix ambiguous encoding bug in lzo-rle  (Dave Rodgman)
commit b5265c813ce4efbfa2e46fd27cdf9a7f44a35d2e upstream. In some rare cases, for input data over 32 KB, lzo-rle could encode two different inputs to the same compressed representation, so that decompression is then ambiguous (i.e. data may be corrupted - although zram is not affected because it operates over 4 KB pages). This modifies the compressor without changing the decompressor or the bitstream format, such that: - there is no change to how data produced by the old compressor is decompressed - an old decompressor will correctly decode data from the updated compressor - performance and compression ratio are not affected - we avoid introducing a new bitstream format In testing over 12.8M real-world files totalling 903 GB, three files were affected by this bug. I also constructed 37M semi-random 64 KB files totalling 2.27 TB, and saw no affected files. Finally I tested over files constructed to contain each of the ~1024 possible bad input sequences; for all of these cases, updated lzo-rle worked correctly. There is no significant impact to performance or compression ratio. Signed-off-by: Dave Rodgman <dave.rodgman@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Dave Rodgman <dave.rodgman@arm.com> Cc: Willy Tarreau <w@1wt.eu> Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Nitin Gupta <ngupta@vflare.org> Cc: Chao Yu <yuchao0@huawei.com> Cc: <stable@vger.kernel.org> Link: http://lkml.kernel.org/r/20200507100203.29785-1-dave.rodgman@arm.com Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-06-24  lib/vsprintf: Reinstate printing of legacy clock IDs  (Geert Uytterhoeven)
commit 4ca96aa99f3e1e530f63559c0cc63ae186ecd677 upstream. When using the legacy clock framework, clock pointers are no longer printed as IDs, as the !CONFIG_COMMON_CLK case was accidentally considered an error case. Fix this by reverting to the old behavior, which allows clocks to be distinguished by ID, as the legacy clock framework does not store names with clocks. Fixes: 0b74d4d763fd4ee9 ("vsprintf: Consolidate handling of unknown pointer specifiers") Link: http://lkml.kernel.org/r/20190701140009.23683-1-geert+renesas@glider.be Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: linux-kernel@vger.kernel.org Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-06-08  vsprintf: don't obfuscate NULL and error pointers  (Ilya Dryomov)
commit 7bd57fbc4a4ddedc664cad0bbced1b469e24e921 upstream. I don't see what security concern is addressed by obfuscating NULL and IS_ERR() error pointers, printed with %p/%pK. Given the number of sites where %p is used (over 10000) and the fact that NULL pointers aren't uncommon, it probably wouldn't take long for an attacker to find the hash that corresponds to 0. Although harder, the same goes for most common error values, such as -1, -2, -11, -14, etc. The NULL part actually fixes a regression: NULL pointers weren't obfuscated until commit 3e5903eb9cff ("vsprintf: Prevent crash when dereferencing invalid pointers") which went into 5.2. I'm tacking the IS_ERR() part on here because error pointers won't leak kernel addresses and printing them as pointers shouldn't be any different from e.g. %d with PTR_ERR_OR_ZERO(). Obfuscating them just makes debugging based on existing pr_debug and friends excruciating. Note that the "always print 0's for %pK when kptr_restrict == 2" behaviour which goes way back is left as is. Example output with the patch applied:

                   ptr               error-ptr         NULL
  %p:              0000000001f8cc5b  fffffffffffffff2  0000000000000000
  %pK, kptr = 0:   0000000001f8cc5b  fffffffffffffff2  0000000000000000
  %px:             ffff888048c04020  fffffffffffffff2  0000000000000000
  %pK, kptr = 1:   ffff888048c04020  fffffffffffffff2  0000000000000000
  %pK, kptr = 2:   0000000000000000  0000000000000000  0000000000000000

Fixes: 3e5903eb9cff ("vsprintf: Prevent crash when dereferencing invalid pointers") Signed-off-by: Ilya Dryomov <idryomov@gmail.com> Reviewed-by: Petr Mladek <pmladek@suse.com> Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-06-04  lib: devres: add a helper function for ioremap_uc  (Tuowen Zhao)
commit e537654b7039aacfe8ae629d49655c0e5692ad44 upstream. Implement a resource managed strongly uncachable ioremap function. Cc: <stable@vger.kernel.org> # v4.19+ Tested-by: AceLan Kao <acelan.kao@canonical.com> Signed-off-by: Tuowen Zhao <ztuowen@gmail.com> Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com> Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Luis Chamberlain <mcgrof@kernel.org> Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
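The devres pattern behind such a helper is compact; a sketch of how a managed ioremap_uc() can be built from devres_alloc()/devres_add() (a simplified shape -- the upstream implementation shares a common helper with the other devm_ioremap_*() variants):

  #include <linux/device.h>
  #include <linux/io.h>

  static void devm_ioremap_uc_release(struct device *dev, void *res)
  {
      iounmap(*(void __iomem **)res);
  }

  void __iomem *devm_ioremap_uc_sketch(struct device *dev,
                                       resource_size_t offset,
                                       resource_size_t size)
  {
      void __iomem **ptr, *addr;

      /* The devres node only stores the cookie to unmap on detach. */
      ptr = devres_alloc(devm_ioremap_uc_release, sizeof(*ptr), GFP_KERNEL);
      if (!ptr)
          return NULL;

      addr = ioremap_uc(offset, size);
      if (addr) {
          *ptr = addr;
          devres_add(dev, ptr);   /* unmapped automatically on detach */
      } else {
          devres_free(ptr);
      }
      return addr;
  }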
2020-06-04  lib/mpi: Fix building for powerpc with clang  (Nathan Chancellor)
commit 5990cdee689c6885b27c6d969a3d58b09002b0bc upstream. 0day reports over and over on a powerpc randconfig with clang:

  lib/mpi/generic_mpih-mul1.c:37:13: error: invalid use of a cast in a inline asm context requiring an l-value: remove the cast or build with -fheinous-gnu-extensions

Remove the superfluous casts, which have been done previously for x86 and arm32 in commit dea632cadd12 ("lib/mpi: fix build with clang") and commit 7b7c1df2883d ("lib/mpi/longlong.h: fix building with 32-bit x86"). Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://github.com/ClangBuiltLinux/linux/issues/991 Link: https://lore.kernel.org/r/20200413195041.24064-1-natechancellor@gmail.com Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-06-04  lib/raid6/test: fix build on distros whose /bin/sh is not bash  (Masahiro Yamada)
commit 06bd48b6cd97ef3889b68c8e09014d81dbc463f1 upstream. You can build a user-space test program for the raid6 library code, like this:

  $ cd lib/raid6/test
  $ make

The command in the $(shell ...) function is evaluated by /bin/sh by default (or, you can specify the shell by passing SHELL=<shell> from the command line). Currently '>&/dev/null' is used to sink both stdout and stderr. Because this is a bash-ism, it only works when /bin/sh is a symbolic link to bash (this is the case on RHEL etc.). This does not work on Ubuntu, where /bin/sh is a symbolic link to dash. There I see lots of

  /bin/sh: 1: Syntax error: Bad fd number

and the warning

  "your version of binutils lacks ... support"

Replace it with the portable '>/dev/null 2>&1'. Fixes: 4f8c55c5ad49 ("lib/raid6: build proper files on corresponding arch") Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Acked-by: H. Peter Anvin (Intel) <hpa@zytor.com> Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com> Acked-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-06-01  kbuild, btf: Fix dependencies for DEBUG_INFO_BTF  (Slava Bacherikov)
commit 7d32e69310d67e6b04af04f26193f79dfc2f05c7 upstream. Currently, turning on DEBUG_INFO_SPLIT when DEBUG_INFO_BTF is also enabled will produce an invalid BTF file, since the gen_btf function in the link-vmlinux.sh script doesn't handle *.dwo files. Enabling DEBUG_INFO_REDUCED will also produce an invalid BTF file, and using GCC_PLUGIN_RANDSTRUCT with BTF makes no sense. Fixes: e83b9f55448a ("kbuild: add ability to generate BTF type info for vmlinux") Reported-by: Jann Horn <jannh@google.com> Reported-by: Liu Yiding <liuyd.fnst@cn.fujitsu.com> Signed-off-by: Slava Bacherikov <slava@bacher09.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: KP Singh <kpsingh@google.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200402204138.408021-1-slava@bacher09.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-06-01  crypto: arc4 - refactor arc4 core code into separate library  (Ard Biesheuvel)
commit dc51f25752bfcb5f1edbac1ca4ce16af7b3bd507 upstream. Refactor the core rc4 handling so we can move most users to a library interface, permitting us to drop the cipher interface entirely in a future patch. This is part of an effort to simplify the crypto API and improve its robustness against incorrect use. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-06-01  xarray: Fix early termination of xas_for_each_marked  (Matthew Wilcox (Oracle))
commit 7e934cf5ace1dceeb804f7493fa28bb697ed3c52 upstream. xas_for_each_marked() is using entry == NULL as a termination condition of the iteration. When xas_for_each_marked() is used protected only by RCU, this can however race with xas_store(xas, NULL) in the following way:

  TASK1                             TASK2
  page_cache_delete()               find_get_pages_range_tag()
                                      xas_for_each_marked()
                                        xas_find_marked()
                                          off = xas_find_chunk()
    xas_store(&xas, NULL)
      xas_init_marks(&xas);
      ...
      rcu_assign_pointer(*slot, NULL);
                                          entry = xa_entry(off);

And thus xas_for_each_marked() terminates prematurely, possibly leading to missed entries in the iteration (translating to missing writeback of some pages or a similar problem). If we find a NULL entry that has been marked, skip it (unless we're trying to allocate an entry). Reported-by: Jan Kara <jack@suse.cz> CC: stable@vger.kernel.org Fixes: ef8e5717db01 ("page cache: Convert delete_batch to XArray") Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-06-01  XArray: Fix xas_pause for large multi-index entries  (Matthew Wilcox (Oracle))
commit c36d451ad386b34f452fc3c8621ff14b9eaa31a6 upstream. Inspired by the recent Coverity report, I looked for other places where the offset wasn't being converted to an unsigned long before being shifted, and I found one in xas_pause() when the entry being paused is of order >32. Fixes: b803b42823d0 ("xarray: Add XArray iterators") Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: stable@vger.kernel.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-06-01  uapi: rename ext2_swab() to swab() and share globally in swab.h  (Yury Norov)
commit d5767057c9a76a29f073dad66b7fa12a90e8c748 upstream. ext2_swab() is defined locally in lib/find_bit.c However it is not specific to ext2, neither to bitmaps. There are many potential users of it, so rename it to just swab() and move to include/uapi/linux/swab.h ABI guarantees that size of unsigned long corresponds to BITS_PER_LONG, therefore drop unneeded cast. Link: http://lkml.kernel.org/r/20200103202846.21616-1-yury.norov@gmail.com Signed-off-by: Yury Norov <yury.norov@gmail.com> Cc: Allison Randal <allison@lohutok.net> Cc: Joe Perches <joe@perches.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: William Breathitt Gray <vilhelm.gray@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-05-15  lib/stackdepot.c: fix global out-of-bounds in stack_slabs  (Alexander Potapenko)
commit 305e519ce48e935702c32241f07d393c3c8fed3e upstream. Walter Wu has reported a potential case in which init_stack_slab() is called after stack_slabs[STACK_ALLOC_MAX_SLABS - 1] has already been initialized. In that case init_stack_slab() will overwrite stack_slabs[STACK_ALLOC_MAX_SLABS], which may result in a memory corruption. Link: http://lkml.kernel.org/r/20200218102950.260263-1-glider@google.com Fixes: cd11016e5f521 ("mm, kasan: stackdepot implementation. Enable stackdepot for SLAB") Signed-off-by: Alexander Potapenko <glider@google.com> Reported-by: Walter Wu <walter-zh.wu@mediatek.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Matthias Brugger <matthias.bgg@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Kate Stewart <kstewart@linuxfoundation.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-05-15  lib/scatterlist.c: adjust indentation in __sg_alloc_table  (Nathan Chancellor)
commit 4e456fee215677584cafa7f67298a76917e89c64 upstream. Clang warns: ../lib/scatterlist.c:314:5: warning: misleading indentation; statement is not part of the previous 'if' [-Wmisleading-indentation] return -ENOMEM; ^ ../lib/scatterlist.c:311:4: note: previous statement is here if (prv) ^ 1 warning generated. This warning occurs because there is a space before the tab on this line. Remove it so that the indentation is consistent with the Linux kernel coding style and clang no longer warns. Link: http://lkml.kernel.org/r/20191218033606.11942-1-natechancellor@gmail.com Link: https://github.com/ClangBuiltLinux/linux/issues/830 Fixes: edce6820a9fd ("scatterlist: prevent invalid free when alloc fails") Signed-off-by: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-05-04  lib/test_kasan.c: fix memory leak in kmalloc_oob_krealloc_more()  (Gustavo A. R. Silva)
commit 3e21d9a501bf99aee2e5835d7f34d8c823f115b5 upstream. In case memory resources for _ptr2_ were allocated, release them before return. Notice that in case _ptr1_ happens to be NULL, krealloc() behaves exactly like kmalloc(). Addresses-Coverity-ID: 1490594 ("Resource leak") Link: http://lkml.kernel.org/r/20200123160115.GA4202@embeddedor Fixes: 3f15801cdc23 ("lib: add kasan test module") Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Reviewed-by: Dmitry Vyukov <dvyukov@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
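A sketch of the fixed error path (condensed from the test; the key point is that krealloc(NULL, size, ...) behaves like kmalloc(size, ...), so ptr2 can hold a live allocation even when ptr1 is NULL):

  #include <linux/printk.h>
  #include <linux/slab.h>

  static void kmalloc_oob_krealloc_more_sketch(void)
  {
      char *ptr1, *ptr2;
      size_t size1 = 17, size2 = 19;

      ptr1 = kmalloc(size1, GFP_KERNEL);
      ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
      if (!ptr1 || !ptr2) {
          pr_err("Allocation failed\n");
          kfree(ptr1);   /* non-NULL only if krealloc() itself failed   */
          kfree(ptr2);   /* the fix: don't leak ptr2 when ptr1 was NULL */
          return;
      }

      ptr2[size2] = 'x'; /* the intentional out-of-bounds KASAN probe */
      kfree(ptr2);
  }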
2020-04-16  XArray: Fix xa_find_next for large multi-index entries  (Matthew Wilcox (Oracle))
commit bd40b17ca49d7d110adf456e647701ce74de2241 upstream. Coverity pointed out that xas_sibling() was shifting xa_offset without promoting it to an unsigned long first, so the shift could cause an overflow and we'd get the wrong answer. The fix is obvious, and the new test-case provokes UBSAN to report an error:

  runtime error: shift exponent 60 is too large for 32-bit type 'int'

Fixes: 19c30f4dd092 ("XArray: Fix xa_find_after with multi-index entries") Reported-by: Bjorn Helgaas <bhelgaas@google.com> Reported-by: Kees Cook <keescook@chromium.org> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: stable@vger.kernel.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
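The bug class is easy to reproduce in isolation; a minimal sketch of the promotion fix (assumes an LP64 target where unsigned long is 64 bits wide):

  #include <stdio.h>

  int main(void)
  {
      unsigned int offset = 1;    /* e.g. xas->xa_offset          */
      unsigned int shift  = 60;   /* shift for a high-order entry */

      /* BAD: evaluated in 32-bit arithmetic; UBSAN reports "shift
       * exponent 60 is too large for 32-bit type 'int'":
       *     unsigned long index = offset << shift;               */

      /* GOOD: promote to the full index width before shifting.   */
      unsigned long index = (unsigned long)offset << shift;

      printf("index = %#lx\n", index);
      return 0;
  }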
2020-04-16  XArray: Fix xas_pause at ULONG_MAX  (Matthew Wilcox (Oracle))
commit 82a22311b7a68a78709699dc8c098953b70e4fd2 upstream. If we were unlucky enough to call xas_pause() when the index was at ULONG_MAX (or a multi-slot entry which ends at ULONG_MAX), we would wrap the index back around to 0 and restart the iteration from the beginning. Use the XAS_BOUNDS state to indicate that we should just stop the iteration. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-04-16  lib: Reduce user_access_begin() boundaries in strncpy_from_user() and strnlen_user()  (Christophe Leroy)
commit ab10ae1c3bef56c29bac61e1201c752221b87b41 upstream. The range passed to user_access_begin() by strncpy_from_user() and strnlen_user() starts at 'src' and goes up to the limit of userspace although reads will be limited by the 'count' param. On 32 bits powerpc (book3s/32) access has to be granted for each 256Mbytes segment and the cost increases with the number of segments to unlock. Limit the range with 'count' param. Fixes: 594cc251fdd0 ("make 'user_access_begin()' do 'access_ok()'") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-04-16  XArray: Fix xas_find returning too many entries  (Matthew Wilcox (Oracle))
commit c44aa5e8ab58b5f4cf473970ec784c3333496a2e upstream. If you call xas_find() with the initial index > max, it should have returned NULL but was returning the entry at index. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: stable@vger.kernel.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-04-16  XArray: Fix xa_find_after with multi-index entries  (Matthew Wilcox (Oracle))
commit 19c30f4dd0923ef191f35c652ee4058e91e89056 upstream. If the entry is of an order which is a multiple of XA_CHUNK_SIZE, the current detection of sibling entries does not work. Factor out an xas_sibling() function to make xa_find_after() a little more understandable, and write a new implementation that doesn't suffer from the same bug. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: stable@vger.kernel.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-04-16  XArray: Fix infinite loop with entry at ULONG_MAX  (Matthew Wilcox (Oracle))
commit 430f24f94c8a174d411a550d7b5529301922e67a upstream. If there is an entry at ULONG_MAX, xa_for_each() will overflow the 'index + 1' in xa_find_after() and wrap around to 0. Catch this case and terminate the loop by returning NULL. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: stable@vger.kernel.org Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
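The wrap-around is easy to see in isolation (variable names are illustrative):

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
      unsigned long index = ULONG_MAX;  /* an entry at the last index */

      /* xa_for_each()/xa_find_after() advance with 'index + 1'; at
       * ULONG_MAX the increment wraps to 0 and iteration would restart
       * forever, so the fix detects the wrap and returns NULL. */
      if (index + 1 == 0)
          printf("wrapped: terminate the iteration (return NULL)\n");
      return 0;
  }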
2020-04-12  bug: move WARN_ON() "cut here" into exception handler  (Kees Cook)
commit a44f71a9ab99b509fec9d5a9f5c222debd89934f upstream. The original clean up of "cut here" missed the WARN_ON() case (that does not have a printk message), which was fixed recently by adding an explicit printk of "cut here". This had the downside of adding a printk() to every WARN_ON() caller, which reduces the utility of using an instruction exception to streamline the resulting code. By making this a new BUGFLAG, all of these can be removed and "cut here" can be handled by the exception handler. This was very pronounced on PowerPC, but the effect can be seen on x86 as well. The resulting text size of a defconfig build shows some small savings from this patch:

      text     data     bss      dec      hex  filename
  19691167  5134320 1646664 26472151  193eed7  vmlinux.before
  19676362  5134260 1663048 26473670  193f4c6  vmlinux.after

This change also opens the door for creating something like BUG_MSG(), where a custom printk() can be issued before BUG(), without confusing the "cut here" line. Link: http://lkml.kernel.org/r/201908200943.601DD59DCE@keescook Fixes: 6b15f678fb7d ("include/asm-generic/bug.h: fix "cut here" for WARN_ON for __WARN_TAINT architectures") Signed-off-by: Kees Cook <keescook@chromium.org> Reported-by: Christophe Leroy <christophe.leroy@c-s.fr> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Christophe Leroy <christophe.leroy@c-s.fr> Cc: Drew Davenport <ddavenport@chromium.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: "Steven Rostedt (VMware)" <rostedt@goodmis.org> Cc: Feng Tang <feng.tang@intel.com> Cc: Petr Mladek <pmladek@suse.com> Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Cc: Borislav Petkov <bp@suse.de> Cc: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-03-20  sbitmap: only queue kyber's wait callback if not already active  (David Jeffery)
commit df034c93f15ee71df231ff9fe311d27ff08a2a52 upstream. Under heavy loads where the kyber I/O scheduler hits the token limits for its scheduling domains, kyber can become stuck. When active requests complete, kyber may not be woken up leaving the I/O requests in kyber stuck. This stuck state is due to a race condition with kyber and the sbitmap functions it uses to run a callback when enough requests have completed. The running of a sbt_wait callback can race with the attempt to insert the sbt_wait. Since sbitmap_del_wait_queue removes the sbt_wait from the list first then sets the sbq field to NULL, kyber can see the item as not on a list but the call to sbitmap_add_wait_queue will see sbq as non-NULL. This results in the sbt_wait being inserted onto the wait list but ws_active doesn't get incremented. So the sbitmap queue does not know there is a waiter on a wait list. Since sbitmap doesn't think there is a waiter, kyber may never be informed that there are domain tokens available and the I/O never advances. With the sbt_wait on a wait list, kyber believes it has an active waiter so cannot insert a new waiter when reaching the domain's full state. This race can be fixed by only adding the sbt_wait to the queue if the sbq field is NULL. If sbq is not NULL, there is already an action active which will trigger the re-running of kyber. Let it run and add the sbt_wait to the wait list if still needing to wait. Reviewed-by: Omar Sandoval <osandov@fb.com> Signed-off-by: David Jeffery <djeffery@redhat.com> Reported-by: John Pittman <jpittman@redhat.com> Tested-by: John Pittman <jpittman@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
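A sketch of the fixed insertion rule stated above (the helper name and shape here are hypothetical; the real change lives in kyber's domain-token wait path):

  #include <linux/sbitmap.h>

  static void kyber_add_domain_waiter(struct sbitmap_queue *sbq,
                                      struct sbq_wait_state *ws,
                                      struct sbq_wait *wait)
  {
      /* A non-NULL wait->sbq means a previous add/delete is still in
       * flight and its wakeup will re-run kyber anyway; re-adding now
       * would put the waiter on the list without the ws_active
       * accounting, so the waiter would never be woken. */
      if (!wait->sbq)
          sbitmap_add_wait_queue(sbq, ws, wait);
  }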
2020-03-20  lib/ubsan: don't serialize UBSAN report  (Julien Grall)
commit ce5c31db3645b649a31044a4d8b6057f6c723702 upstream. At the moment, a UBSAN report will be serialized using a spin_lock(). On RT systems, spinlocks are turned into rt_spin_lock and may sleep. This will result in the following splat if the undefined behavior is in a context that can sleep:

  BUG: sleeping function called from invalid context at /src/linux/kernel/locking/rtmutex.c:968
  in_atomic(): 1, irqs_disabled(): 128, pid: 3447, name: make
  1 lock held by make/3447:
   #0: 000000009a966332 (&mm->mmap_sem){++++}, at: do_page_fault+0x140/0x4f8
  irq event stamp: 6284
  hardirqs last enabled at (6283): [<ffff000011326520>] _raw_spin_unlock_irqrestore+0x90/0xa0
  hardirqs last disabled at (6284): [<ffff0000113262b0>] _raw_spin_lock_irqsave+0x30/0x78
  softirqs last enabled at (2430): [<ffff000010088ef8>] fpsimd_restore_current_state+0x60/0xe8
  softirqs last disabled at (2427): [<ffff000010088ec0>] fpsimd_restore_current_state+0x28/0xe8
  Preemption disabled at:
  [<ffff000011324a4c>] rt_mutex_futex_unlock+0x4c/0xb0
  CPU: 3 PID: 3447 Comm: make Tainted: G W 5.2.14-rt7-01890-ge6e057589653 #911
  Call trace:
   dump_backtrace+0x0/0x148
   show_stack+0x14/0x20
   dump_stack+0xbc/0x104
   ___might_sleep+0x154/0x210
   rt_spin_lock+0x68/0xa0
   ubsan_prologue+0x30/0x68
   handle_overflow+0x64/0xe0
   __ubsan_handle_add_overflow+0x10/0x18
   __lock_acquire+0x1c28/0x2a28
   lock_acquire+0xf0/0x370
   _raw_spin_lock_irqsave+0x58/0x78
   rt_mutex_futex_unlock+0x4c/0xb0
   rt_spin_unlock+0x28/0x70
   get_page_from_freelist+0x428/0x2b60
   __alloc_pages_nodemask+0x174/0x1708
   alloc_pages_vma+0x1ac/0x238
   __handle_mm_fault+0x4ac/0x10b0
   handle_mm_fault+0x1d8/0x3b0
   do_page_fault+0x1c8/0x4f8
   do_translation_fault+0xb8/0xe0
   do_mem_abort+0x3c/0x98
   el0_da+0x20/0x24

The spin_lock() protects against multiple CPUs outputting a report together, I guess to prevent them from being interleaved. However, they can still interleave with other messages (and even the splat from __might_sleep). So the lock's usefulness seems pretty limited. Rather than trying to accommodate RT systems by switching to a raw_spin_lock(), the lock is now completely dropped. Link: http://lkml.kernel.org/r/20190920100835.14999-1-julien.grall@arm.com Signed-off-by: Julien Grall <julien.grall@arm.com> Reported-by: Andre Przywara <andre.przywara@arm.com> Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-02-24  ubsan, x86: Annotate and allow __ubsan_handle_shift_out_of_bounds() in uaccess regions  (Peter Zijlstra)
commit 9a50dcaf0416a43e1fe411dc61a99c8333c90119 upstream. The new check_zeroed_user() function uses variable shifts inside of a user_access_begin()/user_access_end() section and that results in GCC emitting __ubsan_handle_shift_out_of_bounds() calls, even though through value range analysis it would be able to see that the UB in question is impossible. Annotate and whitelist this UBSAN function; continued use of user_access_begin()/user_access_end() will undoubtedly result in further uses of this function. Reported-by: Randy Dunlap <rdunlap@infradead.org> Tested-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Randy Dunlap <rdunlap@infradead.org> Acked-by: Christian Brauner <christian.brauner@ubuntu.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: cyphar@cyphar.com Cc: keescook@chromium.org Cc: linux@rasmusvillemoes.dk Fixes: f5a1a536fa14 ("lib: introduce copy_struct_from_user() helper") Link: https://lkml.kernel.org/r/20191021131149.GA19358@hirez.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-01-31  lib: raid6: fix awk build warnings  (Greg Kroah-Hartman)
commit 702600eef73033ddd4eafcefcbb6560f3e3a90f7 upstream. Newer versions of awk spit out these fun warnings: awk: ../lib/raid6/unroll.awk:16: warning: regexp escape sequence `\#' is not a known regexp operator As commit 700c1018b86d ("x86/insn: Fix awk regexp warnings") showed, it turns out that there are a number of awk strings that do not need to be escaped and newer versions of awk now warn about this. Fix the string up so that no warning is produced. The exact same kernel module gets created before and after this patch, showing that it wasn't needed. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Link: https://lore.kernel.org/r/20191206152600.GA75093@kroah.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-01-19  idr: Fix idr_alloc_u32 on 32-bit systems  (Matthew Wilcox (Oracle))
commit b7e9728f3d7fc5c5c8508d99f1675212af5cfd49 upstream. Attempting to allocate an entry at 0xffffffff when one is already present would succeed in allocating one at 2^32, which would confuse everything. Return -ENOSPC in this case, as expected. Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-01-19  idr: Fix idr_get_next_ul race with idr_remove  (Matthew Wilcox (Oracle))
commit 5a74ac4c4a97bd8b7dba054304d598e2a882fea6 upstream. Commit 5c089fd0c734 ("idr: Fix idr_get_next race with idr_remove") neglected to fix idr_get_next_ul(). As far as I can tell, nobody's actually using this interface under the RCU read lock, but fix it now before anybody decides to use it. Fixes: 5c089fd0c734 ("idr: Fix idr_get_next race with idr_remove") Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2020-01-19  XArray: Fix xas_next() with a single entry at 0  (Matthew Wilcox (Oracle))
commit 91abab83839aa2eba073e4a63c729832fdb27ea1 upstream. If there is only a single entry at 0, the first time we call xas_next(), we return the entry. Unfortunately, all subsequent times we call xas_next(), we also return the entry at 0 instead of noticing that the xa_index is now greater than zero. This broke find_get_pages_contig(). Fixes: 64d3e9a9e0cc ("xarray: Step through an XArray") Reported-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-12-29  dump_stack: avoid the livelock of the dump_lock  (Kevin Hao)
commit 5cbf2fff3bba8d3c6a4d47c1754de1cf57e2b01f upstream. In the current code, we use atomic_cmpxchg() to serialize the output of dump_stack(), but this implementation suffers from the thundering herd problem. We have observed this kind of livelock on a Marvell cn96xx board (24 CPUs) when heavily using dump_stack() in a kprobe handler. Actually we can let the competitors wait for the release of the lock before jumping to atomic_cmpxchg(). This will definitely mitigate the thundering herd problem. Thanks to Linus for the suggestion. [akpm@linux-foundation.org: fix comment] Link: http://lkml.kernel.org/r/20191030031637.6025-1-haokexin@gmail.com Fixes: b58d977432c8 ("dump_stack: serialize the output from dump_stack()") Signed-off-by: Kevin Hao <haokexin@gmail.com> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
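A condensed sketch of the resulting locking in lib/dump_stack.c (abridged; __dump_stack() is the existing internal helper that prints the trace, and the error handling may differ from the actual patch):

  static atomic_t dump_lock = ATOMIC_INIT(-1);

  void dump_stack_sketch(void)
  {
      unsigned long flags;
      int was_locked, old, cpu;

  retry:
      local_irq_save(flags);
      cpu = smp_processor_id();
      old = atomic_cmpxchg(&dump_lock, -1, cpu);
      if (old == -1) {
          was_locked = 0;          /* we took the lock            */
      } else if (old == cpu) {
          was_locked = 1;          /* recursive dump on this CPU  */
      } else {
          local_irq_restore(flags);
          /* The fix: wait for the owner to release the lock before
           * retrying the cmpxchg, instead of every waiter hammering
           * the cache line (the thundering herd that livelocks). */
          while (atomic_read(&dump_lock) != -1)
              cpu_relax();
          goto retry;
      }

      __dump_stack();

      if (!was_locked)
          atomic_set(&dump_lock, -1);
      local_irq_restore(flags);
  }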
2019-11-25  lib: textsearch: fix escapes in example code  (Randy Dunlap)
commit 2105b52e30debe7f19f3218598d8ae777dcc6776 upstream. This textsearch code example does not need the '\' escapes and they can be misleading to someone reading the example. Also, gcc and sparse warn that the "\%d" is an unknown escape sequence. Fixes: 5968a70d7af5 ("textsearch: fix kernel-doc warnings and add kernel-api section") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: netdev@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2019-10-07  kmemleak: increase DEBUG_KMEMLEAK_EARLY_LOG_SIZE default to 16K  (Nicolas Boichat)
[ Upstream commit b751c52bb587ae66f773b15204ef7a147467f4c7 ] The current default value (400) is too low on many systems (e.g. some ARM64 platform takes up 1000+ entries). syzbot uses 16000 as default value, and has proved to be enough on beefy configurations, so let's pick that value. This consumes more RAM on boot (each entry is 160 bytes, so in total ~2.5MB of RAM), but the memory would later be freed (early_log is __initdata). Link: http://lkml.kernel.org/r/20190730154027.101525-1-drinkcat@chromium.org Signed-off-by: Nicolas Boichat <drinkcat@chromium.org> Suggested-by: Dmitry Vyukov <dvyukov@google.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Dmitry Vyukov <dvyukov@google.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Kees Cook <keescook@chromium.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Cc: Joe Lawrence <joe.lawrence@redhat.com> Cc: Uladzislau Rezki <urezki@gmail.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-10-05  lib/lzo/lzo1x_compress.c: fix alignment bug in lzo-rle  (Dave Rodgman)
commit 09b35b4192f6682dff96a093ab1930998cdb73b4 upstream. Fix an unaligned access which breaks on platforms where this is not permitted (e.g., Sparc). Link: http://lkml.kernel.org/r/20190912145502.35229-1-dave.rodgman@arm.com Signed-off-by: Dave Rodgman <dave.rodgman@arm.com> Cc: Dave Rodgman <dave.rodgman@arm.com> Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com> Cc: Minchan Kim <minchan@kernel.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-09-06  lib: logic_pio: Add logic_pio_unregister_range()  (John Garry)
commit b884e2de2afc68ce30f7093747378ef972dde253 upstream. Add a function to unregister a logical PIO range. Logical PIO space can still be leaked when unregistering certain LOGIC_PIO_CPU_MMIO regions, but this is acceptable for now, since there are no callers to unregister LOGIC_PIO_CPU_MMIO regions, and the logical PIO region allocation scheme would need significant work to improve this. Cc: stable@vger.kernel.org Signed-off-by: John Garry <john.garry@huawei.com> Signed-off-by: Wei Xu <xuwei5@hisilicon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-09-06  lib: logic_pio: Avoid possible overlap for unregistering regions  (John Garry)
commit 0a27142bd1ee259e24a0be2b0133e5ca5df8da91 upstream. The code was originally written to not support unregistering logical PIO regions. To accommodate supporting unregistering logical PIO regions, subtly modify LOGIC_PIO_CPU_MMIO region registration code, such that the "end" of the registered regions is the "end" of the last region, and not the sum of the sizes of all the registered regions. Cc: stable@vger.kernel.org Signed-off-by: John Garry <john.garry@huawei.com> Signed-off-by: Wei Xu <xuwei5@hisilicon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-09-06  lib: logic_pio: Fix RCU usage  (John Garry)
commit 06709e81c668f5f56c65b806895b278517bd44e0 upstream. The traversing of io_range_list with list_for_each_entry_rcu() is not properly protected by rcu_read_lock() and rcu_read_unlock(), so add them. These functions mark the critical-section scope within which the list is protected for the reader: inside it, entries cannot be "reclaimed". Any updater - in this case, the logical PIO registration functions - cannot update the list until the reader exits this critical section. In addition, the list traversal used in logic_pio_register_range() does not need to use the rcu variant. This is because we are already using io_range_mutex to guarantee mutual exclusion from mutating the list. Cc: stable@vger.kernel.org Fixes: 031e3601869c ("lib: Add generic PIO mapping method") Signed-off-by: John Garry <john.garry@huawei.com> Signed-off-by: Wei Xu <xuwei5@hisilicon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
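The reader-side pattern the fix enforces, as a sketch (the struct and list names are simplified stand-ins for the logic_pio ones):

  #include <linux/list.h>
  #include <linux/rcupdate.h>
  #include <linux/types.h>

  struct pio_range {
      struct list_head list;
      resource_size_t  hw_start;
      resource_size_t  size;
      unsigned long    io_start;
  };

  static LIST_HEAD(io_range_list);

  static unsigned long find_io_start(resource_size_t addr)
  {
      struct pio_range *range;
      unsigned long io_start = ~0UL;

      /* The _rcu list walk must sit inside an RCU read-side critical
       * section; within it, no entry the reader can observe will be
       * reclaimed by a concurrent updater. */
      rcu_read_lock();
      list_for_each_entry_rcu(range, &io_range_list, list) {
          if (addr >= range->hw_start &&
              addr < range->hw_start + range->size) {
              io_start = range->io_start;
              break;
          }
      }
      rcu_read_unlock();

      return io_start;
  }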
2019-08-16  test_firmware: fix a memory leak bug  (Wenwen Wang)
[ Upstream commit d4fddac5a51c378c5d3e68658816c37132611e1f ] In test_firmware_init(), the buffer pointed to by the global pointer 'test_fw_config' is allocated through kzalloc(). Then, the buffer is initialized in __test_firmware_config_init(). In the case that the initialization fails, the following execution in test_firmware_init() needs to be terminated with an error code returned to indicate this failure. However, the allocated buffer is not freed on this execution path, leading to a memory leak bug. To fix the above issue, free the allocated buffer before returning from test_firmware_init(). Signed-off-by: Wenwen Wang <wenwen@cs.uga.edu> Link: https://lore.kernel.org/r/1563084696-6865-1-git-send-email-wang6495@umn.edu Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-08-06  ubsan: build ubsan.c more conservatively  (Arnd Bergmann)
commit af700eaed0564d5d3963a7a51cb0843629d7fe3d upstream. objtool points out several conditions that it does not like, depending on the combination with other configuration options and compiler variants:

stack protector:
  lib/ubsan.o: warning: objtool: __ubsan_handle_type_mismatch()+0xbf: call to __stack_chk_fail() with UACCESS enabled
  lib/ubsan.o: warning: objtool: __ubsan_handle_type_mismatch_v1()+0xbe: call to __stack_chk_fail() with UACCESS enabled

stackleak plugin:
  lib/ubsan.o: warning: objtool: __ubsan_handle_type_mismatch()+0x4a: call to stackleak_track_stack() with UACCESS enabled
  lib/ubsan.o: warning: objtool: __ubsan_handle_type_mismatch_v1()+0x4a: call to stackleak_track_stack() with UACCESS enabled

kasan:
  lib/ubsan.o: warning: objtool: __ubsan_handle_type_mismatch()+0x25: call to memcpy() with UACCESS enabled
  lib/ubsan.o: warning: objtool: __ubsan_handle_type_mismatch_v1()+0x25: call to memcpy() with UACCESS enabled

The stackleak and kasan options just need to be disabled for this file, as we do for other files already. For the stack protector, we already attempt to disable it, but this fails on clang because the check is mixed with the gcc-specific -fno-conserve-stack option. According to Andrey Ryabinin, that option is not even needed; dropping it here fixes the stackprotector issue. Link: http://lkml.kernel.org/r/20190722125139.1335385-1-arnd@arndb.de Link: https://lore.kernel.org/lkml/20190617123109.667090-1-arnd@arndb.de/t/ Link: https://lore.kernel.org/lkml/20190722091050.2188664-1-arnd@arndb.de/t/ Fixes: d08965a27e84 ("x86/uaccess, ubsan: Fix UBSAN vs. SMAP") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Kees Cook <keescook@chromium.org> Cc: Matthew Wilcox <willy@infradead.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-08-06  mm/ioremap: check virtual address alignment while creating huge mappings  (Anshuman Khandual)
[ Upstream commit 6b95ab4218bfa59bc315105127ffe03aef3b5742 ] Virtual address alignment is essential in ensuring correct clearing for all intermediate level pgtable entries and freeing associated pgtable pages. An unaligned address can end up randomly freeing a pgtable page that potentially still contains valid mappings. Hence also check its alignment along with the existing phys_addr check. Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Toshi Kani <toshi.kani@hpe.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Chintan Pandya <cpandya@codeaurora.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
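A sketch of the resulting eligibility test at the PMD level (condensed; the real code applies the same pattern at the PUD level too):

  #include <linux/kernel.h>   /* IS_ALIGNED() */
  #include <asm/pgtable.h>    /* PMD_SIZE     */

  static bool ioremap_pmd_range_ok(unsigned long addr, unsigned long end,
                                   phys_addr_t phys_addr)
  {
      if ((end - addr) != PMD_SIZE)
          return false;
      if (!IS_ALIGNED(addr, PMD_SIZE))      /* the added VA check */
          return false;
      if (!IS_ALIGNED(phys_addr, PMD_SIZE)) /* existing PA check  */
          return false;
      return true;
  }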
2019-08-06  lib/test_string.c: avoid masking memset16/32/64 failures  (Peter Rosin)
[ Upstream commit 33d6e0ff68af74be0c846c8e042e84a9a1a0561e ] If a memsetXX implementation is completely broken and fails in the first iteration, when i, j, and k are all zero, the failure is masked as zero is returned. Failing in the first iteration is perhaps the most likely failure, so this makes the tests pretty much useless. Avoid the situation by always setting a random unused bit in the result on failure. Link: http://lkml.kernel.org/r/20190506124634.6807-3-peda@axentia.se Fixes: 03270c13c5ff ("lib/string.c: add testcases for memset16/32/64") Signed-off-by: Peter Rosin <peda@axentia.se> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
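A sketch of the failure encoding (the exact bit layout is illustrative: with i, j, and k each below 256, bit 15 is otherwise unused, so a failure at i == j == k == 0 can no longer read back as 0, i.e. "success"):

  static unsigned int encode_memset_failure(unsigned int i, unsigned int j,
                                            unsigned int k)
  {
      /* i -> bits 31..24, j -> bits 23..16, k -> bits 7..0,
       * plus the always-set "this is a failure" marker bit 15 */
      return 0x8000 | (i << 24) | (j << 16) | k;
  }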
2019-08-06  lib/test_overflow.c: avoid tainting the kernel and fix wrap size  (Kees Cook)
[ Upstream commit 8e060c21ae2c265a2b596e9e7f9f97ec274151a4 ] This adds __GFP_NOWARN to the kmalloc()-portions of the overflow test to avoid tainting the kernel. Additionally fixes up the math on wrap size to be architecture and page size agnostic. Link: http://lkml.kernel.org/r/201905282012.0A8767E24@keescook Fixes: ca90800a91ba ("test_overflow: Add memory allocation overflow tests") Signed-off-by: Kees Cook <keescook@chromium.org> Reported-by: Randy Dunlap <rdunlap@infradead.org> Suggested-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-07-26  lib/scatterlist: Fix mapping iterator when sg->offset is greater than PAGE_SIZE  (Christophe Leroy)
commit aeb87246537a83c2aff482f3f34a2e0991e02cbc upstream. All mapping iterator logic is based on the assumption that sg->offset is always lower than PAGE_SIZE. But there are situations where sg->offset is such that the SG item is on the second page. In that case sg_copy_to_buffer() fails to properly copy the data into the buffer. One of the reasons is that the data will be outside the kmapped area used to access it. This patch fixes the issue by adjusting the mapping iterator offset and pgoffset fields such that offset is always lower than PAGE_SIZE. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Fixes: 4225fc8555a9 ("lib/scatterlist: use page iterator in the mapping iterator") Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
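The fix normalizes the offset when the iterator latches onto a new SG item; a sketch condensed from the patch (field names follow struct sg_mapping_iter and struct sg_page_iter):

  #include <linux/scatterlist.h>

  static void miter_normalize_offset(struct sg_mapping_iter *miter,
                                     struct scatterlist *sg)
  {
      miter->__offset = miter->piter.sg_pgoffset ? 0 : sg->offset;
      /* Fold any whole pages hiding in sg->offset into the page index,
       * so the in-page offset stays below PAGE_SIZE and kmap() maps
       * the page that actually holds the data. */
      if (miter->__offset >= PAGE_SIZE) {
          miter->piter.sg_pgoffset += miter->__offset >> PAGE_SHIFT;
          miter->__offset &= PAGE_SIZE - 1;
      }
  }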
2019-07-26  rslib: Fix handling of caller provided syndrome  (Ferdinand Blomqvist)
[ Upstream commit ef4d6a8556b637ad27c8c2a2cff1dda3da38e9a9 ] Check if the syndrome provided by the caller is zero, and act accordingly. Signed-off-by: Ferdinand Blomqvist <ferdinand.blomqvist@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20190620141039.9874-6-ferdinand.blomqvist@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-07-26  rslib: Fix decoding of shortened codes  (Ferdinand Blomqvist)
[ Upstream commit 2034a42d1747fc1e1eeef2c6f1789c4d0762cb9c ] The decoding of shortened codes is broken. It only works as expected if there are no erasures. When decoding with erasures, Lambda (the error and erasure locator polynomial) is initialized from the given erasure positions. The pad parameter is not accounted for by the initialisation code, and hence Lambda is initialized from incorrect erasure positions. The fix is to adjust the erasure positions by the supplied pad. Signed-off-by: Ferdinand Blomqvist <ferdinand.blomqvist@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20190620141039.9874-3-ferdinand.blomqvist@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>