path: root/crypto
Age  Commit message  Author
2022-06-06  crypto: ecrdsa - Fix incorrect use of vli_cmp  (Vitaly Chikunov)
commit 7cc7ab73f83ee6d50dc9536bc3355495d8600fad upstream. Correctly compare values that shall be greater-or-equal and not just greater. Fixes: 0d7a78643f69 ("crypto: ecrdsa - add EC-RDSA (GOST 34.10) algorithm") Cc: <stable@vger.kernel.org> Signed-off-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
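(A minimal user-space sketch of the comparison pitfall this commit describes, not the actual patch; the helper name is hypothetical. A tri-valued comparator like vli_cmp() returns -1/0/1, so a "must reject greater-or-equal" bound has to be tested with >= 0, not == 1.)

    #include <stdio.h>

    /* Hypothetical stand-in for vli_cmp(): returns -1, 0 or 1 like memcmp(). */
    static int cmp_u64(unsigned long long a, unsigned long long b)
    {
        return (a > b) - (a < b);
    }

    int main(void)
    {
        unsigned long long r = 100, order = 100;

        /* Buggy pattern: only rejects r strictly greater than the order. */
        if (cmp_u64(r, order) == 1)
            printf("buggy check: rejected\n");
        else
            printf("buggy check: accepted (r == order slips through)\n");

        /* Fixed pattern: rejects r >= order, as the signature rules require. */
        if (cmp_u64(r, order) >= 0)
            printf("fixed check: rejected\n");
        return 0;
    }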
2022-04-15  crypto: authenc - Fix sleep in atomic context in decrypt_tail  (Herbert Xu)
[ Upstream commit 66eae850333d639fc278d6f915c6fc01499ea893 ] The function crypto_authenc_decrypt_tail discards its flags argument and always relies on the flags from the original request when starting its sub-request. This is clearly wrong as it may cause the SLEEPABLE flag to be set when it shouldn't. Fixes: 92d95ba91772 ("crypto: authenc - Convert to new AEAD interface") Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Corentin Labbe <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2022-04-15  crypto: rsa-pkcs1pad - fix buffer overread in pkcs1pad_verify_complete()  (Eric Biggers)
commit a24611ea356c7f3f0ec926da11b9482ac1f414fd upstream. Before checking whether the expected digest_info is present, we need to check that there are enough bytes remaining. Fixes: a49de377e051 ("crypto: Add hash param to pkcs1pad") Cc: <stable@vger.kernel.org> # v4.6+ Cc: Tadeusz Struk <tadeusz.struk@linaro.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
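(A sketch of the bounds check described above, with hypothetical names rather than the pkcs1pad internals: before comparing the expected digest_info prefix, verify that enough bytes remain in the decoded buffer, otherwise the memcmp() would read past its end.)

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical helper: 0 if 'buf' (len bytes) starts with digest_info. */
    static int check_digest_info(const unsigned char *buf, size_t len,
                                 const unsigned char *digest_info,
                                 size_t digest_info_len)
    {
        /* The fix: length check first, so the memcmp() cannot overread. */
        if (len < digest_info_len)
            return -1;
        if (memcmp(buf, digest_info, digest_info_len) != 0)
            return -1;
        return 0;
    }

    int main(void)
    {
        const unsigned char der[] = { 0x30, 0x31, 0x30, 0x0d };

        /* Short buffer: rejected by the length check instead of overread. */
        return check_digest_info(der, 2, der, sizeof(der)) ? 0 : 1;
    }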
2022-04-15  crypto: rsa-pkcs1pad - restore signature length check  (Eric Biggers)
commit d3481accd974541e6a5d6a1fb588924a3519c36e upstream. RSA PKCS#1 v1.5 signatures are required to be the same length as the RSA key size. RFC8017 specifically requires the verifier to check this (https://datatracker.ietf.org/doc/html/rfc8017#section-8.2.2). Commit a49de377e051 ("crypto: Add hash param to pkcs1pad") changed the kernel to allow longer signatures, but didn't explain this part of the change; it seems to be unrelated to the rest of the commit. Revert this change, since it doesn't appear to be correct. We can be pretty sure that no one is relying on overly-long signatures (which would have to be front-padded with zeroes) being supported, given that they would have been broken since commit c7381b012872 ("crypto: akcipher - new verify API for public key algorithms"). Fixes: a49de377e051 ("crypto: Add hash param to pkcs1pad") Cc: <stable@vger.kernel.org> # v4.6+ Cc: Tadeusz Struk <tadeusz.struk@linaro.org> Suggested-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
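(Hedged sketch of the restored check with made-up names: RFC 8017 requires an RSASSA-PKCS1-v1_5 signature to be exactly as long as the RSA modulus, so any other length can be rejected up front.)

    #include <stdio.h>
    #include <stddef.h>

    /* Illustrative only; not the pkcs1pad implementation. */
    static int check_sig_len(size_t sig_len, size_t rsa_key_size)
    {
        return sig_len == rsa_key_size ? 0 : -1;   /* reject any other length */
    }

    int main(void)
    {
        printf("256-byte sig, 2048-bit key: %d\n", check_sig_len(256, 256));
        printf("257-byte sig, 2048-bit key: %d\n", check_sig_len(257, 256));
        return 0;
    }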
2022-04-15  crypto: rsa-pkcs1pad - correctly get hash from source scatterlist  (Eric Biggers)
commit e316f7179be22912281ce6331d96d7c121fb2b17 upstream. Commit c7381b012872 ("crypto: akcipher - new verify API for public key algorithms") changed akcipher_alg::verify to take in both the signature and the actual hash and do the signature verification, rather than just return the hash expected by the signature as was the case before. To do this, it implemented a hack where the signature and hash are concatenated with each other in one scatterlist. Obviously, for this to work correctly, akcipher_alg::verify needs to correctly extract the two items from the scatterlist it is given. Unfortunately, it doesn't correctly extract the hash in the case where the signature is longer than the RSA key size, as it assumes that the signature's length is equal to the RSA key size. This causes a prefix of the hash, or even the entire hash, to be taken from the *signature*. (Note, the case of a signature longer than the RSA key size should not be allowed in the first place; a separate patch will fix that.) It is unclear whether the resulting scheme has any useful security properties. Fix this by correctly extracting the hash from the scatterlist. Fixes: c7381b012872 ("crypto: akcipher - new verify API for public key algorithms") Cc: <stable@vger.kernel.org> # v5.2+ Reviewed-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
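(A user-space illustration of the layout issue, assuming hypothetical buffer names: when the signature and digest are concatenated, the digest starts at the signature's actual length, not at the RSA key size, so indexing by key size grabs signature bytes whenever the two differ.)

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Concatenated input as the verify API hack provides it:
         * [ signature (sig_len bytes) | digest (dig_len bytes) ] */
        unsigned char buf[16];
        const size_t sig_len = 10, dig_len = 6, rsa_key_size = 8;

        memset(buf, 'S', sig_len);            /* signature bytes */
        memset(buf + sig_len, 'D', dig_len);  /* digest bytes    */

        /* Buggy: assumes the signature is exactly rsa_key_size long. */
        printf("buggy digest start:   '%c'\n", buf[rsa_key_size]); /* 'S' */

        /* Fixed: use the real signature length as the offset. */
        printf("correct digest start: '%c'\n", buf[sig_len]);      /* 'D' */
        return 0;
    }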
2021-11-17  crypto: pcrypt - Delay write to padata->info  (Daniel Jordan)
[ Upstream commit 68b6dea802cea0dbdd8bd7ccc60716b5a32a5d8a ] These three events can race when pcrypt is used multiple times in a template ("pcrypt(pcrypt(...))"): 1. [taskA] The caller makes the crypto request via crypto_aead_encrypt() 2. [kworkerB] padata serializes the inner pcrypt request 3. [kworkerC] padata serializes the outer pcrypt request 3 might finish before the call to crypto_aead_encrypt() returns in 1, resulting in two possible issues. First, a use-after-free of the crypto request's memory when, for example, taskA writes to the outer pcrypt request's padata->info in pcrypt_aead_enc() after kworkerC completes the request. Second, the outer pcrypt request overwrites the inner pcrypt request's return code with -EINPROGRESS, making a successful request appear to fail. For instance, kworkerB writes the outer pcrypt request's padata->info in pcrypt_aead_done() and then taskA overwrites it in pcrypt_aead_enc(). Avoid both situations by delaying the write of padata->info until after the inner crypto request's return code is checked. This prevents the use-after-free by not touching the crypto request's memory after the next-inner crypto request is made, and stops padata->info from being overwritten. Fixes: 5068c7a883d16 ("crypto: pcrypt - Add pcrypt crypto parallelization wrapper") Reported-by: syzbot+b187b77c8474f9648fae@syzkaller.appspotmail.com Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-11-17  crypto: ecc - fix CRYPTO_DEFAULT_RNG dependency  (Arnd Bergmann)
[ Upstream commit 38aa192a05f22f9778f9420e630f0322525ef12e ] The ecc.c file started out as part of the ECDH algorithm but got moved out into a standalone module later. It does not build without CRYPTO_DEFAULT_RNG, so now that other modules are using it as well we can run into this link error: aarch64-linux-ld: ecc.c:(.text+0xfc8): undefined reference to `crypto_default_rng' aarch64-linux-ld: ecc.c:(.text+0xff4): undefined reference to `crypto_put_default_rng' Move the 'select CRYPTO_DEFAULT_RNG' statement into the correct symbol. Fixes: 0d7a78643f69 ("crypto: ecrdsa - add EC-RDSA (GOST 34.10) algorithm") Fixes: 4e6602916bc6 ("crypto: ecdsa - Add support for ECDSA signature verification") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Stefan Berger <stefanb@linux.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-07-14  crypto: shash - avoid comparing pointers to exported functions under CFI  (Ard Biesheuvel)
[ Upstream commit 22ca9f4aaf431a9413dcc115dd590123307f274f ] crypto_shash_alg_has_setkey() is implemented by testing whether the .setkey() member of a struct shash_alg points to the default version, called shash_no_setkey(). As crypto_shash_alg_has_setkey() is a static inline, this requires shash_no_setkey() to be exported to modules. Unfortunately, when building with CFI, function pointers are routed via CFI stubs which are private to each module (or to the kernel proper) and so this function pointer comparison may fail spuriously. Let's fix this by turning crypto_shash_alg_has_setkey() into an out of line function. Cc: Sami Tolvanen <samitolvanen@google.com> Cc: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-05-11  crypto: rng - fix crypto_rng_reset() refcounting when !CRYPTO_STATS  (Eric Biggers)
commit 30d0f6a956fc74bb2e948398daf3278c6b08c7e9 upstream. crypto_stats_get() is a no-op when the kernel is compiled without CONFIG_CRYPTO_STATS, so pairing it with crypto_alg_put() unconditionally (as crypto_rng_reset() does) is wrong. Fix this by moving the call to crypto_stats_get() to just before the actual algorithm operation which might need it. This makes it always paired with crypto_stats_rng_seed(). Fixes: eed74b3eba9e ("crypto: rng - Fix a refcounting bug in crypto_rng_reset()") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-05-11  crypto: api - check for ERR pointers in crypto_destroy_tfm()  (Ard Biesheuvel)
[ Upstream commit 83681f2bebb34dbb3f03fecd8f570308ab8b7c2c ] Given that crypto_alloc_tfm() may return ERR pointers, and to avoid crashes on obscure error paths where such pointers are presented to crypto_destroy_tfm() (such as [0]), add an ERR_PTR check there before dereferencing the second argument as a struct crypto_tfm pointer. [0] https://lore.kernel.org/linux-crypto/000000000000de949705bc59e0f6@google.com/ Reported-by: syzbot+12cf5fbfdeba210a89dd@syzkaller.appspotmail.com Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
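(A sketch of the defensive check using a toy user-space emulation of ERR_PTR()/IS_ERR() rather than the kernel's <linux/err.h> macros: a destroy path that may be handed the error pointer returned by an allocation routine should bail out instead of dereferencing it.)

    #include <stdio.h>
    #include <stdint.h>

    /* Toy stand-ins for the kernel's ERR_PTR()/IS_ERR() helpers. */
    #define MAX_ERRNO 4095
    static void *err_ptr(long err) { return (void *)(uintptr_t)err; }
    static int is_err_or_null(const void *p)
    {
        return !p || (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
    }

    struct tfm { int refcnt; };

    static void destroy_tfm(struct tfm *t)
    {
        /* The fix: refuse ERR pointers (and NULL) before touching *t. */
        if (is_err_or_null(t))
            return;
        t->refcnt--;
        printf("destroyed, refcnt=%d\n", t->refcnt);
    }

    int main(void)
    {
        struct tfm good = { .refcnt = 1 };

        destroy_tfm(&good);          /* normal path */
        destroy_tfm(err_ptr(-2));    /* would have crashed before the check */
        destroy_tfm(NULL);
        return 0;
    }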
2021-03-20  crypto: x86 - Regularize glue function prototypes  (Kees Cook)
commit 9c1e8836edbbaf3656bc07437b59c04be034ac4e upstream. The crypto glue performed function prototype casting via macros to make indirect calls to assembly routines. Instead of performing casts at the call sites (which trips Control Flow Integrity prototype checking), switch each prototype to a common standard set of arguments which allows the removal of the existing macros. In order to keep pointer math unchanged, internal casting between u128 pointers and u8 pointers is added. Co-developed-by: João Moreira <joao.moreira@intel.com> Signed-off-by: João Moreira <joao.moreira@intel.com> Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Cc: Ard Biesheuvel <ardb@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-03-07  crypto: tcrypt - avoid signed overflow in byte count  (Ard Biesheuvel)
[ Upstream commit 303fd3e1c771077e32e96e5788817f025f0067e2 ] The signed long type used for printing the number of bytes processed in tcrypt benchmarks limits the range to -/+ 2 GiB, which is not sufficient to cover the performance of common accelerated ciphers such as AES-NI when benchmarked with sec=1. So switch to u64 instead. While at it, fix up a missing printk->pr_cont conversion in the AEAD benchmark. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
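(A small demonstration of the overflow class described above; the numbers are illustrative. A signed 32-bit byte counter caps out at 2 GiB, which a fast accelerated cipher can exceed within one second, while a 64-bit unsigned counter does not.)

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        uint64_t bytes_per_op = 1u << 14;   /* 16 KiB per request            */
        uint64_t ops = 200000;              /* requests completed in one sec */
        uint64_t total = bytes_per_op * ops;

        printf("total bytes: %" PRIu64 "\n", total);
        if (total > INT32_MAX)
            printf("would not fit in a signed 32-bit byte count "
                   "(INT32_MAX = %" PRId32 ")\n", INT32_MAX);
        return 0;
    }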
2021-03-04  crypto: ecdh_helper - Ensure 'len >= secret.len' in decode_key()  (Daniele Alessandrelli)
[ Upstream commit a53ab94eb6850c3657392e2d2ce9b38c387a2633 ] The length ('len' parameter) passed to crypto_ecdh_decode_key() is never checked against the length encoded in the passed buffer ('buf' parameter). This could lead to an out-of-bounds access when the passed length is less than the encoded length. Add a check to prevent that. Fixes: 3c4b23901a0c7 ("crypto: ecdh - Add ECDH software support") Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
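(Sketch of the kind of length validation the commit adds, with a hypothetical header layout: the caller-supplied buffer length must cover the lengths encoded inside the buffer before any of the encoded payload is read.)

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical header preceding the key material. */
    struct ecdh_hdr {
        unsigned short key_size;
        unsigned short params_size;
    };

    static int decode_key(const unsigned char *buf, size_t len)
    {
        struct ecdh_hdr hdr;

        if (len < sizeof(hdr))
            return -1;
        memcpy(&hdr, buf, sizeof(hdr));

        /* The fix: reject buffers shorter than what the header claims. */
        if (len < sizeof(hdr) + hdr.key_size + hdr.params_size)
            return -1;

        /* ... now safe to parse hdr.key_size bytes at buf + sizeof(hdr) ... */
        return 0;
    }

    int main(void)
    {
        unsigned char b[4] = { 0 };

        return decode_key(b, sizeof(b));
    }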
2021-01-12  crypto: asym_tpm: correctly zero out potential secrets  (Greg Kroah-Hartman)
commit f93274ef0fe972c120c96b3207f8fce376231a60 upstream. The function derive_pub_key() should be calling memzero_explicit() instead of memset() in case the compiler decides to optimize away the call to memset() because it "knows" no one is going to touch the memory anymore. Cc: stable <stable@vger.kernel.org> Reported-by: Ilil Blum Shem-Tov <ilil.blum.shem-tov@intel.com> Tested-by: Ilil Blum Shem-Tov <ilil.blum.shem-tov@intel.com> Link: https://lore.kernel.org/r/X8ns4AfwjKudpyfe@kroah.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
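(Illustrative user-space analogue of why memzero_explicit() exists: a plain memset() of a buffer that is never read again may be treated as a dead store and elided, so the kernel's variant pins the memset with a barrier. The barrier below is a simplified GCC/Clang-style stand-in for the kernel's barrier_data(), not the kernel implementation.)

    #include <string.h>

    /* Simplified analogue of the kernel's memzero_explicit(). */
    static void memzero_explicit_demo(void *s, size_t n)
    {
        memset(s, 0, n);
        /* Compiler barrier tied to the buffer so the memset above cannot
         * be optimized away as a dead store. */
        __asm__ __volatile__("" : : "r"(s) : "memory");
    }

    int main(void)
    {
        char derived_key[32] = "not really a secret";

        /* A bare memset() here could legally be elided at -O2. */
        memzero_explicit_demo(derived_key, sizeof(derived_key));
        return 0;
    }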
2021-01-12  crypto: ecdh - avoid buffer overflow in ecdh_set_secret()  (Ard Biesheuvel)
commit 0aa171e9b267ce7c52d3a3df7bc9c1fc0203dec5 upstream. Pavel reports that commit 17858b140bf4 ("crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()") fixes one problem but introduces another: the unconditional memcpy() introduced by that commit may overflow the target buffer if the source data is invalid, which could be the result of intentional tampering. So check params.key_size explicitly against the size of the target buffer before validating the key further. Fixes: 17858b140bf4 ("crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()") Reported-by: Pavel Machek <pavel@denx.de> Cc: <stable@vger.kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-12-30  crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()  (Ard Biesheuvel)
commit 17858b140bf49961b71d4e73f1c3ea9bc8e7dda0 upstream. ecdh_set_secret() casts a void* pointer to a const u64* in order to feed it into ecc_is_key_valid(). This is not generally permitted by the C standard, and leads to actual misalignment faults on ARMv6 cores. In some cases, these are fixed up in software, but this still leads to performance hits that are entirely avoidable. So let's copy the key into the ctx buffer first, which we will do anyway in the common case, and which guarantees correct alignment. Cc: <stable@vger.kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
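(A sketch of the pattern the two ecdh commits above describe, with hypothetical names and sizes: instead of casting the caller's void * straight to const u64 * (alignment-unsafe), bound-check the length against the context buffer and memcpy into it first, then validate the aligned copy.)

    #include <stdint.h>
    #include <string.h>

    #define ECC_MAX_BYTES 66   /* illustrative capacity of the ctx key buffer */

    struct ecdh_ctx_demo {
        uint64_t private_key[(ECC_MAX_BYTES + 7) / 8];  /* naturally aligned */
    };

    static int set_secret(struct ecdh_ctx_demo *ctx, const void *key,
                          size_t key_size)
    {
        /* Fix 1: never copy more than the destination can hold. */
        if (key_size > sizeof(ctx->private_key))
            return -1;

        /* Fix 2: copy before use, so later u64-wise access is aligned. */
        memcpy(ctx->private_key, key, key_size);

        /* ... ecc_is_key_valid()-style checks run on ctx->private_key ... */
        return 0;
    }

    int main(void)
    {
        struct ecdh_ctx_demo ctx;
        unsigned char raw[32] = { 1 };

        return set_secret(&ctx, raw, sizeof(raw));
    }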
2020-12-30  crypto: af_alg - avoid undefined behavior accessing salg_name  (Eric Biggers)
commit 92eb6c3060ebe3adf381fd9899451c5b047bb14d upstream. Commit 3f69cc60768b ("crypto: af_alg - Allow arbitrarily long algorithm names") made the kernel start accepting arbitrarily long algorithm names in sockaddr_alg. However, the actual length of the salg_name field stayed at the original 64 bytes. This is broken because the kernel can access indices >= 64 in salg_name, which is undefined behavior -- even though the memory that is accessed is still located within the sockaddr structure. It would only be defined behavior if the array were properly marked as arbitrary-length (either by making it a flexible array, which is the recommended way these days, or by making it an array of length 0 or 1). We can't simply change salg_name into a flexible array, since that would break source compatibility with userspace programs that embed sockaddr_alg into another struct, or (more commonly) declare a sockaddr_alg like 'struct sockaddr_alg sa = { .salg_name = "foo" };'. One solution would be to change salg_name into a flexible array only when '#ifdef __KERNEL__'. However, that would keep userspace without an easy way to actually use the longer algorithm names. Instead, add a new structure 'sockaddr_alg_new' that has the flexible array field, and expose it to both userspace and the kernel. Make the kernel use it correctly in alg_bind(). This addresses the syzbot report "UBSAN: array-index-out-of-bounds in alg_bind" (https://syzkaller.appspot.com/bug?extid=92ead4eb8e26a26d465e). Reported-by: syzbot+92ead4eb8e26a26d465e@syzkaller.appspotmail.com Fixes: 3f69cc60768b ("crypto: af_alg - Allow arbitrarily long algorithm names") Cc: <stable@vger.kernel.org> # v4.12+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
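(A user-space sketch of the flexible-array shape this commit introduces; the struct below is a local demo type assumed to mirror the <linux/if_alg.h> uapi layout, not a copy of it, and AF_ALG's numeric value is written inline for self-containment.)

    #include <stdlib.h>
    #include <string.h>

    /* Demo type: fixed-size fields followed by a flexible array member,
     * the point being that salg_name is no longer capped at 64 bytes. */
    struct sockaddr_alg_demo {
        unsigned short salg_family;
        unsigned char  salg_type[14];
        unsigned int   salg_feat;
        unsigned int   salg_mask;
        unsigned char  salg_name[];        /* flexible array member */
    };

    /* Build a bind() address for an arbitrarily long algorithm name. */
    static struct sockaddr_alg_demo *make_addr(const char *alg, size_t *lenp)
    {
        size_t name_len = strlen(alg) + 1;
        size_t total = sizeof(struct sockaddr_alg_demo) + name_len;
        struct sockaddr_alg_demo *sa = calloc(1, total);

        if (!sa)
            return NULL;
        sa->salg_family = 38;                    /* AF_ALG */
        memcpy(sa->salg_type, "skcipher", 9);
        memcpy(sa->salg_name, alg, name_len);
        *lenp = total;
        return sa;
    }

    int main(void)
    {
        size_t len;
        struct sockaddr_alg_demo *sa = make_addr("cbc(aes)", &len);

        if (!sa)
            return 1;
        /* 'sa' and 'len' would now be passed to bind() on an AF_ALG socket. */
        free(sa);
        return 0;
    }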
2020-10-29  crypto: algif_skcipher - EBUSY on aio should be an error  (Herbert Xu)
[ Upstream commit 2a05b029c1ee045b886ebf9efef9985ca23450de ] I removed the MAY_BACKLOG flag on the aio path a while ago but the error check still incorrectly interpreted EBUSY as success. This may cause the submitter to wait for a request that will never complete. Fixes: dad419970637 ("crypto: algif_skcipher - Do not set...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-10-29  crypto: algif_aead - Do not set MAY_BACKLOG on the async path  (Herbert Xu)
commit cbdad1f246dd98e6c9c32a6e5212337f542aa7e0 upstream. The async path cannot use MAY_BACKLOG because it is not meant to block, which is what MAY_BACKLOG does. On the other hand, both the sync and async paths can make use of MAY_SLEEP. Fixes: 83094e5e9e49 ("crypto: af_alg - add async support to...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-09-03  crypto: af_alg - Work around empty control messages without MSG_MORE  (Herbert Xu)
commit c195d66a8a75c60515819b101975f38b7ec6577f upstream. The iwd daemon uses libell which sets up the skcipher operation with two separate control messages. As the first control message is sent without MSG_MORE, it is interpreted as an empty request. While libell should be fixed to use MSG_MORE where appropriate, this patch works around the bug in the kernel so that existing binaries continue to work. We will print a warning however. A separate issue is that the new kernel code no longer allows the control message to be sent twice within the same request. This restriction is obviously incompatible with what iwd was doing (first setting an IV and then sending the real control message). This patch changes the kernel so that this is explicitly allowed. Reported-by: Caleb Jorden <caljorden@hotmail.com> Fixes: f3c802a1f300 ("crypto: algif_aead - Only wake up when...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-08-21  crypto: algif_aead - fix uninitialized ctx->init  (Ondrej Mosnacek)
[ Upstream commit 21dfbcd1f5cbff9cf2f9e7e43475aed8d072b0dd ] In skcipher_accept_parent_nokey() the whole af_alg_ctx structure is cleared by memset() after allocation, so add such memset() also to aead_accept_parent_nokey() so that the new "init" field is also initialized to zero. Without that the initial ctx->init checks might randomly return true and cause errors. While there, also remove the redundant zero assignments in both functions. Found via libkcapi testsuite. Cc: Stephan Mueller <smueller@chronox.de> Fixes: f3c802a1f300 ("crypto: algif_aead - Only wake up when ctx->more is zero") Suggested-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-21  crypto: af_alg - Fix regression on empty requests  (Herbert Xu)
[ Upstream commit 662bb52f50bca16a74fe92b487a14d7dccb85e1a ] Some user-space programs rely on crypto requests that have no control metadata. This broke when a check was added to require the presence of control metadata with the ctx->init flag. This patch fixes the regression by setting ctx->init as long as one sendmsg(2) has been made, with or without a control message. Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com> Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Fixes: f3c802a1f300 ("crypto: algif_aead - Only wake up when...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-08-21  crypto: algif_aead - Only wake up when ctx->more is zero  (Herbert Xu)
[ Upstream commit f3c802a1f30013f8f723b62d7fa49eb9e991da23 ] AEAD does not support partial requests so we must not wake up while ctx->more is set. In order to distinguish between the case of no data sent yet and a zero-length request, a new init flag has been added to ctx. SKCIPHER has also been modified to ensure that at least a block of data is available if there is more data to come. Fixes: 2d97591ef43d ("crypto: af_alg - consolidation of...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-07-22  keys: asymmetric: fix error return code in software_key_query()  (Wei Yongjun)
[ Upstream commit 6cbba1f9114a8134cff9138c79add15012fd52b9 ] Fix to return negative error code -ENOMEM from kmalloc() error handling case instead of 0, as done elsewhere in this function. Fixes: f1774cb8956a ("X.509: parse public key parameters from x509 for akcipher") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
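(The generic pattern behind this class of fix, sketched with placeholder names in user space: when an allocation fails inside a function that otherwise tracks its status in 'ret', the error path must set ret to -ENOMEM instead of falling through with the stale success value.)

    #include <stdlib.h>
    #include <errno.h>

    /* Placeholder for a query function that fills in a result buffer. */
    static int key_query_demo(unsigned char **out, size_t n)
    {
        int ret = 0;
        unsigned char *buf = malloc(n);

        if (!buf) {
            ret = -ENOMEM;    /* the fix: don't return 0 on allocation failure */
            goto out;
        }
        *out = buf;
    out:
        return ret;
    }

    int main(void)
    {
        unsigned char *p = NULL;
        int ret = key_query_demo(&p, 16);

        free(p);
        return ret ? 1 : 0;
    }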
2020-07-09  crypto: af_alg - fix use-after-free in af_alg_accept() due to bh_lock_sock()  (Herbert Xu)
commit 34c86f4c4a7be3b3e35aa48bd18299d4c756064d upstream. The locking in af_alg_release_parent is broken as the BH socket lock can only be taken if there is a code-path to handle the case where the lock is owned by process-context. Instead of adding such handling, we can fix this by changing the ref counts to atomic_t. This patch also modifies the main refcnt to include both normal and nokey sockets. This way we don't have to fudge the nokey ref count when a socket changes from nokey to normal. Credits go to Mauricio Faria de Oliveira who diagnosed this bug and sent a patch for it: https://lore.kernel.org/linux-crypto/20200605161657.535043-1-mfo@canonical.com/ Reported-by: Brian Moyles <bmoyles@netflix.com> Reported-by: Mauricio Faria de Oliveira <mfo@canonical.com> Fixes: 37f96694cf73 ("crypto: af_alg - Use bh_lock_sock in...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-06-24  crypto: algboss - don't wait during notifier callback  (Eric Biggers)
commit 77251e41f89a813b4090f5199442f217bbf11297 upstream. When a crypto template needs to be instantiated, CRYPTO_MSG_ALG_REQUEST is sent to crypto_chain. cryptomgr_schedule_probe() handles this by starting a thread to instantiate the template, then waiting for this thread to complete via crypto_larval::completion. This can deadlock because instantiating the template may require loading modules, and this (apparently depending on userspace) may need to wait for the crc-t10dif module (lib/crc-t10dif.c) to be loaded. But crc-t10dif's module_init function uses crypto_register_notifier() and therefore takes crypto_chain.rwsem for write. That can't proceed until the notifier callback has finished, as it holds this semaphore for read. Fix this by removing the wait on crypto_larval::completion from within cryptomgr_schedule_probe(). It's actually unnecessary because crypto_alg_mod_lookup() calls crypto_larval_wait() itself after sending CRYPTO_MSG_ALG_REQUEST. This only actually became a problem in v4.20 due to commit b76377543b73 ("crc-t10dif: Pick better transform if one becomes available"), but the unnecessary wait was much older. BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=207159 Reported-by: Mike Gerow <gerow@google.com> Fixes: 398710379f51 ("crypto: algapi - Move larval completion into algboss") Cc: <stable@vger.kernel.org> # v3.6+ Cc: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reported-by: Kai Lüke <kai@kinvolk.io> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-06-24  crypto: algif_skcipher - Cap recv SG list at ctx->used  (Herbert Xu)
commit 7cf81954705b7e5b057f7dc39a7ded54422ab6e1 upstream. Somewhere along the line the cap on the SG list length for receive was lost. This patch restores it and removes the subsequent test which is now redundant. Fixes: 2d97591ef43d ("crypto: af_alg - consolidation of...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-06-17  crypto: drbg - fix error return code in drbg_alloc_state()  (Wei Yongjun)
commit e0664ebcea6ac5e16da703409fb4bd61f8cd37d9 upstream. Fix to return negative error code -ENOMEM from the kzalloc error handling case instead of 0, as done elsewhere in this function. Reported-by: Xiumei Mu <xmu@redhat.com> Fixes: db07cd26ac6a ("crypto: drbg - add FIPS 140-2 CTRNG for noise source") Cc: <stable@vger.kernel.org> Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-06-17  crypto: algapi - Avoid spurious modprobe on LOADED  (Eric Biggers)
commit beeb460cd12ac9b91640b484b6a52dcba9d9fc8f upstream. Currently after any algorithm is registered and tested, there's an unnecessary request_module("cryptomgr") even if it's already loaded. Also, CRYPTO_MSG_ALG_LOADED is sent twice, and thus if the algorithm is "crct10dif", lib/crc-t10dif.c replaces the tfm twice rather than once. This occurs because CRYPTO_MSG_ALG_LOADED is sent using crypto_probing_notify(), which tries to load "cryptomgr" if the notification is not handled (NOTIFY_DONE). This doesn't make sense because "cryptomgr" doesn't handle this notification. Fix this by using crypto_notify() instead of crypto_probing_notify(). Fixes: dd8b083f9a5e ("crypto: api - Introduce notifier for new crypto algorithms") Cc: <stable@vger.kernel.org> # v4.20+ Cc: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-05-20  gcc-10: avoid shadowing standard library 'free()' in crypto  (Linus Torvalds)
commit 1a263ae60b04de959d9ce9caea4889385eefcc7b upstream. gcc-10 has started warning about conflicting types for a few new built-in functions, particularly 'free()'. This results in warnings like: crypto/xts.c:325:13: warning: conflicting types for built-in function ‘free’; expected ‘void(void *)’ [-Wbuiltin-declaration-mismatch] because the crypto layer had its local freeing functions called 'free()'. Gcc-10 is in the wrong here, since that function is marked 'static', and thus there is no chance of confusion with any standard library function namespace. But the simplest thing to do is to just use a different name here, and avoid this gcc mis-feature. [ Side note: gcc knowing about 'free()' is in itself not the mis-feature: the semantics of 'free()' are special enough that a compiler can validly do special things when seeing it. So the mis-feature here is that gcc thinks that 'free()' is some restricted name, and you can't shadow it as a local static function. Making the special 'free()' semantics be a function attribute rather than tied to the name would be the much better model ] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
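(A short illustration of the warning trigger and the rename workaround; the replacement name is made up, the in-tree crypto code chose its own local names, and the user-space demo calls libc free() where the kernel code would call kfree().)

    #include <stdlib.h>

    struct xts_instance_demo { int dummy; };

    /* Before: a file-local helper literally named 'free' draws
     * -Wbuiltin-declaration-mismatch from gcc-10 even though it is static:
     *
     *     static void free(struct xts_instance_demo *inst) { ... }
     */

    /* After: the same helper under a non-colliding name. */
    static void free_inst(struct xts_instance_demo *inst)
    {
        /* release per-instance resources, then the instance itself */
        free(inst);
    }

    int main(void)
    {
        struct xts_instance_demo *inst = calloc(1, sizeof(*inst));

        if (inst)
            free_inst(inst);
        return 0;
    }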
2020-04-17  crypto: rng - Fix a refcounting bug in crypto_rng_reset()  (Dan Carpenter)
commit eed74b3eba9eda36d155c11a12b2b4b50c67c1d8 upstream. We need to decrement this refcounter on these error paths. Fixes: f7d76e05d058 ("crypto: user - fix use_after_free of struct xxx_request") Cc: <stable@vger.kernel.org> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-28  crypto: rename sm3-256 to sm3 in hash_algo_name  (Tianjia Zhang)
commit 6a30e1b1dcad0ba94fae757f797812d7d8dcb72c upstream. The name sm3-256 is defined in hash_algo_name in hash_info, but the algorithm name implemented in sm3_generic.c is sm3, which will cause the sm3-256 algorithm to be not found in some application scenarios of the hash algorithm, and an ENOENT error will occur. For example, IMA, keys, and other subsystems that reference hash_algo_name all use the hash algorithm of sm3. Fixes: 5ca4c20cfd37 ("keys, trusted: select hash algorithm for TPM2 chips") Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Reviewed-by: Pascal van Leeuwen <pvanleeuwen@rambus.com> Signed-off-by: Mimi Zohar <zohar@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-24  crypto: essiv - fix AEAD capitalization and preposition use in help text  (Geert Uytterhoeven)
[ Upstream commit ab3d436bf3e9d05f58ceaa85ff7475bfcd6e45af ] "AEAD" is capitalized everywhere else. Use "an" when followed by a written or spoken vowel. Fixes: be1eb7f78aa8fbe3 ("crypto: essiv - create wrapper template for ESSIV generation") Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2020-02-14  crypto: testmgr - don't try to decrypt uninitialized buffers  (Eric Biggers)
commit eb455dbd02cb1074b37872ffca30a81cb2a18eaa upstream. Currently if the comparison fuzz tests encounter an encryption error when generating an skcipher or AEAD test vector, they will still test the decryption side (passing it the uninitialized ciphertext buffer) and expect it to fail with the same error. This is sort of broken because it's not well-defined usage of the API to pass an uninitialized buffer, and furthermore in the AEAD case it's acceptable for the decryption error to be EBADMSG (meaning "inauthentic input") even if the encryption error was something else like EINVAL. Fix this for skcipher by explicitly initializing the ciphertext buffer on error, and for AEAD by skipping the decryption test on error. Reported-by: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com> Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against their generic implementation") Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against their generic implementation") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-11  crypto: api - Fix race condition in crypto_spawn_alg  (Herbert Xu)
commit 73669cc556462f4e50376538d77ee312142e8a8a upstream. The function crypto_spawn_alg is racy because it drops the lock before shooting the dying algorithm. The algorithm could disappear altogether before we shoot it. This patch fixes it by moving the shooting into the locked section. Fixes: 6bfd48096ff8 ("[CRYPTO] api: Added spawns") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-11  crypto: pcrypt - Do not clear MAY_SLEEP flag in original request  (Herbert Xu)
commit e8d998264bffade3cfe0536559f712ab9058d654 upstream. We should not be modifying the original request's MAY_SLEEP flag upon completion. It makes no sense to do so anyway. Reported-by: Eric Biggers <ebiggers@kernel.org> Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-11  crypto: api - fix unexpectedly getting generic implementation  (Herbert Xu)
commit 2bbb3375d967155bccc86a5887d4a6e29c56b683 upstream. When CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y, the first lookup of an algorithm that needs to be instantiated using a template will always get the generic implementation, even when an accelerated one is available. This happens because the extra self-tests for the accelerated implementation allocate the generic implementation for comparison purposes, and then crypto_alg_tested() for the generic implementation "fulfills" the original request (i.e. sets crypto_larval::adult). This patch fixes this by only fulfilling the original request if we are currently the best outstanding larval as judged by the priority. If we're not the best then we will ask all waiters on that larval request to retry the lookup. Note that this patch introduces a behaviour change when the module providing the new algorithm is unregistered during the process. Previously we would have failed with ENOENT, after the patch we will instead redo the lookup. Fixes: 9a8a6b3f0950 ("crypto: testmgr - fuzz hashes against...") Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against...") Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against...") Reported-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-11  crypto: pcrypt - Avoid deadlock by using per-instance padata queues  (Herbert Xu)
commit bbefa1dd6a6d53537c11624752219e39959d04fb upstream. If the pcrypt template is used multiple times in an algorithm, then a deadlock occurs because all pcrypt instances share the same padata_instance, which completes requests in the order submitted. That is, the inner pcrypt request waits for the outer pcrypt request while the outer request is already waiting for the inner. This patch fixes this by allocating a set of queues for each pcrypt instance instead of using two global queues. In order to maintain the existing user-space interface, the pinst structure remains global so any sysfs modifications will apply to every pcrypt instance. Note that when an update occurs we have to allocate memory for every pcrypt instance. Should one of the allocations fail we will abort the update without rolling back changes already made. The new per-instance data structure is called padata_shell and is essentially a wrapper around parallel_data. Reproducer:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
        struct sockaddr_alg addr = {
            .salg_type = "aead",
            .salg_name = "pcrypt(pcrypt(rfc4106-gcm-aesni))"
        };
        int algfd, reqfd;
        char buf[32] = { 0 };

        algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(algfd, (void *)&addr, sizeof(addr));
        setsockopt(algfd, SOL_ALG, ALG_SET_KEY, buf, 20);
        reqfd = accept(algfd, 0, 0);
        write(reqfd, buf, 32);
        read(reqfd, buf, 16);
    }

Reported-by: syzbot+56c7151cad94eec37c521f0e47d2eee53f9361c4@syzkaller.appspotmail.com Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto parallelization wrapper") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-11  crypto: api - Check spawn->alg under lock in crypto_drop_spawn  (Herbert Xu)
commit 7db3b61b6bba4310f454588c2ca6faf2958ad79f upstream. We need to check whether spawn->alg is NULL under lock as otherwise the algorithm could be removed from under us after we have checked it and found it to be non-NULL. This could cause us to remove the spawn from a non-existent list. Fixes: 7ede5a5ba55a ("crypto: api - Fix crypto_drop_spawn crash...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-01  crypto: pcrypt - Fix use-after-free on module unload  (Herbert Xu)
commit 07bfd9bdf568a38d9440c607b72342036011f727 upstream. On module unload of pcrypt we must unregister the crypto algorithms first and then tear down the padata structure. As otherwise the crypto algorithms are still alive and can be used while the padata structure is being freed. Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-01  crypto: af_alg - Use bh_lock_sock in sk_destruct  (Herbert Xu)
commit 37f96694cf73ba116993a9d2d99ad6a75fa7fdb0 upstream. As af_alg_release_parent may be called from BH context (most notably due to an async request that only completes after socket closure, or as reported here because of an RCU-delayed sk_destruct call), we must use bh_lock_sock instead of lock_sock. Reported-by: syzbot+c2f1558d49e25cc36e5e@syzkaller.appspotmail.com Reported-by: Eric Dumazet <eric.dumazet@gmail.com> Fixes: c840ac6af3f8 ("crypto: af_alg - Disallow bind/setkey/...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-01-17  crypto: algif_skcipher - Use chunksize instead of blocksize  (Herbert Xu)
commit 5b0fe9552336338acb52756daf65dd7a4eeca73f upstream. When algif_skcipher does a partial operation it always processes data that is a multiple of blocksize. However, for algorithms such as CTR this is wrong because even though it can process any number of bytes overall, the partial block must come at the very end and not in the middle. This is exactly what chunksize is meant to describe so this patch changes blocksize to chunksize. Fixes: 8ff590903d5f ("crypto: algif_skcipher - User-space...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-12-31  KEYS: asymmetric: return ENOMEM if akcipher_request_alloc() fails  (Eric Biggers)
commit bea37414453eb08d4ceffeb60a9d490dbc930cea upstream. No error code was being set on this error path. Cc: stable@vger.kernel.org Fixes: ad4b1eb5fb33 ("KEYS: asym_tpm: Implement encryption operation [ver #2]") Fixes: c08fed737126 ("KEYS: Implement encrypt, decrypt and sign for software asymmetric key [ver #2]") Reviewed-by: James Morris <jamorris@linux.microsoft.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-12-31  crypto: aegis128/simd - build 32-bit ARM for v8 architecture explicitly  (Ard Biesheuvel)
[ Upstream commit 830536770f968ab33ece123b317e252c269098db ] Now that the Clang compiler has taken it upon itself to police the compiler command line, and reject combinations for arguments it views as incompatible, the AEGIS128 no longer builds correctly, and errors out like this: clang-10: warning: ignoring extension 'crypto' because the 'armv7-a' architecture does not support it [-Winvalid-command-line-argument] So let's switch to armv8-a instead, which matches the crypto-neon-fp-armv8 FPU profile we specify. Since neither were actually supported by GCC versions before 4.8, let's tighten the Kconfig dependencies as well so we won't run into errors when building with an ancient compiler. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Nathan Chancellor <natechancellor@gmail.com> Tested-by: Nathan Chancellor <natechancellor@gmail.com> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Reported-by: <ci_notify@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-12-31  crypto: aegis128-neon - use Clang compatible cflags for ARM  (Ard Biesheuvel)
[ Upstream commit 2eb2d198bd6cd0083a5363ce66272fb34a19928f ] The next version of Clang will start policing compiler command line options, and will reject combinations of -march and -mfpu that it thinks are incompatible. This results in errors like clang-10: warning: ignoring extension 'crypto' because the 'armv7-a' architecture does not support it [-Winvalid-command-line-argument] /tmp/aegis128-neon-inner-5ee428.s: Assembler messages: /tmp/aegis128-neon-inner-5ee428.s:73: Error: selected processor does not support `aese.8 q2,q14' in ARM mode when building the SIMD aegis128 code for 32-bit ARM, given that the 'armv7-a' -march argument is considered to be compatible with the ARM crypto extensions. Instead, we should use armv8-a, which does allow the crypto extensions to be enabled. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-12-13  crypto: user - fix memory leak in crypto_reportstat  (Navid Emamdoost)
commit c03b04dcdba1da39903e23cc4d072abf8f68f2dd upstream. In crypto_reportstat, a new skb is created by nlmsg_new(). This skb is leaked if crypto_reportstat_alg() fails. Required release for skb is added. Fixes: cac5818c25d0 ("crypto: user - Implement a generic crypto statistics") Cc: <stable@vger.kernel.org> Signed-off-by: Navid Emamdoost <navid.emamdoost@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-12-13  crypto: user - fix memory leak in crypto_report  (Navid Emamdoost)
commit ffdde5932042600c6807d46c1550b28b0db6a3bc upstream. In crypto_report, a new skb is created via nlmsg_new(). This skb should be released if crypto_report_alg() fails. Fixes: a38f7907b926 ("crypto: Add userspace configuration API") Cc: <stable@vger.kernel.org> Signed-off-by: Navid Emamdoost <navid.emamdoost@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-12-13  crypto: ecdh - fix big endian bug in ECC library  (Ard Biesheuvel)
commit f398243e9fd6a3a059c1ea7b380c40628dbf0c61 upstream. The elliptic curve arithmetic library used by the EC-DH KPP implementation assumes big endian byte order, and unconditionally reverses the byte and word order of multi-limb quantities. On big endian systems, the byte reordering is not necessary, while the word ordering needs to be retained. So replace the __swab64() invocation with a call to be64_to_cpu() which should do the right thing for both little and big endian builds. Fixes: 3c4b23901a0c ("crypto: ecdh - Add ECDH software support") Cc: <stable@vger.kernel.org> # v4.9+ Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
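(A small endian demo of the point above, using the glibc <endian.h>/<byteswap.h> helpers as user-space stand-ins for the kernel's be64_to_cpu() and __swab64(): an unconditional byte swap is only correct on little-endian hosts, whereas a be64-to-cpu conversion is correct on both.)

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <inttypes.h>
    #include <endian.h>
    #include <byteswap.h>

    int main(void)
    {
        /* 8 bytes of a big-endian multi-limb quantity, as found in keys. */
        const unsigned char be_bytes[8] = { 0, 0, 0, 0, 0, 0, 0x12, 0x34 };
        uint64_t raw;

        memcpy(&raw, be_bytes, sizeof(raw));

        /* Correct on little- and big-endian hosts alike. */
        printf("be64toh : 0x%" PRIx64 "\n", be64toh(raw));   /* 0x1234 */

        /* Unconditional swap: right on LE hosts, wrong on BE hosts. */
        printf("bswap_64: 0x%" PRIx64 "\n", bswap_64(raw));
        return 0;
    }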
2019-12-13  crypto: af_alg - cast ki_complete ternary op to int  (Ayush Sawal)
commit 64e7f852c47ce99f6c324c46d6a299a5a7ebead9 upstream. When the libkcapi test is executed using a HW accelerator, the cipher operation returns -74. Since af_alg_async_cb->ki_complete treats err as an unsigned int, libkcapi receives 4294967222 even though it expects a negative value. Hence the result length needs to be cast to int so that the proper error is returned to libkcapi. AEAD one shot non-aligned test 2 (libkcapi test): ./../bin/kcapi -x 10 -c "gcm(aes)" -i 7815d4b06ae50c9c56e87bd7 -k ea38ac0c9b9998c80e28fb496a2b88d9 -a "853f98a750098bec1aa7497e979e78098155c877879556bb51ddeb6374cbaefc" -t "c4ce58985b7203094be1d134c1b8ab0b" -q "b03692f86d1b8b39baf2abb255197c98" Fixes: d887c52d6ae4 ("crypto: algif_aead - overhaul memory management") Cc: <stable@vger.kernel.org> Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Atul Gupta <atul.gupta@chelsio.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
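(Worked example of the sign issue, independent of any particular driver: -74 (-EBADMSG) stored into an unsigned int becomes 4294967222, which is what user space sees unless the completion value is cast back to a signed int.)

    #include <stdio.h>

    int main(void)
    {
        int err = -74;                     /* -EBADMSG from the cipher op */
        unsigned int as_unsigned = err;

        printf("as unsigned: %u\n", as_unsigned);      /* 4294967222 */
        printf("cast to int: %d\n", (int)as_unsigned); /* -74 again  */
        return 0;
    }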
2019-09-28  Merge branch 'next-lockdown' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security  (Linus Torvalds)
Pull kernel lockdown mode from James Morris: "This is the latest iteration of the kernel lockdown patchset, from Matthew Garrett, David Howells and others. From the original description: This patchset introduces an optional kernel lockdown feature, intended to strengthen the boundary between UID 0 and the kernel. When enabled, various pieces of kernel functionality are restricted. Applications that rely on low-level access to either hardware or the kernel may cease working as a result - therefore this should not be enabled without appropriate evaluation beforehand. The majority of mainstream distributions have been carrying variants of this patchset for many years now, so there's value in providing a mainline implementation; it doesn't meet every distribution requirement, but gets us much closer to not requiring external patches. There are two major changes since this was last proposed for mainline: - Separating lockdown from EFI secure boot. Background discussion is covered here: https://lwn.net/Articles/751061/ - Implementation as an LSM, with a default stackable lockdown LSM module. This allows the lockdown feature to be policy-driven, rather than encoding an implicit policy within the mechanism. The new locked_down LSM hook is provided to allow LSMs to make a policy decision around whether kernel functionality that would allow tampering with or examining the runtime state of the kernel should be permitted. The included lockdown LSM provides an implementation with a simple policy intended for general purpose use. This policy provides a coarse level of granularity, controllable via the kernel command line: lockdown={integrity|confidentiality} Enable the kernel lockdown feature. If set to integrity, kernel features that allow userland to modify the running kernel are disabled. If set to confidentiality, kernel features that allow userland to extract confidential information from the kernel are also disabled. This may also be controlled via /sys/kernel/security/lockdown and overridden by kernel configuration. New or existing LSMs may implement finer-grained controls of the lockdown features. Refer to the lockdown_reason documentation in include/linux/security.h for details. The lockdown feature has had significant design feedback and review across many subsystems. This code has been in linux-next for some weeks, with a few fixes applied along the way. Stephen Rothwell noted that commit 9d1f8be5cf42 ("bpf: Restrict bpf when kernel lockdown is in confidentiality mode") is missing a Signed-off-by from its author. Matthew responded that he is providing this under category (c) of the DCO"

* 'next-lockdown' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (31 commits)
  kexec: Fix file verification on S390
  security: constify some arrays in lockdown LSM
  lockdown: Print current->comm in restriction messages
  efi: Restrict efivar_ssdt_load when the kernel is locked down
  tracefs: Restrict tracefs when the kernel is locked down
  debugfs: Restrict debugfs when the kernel is locked down
  kexec: Allow kexec_file() with appropriate IMA policy when locked down
  lockdown: Lock down perf when in confidentiality mode
  bpf: Restrict bpf when kernel lockdown is in confidentiality mode
  lockdown: Lock down tracing and perf kprobes when in confidentiality mode
  lockdown: Lock down /proc/kcore
  x86/mmiotrace: Lock down the testmmiotrace module
  lockdown: Lock down module params that specify hardware parameters (eg. ioport)
  lockdown: Lock down TIOCSSERIAL
  lockdown: Prohibit PCMCIA CIS storage when the kernel is locked down
  acpi: Disable ACPI table override if the kernel is locked down
  acpi: Ignore acpi_rsdp kernel param when the kernel has been locked down
  ACPI: Limit access to custom_method when the kernel is locked down
  x86/msr: Restrict MSR access when the kernel is locked down
  x86: Lock down IO port access when the kernel is locked down
  ...