path: root/crypto
Age  Commit message  Author
2021-05-22  crypto: api - check for ERR pointers in crypto_destroy_tfm()  (Ard Biesheuvel)
[ Upstream commit 83681f2bebb34dbb3f03fecd8f570308ab8b7c2c ] Given that crypto_alloc_tfm() may return ERR pointers, and to avoid crashes on obscure error paths where such pointers are presented to crypto_destroy_tfm() (such as [0]), add an ERR_PTR check there before dereferencing the second argument as a struct crypto_tfm pointer. [0] https://lore.kernel.org/linux-crypto/000000000000de949705bc59e0f6@google.com/ Reported-by: syzbot+12cf5fbfdeba210a89dd@syzkaller.appspotmail.com Reviewed-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
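The shape of that guard, sketched (illustrative rather than the verbatim upstream diff):

    void crypto_destroy_tfm(void *mem, struct crypto_tfm *tfm)
    {
            /* mem may be an ERR_PTR from a failed crypto_alloc_tfm();
             * bail out before dereferencing tfm, which points into the
             * same allocation. */
            if (IS_ERR_OR_NULL(mem))
                    return;

            /* ... existing exit/free logic runs only for real tfms ... */
    }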
2021-03-07  crypto: tcrypt - avoid signed overflow in byte count  (Ard Biesheuvel)
[ Upstream commit 303fd3e1c771077e32e96e5788817f025f0067e2 ] The signed long type used for printing the number of bytes processed in tcrypt benchmarks limits the range to -/+ 2 GiB, which is not sufficient to cover the performance of common accelerated ciphers such as AES-NI when benchmarked with sec=1. So switch to u64 instead. While at it, fix up a missing printk->pr_cont conversion in the AEAD benchmark. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
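The gist of the type change, sketched (variable and format names here are illustrative, not the exact tcrypt diff):

    u64 bcount;     /* was: long; wraps past ~2 GiB processed per run */

    /* pr_cont() rather than printk(), matching the AEAD fixup noted above */
    pr_cont("%llu operations in %d seconds (%llu bytes)\n",
            bcount, secs, bcount * buflen);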
2021-03-04  crypto: ecdh_helper - Ensure 'len >= secret.len' in decode_key()  (Daniele Alessandrelli)
[ Upstream commit a53ab94eb6850c3657392e2d2ce9b38c387a2633 ] The length ('len' parameter) passed to crypto_ecdh_decode_key() is never checked against the length encoded in the passed buffer ('buf' parameter). This could lead to an out-of-bounds access when the passed length is less than the encoded length. Add a check to prevent that. Fixes: 3c4b23901a0c7 ("crypto: ecdh - Add ECDH software support") Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
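The added check, sketched against the decode_key() flow (names follow the kpp_secret header the function already parses):

    /* 'len' is the caller's buffer size; 'secret.len' was decoded from
     * the buffer itself and must not exceed it. */
    if (secret.type != CRYPTO_KPP_SECRET_TYPE_ECDH)
            return -EINVAL;
    if (unlikely(len < secret.len))
            return -EINVAL;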
2021-01-12  crypto: ecdh - avoid buffer overflow in ecdh_set_secret()  (Ard Biesheuvel)
commit 0aa171e9b267ce7c52d3a3df7bc9c1fc0203dec5 upstream. Pavel reports that commit 17858b140bf4 ("crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()") fixes one problem but introduces another: the unconditional memcpy() introduced by that commit may overflow the target buffer if the source data is invalid, which could be the result of intentional tampering. So check params.key_size explicitly against the size of the target buffer before validating the key further. Fixes: 17858b140bf4 ("crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()") Reported-by: Pavel Machek <pavel@denx.de> Cc: <stable@vger.kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
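A sketch of the added bound (ctx field names per the kernel's ecdh context; treat as illustrative):

    /* Reject oversized keys before the unconditional memcpy() into
     * ctx->private_key (ndigits 64-bit limbs). */
    if (params.key_size > sizeof(u64) * ctx->ndigits)
            return -EINVAL;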
2020-12-30  crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()  (Ard Biesheuvel)
commit 17858b140bf49961b71d4e73f1c3ea9bc8e7dda0 upstream. ecdh_set_secret() casts a void* pointer to a const u64* in order to feed it into ecc_is_key_valid(). This is not generally permitted by the C standard, and leads to actual misalignment faults on ARMv6 cores. In some cases, these are fixed up in software, but this still leads to performance hits that are entirely avoidable. So let's copy the key into the ctx buffer first, which we will do anyway in the common case, and which guarantees correct alignment. Cc: <stable@vger.kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
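The copy-first pattern it describes, sketched:

    /* Copy into the naturally aligned ctx buffer first; validating in
     * place would cast a possibly misaligned caller pointer to u64 *. */
    memcpy(ctx->private_key, params.key, params.key_size);
    ret = ecc_is_key_valid(ctx->curve_id, ctx->ndigits,
                           ctx->private_key, params.key_size);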
2020-12-30  crypto: af_alg - avoid undefined behavior accessing salg_name  (Eric Biggers)
commit 92eb6c3060ebe3adf381fd9899451c5b047bb14d upstream. Commit 3f69cc60768b ("crypto: af_alg - Allow arbitrarily long algorithm names") made the kernel start accepting arbitrarily long algorithm names in sockaddr_alg. However, the actual length of the salg_name field stayed at the original 64 bytes. This is broken because the kernel can access indices >= 64 in salg_name, which is undefined behavior -- even though the memory that is accessed is still located within the sockaddr structure. It would only be defined behavior if the array were properly marked as arbitrary-length (either by making it a flexible array, which is the recommended way these days, or by making it an array of length 0 or 1). We can't simply change salg_name into a flexible array, since that would break source compatibility with userspace programs that embed sockaddr_alg into another struct, or (more commonly) declare a sockaddr_alg like 'struct sockaddr_alg sa = { .salg_name = "foo" };'. One solution would be to change salg_name into a flexible array only when '#ifdef __KERNEL__'. However, that would keep userspace without an easy way to actually use the longer algorithm names. Instead, add a new structure 'sockaddr_alg_new' that has the flexible array field, and expose it to both userspace and the kernel. Make the kernel use it correctly in alg_bind(). This addresses the syzbot report "UBSAN: array-index-out-of-bounds in alg_bind" (https://syzkaller.appspot.com/bug?extid=92ead4eb8e26a26d465e). Reported-by: syzbot+92ead4eb8e26a26d465e@syzkaller.appspotmail.com Fixes: 3f69cc60768b ("crypto: af_alg - Allow arbitrarily long algorithm names") Cc: <stable@vger.kernel.org> # v4.12+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
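The resulting UAPI structure is the old sockaddr_alg with salg_name turned into a flexible array:

    struct sockaddr_alg_new {
            __u16   salg_family;
            __u8    salg_type[14];
            __u32   salg_feat;
            __u32   salg_mask;
            __u8    salg_name[];    /* flexible array: indices >= 64 are
                                     * now defined behavior */
    };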
2020-10-29  crypto: algif_skcipher - EBUSY on aio should be an error  (Herbert Xu)
[ Upstream commit 2a05b029c1ee045b886ebf9efef9985ca23450de ] I removed the MAY_BACKLOG flag on the aio path a while ago but the error check still incorrectly interpreted EBUSY as success. This may cause the submitter to wait for a request that will never complete. Fixes: dad419970637 ("crypto: algif_skcipher - Do not set...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
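The corrected check, sketched (surrounding recvmsg bookkeeping elided):

    err = ctx->enc ? crypto_skcipher_encrypt(req)
                   : crypto_skcipher_decrypt(req);

    /* Without MAY_BACKLOG, -EBUSY is a hard failure; only -EINPROGRESS
     * means the AIO request was actually queued. */
    if (err == -EINPROGRESS)
            return -EIOCBQUEUED;

    return err;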
2020-10-29  crypto: algif_aead - Do not set MAY_BACKLOG on the async path  (Herbert Xu)
commit cbdad1f246dd98e6c9c32a6e5212337f542aa7e0 upstream. The async path cannot use MAY_BACKLOG because it is not meant to block, which is what MAY_BACKLOG does. On the other hand, both the sync and async paths can make use of MAY_SLEEP. Fixes: 83094e5e9e49 ("crypto: af_alg - add async support to...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
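A sketch of the resulting flag split (callback plumbing approximated from the af_alg code):

    /* Async completion: may sleep, but must never backlog (block). */
    aead_request_set_callback(&areq->cra_u.aead_req,
                              CRYPTO_TFM_REQ_MAY_SLEEP,
                              af_alg_async_cb, areq);

    /* Sync wait: sleeping and backlogging are both acceptable. */
    aead_request_set_callback(&areq->cra_u.aead_req,
                              CRYPTO_TFM_REQ_MAY_SLEEP |
                              CRYPTO_TFM_REQ_MAY_BACKLOG,
                              crypto_req_done, &ctx->wait);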
2020-07-09  crypto: af_alg - fix use-after-free in af_alg_accept() due to bh_lock_sock()  (Herbert Xu)
commit 34c86f4c4a7be3b3e35aa48bd18299d4c756064d upstream. The locking in af_alg_release_parent is broken as the BH socket lock can only be taken if there is a code-path to handle the case where the lock is owned by process-context. Instead of adding such handling, we can fix this by changing the ref counts to atomic_t. This patch also modifies the main refcnt to include both normal and nokey sockets. This way we don't have to fudge the nokey ref count when a socket changes from nokey to normal. Credits go to Mauricio Faria de Oliveira who diagnosed this bug and sent a patch for it: https://lore.kernel.org/linux-crypto/20200605161657.535043-1-mfo@canonical.com/ Reported-by: Brian Moyles <bmoyles@netflix.com> Reported-by: Mauricio Faria de Oliveira <mfo@canonical.com> Fixes: 37f96694cf73 ("crypto: af_alg - Use bh_lock_sock in...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-06-25  crypto: algboss - don't wait during notifier callback  (Eric Biggers)
commit 77251e41f89a813b4090f5199442f217bbf11297 upstream. When a crypto template needs to be instantiated, CRYPTO_MSG_ALG_REQUEST is sent to crypto_chain. cryptomgr_schedule_probe() handles this by starting a thread to instantiate the template, then waiting for this thread to complete via crypto_larval::completion. This can deadlock because instantiating the template may require loading modules, and this (apparently depending on userspace) may need to wait for the crc-t10dif module (lib/crc-t10dif.c) to be loaded. But crc-t10dif's module_init function uses crypto_register_notifier() and therefore takes crypto_chain.rwsem for write. That can't proceed until the notifier callback has finished, as it holds this semaphore for read. Fix this by removing the wait on crypto_larval::completion from within cryptomgr_schedule_probe(). It's actually unnecessary because crypto_alg_mod_lookup() calls crypto_larval_wait() itself after sending CRYPTO_MSG_ALG_REQUEST. This only actually became a problem in v4.20 due to commit b76377543b73 ("crc-t10dif: Pick better transform if one becomes available"), but the unnecessary wait was much older. BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=207159 Reported-by: Mike Gerow <gerow@google.com> Fixes: 398710379f51 ("crypto: algapi - Move larval completion into algboss") Cc: <stable@vger.kernel.org> # v3.6+ Cc: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reported-by: Kai Lüke <kai@kinvolk.io> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-06-25  crypto: algif_skcipher - Cap recv SG list at ctx->used  (Herbert Xu)
commit 7cf81954705b7e5b057f7dc39a7ded54422ab6e1 upstream. Somewhere along the line the cap on the SG list length for receive was lost. This patch restores it and removes the subsequent test which is now redundant. Fixes: 2d97591ef43d ("crypto: af_alg - consolidation of...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
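The restored cap, sketched:

    /* Never build a receive SG list longer than the ciphertext actually
     * queued on the socket. */
    len = min_t(size_t, len, ctx->used);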
2020-05-20  gcc-10: avoid shadowing standard library 'free()' in crypto  (Linus Torvalds)
commit 1a263ae60b04de959d9ce9caea4889385eefcc7b upstream. gcc-10 has started warning about conflicting types for a few new built-in functions, particularly 'free()'. This results in warnings like: crypto/xts.c:325:13: warning: conflicting types for built-in function ‘free’; expected ‘void(void *)’ [-Wbuiltin-declaration-mismatch] because the crypto layer had its local freeing functions called 'free()'. Gcc-10 is in the wrong here, since that function is marked 'static', and thus there is no chance of confusion with any standard library function namespace. But the simplest thing to do is to just use a different name here, and avoid this gcc mis-feature. [ Side note: gcc knowing about 'free()' is in itself not the mis-feature: the semantics of 'free()' are special enough that a compiler can validly do special things when seeing it. So the mis-feature here is that gcc thinks that 'free()' is some restricted name, and you can't shadow it as a local static function. Making the special 'free()' semantics be a function attribute rather than tied to the name would be the much better model ] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
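The rename itself is mechanical; roughly, in crypto/xts.c and friends:

    /* Was 'static void free(...)'; renamed so gcc-10 stops matching it
     * against the built-in declaration.  Behavior is unchanged. */
    static void free_inst(struct skcipher_instance *inst)
    {
            crypto_drop_skcipher(skcipher_instance_ctx(inst));
            kfree(inst);
    }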
2020-02-11  crypto: api - Fix race condition in crypto_spawn_alg  (Herbert Xu)
commit 73669cc556462f4e50376538d77ee312142e8a8a upstream. The function crypto_spawn_alg is racy because it drops the lock before shooting the dying algorithm. The algorithm could disappear altogether before we shoot it. This patch fixes it by moving the shooting into the locked section. Fixes: 6bfd48096ff8 ("[CRYPTO] api: Added spawns") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-11  crypto: pcrypt - Do not clear MAY_SLEEP flag in original request  (Herbert Xu)
commit e8d998264bffade3cfe0536559f712ab9058d654 upstream. We should not be modifying the original request's MAY_SLEEP flag upon completion. It makes no sense to do so anyway. Reported-by: Eric Biggers <ebiggers@kernel.org> Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-11  crypto: api - Check spawn->alg under lock in crypto_drop_spawn  (Herbert Xu)
commit 7db3b61b6bba4310f454588c2ca6faf2958ad79f upstream. We need to check whether spawn->alg is NULL under lock as otherwise the algorithm could be removed from under us after we have checked it and found it to be non-NULL. This could cause us to remove the spawn from a non-existent list. Fixes: 7ede5a5ba55a ("crypto: api - Fix crypto_drop_spawn crash...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-02-05  crypto: pcrypt - Fix use-after-free on module unload  (Herbert Xu)
[ Upstream commit 07bfd9bdf568a38d9440c607b72342036011f727 ] On module unload of pcrypt we must unregister the crypto algorithms first and then tear down the padata structure. As otherwise the crypto algorithms are still alive and can be used while the padata structure is being freed. Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
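The corrected teardown order, sketched from the description (helper names per crypto/pcrypt.c):

    static void __exit pcrypt_exit(void)
    {
            /* Remove the template first: once it is gone no new pcrypt
             * users can appear, so the padata instances can be freed. */
            crypto_unregister_template(&pcrypt_tmpl);

            pcrypt_fini_padata(pencrypt);
            pcrypt_fini_padata(pdecrypt);

            kset_unregister(pcrypt_kset);
    }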
2020-02-01  crypto: af_alg - Use bh_lock_sock in sk_destruct  (Herbert Xu)
commit 37f96694cf73ba116993a9d2d99ad6a75fa7fdb0 upstream. As af_alg_release_parent may be called from BH context (most notably due to an async request that only completes after socket closure, or as reported here because of an RCU-delayed sk_destruct call), we must use bh_lock_sock instead of lock_sock. Reported-by: syzbot+c2f1558d49e25cc36e5e@syzkaller.appspotmail.com Reported-by: Eric Dumazet <eric.dumazet@gmail.com> Fixes: c840ac6af3f8 ("crypto: af_alg - Disallow bind/setkey/...") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-01-27  crypto: tgr192 - fix unaligned memory access  (Eric Biggers)
[ Upstream commit f990f7fb58ac8ac9a43316f09a48cff1a49dda42 ] Fix an unaligned memory access in tgr192_transform() by using the unaligned access helpers. Fixes: 06ace7a9bafe ("[CRYPTO] Use standard byte order macros wherever possible") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
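The load loop after the change, sketched (Tiger consumes little-endian 64-bit words):

    #include <asm/unaligned.h>

    /* 'data' has no alignment guarantee, so don't cast it to u64 *. */
    for (i = 0; i < 8; i++)
            x[i] = get_unaligned_le64(data + i * sizeof(u64));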
2020-01-27  pcrypt: use format specifier in kobject_add  (Colin Ian King)
[ Upstream commit b1e3874c75ab15288f573b3532e507c37e8e7656 ] Passing string 'name' as the format specifier is potentially hazardous because name could (although very unlikely to) have a format specifier embedded in it causing issues when parsing the non-existent arguments to these. Follow best practice by using the "%s" format string for the string 'name'. Cleans up clang warning: crypto/pcrypt.c:397:40: warning: format string is not a string literal (potentially insecure) [-Wformat-security] Fixes: a3fb1e330dd2 ("pcrypt: Added sysfs interface to pcrypt") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
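The fix is a single call site in crypto/pcrypt.c; sketched (the parent kobject argument is abbreviated here as parent_kobj):

    /* 'name' is data; pass it through a literal "%s" format. */
    ret = kobject_add(&pinst->kobj, parent_kobj, "%s", name);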
2019-12-13  crypto: user - fix memory leak in crypto_report  (Navid Emamdoost)
commit ffdde5932042600c6807d46c1550b28b0db6a3bc upstream. In crypto_report, a new skb is created via nlmsg_new(). This skb should be released if crypto_report_alg() fails. Fixes: a38f7907b926 ("crypto: Add userspace configuration API") Cc: <stable@vger.kernel.org> Signed-off-by: Navid Emamdoost <navid.emamdoost@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
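A sketch of the corrected error path (structure approximates crypto_user's crypto_report()):

    skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_ATOMIC);
    if (!skb)
            return -ENOMEM;

    err = crypto_report_alg(alg, &info);
    if (err) {
            kfree_skb(skb);     /* previously leaked on this path */
            return err;
    }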
2019-12-13  crypto: ecdh - fix big endian bug in ECC library  (Ard Biesheuvel)
commit f398243e9fd6a3a059c1ea7b380c40628dbf0c61 upstream. The elliptic curve arithmetic library used by the EC-DH KPP implementation assumes big endian byte order, and unconditionally reverses the byte and word order of multi-limb quantities. On big endian systems, the byte reordering is not necessary, while the word ordering needs to be retained. So replace the __swab64() invocation with a call to be64_to_cpu() which should do the right thing for both little and big endian builds. Fixes: 3c4b23901a0c ("crypto: ecdh - Add ECDH software support") Cc: <stable@vger.kernel.org> # v4.9+ Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
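The substance of the change in ecc_swap_digits(), sketched:

    static void ecc_swap_digits(const u64 *in, u64 *out, unsigned int ndigits)
    {
            const __be64 *src = (const __be64 *)in;  /* network byte order */
            int i;

            /* Keep the word reversal; let be64_to_cpu() decide whether a
             * byte swap is needed (it is a no-op on big endian builds). */
            for (i = 0; i < ndigits; i++)
                    out[i] = be64_to_cpu(src[ndigits - 1 - i]);
    }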
2019-12-13  crypto: af_alg - cast ki_complete ternary op to int  (Ayush Sawal)
commit 64e7f852c47ce99f6c324c46d6a299a5a7ebead9 upstream. When the libkcapi test is executed using a HW accelerator, the cipher operation returns -74. Since af_alg_async_cb->ki_complete() treats err as an unsigned int, libkcapi receives 4294967222 even though it expects a negative value. Hence resultlen must be cast to int so that the proper error is returned to libkcapi. AEAD one-shot non-aligned test 2 (libkcapi test): ./../bin/kcapi -x 10 -c "gcm(aes)" -i 7815d4b06ae50c9c56e87bd7 -k ea38ac0c9b9998c80e28fb496a2b88d9 -a "853f98a750098bec1aa7497e979e78098155c877879556bb51ddeb6374cbaefc" -t "c4ce58985b7203094be1d134c1b8ab0b" -q "b03692f86d1b8b39baf2abb255197c98" Fixes: d887c52d6ae4 ("crypto: algif_aead - overhaul memory management") Cc: <stable@vger.kernel.org> Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Atul Gupta <atul.gupta@chelsio.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
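The cast in the completion callback, sketched:

    /* ki_complete() takes a long; going through int preserves the sign
     * of error codes instead of handing userspace 2^32 - 74. */
    iocb->ki_complete(iocb, err ? err : (int)resultlen, 0);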
2019-12-13  crypto: ecc - check for invalid values in the key verification test  (Vitaly Chikunov)
[ Upstream commit 2eb4942b6609d35a4e835644a33203b0aef7443d ] The currently used scalar multiplication algorithm (Matthieu Rivain, 2011) has invalid values for scalar == 1, n-1, and, in the regularized version, n-2, which were previously not checked. Verify that these values are not used as private keys. Signed-off-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-12-05  crypto: user - support incremental algorithm dumps  (Eric Biggers)
[ Upstream commit 0ac6b8fb23c724b015d9ca70a89126e8d1563166 ] CRYPTO_MSG_GETALG in NLM_F_DUMP mode sometimes doesn't return all registered crypto algorithms, because it doesn't support incremental dumps. crypto_dump_report() only permits itself to be called once, yet the netlink subsystem allocates at most ~64 KiB for the skb being dumped to. Thus only the first recvmsg() returns data, and it may only include a subset of the crypto algorithms even if the user buffer passed to recvmsg() is large enough to hold all of them. Fix this by using one of the arguments in the netlink_callback structure to keep track of the current position in the algorithm list. Then userspace can do multiple recvmsg() on the socket after sending the dump request. This is the way netlink dumps work elsewhere in the kernel; it's unclear why this was different (probably just an oversight). Also fix an integer overflow when calculating the dump buffer size hint. Fixes: a38f7907b926 ("crypto: Add userspace configuration API") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
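A hedged sketch of the standard netlink incremental-dump pattern the fix adopts; crypto_report_alg()'s signature is simplified here for illustration:

    static int crypto_dump_report(struct sk_buff *skb,
                                  struct netlink_callback *cb)
    {
            const size_t start_pos = cb->args[0]; /* saved by the last call */
            size_t pos = 0;
            struct crypto_alg *alg;

            list_for_each_entry(alg, &crypto_alg_list, cra_list) {
                    if (pos >= start_pos &&
                        crypto_report_alg(alg, skb) < 0)
                            break;  /* skb full; resume on next recvmsg() */
                    pos++;
            }

            /* 'pos' was not advanced past the entry that didn't fit, so
             * the next dump call retries it. */
            cb->args[0] = pos;
            return skb->len;
    }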
2019-12-01  crypto: testmgr - fix sizeof() on COMP_BUF_SIZE  (Michael Schupikov)
[ Upstream commit 22a8118d329334833cd30f2ceb36d28e8cae8a4f ] After allocation, output and decomp_output both point to memory chunks of size COMP_BUF_SIZE. Then, only the first bytes are zeroed out using sizeof(COMP_BUF_SIZE) as parameter to memset(), because sizeof(COMP_BUF_SIZE) provides the size of the constant and not the size of allocated memory. Instead, the whole allocated memory is meant to be zeroed out. Use COMP_BUF_SIZE as parameter to memset() directly in order to accomplish this. Fixes: 336073840a872 ("crypto: testmgr - Allow different compression results") Signed-off-by: Michael Schupikov <michael@schupikov.de> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
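The bug and fix side by side:

    /* Buggy: sizeof(COMP_BUF_SIZE) is sizeof(int), so only 4 bytes of
     * each buffer were cleared: */
    memset(output, 0, sizeof(COMP_BUF_SIZE));

    /* Fixed: clear the full allocation: */
    memset(output, 0, COMP_BUF_SIZE);
    memset(decomp_output, 0, COMP_BUF_SIZE);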
2019-11-20  crypto: fix a memory leak in rsa-pkcs1pad's encryption mode  (Dan Aloni)
[ Upstream commit 3944f139d5592790b70bc64f197162e643a8512b ] The encryption mode of pkcs1pad never uses out_sg and out_buf, so there's no need to allocate the buffer, which presently is not even being freed. CC: Herbert Xu <herbert@gondor.apana.org.au> CC: linux-crypto@vger.kernel.org CC: "David S. Miller" <davem@davemloft.net> Signed-off-by: Dan Aloni <dan@kernelim.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-11-20  crypto: chacha20 - Fix chacha20_block() keystream alignment (again)  (Eric Biggers)
[ Upstream commit a5e9f557098e54af44ade5d501379be18435bfbf ] In commit 9f480faec58c ("crypto: chacha20 - Fix keystream alignment for chacha20_block()"), I had missed that chacha20_block() can be called directly on the buffer passed to get_random_bytes(), which can have any alignment. So, while my commit didn't break anything, it didn't fully solve the alignment problems. Revert my solution and just update chacha20_block() to use put_unaligned_le32(), so the output buffer need not be aligned. This is simpler, and on many CPUs it's the same speed. But, I kept the 'tmp' buffers in extract_crng_user() and _get_random_bytes() 4-byte aligned, since that alignment is actually needed for _crng_backtrack_protect() too. Reported-by: Stephan Müller <smueller@chronox.de> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
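The output loop after the change, sketched (keystream buffer treated as byte-addressed):

    #include <asm/unaligned.h>

    /* 'stream' may be unaligned, e.g. when chacha20_block() is called
     * directly on the buffer passed to get_random_bytes(); store each
     * word with an unaligned put. */
    for (i = 0; i < 16; i++)
            put_unaligned_le32(x[i] + state[i], &stream[i * sizeof(u32)]);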
2019-10-11  crypto: skcipher - Unmap pages after an external error  (Herbert Xu)
commit 0ba3c026e685573bd3534c17e27da7c505ac99c4 upstream. skcipher_walk_done may be called with an error by internal or external callers. For those internal callers we shouldn't unmap pages but for external callers we must unmap any pages that are in use. This patch distinguishes between the two cases by checking whether walk->nbytes is zero or not. For internal callers, we now set walk->nbytes to zero prior to the call. For external callers, walk->nbytes has always been non-zero (as zero is used to indicate the termination of a walk). Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Fixes: 5cde0af2a982 ("[CRYPTO] cipher: Added block cipher type") Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-07-26  crypto: chacha20poly1305 - fix atomic sleep when using async algorithm  (Eric Biggers)
commit 7545b6c2087f4ef0287c8c9b7eba6a728c67ff8e upstream. Clear the CRYPTO_TFM_REQ_MAY_SLEEP flag when the chacha20poly1305 operation is being continued from an async completion callback, since sleeping may not be allowed in that context. This is basically the same bug that was recently fixed in the xts and lrw templates. But, it's always been broken in chacha20poly1305 too. This was found using syzkaller in combination with the updated crypto self-tests which actually test the MAY_SLEEP flag now.

Reproducer:

    python -c 'import socket; socket.socket(socket.AF_ALG, 5, 0).bind(
        ("aead", "rfc7539(cryptd(chacha20-generic),poly1305-generic)"))'

Kernel output:

    BUG: sleeping function called from invalid context at include/crypto/algapi.h:426
    in_atomic(): 1, irqs_disabled(): 0, pid: 1001, name: kworker/2:2
    [...]
    CPU: 2 PID: 1001 Comm: kworker/2:2 Not tainted 5.2.0-rc2 #5
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014
    Workqueue: crypto cryptd_queue_worker
    Call Trace:
     __dump_stack lib/dump_stack.c:77 [inline]
     dump_stack+0x4d/0x6a lib/dump_stack.c:113
     ___might_sleep kernel/sched/core.c:6138 [inline]
     ___might_sleep.cold.19+0x8e/0x9f kernel/sched/core.c:6095
     crypto_yield include/crypto/algapi.h:426 [inline]
     crypto_hash_walk_done+0xd6/0x100 crypto/ahash.c:113
     shash_ahash_update+0x41/0x60 crypto/shash.c:251
     shash_async_update+0xd/0x10 crypto/shash.c:260
     crypto_ahash_update include/crypto/hash.h:539 [inline]
     poly_setkey+0xf6/0x130 crypto/chacha20poly1305.c:337
     poly_init+0x51/0x60 crypto/chacha20poly1305.c:364
     async_done_continue crypto/chacha20poly1305.c:78 [inline]
     poly_genkey_done+0x15/0x30 crypto/chacha20poly1305.c:369
     cryptd_skcipher_complete+0x29/0x70 crypto/cryptd.c:279
     cryptd_skcipher_decrypt+0xcd/0x110 crypto/cryptd.c:339
     cryptd_queue_worker+0x70/0xa0 crypto/cryptd.c:184
     process_one_work+0x1ed/0x420 kernel/workqueue.c:2269
     worker_thread+0x3e/0x3a0 kernel/workqueue.c:2415
     kthread+0x11f/0x140 kernel/kthread.c:255
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:352

Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539") Cc: <stable@vger.kernel.org> # v4.2+ Cc: Martin Willi <martin@strongswan.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-07-26  crypto: ghash - fix unaligned memory access in ghash_setkey()  (Eric Biggers)
commit 5c6bc4dfa515738149998bb0db2481a4fdead979 upstream. Changing ghash_mod_init() to be subsys_initcall made it start running before the alignment fault handler has been installed on ARM. In kernel builds where the keys in the ghash test vectors happened to be misaligned in the kernel image, this exposed the longstanding bug that ghash_setkey() is incorrectly casting the key buffer (which can have any alignment) to be128 for passing to gf128mul_init_4k_lle(). Fix this by memcpy()ing the key to a temporary buffer. Don't fix it by setting an alignmask on the algorithm instead because that would unnecessarily force alignment of the data too. Fixes: 2cdc6899a88e ("crypto: ghash - Add GHASH digest algorithm for GCM") Reported-by: Peter Robinson <pbrobinson@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Tested-by: Peter Robinson <pbrobinson@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
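The fix's shape in ghash_setkey(), sketched (tfm flag handling elided):

    static int ghash_setkey(struct crypto_shash *tfm,
                            const u8 *key, unsigned int keylen)
    {
            struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
            be128 k;

            if (keylen != GHASH_BLOCK_SIZE)
                    return -EINVAL;

            if (ctx->gf128)
                    gf128mul_free_4k(ctx->gf128);

            BUILD_BUG_ON(sizeof(k) != GHASH_BLOCK_SIZE);
            memcpy(&k, key, GHASH_BLOCK_SIZE); /* avoid violating alignment rules */
            ctx->gf128 = gf128mul_init_4k_lle(&k);
            memzero_explicit(&k, GHASH_BLOCK_SIZE);

            return ctx->gf128 ? 0 : -ENOMEM;
    }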
2019-07-26  crypto: asymmetric_keys - select CRYPTO_HASH where needed  (Arnd Bergmann)
[ Upstream commit 90acc0653d2bee203174e66d519fbaaa513502de ] Build testing with some core crypto options disabled revealed a few modules that are missing CRYPTO_HASH:

    crypto/asymmetric_keys/x509_public_key.o: In function `x509_get_sig_params':
    x509_public_key.c:(.text+0x4c7): undefined reference to `crypto_alloc_shash'
    x509_public_key.c:(.text+0x5e5): undefined reference to `crypto_shash_digest'
    crypto/asymmetric_keys/pkcs7_verify.o: In function `pkcs7_digest.isra.0':
    pkcs7_verify.c:(.text+0xab): undefined reference to `crypto_alloc_shash'
    pkcs7_verify.c:(.text+0x1b2): undefined reference to `crypto_shash_digest'
    pkcs7_verify.c:(.text+0x3c1): undefined reference to `crypto_shash_update'
    pkcs7_verify.c:(.text+0x411): undefined reference to `crypto_shash_finup'

This normally doesn't show up in randconfig tests because there is a large number of other options that select CRYPTO_HASH. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-07-26  crypto: serpent - mark __serpent_setkey_sbox noinline  (Arnd Bergmann)
[ Upstream commit 473971187d6727609951858c63bf12b0307ef015 ] The same bug that gcc hit in the past is apparently now showing up with clang, which decides to inline __serpent_setkey_sbox: crypto/serpent_generic.c:268:5: error: stack frame size of 2112 bytes in function '__serpent_setkey' [-Werror,-Wframe-larger-than=] Marking it 'noinline' reduces the stack usage from 2112 bytes to 192 and 96 bytes, respectively, and seems to generate more useful object code. Fixes: c871c10e4ea7 ("crypto: serpent - improve __serpent_setkey with UBSAN") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sashal@kernel.org>
2019-07-10  crypto: cryptd - Fix skcipher instance memory leak  (Vincent Whitchurch)
commit 1a0fad630e0b7cff38e7691b28b0517cfbb0633f upstream. cryptd_skcipher_free() fails to free the struct skcipher_instance allocated in cryptd_create_skcipher(), leading to a memory leak. This is detected by kmemleak on bootup on ARM64 platforms:

    unreferenced object 0xffff80003377b180 (size 1024):
      comm "cryptomgr_probe", pid 822, jiffies 4294894830 (age 52.760s)
      backtrace:
        kmem_cache_alloc_trace+0x270/0x2d0
        cryptd_create+0x990/0x124c
        cryptomgr_probe+0x5c/0x1e8
        kthread+0x258/0x318
        ret_from_fork+0x10/0x1c

Fixes: 4e0958d19bd8 ("crypto: cryptd - Add support for skcipher") Cc: <stable@vger.kernel.org> Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-07-10  crypto: user - prevent operating on larval algorithms  (Eric Biggers)
commit 21d4120ec6f5b5992b01b96ac484701163917b63 upstream. Michal Suchanek reported [1] that running the pcrypt_aead01 test from LTP [2] in a loop and holding Ctrl-C causes a NULL dereference of alg->cra_users.next in crypto_remove_spawns(), via crypto_del_alg(). The test repeatedly uses CRYPTO_MSG_NEWALG and CRYPTO_MSG_DELALG. The crash occurs when the instance that CRYPTO_MSG_DELALG is trying to unregister isn't a real registered algorithm, but rather is a "test larval", which is a special "algorithm" added to the algorithms list while the real algorithm is still being tested. Larvals don't have initialized cra_users, so that causes the crash. Normally pcrypt_aead01 doesn't trigger this because CRYPTO_MSG_NEWALG waits for the algorithm to be tested; however, CRYPTO_MSG_NEWALG returns early when interrupted. Everything else in the "crypto user configuration" API has this same bug too, i.e. it inappropriately allows operating on larval algorithms (though it doesn't look like the other cases can cause a crash). Fix this by making crypto_alg_match() exclude larval algorithms. [1] https://lkml.kernel.org/r/20190625071624.27039-1-msuchanek@suse.de [2] https://github.com/linux-test-project/ltp/blob/20190517/testcases/kernel/crypto/pcrypt_aead01.c Reported-by: Michal Suchanek <msuchanek@suse.de> Fixes: a38f7907b926 ("crypto: Add userspace configuration API") Cc: <stable@vger.kernel.org> # v3.2+ Cc: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
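The core of the fix in crypto_alg_match(), sketched:

    list_for_each_entry(q, &crypto_alg_list, cra_list) {
            /* Larvals are placeholders for algorithms still being
             * tested; their cra_users etc. are not initialized, so
             * never hand them to the CRYPTO_MSG_* handlers. */
            if (crypto_is_larval(q))
                    continue;

            /* ... existing name/driver matching follows ... */
    }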
2019-05-22  crypto: ccm - fix incompatibility between "ccm" and "ccm_base"  (Eric Biggers)
commit 6a1faa4a43f5fabf9cbeaa742d916e7b5e73120f upstream. CCM instances can be created by either the "ccm" template, which only allows choosing the block cipher, e.g. "ccm(aes)"; or by "ccm_base", which allows choosing the ctr and cbcmac implementations, e.g. "ccm_base(ctr(aes-generic),cbcmac(aes-generic))". However, a "ccm_base" instance prevents a "ccm" instance from being registered using the same implementations. Nor will the instance be found by lookups of "ccm". This can be used as a denial of service. Moreover, "ccm_base" instances are never tested by the crypto self-tests, even if there are compatible "ccm" tests. The root cause of these problems is that instances of the two templates use different cra_names. Therefore, fix these problems by making "ccm_base" instances set the same cra_name as "ccm" instances, e.g. "ccm(aes)" instead of "ccm_base(ctr(aes-generic),cbcmac(aes-generic))". This requires extracting the block cipher name from the name of the ctr and cbcmac algorithms. It also requires starting to verify that the algorithms are really ctr and cbcmac using the same block cipher, not something else entirely. But it would be bizarre if anyone were actually using non-ccm-compatible algorithms with ccm_base, so this shouldn't break anyone in practice. Fixes: 4a49b499dfa0 ("[CRYPTO] ccm: Added CCM mode") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-22  crypto: gcm - fix incompatibility between "gcm" and "gcm_base"  (Eric Biggers)
commit f699594d436960160f6d5ba84ed4a222f20d11cd upstream. GCM instances can be created by either the "gcm" template, which only allows choosing the block cipher, e.g. "gcm(aes)"; or by "gcm_base", which allows choosing the ctr and ghash implementations, e.g. "gcm_base(ctr(aes-generic),ghash-generic)". However, a "gcm_base" instance prevents a "gcm" instance from being registered using the same implementations. Nor will the instance be found by lookups of "gcm". This can be used as a denial of service. Moreover, "gcm_base" instances are never tested by the crypto self-tests, even if there are compatible "gcm" tests. The root cause of these problems is that instances of the two templates use different cra_names. Therefore, fix these problems by making "gcm_base" instances set the same cra_name as "gcm" instances, e.g. "gcm(aes)" instead of "gcm_base(ctr(aes-generic),ghash-generic)". This requires extracting the block cipher name from the name of the ctr algorithm. It also requires starting to verify that the algorithms are really ctr and ghash, not something else entirely. But it would be bizarre if anyone were actually using non-gcm-compatible algorithms with gcm_base, so this shouldn't break anyone in practice. Fixes: d00aa19b507b ("[CRYPTO] gcm: Allow block cipher parameter") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-22  crypto: crct10dif-generic - fix use via crypto_shash_digest()  (Eric Biggers)
commit 307508d1072979f4435416f87936f87eaeb82054 upstream. The ->digest() method of crct10dif-generic reads the current CRC value from the shash_desc context. But this value is uninitialized, causing crypto_shash_digest() to compute the wrong result. Fix it. Probably this wasn't noticed before because lib/crc-t10dif.c only uses crypto_shash_update(), not crypto_shash_digest(). Likewise, crypto_shash_digest() is not yet tested by the crypto self-tests because those only test the ahash API which only uses shash init/update/final. This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. Fixes: 2d31e518a428 ("crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework") Cc: <stable@vger.kernel.org> # v3.11+ Cc: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
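The corrected ->digest(), sketched (helper naming approximate): compute from the initial CRC of zero rather than from the uninitialized desc context:

    static int chksum_digest(struct shash_desc *desc, const u8 *data,
                             unsigned int length, u8 *out)
    {
            /* Do not read desc's (uninitialized) context; start from 0. */
            *(__u16 *)out = crc_t10dif_generic(0, data, length);
            return 0;
    }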
2019-05-22  crypto: skcipher - don't WARN on unprocessed data after slow walk step  (Eric Biggers)
commit dcaca01a42cc2c425154a13412b4124293a6e11e upstream. skcipher_walk_done() assumes it's a bug if, after the "slow" path is executed where the next chunk of data is processed via a bounce buffer, the algorithm says it didn't process all bytes. Thus it WARNs on this. However, this can happen legitimately when the message needs to be evenly divisible into "blocks" but isn't, and the algorithm has a 'walksize' greater than the block size. For example, ecb-aes-neonbs sets 'walksize' to 128 bytes and only supports messages evenly divisible into 16-byte blocks. If, say, 17 message bytes remain but they straddle scatterlist elements, the skcipher_walk code will take the "slow" path and pass the algorithm all 17 bytes in the bounce buffer. But the algorithm will only be able to process 16 bytes, triggering the WARN. Fix this by just removing the WARN_ON(). Returning -EINVAL, as the code already does, is the right behavior. This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface") Cc: <stable@vger.kernel.org> # v4.10+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-22  crypto: chacha20poly1305 - set cra_name correctly  (Eric Biggers)
commit 5e27f38f1f3f45a0c938299c3a34a2d2db77165a upstream. If the rfc7539 template is instantiated with specific implementations, e.g. "rfc7539(chacha20-generic,poly1305-generic)" rather than "rfc7539(chacha20,poly1305)", then the implementation names end up included in the instance's cra_name. This is incorrect because it then prevents all users from allocating "rfc7539(chacha20,poly1305)", if the highest priority implementations of chacha20 and poly1305 were selected. Also, the self-tests aren't run on an instance allocated in this way. Fix it by setting the instance's cra_name from the underlying algorithms' actual cra_names, rather than from the requested names. This matches what other templates do. Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539") Cc: <stable@vger.kernel.org> # v4.2+ Cc: Martin Willi <martin@strongswan.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Martin Willi <martin@strongswan.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-22  crypto: salsa20 - don't access already-freed walk.iv  (Eric Biggers)
commit edaf28e996af69222b2cb40455dbb5459c2b875a upstream. If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free. salsa20-generic doesn't set an alignmask, so currently it isn't affected by this despite unconditionally accessing walk.iv. However this is more subtle than desired, and it was actually broken prior to the alignmask being removed by commit b62b3db76f73 ("crypto: salsa20-generic - cleanup and convert to skcipher API"). Since salsa20-generic does not update the IV and does not need any IV alignment, update it to use req->iv instead of walk.iv. Fixes: 2407d60872dd ("[CRYPTO] salsa20: Salsa20 stream cipher") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
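The gist of the change, sketched (function naming per salsa20-generic; treat as approximate):

    err = skcipher_walk_virt(&walk, req, false);

    /* salsa20 neither aligns nor updates the IV, so key the stream from
     * req->iv; walk.iv may not be valid if the walk setup failed. */
    salsa20_init(state, ctx, req->iv);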
2019-04-27  crypto: x86/poly1305 - fix overflow during partial reduction  (Eric Biggers)
commit 678cce4019d746da6c680c48ba9e6d417803e127 upstream. The x86_64 implementation of Poly1305 produces the wrong result on some inputs because poly1305_4block_avx2() incorrectly assumes that when partially reducing the accumulator, the bits carried from limb 'd4' to limb 'h0' fit in a 32-bit integer. This is true for poly1305-generic which processes only one block at a time. However, it's not true for the AVX2 implementation, which processes 4 blocks at a time and therefore can produce intermediate limbs about 4x larger. Fix it by making the relevant calculations use 64-bit arithmetic rather than 32-bit. Note that most of the carries already used 64-bit arithmetic, but the d4 -> h0 carry was different for some reason. To be safe I also made the same change to the corresponding SSE2 code, though that only operates on 1 or 2 blocks at a time. I don't think it's really needed for poly1305_block_sse2(), but it doesn't hurt because it's already x86_64 code. It *might* be needed for poly1305_2block_sse2(), but overflows aren't easy to reproduce there. This bug was originally detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. But also add a test vector which reproduces it directly (in the AVX2 case). Fixes: b1ccc8f4b631 ("crypto: poly1305 - Add a four block AVX2 variant for x86_64") Fixes: c70f4abef07a ("crypto: poly1305 - Add a SSE2 SIMD variant for x86_64") Cc: <stable@vger.kernel.org> # v4.3+ Cc: Martin Willi <martin@strongswan.org> Cc: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Martin Willi <martin@strongswan.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: testmgr - skip crc32c context test for ahash algorithms  (Eric Biggers)
commit eb5e6730db98fcc4b51148b4a819fa4bf864ae54 upstream. Instantiating "cryptd(crc32c)" causes a crypto self-test failure because the crypto_alloc_shash() in alg_test_crc32c() fails. This is because cryptd(crc32c) is an ahash algorithm, not a shash algorithm; so it can only be accessed through the ahash API, unlike shash algorithms which can be accessed through both the ahash and shash APIs. As the test is testing the shash descriptor format which is only applicable to shash algorithms, skip it for ahash algorithms. (Note that it's still important to fix crypto self-test failures even for weird algorithm instantiations like cryptd(crc32c) that no one would really use; in fips_enabled mode unprivileged users can use them to panic the kernel, and also they prevent treating a crypto self-test failure as a bug when fuzzing the kernel.) Fixes: 8e3ee85e68c5 ("crypto: crc32c - Test descriptor context format") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: skcipher - set CRYPTO_TFM_NEED_KEY if ->setkey() fails  (Eric Biggers)
commit b1f6b4bf416b49f00f3abc49c639371cdecaaad1 upstream. Some algorithms have a ->setkey() method that is not atomic, in the sense that setting a key can fail after changes were already made to the tfm context. In this case, if a key was already set the tfm can end up in a state that corresponds to neither the old key nor the new key. For example, in lrw.c, if gf128mul_init_64k_bbe() fails due to lack of memory, then priv::table will be left NULL. After that, encryption with that tfm will cause a NULL pointer dereference. It's not feasible to make all ->setkey() methods atomic, especially ones that have to key multiple sub-tfms. Therefore, make the crypto API set CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a key, to prevent the tfm from being used until a new key is set. [Cc stable mainly because when introducing the NEED_KEY flag I changed AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG previously didn't have this problem. So these "incompletely keyed" states became theoretically accessible via AF_ALG -- though, the opportunities for causing real mischief seem pretty limited.] Fixes: f8d33fac8480 ("crypto: skcipher - prevent using skciphers without setting key") Cc: <stable@vger.kernel.org> # v4.16+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
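A sketch of the API-level pattern described above (simplified; the real wrapper in crypto/skcipher.c also handles key alignment):

    int crypto_skcipher_setkey(struct crypto_skcipher *tfm,
                               const u8 *key, unsigned int keylen)
    {
            int err = tfm->setkey(tfm, key, keylen);

            if (unlikely(err)) {
                    /* Key state is indeterminate; require a successful
                     * setkey() before the tfm may be used again. */
                    crypto_skcipher_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
                    return err;
            }

            crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
            return 0;
    }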
2019-03-23  crypto: pcbc - remove bogus memcpy()s with src == dest  (Eric Biggers)
commit 251b7aea34ba3c4d4fdfa9447695642eb8b8b098 upstream. The memcpy()s in the PCBC implementation use walk->iv as both the source and destination, which has undefined behavior. These memcpy()'s are actually unneeded, because walk->iv is already used to hold the previous plaintext block XOR'd with the previous ciphertext block. Thus, walk->iv is already updated to its final value. So remove the broken and unnecessary memcpy()s. Fixes: 91652be5d1b9 ("[CRYPTO] pcbc: Add Propagated CBC template") Cc: <stable@vger.kernel.org> # v2.6.21+ Cc: David Howells <dhowells@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: morus - fix handling chunked inputs  (Eric Biggers)
commit d644f1c8746ed24f81075480f9e9cb3777ae8d65 upstream. The generic MORUS implementations all fail the improved AEAD tests because they produce the wrong result with some data layouts. The issue is that they assume that if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a. walk.stride), then it is the end of the data. In fact, this can happen before the end. Fix them. Fixes: 396be41f16fd ("crypto: morus - Add generic MORUS AEAD implementations") Cc: <stable@vger.kernel.org> # v4.18+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: hash - set CRYPTO_TFM_NEED_KEY if ->setkey() fails  (Eric Biggers)
commit ba7d7433a0e998c902132bd47330e355a1eaa894 upstream. Some algorithms have a ->setkey() method that is not atomic, in the sense that setting a key can fail after changes were already made to the tfm context. In this case, if a key was already set the tfm can end up in a state that corresponds to neither the old key nor the new key. It's not feasible to make all ->setkey() methods atomic, especially ones that have to key multiple sub-tfms. Therefore, make the crypto API set CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a key, to prevent the tfm from being used until a new key is set. Note: we can't set CRYPTO_TFM_NEED_KEY for OPTIONAL_KEY algorithms, so ->setkey() for those must nevertheless be atomic. That's fine for now since only the crc32 and crc32c algorithms set OPTIONAL_KEY, and it's not intended that OPTIONAL_KEY be used much. [Cc stable mainly because when introducing the NEED_KEY flag I changed AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG previously didn't have this problem. So these "incompletely keyed" states became theoretically accessible via AF_ALG -- though, the opportunities for causing real mischief seem pretty limited.] Fixes: 9fa68f620041 ("crypto: hash - prevent using keyed hashes without setting key") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: aegis - fix handling chunked inputs  (Eric Biggers)
commit 0f533e67d26f228ea5dfdacc8a4bdeb487af5208 upstream. The generic AEGIS implementations all fail the improved AEAD tests because they produce the wrong result with some data layouts. The issue is that they assume that if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a. walk.stride), then it is the end of the data. In fact, this can happen before the end. Fix them. Fixes: f606a88e5823 ("crypto: aegis - Add generic AEGIS AEAD implementations") Cc: <stable@vger.kernel.org> # v4.18+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
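The corrected walk-loop shape (per the aegis fix; a short chunk is only terminal when it is the last data the walk will yield):

    while (walk.nbytes) {
            unsigned int nbytes = walk.nbytes;

            /* Not at the end yet: round down to the stride and let the
             * remainder carry over into the next chunk. */
            if (nbytes < walk.total)
                    nbytes = round_down(nbytes, walk.stride);

            crypt(state, walk.dst.virt.addr, walk.src.virt.addr, nbytes);

            skcipher_walk_done(&walk, walk.nbytes - nbytes);
    }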
2019-03-23  crypto: aead - set CRYPTO_TFM_NEED_KEY if ->setkey() fails  (Eric Biggers)
commit 6ebc97006b196aafa9df0497fdfa866cf26f259b upstream. Some algorithms have a ->setkey() method that is not atomic, in the sense that setting a key can fail after changes were already made to the tfm context. In this case, if a key was already set the tfm can end up in a state that corresponds to neither the old key nor the new key. For example, in gcm.c, if the kzalloc() fails due to lack of memory, then the CTR part of GCM will have the new key but GHASH will not. It's not feasible to make all ->setkey() methods atomic, especially ones that have to key multiple sub-tfms. Therefore, make the crypto API set CRYPTO_TFM_NEED_KEY if ->setkey() fails, to prevent the tfm from being used until a new key is set. [Cc stable mainly because when introducing the NEED_KEY flag I changed AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG previously didn't have this problem. So these "incompletely keyed" states became theoretically accessible via AF_ALG -- though, the opportunities for causing real mischief seem pretty limited.] Fixes: dc26c17f743a ("crypto: aead - prevent using AEADs without setting key") Cc: <stable@vger.kernel.org> # v4.16+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: ahash - fix another early termination in hash walk  (Eric Biggers)
commit 77568e535af7c4f97eaef1e555bf0af83772456c upstream. Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and "michael_mic", fail the improved hash tests because they sometimes produce the wrong digest. The bug is that in the case where a scatterlist element crosses pages, not all the data is actually hashed because the scatterlist walk terminates too early. This happens because the 'nbytes' variable in crypto_hash_walk_done() is assigned the number of bytes remaining in the page, then later interpreted as the number of bytes remaining in the scatterlist element. Fix it. Fixes: 900a081f6912 ("crypto: ahash - Fix early termination in hash walk") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: cfb - remove bogus memcpy() with src == dest  (Eric Biggers)
commit 6c2e322b3621dc8be72e5c86d4fdb587434ba625 upstream. The memcpy() in crypto_cfb_decrypt_inplace() uses walk->iv as both the source and destination, which has undefined behavior. It is unneeded because walk->iv is already used to hold the previous ciphertext block; thus, walk->iv is already updated to its final value. So, remove it. Also, note that in-place decryption is the only case where the previous ciphertext block is not directly available. Therefore, as a related cleanup I also updated crypto_cfb_encrypt_segment() to directly use the previous ciphertext block rather than save it into walk->iv. This makes it consistent with in-place encryption and out-of-place decryption; now only in-place decryption is different, because it has to be. Fixes: a7d85e06ed80 ("crypto: cfb - add support for Cipher FeedBack mode") Cc: <stable@vger.kernel.org> # v4.17+ Cc: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>