path: root/crypto
2018-04-04  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds)
Pull crypto updates from Herbert Xu:
 API:
  - add AEAD support to crypto engine
  - allow batch registration in simd
 Algorithms:
  - add CFB mode
  - add Speck block cipher
  - add SM4 block cipher
  - new test case for crct10dif
  - improve scheduling latency on ARM
  - scatter/gather support for GCM in aesni
  - convert x86 crypto algorithms to skcipher
 Drivers:
  - hmac(sha224/sha256) support in inside-secure
  - AES GCM/CCM support in stm32
  - stm32mp1 support in stm32
  - ccree driver from the staging tree
  - GCM support over QI in caam
  - add ks-sa hwrng driver
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (212 commits)
  crypto: ccree - remove unused enums
  crypto: ahash - Fix early termination in hash walk
  crypto: brcm - explicitly cast cipher to hash type
  crypto: talitos - don't leak pointers to authenc keys
  crypto: qat - don't leak pointers to authenc keys
  crypto: picoxcell - don't leak pointers to authenc keys
  crypto: ixp4xx - don't leak pointers to authenc keys
  crypto: chelsio - don't leak pointers to authenc keys
  crypto: caam/qi - don't leak pointers to authenc keys
  crypto: caam - don't leak pointers to authenc keys
  crypto: lrw - Free rctx->ext with kzfree
  crypto: talitos - fix IPsec cipher in length
  crypto: Deduplicate le32_to_cpu_array() and cpu_to_le32_array()
  crypto: doc - clarify hash callbacks state machine
  crypto: api - Keep failed instances alive
  crypto: api - Make crypto_alg_lookup static
  crypto: api - Remove unused crypto_type lookup function
  crypto: chelsio - Remove declaration of static function from header
  crypto: inside-secure - hmac(sha224) support
  crypto: inside-secure - hmac(sha256) support
  ...
2018-03-31  crypto: ahash - Fix early termination in hash walk  (Herbert Xu)
When we have an unaligned SG list entry where there is no leftover aligned data, the hash walk code will incorrectly return zero as if the entire SG list has been processed. This patch fixes it by moving onto the next page instead. Reported-by: Eli Cooper <elicooper@gmx.com> Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-31  crypto: lrw - Free rctx->ext with kzfree  (Herbert Xu)
The buffer rctx->ext contains potentially sensitive data and should be freed with kzfree. Cc: <stable@vger.kernel.org> Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
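The pattern is the one-liner below: kzfree() (later renamed kfree_sensitive()) zeroes the allocation before returning it to the allocator, so key material does not linger in freed memory. A minimal sketch with a stand-in struct rather than the real lrw request context:

    #include <linux/slab.h>
    #include <linux/types.h>

    struct rctx_sketch {
        u8 *ext;    /* stand-in for the lrw request context's ext buffer */
    };

    static void rctx_cleanup(struct rctx_sketch *rctx)
    {
        kzfree(rctx->ext);  /* was kfree(); kzfree() zeroes the buffer first */
        rctx->ext = NULL;
    }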
2018-03-31  crypto: Deduplicate le32_to_cpu_array() and cpu_to_le32_array()  (Andy Shevchenko)
Deduplicate le32_to_cpu_array() and cpu_to_le32_array() by moving them to the generic header. No functional change implied. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
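For reference, the deduplicated helpers are essentially the following in-place array conversions (a sketch; the exact generic header they landed in is not shown in this log):

    #include <asm/byteorder.h>
    #include <linux/types.h>

    static inline void le32_to_cpu_array(u32 *buf, unsigned int words)
    {
        while (words--) {
            __le32_to_cpus(buf);    /* no-op on little-endian CPUs */
            buf++;
        }
    }

    static inline void cpu_to_le32_array(u32 *buf, unsigned int words)
    {
        while (words--) {
            __cpu_to_le32s(buf);
            buf++;
        }
    }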
2018-03-31  crypto: api - Keep failed instances alive  (Herbert Xu)
This patch reverts commit 9c521a200bc3 ("crypto: api - remove instance when test failed") and fixes the underlying problem in a different way. To recap, prior to the reverted commit, an instance that fails a self-test is kept around. However, it would satisfy any new lookups against its name and therefore the system may accumulate an unbounded number of failed instances for the same algorithm name. The reverted commit fixed this by unregistering the instance. However, that still does not prevent the creation of the same failed instance over and over again each time the name is looked up. This patch fixes it by keeping the failed instance around, just as we would if it were a normal algorithm. However, the lookup code has been updated so that we do not attempt to create another instance as long as this failed one is still registered. Of course, you could still force a new creation by deleting the instance from user-space. A new error (ELIBBAD) has been commandeered for this purpose and will be returned when all registered algorithms of a given name have failed the self-test. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
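A conceptual sketch of the lookup rule described above (not the actual crypto/api.c code; find_registered_alg(), alg_self_test_failed() and request_new_instance() are hypothetical helpers standing in for the real internals):

    #include <linux/crypto.h>
    #include <linux/err.h>
    #include <linux/errno.h>

    static struct crypto_alg *lookup_sketch(const char *name, u32 type, u32 mask)
    {
        struct crypto_alg *alg = find_registered_alg(name, type, mask);

        if (alg && alg_self_test_failed(alg))
            return ERR_PTR(-ELIBBAD);   /* every instance of "name" failed its self-test */
        if (!alg)
            alg = request_new_instance(name, type, mask);   /* may build a template instance */
        return alg;
    }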
2018-03-31  crypto: api - Make crypto_alg_lookup static  (Herbert Xu)
The function crypto_alg_lookup is only used within the crypto API and should not be exported to modules. This patch marks it as a static function. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-31  crypto: api - Remove unused crypto_type lookup function  (Herbert Xu)
The lookup function in crypto_type was only used for the implicit IV generators which have been completely removed from the crypto API. This patch removes the lookup function as it is now useless. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-16  crypto: testmgr - add a new test case for CRC-T10DIF  (Ard Biesheuvel)
In order to be able to test yield support under preempt, add a test vector for CRC-T10DIF that is long enough to take multiple iterations (and thus possible preemption between them) of the primary loop of the accelerated x86 and arm64 implementations. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-16  crypto: ecc - Remove stack VLA usage  (Kees Cook)
On the quest to remove all VLAs from the kernel[1], this switches to a pair of kmalloc regions instead of using the stack. This also moves the get_random_bytes() after all allocations (and drops the needless "nbytes" variable). [1] https://lkml.org/lkml/2018/3/7/621 Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
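The before/after shape of such a conversion looks roughly like this (names are illustrative and do not match crypto/ecc.c exactly):

    #include <linux/random.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    /* Before: a VLA on the kernel stack, filled before any other allocation.
     *     u64 priv[ndigits];
     *     get_random_bytes(priv, ndigits * sizeof(u64));
     */
    static int generate_key_sketch(unsigned int ndigits)
    {
        u64 *priv = kmalloc_array(ndigits, sizeof(*priv), GFP_KERNEL);

        if (!priv)
            return -ENOMEM;

        /* After: draw randomness only once all allocations have succeeded. */
        get_random_bytes(priv, ndigits * sizeof(*priv));

        /* ... use priv ... */

        kzfree(priv);   /* private key material: zero before freeing */
        return 0;
    }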
2018-03-16  crypto: testmgr - introduce SM4 tests  (Gilad Ben-Yossef)
Add testmgr tests for the newly introduced SM4 ECB symmetric cipher. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-16  crypto: sm4 - introduce SM4 symmetric cipher algorithm  (Gilad Ben-Yossef)
Introduce the SM4 cipher algorithm (OSCCA GB/T 32907-2016). SM4 is a cryptographic standard issued by the Organization of State Commercial Administration of China (OSCCA) as an authorized cryptographic algorithm for use within China. SMS4, as the cipher was originally known, was created for use in protecting wireless networks and is mandated in the Chinese National Standard for Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure) (GB.15629.11-2003). Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
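A minimal usage sketch of the new generic implementation through the single-block cipher API (error handling trimmed; the all-zero key and block are placeholders, not test vectors):

    #include <linux/crypto.h>
    #include <linux/err.h>

    static int sm4_demo(void)
    {
        struct crypto_cipher *tfm = crypto_alloc_cipher("sm4", 0, 0);
        u8 key[16] = {};            /* SM4 uses a 128-bit key ... */
        u8 in[16] = {}, out[16];    /* ... and a 128-bit block */
        int err;

        if (IS_ERR(tfm))
            return PTR_ERR(tfm);

        err = crypto_cipher_setkey(tfm, key, sizeof(key));
        if (!err)
            crypto_cipher_encrypt_one(tfm, out, in);

        crypto_free_cipher(tfm);
        return err;
    }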
2018-03-09  mn10300: Remove the architecture  (David Howells)
Remove the MN10300 arch as the hardware is defunct. Suggested-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David Howells <dhowells@redhat.com> cc: Masahiro Yamada <yamada.masahiro@socionext.com> cc: linux-am33-list@redhat.com Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2018-03-09  crypto: ecdh - fix to allow multi segment scatterlists  (James Bottomley)
Apparently the ecdh use case was in Bluetooth, which always has single-element scatterlists, so the ecdh module was hard-coded to expect them. Now that we're using this in the TPM code, we need multi-element scatterlists, so remove this limitation. Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
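The general pattern for coping with multi-element scatterlists is to copy through the scatterlist helpers instead of dereferencing the first entry with sg_virt() and assuming it covers the whole request; a sketch under that assumption (not the exact ecdh.c diff):

    #include <linux/scatterlist.h>
    #include <linux/slab.h>

    static void *linearize_sg(struct scatterlist *sg, unsigned int nents,
                              unsigned int len)
    {
        void *buf = kmalloc(len, GFP_KERNEL);

        if (buf && sg_copy_to_buffer(sg, nents, buf, len) != len) {
            kfree(buf);
            return NULL;
        }
        return buf;     /* NULL on allocation failure or short copy */
    }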
2018-03-09  crypto: cfb - add support for Cipher FeedBack mode  (James Bottomley)
TPM security routines require encryption and decryption with AES in CFB mode, so add it to the Linux crypto schemes. CFB is basically a one-time pad where the pad is generated initially from the encrypted IV and then subsequently from the encrypted previous block of ciphertext. The pad is XOR'd into the plaintext to get the final ciphertext. https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#CFB Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
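The description above maps onto the following full-block encryption loop (an illustration, not the kernel template: block_encrypt() stands in for the underlying cipher's single-block encryption, and the partial final block that CFB permits is omitted):

    #include <linux/types.h>

    #define BS 16   /* block size of the underlying cipher, e.g. AES */

    static void block_encrypt(const void *key, const u8 in[BS], u8 out[BS]);

    static void cfb_encrypt_sketch(const void *key, const u8 iv[BS],
                                   const u8 *pt, u8 *ct, unsigned int nblocks)
    {
        u8 pad[BS];
        unsigned int i, j;

        block_encrypt(key, iv, pad);            /* first pad = E(K, IV) */
        for (i = 0; i < nblocks; i++) {
            for (j = 0; j < BS; j++)
                ct[j] = pt[j] ^ pad[j];         /* C_i = P_i XOR pad */
            block_encrypt(key, ct, pad);        /* next pad = E(K, C_i) */
            pt += BS;
            ct += BS;
        }
    }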
2018-03-03  crypto: ablk_helper - remove ablk_helper  (Eric Biggers)
All users of ablk_helper have been converted over to crypto_simd, so remove ablk_helper. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: lrw - remove lrw_crypt()  (Eric Biggers)
Now that all users of lrw_crypt() have been removed in favor of the LRW template wrapping an ECB mode algorithm, remove lrw_crypt(). Also remove crypto/lrw.h as that is no longer needed either; and fold 'struct lrw_table_ctx' into 'struct priv', lrw_init_table() into setkey(), and lrw_free_table() into exit_tfm(). Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: xts - remove xts_crypt()  (Eric Biggers)
Now that all users of xts_crypt() have been removed in favor of the XTS template wrapping an ECB mode algorithm, remove xts_crypt(). Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/camellia-aesni-avx, avx2 - convert to skcipher interface  (Eric Biggers)
Convert the AESNI AVX and AESNI AVX2 implementations of Camellia from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/camellia - convert to skcipher interface  (Eric Biggers)
Convert the x86 asm implementation of Camellia from the (deprecated) blkcipher interface over to the skcipher interface. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/camellia - remove XTS algorithm  (Eric Biggers)
The XTS template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic XTS code themselves via xts_crypt(). Remove the xts-camellia-asm algorithm which did this. Users who request xts(camellia) and previously would have gotten xts-camellia-asm will now get xts(ecb-camellia-asm) instead, which is just as fast. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
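From a caller's perspective nothing changes: the request is still made by name, and the API now resolves it to the generic template around the ECB implementation. A sketch of the allocation (error handling and the actual skcipher requests omitted):

    #include <crypto/skcipher.h>

    static struct crypto_skcipher *get_xts_camellia(void)
    {
        /* Previously satisfied by xts-camellia-asm, now by xts(ecb-camellia-asm). */
        return crypto_alloc_skcipher("xts(camellia)", 0, 0);
    }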
2018-03-03  crypto: x86/camellia - remove LRW algorithm  (Eric Biggers)
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-camellia-asm algorithm which did this. Users who request lrw(camellia) and previously would have gotten lrw-camellia-asm will now get lrw(ecb-camellia-asm) instead, which is just as fast. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/camellia-aesni-avx2 - remove LRW algorithm  (Eric Biggers)
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-camellia-aesni-avx2 algorithm which did this. Users who request lrw(camellia) and previously would have gotten lrw-camellia-aesni-avx2 will now get lrw(ecb-camellia-aesni-avx2) instead, which is just as fast. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/camellia-aesni-avx - remove LRW algorithm  (Eric Biggers)
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-camellia-aesni algorithm which did this. Users who request lrw(camellia) and previously would have gotten lrw-camellia-aesni will now get lrw(ecb-camellia-aesni) instead, which is just as fast. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/des3_ede - convert to skcipher interface  (Eric Biggers)
Convert the x86 asm implementation of Triple DES from the (deprecated) blkcipher interface over to the skcipher interface. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/blowfish: convert to skcipher interface  (Eric Biggers)
Convert the x86 asm implementation of Blowfish from the (deprecated) blkcipher interface over to the skcipher interface. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/cast6-avx - convert to skcipher interface  (Eric Biggers)
Convert the AVX implementation of CAST6 from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/cast6-avx - remove LRW algorithm  (Eric Biggers)
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-cast6-avx algorithm which did this. Users who request lrw(cast6) and previously would have gotten lrw-cast6-avx will now get lrw(ecb-cast6-avx) instead, which is just as fast. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/cast5-avx - convert to skcipher interface  (Eric Biggers)
Convert the AVX implementation of CAST5 from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/twofish-avx - convert to skcipher interface  (Eric Biggers)
Convert the AVX implementation of Twofish from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/twofish-avx - remove LRW algorithm  (Eric Biggers)
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-twofish-avx algorithm which did this. Users who request lrw(twofish) and previously would have gotten lrw-twofish-avx will now get lrw(ecb-twofish-avx) instead, which is just as fast. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/twofish-3way - convert to skcipher interface  (Eric Biggers)
Convert the 3-way implementation of Twofish from the (deprecated) blkcipher interface over to the skcipher interface. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/twofish-3way - remove XTS algorithm  (Eric Biggers)
The XTS template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic XTS code themselves via xts_crypt(). Remove the xts-twofish-3way algorithm which did this. Users who request xts(twofish) and previously would have gotten xts-twofish-3way will now get xts(ecb-twofish-3way) instead, which is just as fast. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/twofish-3way - remove LRW algorithm  (Eric Biggers)
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-twofish-3way algorithm which did this. Users who request lrw(twofish) and previously would have gotten lrw-twofish-3way will now get lrw(ecb-twofish-3way) instead, which is just as fast. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/serpent-avx,avx2 - convert to skcipher interface  (Eric Biggers)
Convert the AVX and AVX2 implementations of Serpent from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/serpent-avx - remove LRW algorithm  (Eric Biggers)
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-serpent-avx algorithm which did this. Users who request lrw(serpent) and previously would have gotten lrw-serpent-avx will now get lrw(ecb-serpent-avx) instead, which is just as fast. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/serpent-avx2 - remove LRW algorithm  (Eric Biggers)
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-serpent-avx2 algorithm which did this. Users who request lrw(serpent) and previously would have gotten lrw-serpent-avx2 will now get lrw(ecb-serpent-avx2) instead, which is just as fast. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/serpent-sse2 - convert to skcipher interface  (Eric Biggers)
Convert the SSE2 implementation of Serpent from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/serpent-sse2 - remove XTS algorithm  (Eric Biggers)
The XTS template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic XTS code themselves via xts_crypt(). Remove the xts-serpent-sse2 algorithm which did this. Users who request xts(serpent) and previously would have gotten xts-serpent-sse2 will now get xts(ecb-serpent-sse2) instead, which is just as fast. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: x86/serpent-sse2 - remove LRW algorithm  (Eric Biggers)
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-serpent-sse2 algorithm which did this. Users who request lrw(serpent) and previously would have gotten lrw-serpent-sse2 will now get lrw(ecb-serpent-sse2) instead, which is just as fast. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-03-03  crypto: simd - allow registering multiple algorithms at once  (Eric Biggers)
Add a function to crypto_simd that registers an array of skcipher algorithms, then allocates and registers the simd wrapper algorithms for them. It assumes the naming scheme where the names of the underlying algorithms are prefixed with two underscores. Also add the corresponding 'unregister' function. Most of the x86 crypto modules will be able to use these. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
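A sketch of how a module might use the batch helpers, assuming the names added by this commit (simd_register_skciphers_compat() / simd_unregister_skciphers(); verify against include/crypto/internal/simd.h) and an algs[] array of "__"-prefixed, CRYPTO_ALG_INTERNAL skcipher algorithms:

    #include <crypto/internal/simd.h>
    #include <crypto/skcipher.h>
    #include <linux/module.h>

    static struct skcipher_alg my_algs[2];              /* "__ecb(foo)", "__cbc(foo)", ... */
    static struct simd_skcipher_alg *my_simd_algs[2];   /* wrappers, filled in on register */

    static int __init my_mod_init(void)
    {
        return simd_register_skciphers_compat(my_algs, ARRAY_SIZE(my_algs),
                                              my_simd_algs);
    }

    static void __exit my_mod_exit(void)
    {
        simd_unregister_skciphers(my_algs, ARRAY_SIZE(my_algs), my_simd_algs);
    }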
2018-02-22  X.509: fix NULL dereference when restricting key with unsupported_sig  (Eric Biggers)
The asymmetric key type allows an X.509 certificate to be added even if its signature's hash algorithm is not available in the crypto API. In that case 'payload.data[asym_auth]' will be NULL. But the key restriction code failed to check for this case before trying to use the signature, resulting in a NULL pointer dereference in key_or_keyring_common() or in restrict_link_by_signature(). Fix this by returning -ENOPKG when the signature is unsupported. Reproducer when all the CONFIG_CRYPTO_SHA512* options are disabled and keyctl has support for the 'restrict_keyring' command:
  keyctl new_session
  keyctl restrict_keyring @s asymmetric builtin_trusted
  openssl req -new -sha512 -x509 -batch -nodes -outform der \
    | keyctl padd asymmetric desc @s
Fixes: a511e1af8b12 ("KEYS: Move the point of trust determination to __key_link()") Cc: <stable@vger.kernel.org> # v4.7+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David Howells <dhowells@redhat.com>
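The shape of the fix, per the description above (a sketch rather than the verbatim restriction-code hunk; the header choices are my reading of the tree):

    #include <crypto/public_key.h>
    #include <keys/asymmetric-type.h>
    #include <linux/errno.h>
    #include <linux/key.h>

    static int restrict_sketch(const union key_payload *payload)
    {
        const struct public_key_signature *sig = payload->data[asym_auth];

        if (!sig)
            return -ENOPKG;     /* signature's hash alg unsupported: nothing to verify */

        /* ... proceed with the normal trust checks ... */
        return 0;
    }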
2018-02-22  X.509: fix BUG_ON() when hash algorithm is unsupported  (Eric Biggers)
The X.509 parser mishandles the case where the certificate's signature's hash algorithm is not available in the crypto API. In this case, x509_get_sig_params() doesn't allocate the cert->sig->digest buffer; this part seems to be intentional. However, public_key_verify_signature() is still called via x509_check_for_self_signed(), which triggers the 'BUG_ON(!sig->digest)'. Fix this by making public_key_verify_signature() return -ENOPKG if the hash buffer has not been allocated. Reproducer when all the CONFIG_CRYPTO_SHA512* options are disabled:
  openssl req -new -sha512 -x509 -batch -nodes -outform der \
    | keyctl padd asymmetric desc @s
Fixes: 6c2dc5ae4ab7 ("X.509: Extract signature digest and make self-signed cert checks earlier") Reported-by: Paolo Valente <paolo.valente@linaro.org> Cc: Paolo Valente <paolo.valente@linaro.org> Cc: <stable@vger.kernel.org> # v4.7+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David Howells <dhowells@redhat.com>
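The guard described above amounts to an early return in public_key_verify_signature(); a sketch of the entry checks only (not the full function):

    #include <crypto/public_key.h>
    #include <linux/errno.h>

    static int verify_sig_sketch(const struct public_key *pkey,
                                 const struct public_key_signature *sig)
    {
        if (!pkey || !sig || !sig->s)
            return -EINVAL;
        if (!sig->digest)
            return -ENOPKG;     /* hash alg unavailable, digest was never computed */

        /* ... allocate the akcipher tfm and verify as before ... */
        return 0;
    }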
2018-02-22  PKCS#7: fix direct verification of SignerInfo signature  (Eric Biggers)
If none of the certificates in a SignerInfo's certificate chain match a trusted key, nor is the last certificate signed by a trusted key, then pkcs7_validate_trust_one() tries to check whether the SignerInfo's signature was made directly by a trusted key. But, it actually fails to set the 'sig' variable correctly, so it actually verifies the last signature seen. That will only be the SignerInfo's signature if the certificate chain is empty; otherwise it will actually be the last certificate's signature. This is not by itself a security problem, since verifying any of the certificates in the chain should be sufficient to verify the SignerInfo. Still, it's not working as intended so it should be fixed. Fix it by setting 'sig' correctly for the direct verification case. Fixes: 757932e6da6d ("PKCS#7: Handle PKCS#7 messages that contain no X.509 certs") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David Howells <dhowells@redhat.com>
2018-02-22  PKCS#7: fix certificate blacklisting  (Eric Biggers)
If there is a blacklisted certificate in a SignerInfo's certificate chain, then pkcs7_verify_sig_chain() sets sinfo->blacklisted and returns 0. But, pkcs7_verify() fails to handle this case appropriately, as it actually continues on to the line 'actual_ret = 0;', indicating that the SignerInfo has passed verification. Consequently, PKCS#7 signature verification ignores the certificate blacklist. Fix this by not considering blacklisted SignerInfos to have passed verification. Also fix the function comment with regards to when 0 is returned. Fixes: 03bb79315ddc ("PKCS#7: Handle blacklisted certificates") Cc: <stable@vger.kernel.org> # v4.12+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David Howells <dhowells@redhat.com>
2018-02-22  PKCS#7: fix certificate chain verification  (Eric Biggers)
When pkcs7_verify_sig_chain() is building the certificate chain for a SignerInfo using the certificates in the PKCS#7 message, it is passing the wrong arguments to public_key_verify_signature(). Consequently, when the next certificate is supposed to be used to verify the previous certificate, the next certificate is actually used to verify itself. An attacker can use this bug to create a bogus certificate chain that has no cryptographic relationship between the beginning and end. Fortunately I couldn't quite find a way to use this to bypass the overall signature verification, though it comes very close. Here's the reasoning: due to the bug, every certificate in the chain beyond the first actually has to be self-signed (where "self-signed" here refers to the actual key and signature; an attacker might still manipulate the certificate fields such that the self_signed flag doesn't actually get set, and thus the chain doesn't end immediately). But to pass trust validation (pkcs7_validate_trust()), either the SignerInfo or one of the certificates has to actually be signed by a trusted key. Since only self-signed certificates can be added to the chain, the only way for an attacker to introduce a trusted signature is to include a self-signed trusted certificate. But, when pkcs7_validate_trust_one() reaches that certificate, instead of trying to verify the signature on that certificate, it will actually look up the corresponding trusted key, which will succeed, and then try to verify the *previous* certificate, which will fail. Thus, disaster is narrowly averted (as far as I could tell). Fixes: 6c2dc5ae4ab7 ("X.509: Extract signature digest and make self-signed cert checks earlier") Cc: <stable@vger.kernel.org> # v4.7+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David Howells <dhowells@redhat.com>
2018-02-22  crypto: speck - add test vectors for Speck64-XTS  (Eric Biggers)
Add test vectors for Speck64-XTS, generated in userspace using C code. The inputs were borrowed from the AES-XTS test vectors, with key lengths adjusted. xts-speck64-neon passes these tests. However, they aren't currently applicable for the generic XTS template, as that only supports a 128-bit block size. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-22  crypto: speck - add test vectors for Speck128-XTS  (Eric Biggers)
Add test vectors for Speck128-XTS, generated in userspace using C code. The inputs were borrowed from the AES-XTS test vectors. Both xts(speck128-generic) and xts-speck128-neon pass these tests. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-22  crypto: speck - export common helpers  (Eric Biggers)
Export the Speck constants and transform context and the ->setkey(), ->encrypt(), and ->decrypt() functions so that they can be reused by the ARM NEON implementation of Speck-XTS. The generic key expansion code will be reused because it is not performance-critical and is not vectorizable, while the generic encryption and decryption functions are needed as fallbacks and for the XTS tweak encryption. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-02-22  crypto: speck - add support for the Speck block cipher  (Eric Biggers)
Add a generic implementation of Speck, including the Speck128 and Speck64 variants. Speck is a lightweight block cipher that can be much faster than AES on processors that don't have AES instructions. We are planning to offer Speck-XTS (probably Speck128/256-XTS) as an option for dm-crypt and fscrypt on Android, for low-end mobile devices with older CPUs such as ARMv7 which don't have the Cryptography Extensions. Currently, such devices are unencrypted because AES is not fast enough, even when the NEON bit-sliced implementation of AES is used. Other AES alternatives such as Twofish, Threefish, Camellia, CAST6, and Serpent aren't fast enough either; it seems that only a modern ARX cipher can provide sufficient performance on these devices. This is a replacement for our original proposal (https://patchwork.kernel.org/patch/10101451/) which was to offer ChaCha20 for these devices. However, the use of a stream cipher for disk/file encryption with no space to store nonces would have been much more insecure than we thought initially, given that it would be used on top of flash storage as well as potentially on top of F2FS, neither of which is guaranteed to overwrite data in-place. Speck has been somewhat controversial due to its origin. Nevertheless, it has a straightforward design (it's an ARX cipher), and it appears to be the leading software-optimized lightweight block cipher currently, with the most cryptanalysis. It's also easy to implement without side channels, unlike AES. Moreover, we only intend Speck to be used when the status quo is no encryption, due to AES not being fast enough. We've also considered a novel length-preserving encryption mode based on ChaCha20 and Poly1305. While theoretically attractive, such a mode would be a brand new crypto construction and would be more complicated and difficult to implement efficiently in comparison to Speck-XTS. There is confusion about the byte and word orders of Speck, since the original paper doesn't specify them. But we have implemented it using the orders the authors recommended in a correspondence with them. The test vectors are taken from the original paper but were mapped to byte arrays using the recommended byte and word orders. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
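For reference, the ARX round referred to above is tiny: one Speck128 encryption round on a 64-bit word pair (x, y) with round key k, as given in the Speck paper (the byte- and word-order conventions the commit discusses affect only how words are loaded and stored, and are not shown here):

    #include <linux/bitops.h>
    #include <linux/types.h>

    static inline void speck128_round(u64 *x, u64 *y, u64 k)
    {
        *x = ror64(*x, 8);
        *x += *y;
        *x ^= k;
        *y = rol64(*y, 3);
        *y ^= *x;
    }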
2018-02-22  crypto: testmgr - Fix incorrect values in PKCS#1 test vector  (Conor McLoughlin)
The RSA private key for the first form should have the version, prime1, prime2, exponent1, exponent2 and coefficient values set to 0. With non-zero values for prime1/2, exponent1/2 and the coefficient, the Intel QAT driver will assume that values are provided for the second form of the private key. This will result in signature verification failures for modules where a QAT device is present and the modules are signed with rsa,sha256. Cc: <stable@vger.kernel.org> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Conor McLoughlin <conor.mcloughlin@intel.com> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>