Message-ID: <20180314183157.GA183724@gmail.com>
Date:   Wed, 14 Mar 2018 11:31:57 -0700
From:   Eric Biggers <ebiggers3@...il.com>
To:     Salvatore Mesoraca <s.mesoraca16@...il.com>
Cc:     linux-kernel@...r.kernel.org, kernel-hardening@...ts.openwall.com,
        linux-crypto@...r.kernel.org,
        "David S. Miller" <davem@...emloft.net>,
        Herbert Xu <herbert@...dor.apana.org.au>,
        Kees Cook <keescook@...omium.org>
Subject: Re: [PATCH] crypto: ctr: avoid VLA use

On Wed, Mar 14, 2018 at 02:17:30PM +0100, Salvatore Mesoraca wrote:
> All ciphers implemented in Linux have a block size less than or
> equal to 16 bytes, and the most demanding hardware requires 16-byte
> alignment for the block buffer.
> We avoid 2 VLAs[1] by always allocating 16 bytes with 16-byte
> alignment, unless the architecture supports efficient unaligned
> accesses.
> We also check, at runtime, that our assumptions still hold, falling
> back to dynamically allocating a new buffer in case something
> changes in the future.
> 
> [1] https://lkml.org/lkml/2018/3/7/621
> 
> Signed-off-by: Salvatore Mesoraca <s.mesoraca16@...il.com>
> ---
> 
> Notes:
>     Can we maybe skip the runtime check?
> 
>  crypto/ctr.c | 50 ++++++++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 42 insertions(+), 8 deletions(-)
> 
> diff --git a/crypto/ctr.c b/crypto/ctr.c
> index 854d924..f37adf0 100644
> --- a/crypto/ctr.c
> +++ b/crypto/ctr.c
> @@ -35,6 +35,16 @@ struct crypto_rfc3686_req_ctx {
>  	struct skcipher_request subreq CRYPTO_MINALIGN_ATTR;
>  };
>  
> +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> +#define DECLARE_CIPHER_BUFFER(name) u8 name[16]
> +#else
> +#define DECLARE_CIPHER_BUFFER(name) u8 __aligned(16) name[16]
> +#endif
> +
> +#define CHECK_CIPHER_BUFFER(name, size, align)			\
> +	likely(size <= sizeof(name) &&				\
> +	       name == PTR_ALIGN(((u8 *) name), align + 1))
> +
>  static int crypto_ctr_setkey(struct crypto_tfm *parent, const u8 *key,
>  			     unsigned int keylen)
>  {
> @@ -52,22 +62,35 @@ static int crypto_ctr_setkey(struct crypto_tfm *parent, const u8 *key,
>  	return err;
>  }
>  
> -static void crypto_ctr_crypt_final(struct blkcipher_walk *walk,
> -				   struct crypto_cipher *tfm)
> +static int crypto_ctr_crypt_final(struct blkcipher_walk *walk,
> +				  struct crypto_cipher *tfm)
>  {
>  	unsigned int bsize = crypto_cipher_blocksize(tfm);
>  	unsigned long alignmask = crypto_cipher_alignmask(tfm);
>  	u8 *ctrblk = walk->iv;
> -	u8 tmp[bsize + alignmask];
> -	u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
>  	u8 *src = walk->src.virt.addr;
>  	u8 *dst = walk->dst.virt.addr;
>  	unsigned int nbytes = walk->nbytes;
> +	DECLARE_CIPHER_BUFFER(tmp);
> +	u8 *keystream, *tmp2;
> +
> +	if (CHECK_CIPHER_BUFFER(tmp, bsize, alignmask))
> +		keystream = tmp;
> +	else {
> +		tmp2 = kmalloc(bsize + alignmask, GFP_ATOMIC);
> +		if (!tmp2)
> +			return -ENOMEM;
> +		keystream = PTR_ALIGN(tmp2 + 0, alignmask + 1);
> +	}
>  
>  	crypto_cipher_encrypt_one(tfm, keystream, ctrblk);
>  	crypto_xor_cpy(dst, keystream, src, nbytes);
>  
>  	crypto_inc(ctrblk, bsize);
> +
> +	if (unlikely(keystream != tmp))
> +		kfree(tmp2);
> +	return 0;
>  }

This seems silly; isn't the !CHECK_CIPHER_BUFFER() case unreachable?  Did you
even test it?  If there are going to be limits, the crypto API ought to enforce
them when registering an algorithm.
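For comparison, a registration-time check could be as simple as the following
sketch (MAX_CIPHER_BLOCKSIZE is a made-up constant here, not something the
crypto API currently defines):

	/* Sketch only: reject oversized block sizes and alignmasks once,
	 * at registration time, instead of re-checking in every template.
	 * MAX_CIPHER_BLOCKSIZE is hypothetical. */
	#define MAX_CIPHER_BLOCKSIZE	16

	static int check_cipher_limits(const struct crypto_alg *alg)
	{
		if (alg->cra_blocksize > MAX_CIPHER_BLOCKSIZE)
			return -EINVAL;
		if (alg->cra_alignmask >= MAX_CIPHER_BLOCKSIZE)
			return -EINVAL;
		return 0;
	}

Then templates like ctr could rely on a fixed-size, suitably aligned buffer
unconditionally.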

A better alternative may be to move the keystream buffer into the request
context, which is allowed to be variable length.  It looks like that would
require converting the ctr template over to the skcipher API, since the
blkcipher API doesn't have a request context.  But my understanding is that the
conversion will need to happen eventually anyway, since the blkcipher (and
ablkcipher) APIs are going away.  I converted a bunch of algorithms recently and
I can look at the remaining ones in crypto/*.c if no one else gets to it first,
but it may be a little while until I have time.
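Roughly, the converted template would reserve the scratch space per-request,
along these lines (untested sketch; struct and function names are illustrative):

	struct crypto_ctr_ctx {
		struct crypto_cipher *child;
	};

	/* Per-request scratch: the flexible array gets bsize + alignmask
	 * bytes reserved via crypto_skcipher_set_reqsize() below, so the
	 * keystream needs neither a VLA nor a per-call kmalloc(). */
	struct crypto_ctr_reqctx {
		u8 keystream[];
	};

	static int crypto_ctr_init_tfm(struct crypto_skcipher *tfm)
	{
		struct skcipher_instance *inst = skcipher_alg_instance(tfm);
		struct crypto_spawn *spawn = skcipher_instance_ctx(inst);
		struct crypto_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
		struct crypto_cipher *cipher;

		cipher = crypto_spawn_cipher(spawn);
		if (IS_ERR(cipher))
			return PTR_ERR(cipher);

		ctx->child = cipher;
		crypto_skcipher_set_reqsize(tfm,
			sizeof(struct crypto_ctr_reqctx) +
			crypto_cipher_blocksize(cipher) +
			crypto_cipher_alignmask(cipher));
		return 0;
	}

The final-block handler would then PTR_ALIGN() into the request context instead
of onto the stack.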

Also, I recall there being a long discussion a while back about how
__aligned(16) doesn't work on local variables because the kernel's stack pointer
isn't guaranteed to maintain the alignment assumed by the compiler (see commit
b8fbe71f7535)...
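To illustrate the pitfall (contrived example, not from this patch):

	void f(void)
	{
		u8 __aligned(16) buf[16];

		/* May fire: the attribute asks for 16-byte alignment, but
		 * the kernel stack is only 8-byte aligned on x86_64, so the
		 * compiler's assumption about the incoming stack alignment
		 * can be wrong. */
		WARN_ON_ONCE((unsigned long)buf & 15);
	}

So a stack buffer that truly needs 16-byte alignment has to be over-allocated
and aligned by hand at runtime, which is what that commit ended up doing.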

Eric
