Message-ID: <CAGXu5jJJXM3pCE7y3akkmh_0RzLaO5g39hueZNKPoE4E9yaJfQ@mail.gmail.com>
Date: Wed, 20 Jun 2018 17:15:05 -0700
From: Kees Cook <keescook@...omium.org>
To: Eric Biggers <ebiggers3@...il.com>
Cc: Herbert Xu <herbert@...dor.apana.org.au>,
Giovanni Cabiddu <giovanni.cabiddu@...el.com>,
Arnd Bergmann <arnd@...db.de>,
Eric Biggers <ebiggers@...gle.com>,
Mike Snitzer <snitzer@...hat.com>,
"Gustavo A. R. Silva" <gustavo@...eddedor.com>,
qat-linux@...el.com, LKML <linux-kernel@...r.kernel.org>,
dm-devel@...hat.com, linux-crypto <linux-crypto@...r.kernel.org>,
Lars Persson <larper@...s.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
"David S. Miller" <davem@...emloft.net>,
Alasdair Kergon <agk@...hat.com>,
Rabin Vincent <rabinv@...s.com>
Subject: Re: [PATCH 09/11] crypto: shash: Remove VLA usage in unaligned hashing
On Wed, Jun 20, 2018 at 4:57 PM, Eric Biggers <ebiggers3@...il.com> wrote:
> On Wed, Jun 20, 2018 at 12:04:06PM -0700, Kees Cook wrote:
>> In the quest to remove all stack VLA usage from the kernel[1], this uses
>> the newly defined max alignment to perform unaligned hashing to avoid
>> VLAs, and drops the helper function while adding sanity checks on the
>> resulting buffer sizes.
>>
>> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>>
>> Signed-off-by: Kees Cook <keescook@...omium.org>
>> ---
>> crypto/shash.c | 21 ++++++++++-----------
>> 1 file changed, 10 insertions(+), 11 deletions(-)
>>
>> diff --git a/crypto/shash.c b/crypto/shash.c
>> index ab6902c6dae7..1bb58209330a 100644
>> --- a/crypto/shash.c
>> +++ b/crypto/shash.c
>> @@ -73,13 +73,6 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
>> }
>> EXPORT_SYMBOL_GPL(crypto_shash_setkey);
>>
>> -static inline unsigned int shash_align_buffer_size(unsigned len,
>> - unsigned long mask)
>> -{
>> - typedef u8 __aligned_largest u8_aligned;
>> - return len + (mask & ~(__alignof__(u8_aligned) - 1));
>> -}
>> -
>> static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
>> unsigned int len)
>> {
>> @@ -88,11 +81,14 @@ static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
>> unsigned long alignmask = crypto_shash_alignmask(tfm);
>> unsigned int unaligned_len = alignmask + 1 -
>> ((unsigned long)data & alignmask);
>> - u8 ubuf[shash_align_buffer_size(unaligned_len, alignmask)]
>> - __aligned_largest;
>> + u8 ubuf[CRYPTO_ALG_MAX_ALIGNMASK]
>> + __aligned(CRYPTO_ALG_MAX_ALIGNMASK + 1);
>> u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
>> int err;
>
> Are you sure that __attribute__((aligned(64))) works correctly on stack
> variables on all architectures?
>
> And if it is expected to work, then why is the buffer still aligned by hand on
> the very next line?
I really don't know -- the existing code was doing both the __aligned
attribute and the manual alignment, so I was trying to simplify it while
removing the VLA. I'm totally open to suggestions here.
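
For instance, one possibility (rough sketch, untested) would be to drop the
stack __aligned() entirely and over-allocate the buffer so that PTR_ALIGN()
alone guarantees the alignment. Since unaligned_len can never exceed
alignmask, twice the maximum alignmask should always be enough, i.e. the two
lines in the patch above would become something like:

	/*
	 * Worst case: up to CRYPTO_ALG_MAX_ALIGNMASK bytes of data plus
	 * up to CRYPTO_ALG_MAX_ALIGNMASK bytes of padding consumed by
	 * PTR_ALIGN() below.
	 */
	u8 ubuf[CRYPTO_ALG_MAX_ALIGNMASK * 2];
	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);

That keeps the manual alignment the code already relies on and sidesteps the
question of how well __aligned() works on stack variables.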
BTW, these are also the only users of __aligned_largest() in the
kernel, and the only use of an unsized __attribute__((aligned)).
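(For reference, if I'm reading the compiler headers right,
__aligned_largest is just the bare attribute, meaning "align to the
largest alignment of any scalar type", which is compiler- and
arch-dependent:

	#define __aligned_largest __attribute__((aligned))

so it gives no specific guarantee we could size a fixed buffer against.)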
-Kees
--
Kees Cook
Pixel Security