Message-ID: <CACO55tuUROCr14Eq-Yn_OcD21QBjfa33b0Oj4oAthEz5NHHhrA@mail.gmail.com>
Date: Fri, 22 Jun 2018 23:40:13 +0200
From: Karol Herbst <kherbst@...hat.com>
To: Kees Cook <keescook@...omium.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
nouveau <nouveau@...ts.freedesktop.org>,
dri-devel <dri-devel@...ts.freedesktop.org>,
Ben Skeggs <bskeggs@...hat.com>
Subject: Re: [Nouveau] [PATCH] drm/nouveau/secboot/acr: Remove VLA usage
On Fri, Jun 22, 2018 at 11:34 PM, Kees Cook <keescook@...omium.org> wrote:
> On Fri, Jun 22, 2018 at 10:50 AM, Karol Herbst <kherbst@...hat.com> wrote:
>> On Thu, May 24, 2018 at 7:24 PM, Kees Cook <keescook@...omium.org> wrote:
>>> In the quest to remove all stack VLA usage from the kernel[1], this
>>> allocates the working buffers before starting the writing so it won't
>>> abort in the middle. This needs an initial walk of the lists to figure
>>> out how large the buffer should be.
>>>
>>> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
>>>
>>> Signed-off-by: Kees Cook <keescook@...omium.org>
>>> ---
>>> .../nouveau/nvkm/subdev/secboot/acr_r352.c | 25 ++++++++++++++++---
>>> .../nouveau/nvkm/subdev/secboot/acr_r367.c | 16 +++++++++++-
>>> 2 files changed, 37 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r352.c b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r352.c
>>> index a721354249ce..d02e183717dc 100644
>>> --- a/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r352.c
>>> +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/secboot/acr_r352.c
>>> @@ -414,6 +414,20 @@ acr_r352_ls_write_wpr(struct acr_r352 *acr, struct list_head *imgs,
>>>  {
>>>          struct ls_ucode_img *_img;
>>>          u32 pos = 0;
>>> +        u32 max_desc_size = 0;
>>> +        u8 *gdesc;
>>> +
>>> +        /* Figure out how large we need gdesc to be. */
>>> +        list_for_each_entry(_img, imgs, node) {
>>> +                const struct acr_r352_ls_func *ls_func =
>>> +                        acr->func->ls_func[_img->falcon_id];
>>> +
>>> +                max_desc_size = max(max_desc_size, ls_func->bl_desc_size);
>>> +        }
>>> +
>>> +        gdesc = kmalloc(max_desc_size, GFP_KERNEL);
>>> +        if (!gdesc)
>>> +                return -ENOMEM;
>>>
>>>          nvkm_kmap(wpr_blob);
>>>
>>> @@ -421,7 +435,6 @@ acr_r352_ls_write_wpr(struct acr_r352 *acr, struct list_head *imgs,
>>>                  struct ls_ucode_img_r352 *img = ls_ucode_img_r352(_img);
>>>                  const struct acr_r352_ls_func *ls_func =
>>>                          acr->func->ls_func[_img->falcon_id];
>>> -                u8 gdesc[ls_func->bl_desc_size];
>>>
>>
>> if there is no guarantee that (ls_func->bl_desc_size & 0x3) == 0,
>> then we need to memset a bit more, because nvkm_gpuobj_memcpy_to later
>> in that code actually copies 4 bytes at a time, but the last (partial)
>> 4-byte word is only partly memset to 0.
>
> I think this is unchanged from the original code, yes? The memset() is
> always against bl_desc_size; I haven't changed that.
>
Right, but I think before this patch we would upload undefined data
(because we read out of bounds for certain bl_desc_size values); now we
get whatever was left in the buffer from the previous iteration. Neither
case is good. It isn't an issue with your patch, the code before wasn't
100% correct either. But maybe that's fine, because bl_desc_size is
always a multiple of 0x4.
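
To make that concrete (untested illustration only, assuming
nvkm_gpuobj_memcpy_to takes (dst, offset, src, size) and copies whole
32-bit words, as described above):

        u32 size = 10;  /* a bl_desc_size that is not a multiple of 4 */

        memset(gdesc, 0, size);  /* clears bytes 0..9 of gdesc only */
        nvkm_gpuobj_memcpy_to(wpr_blob, pos, gdesc, size);
        /* a word-wise copy still reads bytes 10..11 of gdesc, which now
         * hold whatever the previous image left there instead of zeroes */
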
>> If ls_func->bl_desc_size is always a multiple of 0x4, then it isn't as
>> important, but it would still be better to fix. Or maybe
>> nvkm_gpuobj_memcpy_to should do that handling itself: check whether the
>> size is a multiple of 0x4 and, if not, handle that case?
>>
>> Same is valid for the changes in the r367 file.
>
> Should I resend with both the allocation and the memset getting
> rounded up to the next multiple of 4?
Yeah, I think copying zeroes is better than random data.
Your patch is fine as it is, though, because it doesn't add a new
issue; it just showed us there is a potential one. We should keep that
in mind and figure out how we want to fix it up. I can imagine this
causing issues in some places, but maybe it is totally fine.
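
Something along these lines on top of your patch would probably do
(untested sketch, using the kernel's ALIGN() macro to round up to whole
32-bit words):

        /* allocate room for a whole number of 32-bit words */
        gdesc = kmalloc(ALIGN(max_desc_size, 4), GFP_KERNEL);
        if (!gdesc)
                return -ENOMEM;
        ...
        /* clear the padding bytes as well, so the word-wise copy only
         * ever writes zeroes past bl_desc_size */
        memset(gdesc, 0, ALIGN(ls_func->bl_desc_size, 4));

Then it wouldn't matter whether bl_desc_size is a multiple of 4 or not.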
Thanks
>
> -Kees
>
> --
> Kees Cook
> Pixel Security