Message-ID: <20170126094044.GA24499@gmail.com>
Date: Thu, 26 Jan 2017 10:40:44 +0100
From: Ingo Molnar <mingo@...nel.org>
To: riel@...hat.com
Cc: linux-kernel@...r.kernel.org, luto@...nel.org,
yu-cheng.yu@...el.com, dave.hansen@...ux.intel.com, bp@...e.de
Subject: Re: [PATCH 1/2] x86/fpu: move copyout_from_xsaves bounds check
before the copy
* riel@...hat.com <riel@...hat.com> wrote:
> From: Rik van Riel <riel@...hat.com>
>
> Userspace may have programs, especially debuggers, that do not know
> how large the full XSAVE area is. They pass in a size argument,
> and expect to not get more data than that.
>
> Unfortunately, the current copyout_from_xsaves does the bounds check
> after the copy out to userspace. This could theoretically result
> in the kernel scribbling over userspace memory outside of the buffer,
> before bailing out of the copy.
>
> In practice, this is not likely to be an issue, since debuggers are
> likely to specify the size they know about, and that size is likely
> to exactly match the XSAVE fields they know about.
>
> However, we could be a little more careful and do the bounds check
> before committing ourselves with a copy to userspace.
>
> Signed-off-by: Rik van Riel <riel@...hat.com>
> ---
> arch/x86/kernel/fpu/xstate.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
> index c24ac1efb12d..c1508d56ecfb 100644
> --- a/arch/x86/kernel/fpu/xstate.c
> +++ b/arch/x86/kernel/fpu/xstate.c
> @@ -992,13 +992,13 @@ int copyout_from_xsaves(unsigned int pos, unsigned int count, void *kbuf,
>  			offset = xstate_offsets[i];
>  			size = xstate_sizes[i];
> 
> +			if (offset + size > count)
> +				break;
> +
>  			ret = xstate_copyout(offset, size, kbuf, ubuf, src, 0, count);
> 
>  			if (ret)
>  				return ret;
> -
> -			if (offset + size >= count)
> -				break;
That's not a robust way to do a bounds check either - what if 'offset' is so large
that 'offset + size' overflows and the wrapped-around sum falls within the 'sane'
0..count range? The check would pass, yet the copy would still run past the buffer.

Also, what about partial copies? If 'count' ends in the middle of a component,
this check skips that component entirely instead of copying the part of it that
still fits.
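For illustration, an overflow-safe variant that also allows a partial tail copy
could look like this - a minimal sketch only, not necessarily the fix that should
go in ('copy' is a local variable introduced just for the example):

	/*
	 * 'offset >= count' cannot be fooled by a wrapped-around
	 * 'offset + size', and once it has passed, 'count - offset'
	 * cannot underflow either:
	 */
	if (offset >= count)
		break;

	/* Copy only the part of the component that still fits: */
	copy = min(size, count - offset);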
Plus, to add insult to injury, xstate_copyout() is a totally unnecessary
obfuscation to begin with:
- 'start_pos' is always 0
- 'end_pos' is always 'count'
- both are const for no good reason: they are not pointers
- both are signed for no good reason: they are derived from unsigned types and I
don't see how negative values can ever be valid here.
So this code:
static inline int xstate_copyout(unsigned int pos, unsigned int count,
				 void *kbuf, void __user *ubuf,
				 const void *data, const int start_pos,
				 const int end_pos)
{
	if ((count == 0) || (pos < start_pos))
		return 0;

	if (end_pos < 0 || pos < end_pos) {
		unsigned int copy = (end_pos < 0 ? count : min(count, end_pos - pos));

		if (kbuf) {
			memcpy(kbuf + pos, data, copy);
		} else {
			if (__copy_to_user(ubuf + pos, data, copy))
				return -EFAULT;
		}
	}
	return 0;
}
is, after all the cleanups and fixes, in reality equivalent to:
static inline int
__copy_xstate_to_kernel(void *kbuf, const void *data,
			unsigned int offset, unsigned int size)
{
	memcpy(kbuf + offset, data, size);
	return 0;
}
!!!
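The __copy_to_user() side reduces the same way; a sketch of the matching
user-copy twin, under the same assumptions (the name __copy_xstate_to_user
here is illustrative, mirroring the helper above):

static inline int
__copy_xstate_to_user(void __user *ubuf, const void *data,
		      unsigned int offset, unsigned int size)
{
	if (__copy_to_user(ubuf + offset, data, size))
		return -EFAULT;

	return 0;
}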
So the real fix is to get rid of xstate_copyout() altogether and just do the
memcpy directly - the regset obfuscation actively hid a real bug...
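Concretely, the copy site could then look something like this - again just a
sketch (kbuf path only, overflow-safe check done before any copying; 'src' is
the per-component source pointer as in the quoted loop):

	offset = xstate_offsets[i];
	size = xstate_sizes[i];

	/* Bounds check first, immune to 'offset + size' wrapping: */
	if (offset >= count)
		break;

	/* Plain memcpy, clamped to the part of the buffer that fits: */
	memcpy(kbuf + offset, src, min(size, count - offset));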
Thanks,
Ingo