Message-Id: <20170923130016.21448-10-mingo@kernel.org>
Date: Sat, 23 Sep 2017 14:59:52 +0200
From: Ingo Molnar <mingo@...nel.org>
To: linux-kernel@...r.kernel.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Eric Biggers <ebiggers3@...il.com>,
Andy Lutomirski <luto@...capital.net>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Fenghua Yu <fenghua.yu@...el.com>,
"H . Peter Anvin" <hpa@...or.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Rik van Riel <riel@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Yu-cheng Yu <yu-cheng.yu@...el.com>
Subject: [PATCH 09/33] x86/fpu: Change 'size_total' parameter to unsigned and standardize the size checks in copy_xstate_to_*()
'size_total' is derived from an unsigned input parameter - and then converted
to 'int' and checked for a negative value:

	if (size_total < 0 || offset < size_total) {

This conversion and these checks are unnecessary obfuscation: reject overly
large requested copy sizes outright and simplify the underlying code.
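
For illustration (this sketch is not part of the patch, and the check()
helper is made up for the demo): once an unsigned total is converted to
'int', any value above INT_MAX turns negative on x86, which is what forced
the old code to special-case 'size_total < 0'. With the parameter kept
unsigned, a single comparison suffices:

	#include <stdio.h>

	/* Hypothetical userspace demo, not kernel code: compare the old
	 * signed check with the new unsigned one. */
	static void check(unsigned int offset, unsigned int size_total)
	{
		/* Old style: the unsigned total was converted to 'int' first.
		 * (Out-of-range conversion is implementation-defined; it
		 * wraps to a negative value on x86.) */
		int signed_total = size_total;

		if (signed_total < 0 || offset < (unsigned int)signed_total)
			printf("old check: copy (signed_total=%d)\n", signed_total);

		/* New style: one unsigned comparison, no special case: */
		if (offset < size_total)
			printf("new check: copy\n");
	}

	int main(void)
	{
		check(16, 4096);	/* normal request: both checks copy */
		check(16, 0x80000000u);	/* > INT_MAX: the 'int' copy goes negative */
		return 0;
	}
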
Reported-by: Rik van Riel <riel@...hat.com>
Cc: Andy Lutomirski <luto@...nel.org>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Dave Hansen <dave.hansen@...ux.intel.com>
Cc: Fenghua Yu <fenghua.yu@...el.com>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Oleg Nesterov <oleg@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Yu-cheng Yu <yu-cheng.yu@...el.com>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
arch/x86/kernel/fpu/xstate.c | 32 +++++++++++++++-----------------
1 file changed, 15 insertions(+), 17 deletions(-)
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index c13083579655..b18c5457065a 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -925,15 +925,11 @@ int arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
  * the source data pointer or increment pos, count, kbuf, and ubuf.
  */
 static inline int
-__copy_xstate_to_kernel(void *kbuf,
-			const void *data,
-			unsigned int offset, unsigned int size, int size_total)
+__copy_xstate_to_kernel(void *kbuf, const void *data,
+			unsigned int offset, unsigned int size, unsigned int size_total)
 {
-	if (!size)
-		return 0;
-
-	if (size_total < 0 || offset < size_total) {
-		unsigned int copy = size_total < 0 ? size : min(size, size_total - offset);
+	if (offset < size_total) {
+		unsigned int copy = min(size, size_total - offset);
 
 		memcpy(kbuf + offset, data, copy);
 	}
@@ -986,12 +982,13 @@ int copy_xstate_to_kernel(void *kbuf, struct xregs_state *xsave, unsigned int of
 			offset = xstate_offsets[i];
 			size = xstate_sizes[i];
 
+			/* The next component has to fit fully into the output buffer: */
+			if (offset + size > size_total)
+				break;
+
 			ret = __copy_xstate_to_kernel(kbuf, src, offset, size, size_total);
 			if (ret)
 				return ret;
-
-			if (offset + size >= size_total)
-				break;
 		}
 	}
 
@@ -1010,13 +1007,13 @@ int copy_xstate_to_kernel(void *kbuf, struct xregs_state *xsave, unsigned int of
 }
 
 static inline int
-__copy_xstate_to_user(void __user *ubuf, const void *data, unsigned int offset, unsigned int size, int size_total)
+__copy_xstate_to_user(void __user *ubuf, const void *data, unsigned int offset, unsigned int size, unsigned int size_total)
 {
 	if (!size)
 		return 0;
 
-	if (size_total < 0 || offset < size_total) {
-		unsigned int copy = size_total < 0 ? size : min(size, size_total - offset);
+	if (offset < size_total) {
+		unsigned int copy = min(size, size_total - offset);
 
 		if (__copy_to_user(ubuf + offset, data, copy))
 			return -EFAULT;
@@ -1069,12 +1066,13 @@ int copy_xstate_to_user(void __user *ubuf, struct xregs_state *xsave, unsigned i
 			offset = xstate_offsets[i];
 			size = xstate_sizes[i];
 
+			/* The next component has to fit fully into the output buffer: */
+			if (offset + size > size_total)
+				break;
+
 			ret = __copy_xstate_to_user(ubuf, src, offset, size, size_total);
 			if (ret)
 				return ret;
-
-			if (offset + size >= size_total)
-				break;
 		}
 	}
 
--
2.11.0