Date:   Mon, 19 Mar 2018 10:49:19 -0700
From:   "Chang S. Bae" <chang.seok.bae@...el.com>
To:     x86@...nel.org
Cc:     luto@...nel.org, ak@...ux.intel.com, hpa@...or.com,
        markus.t.metzger@...el.com, tony.luck@...el.com,
        ravi.v.shankar@...el.com, linux-kernel@...r.kernel.org,
        chang.seok.bae@...el.com,
        "Markus T . Metzger" <markus.t.metzgar@...el.com>
Subject: [PATCH 07/15] x86/fsgsbase/64: putregs() in a reverse order

This patch walks user_regs_struct in reverse order. The main
reason for doing this is to set the FS/GS base after the
selector.

Each element is currently set independently. When the FS/GS
base alone is updated, the corresponding selector (index) is
reset to zero. In putregs(), the index is not reset when the
write covers both the FS/GS base and the selector.

When FSGSBASE is enabled, an arbitrary base value is possible
anyway, so it is reasonable to write the base last.

Suggested-by: H. Peter Anvin <hpa@...or.com>
Signed-off-by: Chang S. Bae <chang.seok.bae@...el.com>
Cc: Markus T. Metzger <markus.t.metzgar@...el.com>
Cc: Andi Kleen <ak@...ux.intel.com>
Cc: Andy Lutomirski <luto@...nel.org>
---
 arch/x86/kernel/ptrace.c | 48 +++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 45 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
index 9c09bf0..ee37e28 100644
--- a/arch/x86/kernel/ptrace.c
+++ b/arch/x86/kernel/ptrace.c
@@ -426,14 +426,56 @@ static int putregs(struct task_struct *child,
 		   unsigned int count,
 		   const unsigned long *values)
 {
-	const unsigned long *v = values;
+	const unsigned long *v = values + count / sizeof(unsigned long);
 	int ret = 0;
+#ifdef CONFIG_X86_64
+	bool fs_fully_covered = (offset <= USER_REGS_OFFSET(fs_base)) &&
+			((offset + count) >= USER_REGS_OFFSET(fs));
+	bool gs_fully_covered = (offset <= USER_REGS_OFFSET(gs_base)) &&
+			((offset + count) >= USER_REGS_OFFSET(gs));
+
+	offset += count - sizeof(*v);
+
+	while (count >= sizeof(*v) && !ret) {
+		v--;
+		switch (offset) {
+		case USER_REGS_OFFSET(fs_base):
+			if (fs_fully_covered) {
+				if (unlikely(*v >= TASK_SIZE_MAX))
+					return -EIO;
+				/*
+				 * When changing both %fs (the selector) and
+				 * %fsbase, write_task_fsbase() would overwrite
+				 * the task's %fs, so only set the base here.
+				 */
+				if (child->thread.fsbase != *v)
+					child->thread.fsbase = *v;
+				break;
+			}
+		case USER_REGS_OFFSET(gs_base):
+			if (gs_fully_covered) {
+				if (unlikely(*v >= TASK_SIZE_MAX))
+					return -EIO;
+				/* Same here as the %fs handling above */
+				if (child->thread.gsbase != *v)
+					child->thread.gsbase = *v;
+				break;
+			}
+		default:
+			ret = putreg(child, offset, *v);
+		}
+		count -= sizeof(*v);
+		offset -= sizeof(*v);
+	}
+#else
 
+	offset += count - sizeof(*v);
 	while (count >= sizeof(*v) && !ret) {
-		ret = putreg(child, offset, *v++);
+		ret = putreg(child, offset, *(--v));
 		count -= sizeof(*v);
-		offset += sizeof(*v);
+		offset -= sizeof(*v);
 	}
+#endif /* CONFIG_X86_64 */
 	return ret;
 }
 
-- 
2.7.4
