Message-Id: <20220405070304.050051081@linuxfoundation.org>
Date: Tue, 5 Apr 2022 09:27:49 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Eric Dumazet <edumazet@...gle.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 5.10 175/599] rseq: Optimise rseq_get_rseq_cs() and clear_rseq_cs()

From: Eric Dumazet <edumazet@...gle.com>

[ Upstream commit 5e0ccd4a3b01c5a71732a13186ca110a138516ea ]

Commit ec9c82e03a74 ("rseq: uapi: Declare rseq_cs field as union,
update includes") introduced a performance regression on our servers.

Using copy_from_user() and clear_user() for 64-bit values is
suboptimal; on 64-bit architectures the faster get_user() and
put_user() can be used instead.
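
As an illustration only (not part of the patch): a minimal sketch of
the access pattern this change switches to. The helper names below are
made up for this example; the CONFIG_64BIT split mirrors the hunks in
the diff.

	#include <linux/types.h>	/* u64 */
	#include <linux/uaccess.h>	/* get_user(), put_user(), copy_from_user(), clear_user() */

	/* Hypothetical helper: read a 64-bit value from a user-space field. */
	static int read_user_u64(u64 __user *uptr, u64 *val)
	{
	#ifdef CONFIG_64BIT
		/* An 8-byte get_user() is a single access on 64-bit kernels. */
		return get_user(*val, uptr);
	#else
		/* A 64-bit get_user() is not available on every 32-bit arch. */
		if (copy_from_user(val, uptr, sizeof(*val)))
			return -EFAULT;
		return 0;
	#endif
	}

	/* Hypothetical helper: reset a 64-bit user-space field to zero. */
	static int clear_user_u64(u64 __user *uptr)
	{
	#ifdef CONFIG_64BIT
		/* A single 8-byte store instead of the generic clear_user() path. */
		return put_user(0UL, uptr);
	#else
		if (clear_user(uptr, sizeof(*uptr)))
			return -EFAULT;
		return 0;
	#endif
	}
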
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Link: https://lkml.kernel.org/r/20210413203352.71350-4-eric.dumazet@gmail.com
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
kernel/rseq.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/kernel/rseq.c b/kernel/rseq.c
index 0077713bf240..1b4547e0d841 100644
--- a/kernel/rseq.c
+++ b/kernel/rseq.c
@@ -120,8 +120,13 @@ static int rseq_get_rseq_cs(struct task_struct *t, struct rseq_cs *rseq_cs)
 	u32 sig;
 	int ret;
 
+#ifdef CONFIG_64BIT
+	if (get_user(ptr, &t->rseq->rseq_cs.ptr64))
+		return -EFAULT;
+#else
 	if (copy_from_user(&ptr, &t->rseq->rseq_cs.ptr64, sizeof(ptr)))
 		return -EFAULT;
+#endif
 	if (!ptr) {
 		memset(rseq_cs, 0, sizeof(*rseq_cs));
 		return 0;
@@ -204,9 +209,13 @@ static int clear_rseq_cs(struct task_struct *t)
 	 *
 	 * Set rseq_cs to NULL.
 	 */
+#ifdef CONFIG_64BIT
+	return put_user(0UL, &t->rseq->rseq_cs.ptr64);
+#else
 	if (clear_user(&t->rseq->rseq_cs.ptr64, sizeof(t->rseq->rseq_cs.ptr64)))
 		return -EFAULT;
 	return 0;
+#endif
 }
 
 /*
--
2.34.1