Message-ID: <20180319160513.16384-4-alexander.levin@microsoft.com>
Date: Mon, 19 Mar 2018 16:05:24 +0000
From: Sasha Levin <Alexander.Levin@...rosoft.com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>
CC: Josh Poimboeuf <jpoimboe@...hat.com>,
Cong Wang <xiyou.wangcong@...il.com>,
"David S . Miller" <davem@...emloft.net>,
Dmitry Vyukov <dvyukov@...gle.com>,
Eric Dumazet <edumazet@...gle.com>,
Kostya Serebryany <kcc@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Marcelo Ricardo Leitner <marcelo.leitner@...il.com>,
Neil Horman <nhorman@...driver.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Vlad Yasevich <vyasevich@...il.com>,
"linux-sctp@...r.kernel.org" <linux-sctp@...r.kernel.org>,
netdev <netdev@...r.kernel.org>,
syzkaller <syzkaller@...glegroups.com>,
Ingo Molnar <mingo@...nel.org>,
Sasha Levin <Alexander.Levin@...rosoft.com>
Subject: [PATCH AUTOSEL for 4.4 004/167] x86/asm: Don't use RBP as a temporary
 register in csum_partial_copy_generic()

From: Josh Poimboeuf <jpoimboe@...hat.com>

[ Upstream commit 42fc6c6cb1662ba2fa727dd01c9473c63be4e3b6 ]
Andrey Konovalov reported the following warning while fuzzing the kernel
with syzkaller:

  WARNING: kernel stack regs at ffff8800686869f8 in a.out:4933 has bad 'bp' value c3fc855a10167ec0

The unwinder dump revealed that RBP had a bad value when an interrupt
occurred in csum_partial_copy_generic().

That function saves RBP on the stack and then overwrites it, using it as
a scratch register. That's problematic because it breaks stack traces
if an interrupt occurs in the middle of the function.

Replace the usage of RBP with another callee-saved register (R15) so
stack traces are no longer affected.
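
For readers unfamiliar with frame-pointer unwinding, here is a minimal
illustrative sketch in C of the RBP-chain walk that produced the warning
above. This is not the kernel's actual unwinder code; the function and
parameter names are hypothetical:

  #include <stdio.h>

  /*
   * Illustrative sketch only -- not the kernel's unwinder. With frame
   * pointers enabled, each stack frame begins with the caller's saved
   * RBP, followed by the return address pushed by CALL.
   */
  struct stack_frame {
          struct stack_frame *next_frame;   /* caller's saved RBP */
          unsigned long return_address;     /* pushed by CALL */
  };

  /* Follow the RBP chain as long as it stays within the stack bounds. */
  static void walk_frames(struct stack_frame *bp, void *lo, void *hi)
  {
          while ((void *)bp >= lo && (void *)(bp + 1) <= hi) {
                  printf("  [<%016lx>]\n", bp->return_address);
                  bp = bp->next_frame;
          }
          /*
           * If the interrupted function reused RBP as scratch, bp holds
           * arbitrary data instead of a frame address: the bounds check
           * fails at once and the trace is cut short -- the "bad 'bp'
           * value" warning above.
           */
  }

Because R15 is also callee-saved but carries no frame-pointer meaning,
using it as the scratch register leaves the chain intact.
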
Reported-by: Andrey Konovalov <andreyknvl@...gle.com>
Tested-by: Andrey Konovalov <andreyknvl@...gle.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Cong Wang <xiyou.wangcong@...il.com>
Cc: David S. Miller <davem@...emloft.net>
Cc: Dmitry Vyukov <dvyukov@...gle.com>
Cc: Eric Dumazet <edumazet@...gle.com>
Cc: Kostya Serebryany <kcc@...gle.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Marcelo Ricardo Leitner <marcelo.leitner@...il.com>
Cc: Neil Horman <nhorman@...driver.com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Vlad Yasevich <vyasevich@...il.com>
Cc: linux-sctp@...r.kernel.org
Cc: netdev <netdev@...r.kernel.org>
Cc: syzkaller <syzkaller@...glegroups.com>
Link: http://lkml.kernel.org/r/4b03a961efda5ec9bfe46b7b9c9ad72d1efad343.1493909486.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Signed-off-by: Sasha Levin <alexander.levin@...rosoft.com>
---
 arch/x86/lib/csum-copy_64.S | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/lib/csum-copy_64.S b/arch/x86/lib/csum-copy_64.S
index 7e48807b2fa1..45a53dfe1859 100644
--- a/arch/x86/lib/csum-copy_64.S
+++ b/arch/x86/lib/csum-copy_64.S
@@ -55,7 +55,7 @@ ENTRY(csum_partial_copy_generic)
 	movq %r12, 3*8(%rsp)
 	movq %r14, 4*8(%rsp)
 	movq %r13, 5*8(%rsp)
-	movq %rbp, 6*8(%rsp)
+	movq %r15, 6*8(%rsp)
 
 	movq %r8, (%rsp)
 	movq %r9, 1*8(%rsp)
@@ -74,7 +74,7 @@ ENTRY(csum_partial_copy_generic)
 	/* main loop. clear in 64 byte blocks */
 	/* r9: zero, r8: temp2, rbx: temp1, rax: sum, rcx: saved length */
 	/* r11: temp3, rdx: temp4, r12 loopcnt */
-	/* r10: temp5, rbp: temp6, r14 temp7, r13 temp8 */
+	/* r10: temp5, r15: temp6, r14 temp7, r13 temp8 */
 	.p2align 4
 .Lloop:
 	source
@@ -89,7 +89,7 @@ ENTRY(csum_partial_copy_generic)
 	source
 	movq 32(%rdi), %r10
 	source
-	movq 40(%rdi), %rbp
+	movq 40(%rdi), %r15
 	source
 	movq 48(%rdi), %r14
 	source
@@ -103,7 +103,7 @@ ENTRY(csum_partial_copy_generic)
 	adcq %r11, %rax
 	adcq %rdx, %rax
 	adcq %r10, %rax
-	adcq %rbp, %rax
+	adcq %r15, %rax
 	adcq %r14, %rax
 	adcq %r13, %rax
 
@@ -121,7 +121,7 @@ ENTRY(csum_partial_copy_generic)
 	dest
 	movq %r10, 32(%rsi)
 	dest
-	movq %rbp, 40(%rsi)
+	movq %r15, 40(%rsi)
 	dest
 	movq %r14, 48(%rsi)
 	dest
@@ -203,7 +203,7 @@ ENTRY(csum_partial_copy_generic)
 	movq 3*8(%rsp), %r12
 	movq 4*8(%rsp), %r14
 	movq 5*8(%rsp), %r13
-	movq 6*8(%rsp), %rbp
+	movq 6*8(%rsp), %r15
 	addq $7*8, %rsp
 	ret
 
--
2.14.1