Date: Tue, 26 May 2020 01:42:17 +0000
From: Lai Jiangshan <laijs@...ux.alibaba.com>
To: linux-kernel@...r.kernel.org
Cc: Lai Jiangshan <laijs@...ux.alibaba.com>,
	Andy Lutomirski <luto@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	x86@...nel.org,
	Ingo Molnar <mingo@...hat.com>,
	Borislav Petkov <bp@...en8.de>,
	"H. Peter Anvin" <hpa@...or.com>,
	Alexandre Chartre <alexandre.chartre@...cle.com>
Subject: [RFC PATCH V2 3/7] x86/hw_breakpoint: Prevent data breakpoints on per_cpu cpu_tss_rw

cpu_tss_rw is not directly referenced by hardware, but it is used by the
CPU entry code, in particular when #DB shifts its IST stack. If a data
breakpoint covers cpu_tss_rw.x86_tss.ist[IST_INDEX_DB], it causes a
recursive #DB (and soon a #DF, because the #DB trap is raised only after
the access that performs the IST shift has completed).

Cc: Andy Lutomirski <luto@...nel.org>
Cc: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: x86@...nel.org
Signed-off-by: Lai Jiangshan <laijs@...ux.alibaba.com>
---
 arch/x86/kernel/hw_breakpoint.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c
index f859095c1b6c..7d3966b9aa12 100644
--- a/arch/x86/kernel/hw_breakpoint.c
+++ b/arch/x86/kernel/hw_breakpoint.c
@@ -255,6 +255,19 @@ static inline bool within_cpu_entry(unsigned long addr, unsigned long end)
 		if (within_area(addr, end, (unsigned long)get_cpu_gdt_rw(cpu),
 				GDT_SIZE))
 			return true;
+
+		/*
+		 * cpu_tss_rw is not directly referenced by hardware, but
+		 * cpu_tss_rw is also used in CPU entry code, especially
+		 * when #DB shifts its stacks. If a data breakpoint is on
+		 * the cpu_tss_rw.x86_tss.ist[IST_INDEX_DB], it will cause
+		 * recursive #DB (and then #DF soon for #DB is generated
+		 * after the access, IST-shifting, is done).
+		 */
+		if (within_area(addr, end,
+				(unsigned long)&per_cpu(cpu_tss_rw, cpu),
+				sizeof(struct tss_struct)))
+			return true;
 	}
 
 	return false;
-- 
2.20.1
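
The body of within_area() is not shown in this hunk; judging from its call
signature it is a simple range-overlap test. The standalone C sketch below is
not part of the patch: the within_area() body, the addresses, and the sizes
are illustrative assumptions only. It shows how a data breakpoint covering
[addr, end] would be matched against a protected region such as the per-CPU
cpu_tss_rw and rejected.

	/*
	 * Standalone sketch (assumptions, not kernel code): the overlap
	 * test that a check like within_area() is assumed to perform.
	 * Build with: cc -o bp_check bp_check.c
	 */
	#include <stdbool.h>
	#include <stdio.h>

	static bool within_area(unsigned long addr, unsigned long end,
				unsigned long base, unsigned long size)
	{
		/* True when [addr, end] overlaps [base, base + size). */
		return end >= base && addr < (base + size);
	}

	int main(void)
	{
		/* Hypothetical placement of a per-CPU cpu_tss_rw. */
		unsigned long tss_base = 0xffff88803f000000UL;
		unsigned long tss_size = 0x3000;	/* stand-in for sizeof(struct tss_struct) */

		/* Hypothetical 8-byte data breakpoint inside x86_tss.ist[]. */
		unsigned long bp_addr = tss_base + 0x24;
		unsigned long bp_end  = bp_addr + 8 - 1;

		printf("breakpoint rejected: %s\n",
		       within_area(bp_addr, bp_end, tss_base, tss_size) ? "yes" : "no");
		return 0;
	}

With such a check in the breakpoint validation path, a data breakpoint that
overlaps cpu_tss_rw (including x86_tss.ist[IST_INDEX_DB]) is refused before it
can be armed, which avoids the recursive #DB described in the changelog.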