Date: Sat, 15 Oct 2022 16:47:16 -0700
From: Linus Torvalds <torvalds@...uxfoundation.org>
To: Nadav Amit <nadav.amit@...il.com>
Cc: Jann Horn <jannh@...gle.com>, Andy Lutomirski <luto@...nel.org>,
	Linux-MM <linux-mm@...ck.org>, Mel Gorman <mgorman@...e.de>,
	Rik van Riel <riel@...hat.com>, kernel list <linux-kernel@...r.kernel.org>,
	Kees Cook <keescook@...omium.org>, Ingo Molnar <mingo@...nel.org>,
	Sasha Levin <sasha.levin@...cle.com>, Andrew Morton <akpm@...ux-foundation.org>,
	Will Deacon <will@...nel.org>, Peter Zijlstra <peterz@...radead.org>
Subject: Re: [BUG?] X86 arch_tlbbatch_flush() seems to be lacking mm_tlb_flush_nested() integration

On Fri, Oct 14, 2022 at 8:51 PM Nadav Amit <nadav.amit@...il.com> wrote:
>
> Unless I am missing something, flush_tlb_batched_pending() would be
> called and do the flushing at this point, no?

Ahh, yes. That seems to be doing the right thing, although looking a
bit more at it, I think it might be improved.

At least in the zap_pte_range() case, instead of doing a synchronous
TLB flush if there are pending batched flushes, it might be better if
flush_tlb_batched_pending() would set the "need_flush_all" bit in the
mmu_gather structure.

That would possibly avoid that extra TLB flush entirely - since
*normally* zap_page_range() will cause a TLB flush anyway.

Maybe it doesn't matter.

                Linus
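[The improvement Linus sketches - deferring the flush by marking the mmu_gather instead of flushing synchronously - can be modelled in a few lines of userspace C. The struct fields and function names below mirror the kernel's (`mmu_gather`, `need_flush_all`, `tlb_flush_batched`, `flush_tlb_batched_pending`), but this is a toy illustration of the control flow, not the actual mm code.]

```c
#include <stdbool.h>
#include <assert.h>

/* Toy model: an mm with possibly-pending batched (reclaim-side) TLB
 * flushes, and a gather structure that accumulates flush requirements
 * for the final flush done at tlb_finish_mmu() time. */
struct mm_struct {
	bool tlb_flush_batched;   /* batched flushes pending for this mm? */
};

struct mmu_gather {
	struct mm_struct *mm;
	bool need_flush_all;      /* widen the eventual flush to everything */
};

static int sync_flush_count;      /* counts immediate flushes (the cost) */

/* Current behaviour (modelled): if a batch is pending, do an extra
 * synchronous TLB flush right away. */
static void flush_tlb_batched_pending_sync(struct mm_struct *mm)
{
	if (mm->tlb_flush_batched) {
		sync_flush_count++;            /* possibly redundant flush */
		mm->tlb_flush_batched = false;
	}
}

/* Proposed behaviour (modelled): just set need_flush_all in the
 * mmu_gather; the flush that zap_page_range() normally does anyway
 * then covers the pending batch, avoiding the extra flush. */
static void flush_tlb_batched_pending_deferred(struct mmu_gather *tlb)
{
	if (tlb->mm->tlb_flush_batched) {
		tlb->need_flush_all = true;
		tlb->mm->tlb_flush_batched = false;
	}
}
```

[In the deferred variant, the only remaining cost when the final flush happens anyway is a full rather than ranged flush; the caveat in the message - "maybe it doesn't matter" - is that the synchronous flush may be rare enough in practice not to show up.]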