Date: Wed, 3 Jan 2024 12:26:29 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: Catalin Marinas <catalin.marinas@....com>,
 Jisheng Zhang <jszhang@...nel.org>
Cc: Will Deacon <will@...nel.org>,
 "Aneesh Kumar K . V" <aneesh.kumar@...ux.ibm.com>,
 Andrew Morton <akpm@...ux-foundation.org>, Nick Piggin <npiggin@...il.com>,
 Peter Zijlstra <peterz@...radead.org>,
 Paul Walmsley <paul.walmsley@...ive.com>, Palmer Dabbelt
 <palmer@...belt.com>, Albert Ou <aou@...s.berkeley.edu>,
 Arnd Bergmann <arnd@...db.de>, linux-arch@...r.kernel.org,
 linux-mm@...ck.org, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org, linux-riscv@...ts.infradead.org,
 Nadav Amit <namit@...are.com>, Andrea Arcangeli <aarcange@...hat.com>,
 Andy Lutomirski <luto@...nel.org>, Dave Hansen
 <dave.hansen@...ux.intel.com>, Thomas Gleixner <tglx@...utronix.de>,
 Yu Zhao <yuzhao@...gle.com>, x86@...nel.org
Subject: Re: [PATCH 1/2] mm/tlb: fix fullmm semantics

On 1/3/24 10:05, Catalin Marinas wrote:
>> --- a/mm/mmu_gather.c
>> +++ b/mm/mmu_gather.c
>> @@ -384,7 +384,7 @@ void tlb_finish_mmu(struct mmu_gather *tlb)
>>  		 * On x86 non-fullmm doesn't yield significant difference
>>  		 * against fullmm.
>>  		 */
>> -		tlb->fullmm = 1;
>> +		tlb->need_flush_all = 1;
>>  		__tlb_reset_range(tlb);
>>  		tlb->freed_tables = 1;
>>  	}
> The optimisation here was added about a year later in commit
> 7a30df49f63a ("mm: mmu_gather: remove __tlb_reset_range() for force
> flush"). Do we still need to keep freed_tables = 1 here? I'd say only
> __tlb_reset_range().

I think the __tlb_reset_range() call can be dangerous if it clears
->freed_tables.  On x86 at least, it might lead to skipping the TLB IPI
for CPUs that are in lazy TLB mode.  When those wake back up, they might
start using the freed page tables.
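
For reference, this is roughly the check I mean, paraphrased from memory
of native_flush_tlb_multi() in arch/x86/mm/tlb.c (a sketch, not a
verbatim quote):

	/* Paraphrased sketch of the x86 remote-flush decision. */
	if (info->freed_tables)
		/* Page tables were freed: IPI every CPU in the mask,
		 * including CPUs sitting in lazy TLB mode. */
		on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
	else
		/* Nothing freed: lazy CPUs can be skipped; they will
		 * re-sync when they next switch to a real mm. */
		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func,
				      (void *)info, 1, cpumask);

So if __tlb_reset_range() has cleared ->freed_tables by the time the
final flush happens, we take the "skip the lazy CPUs" branch even
though tables really were freed.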

Logically, this little hunk of code is just trying to turn the 'tlb'
from a ranged flush into a full one.  Ideally, just setting
'need_flush_all = 1' would be enough.
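
Something like this, in other words (untested sketch, just to
illustrate the idea):

	if (mm_tlb_flush_nested(tlb->mm)) {
		/*
		 * A concurrent unmap raced with us: escalate to a full,
		 * unconditional flush, but keep the range and
		 * freed_tables metadata that was already gathered.
		 */
		tlb->need_flush_all = 1;
	}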

Is __tlb_reset_range() doing anything useful for other architectures?  I
think it's just destroying perfectly valid metadata on x86. ;)
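
For anyone without the source handy, __tlb_reset_range() in
mm/mmu_gather.c is roughly this (simplified paraphrase, not verbatim):

	static void __tlb_reset_range(struct mmu_gather *tlb)
	{
		if (tlb->fullmm) {
			tlb->start = tlb->end = ~0;
		} else {
			tlb->start = TASK_SIZE;
			tlb->end = 0;
		}
		/* Clearing freed_tables here is what lets x86 skip the
		 * lazy-TLB CPUs on the final flush. */
		tlb->freed_tables = 0;
		tlb->cleared_ptes = 0;
		tlb->cleared_pmds = 0;
		tlb->cleared_puds = 0;
		tlb->cleared_p4ds = 0;
	}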
