Date:   Tue, 19 May 2020 18:26:19 -0700
From:   Andrew Morton <>
To:     Bibo Mao <>
Cc:     Thomas Bogendoerfer <>,
        Jiaxun Yang <>,
        Huacai Chen <>,
        Paul Burton <>,
        Dmitry Korotin <>,
        Philippe Mathieu-Daudé <>,
        Stafford Horne <>,
        Steven Price <>,
        Anshuman Khandual <>,
        Mike Rapoport <>,
        Sergei Shtylyov <>,
        "Maciej W. Rozycki" <>,
        David Hildenbrand <>
Subject: Re: [PATCH v4 2/4] mm/memory.c: Update local TLB if PTE entry

On Tue, 19 May 2020 18:03:28 +0800 Bibo Mao <> wrote:

> If two threads fault concurrently at the same address, the thread that
> wins the race updates the PTE and its local TLB; for now, the losing
> thread simply gives up, does nothing, and continues.
> That second thread may then trigger another fault, and all its handler
> does is update its local TLB. Instead of making it take that second
> fault, update the second thread's local TLB directly.
> This only helps on architectures where software can update the TLB; it
> may have some negative effect where update_mmu_cache() is also used for
> other purposes. Multiple threads seldom access the same page at the
> same time, though, so the negative effect on other architectures is
> limited.
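
[For context: a minimal sketch of the pattern the changelog describes,
not the actual patch. It is modeled loosely on the shape of
mm/memory.c's do_anonymous_page(); the "unlock" label and the vmf/vma
locals are assumed from that function.]

	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
				       &vmf->ptl);
	if (!pte_none(*vmf->pte)) {
		/* Lost the race: another thread installed this PTE first. */
		update_mmu_cache(vma, vmf->address, vmf->pte);	/* proposed */
		goto unlock;	/* before: bail out with no local TLB update */
	}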

I'm still worried about the impact on other architectures.  The
additional update_mmu_cache() calls won't occur only when multiple
threads are racing against the same page, I think?  For example,
insert_pfn() will do this when making a read-only page writable.
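
(Roughly the relevant path, paraphrased from mm/memory.c's insert_pfn();
the mkwrite upgrade of an existing read-only PTE already ends in
update_mmu_cache(), with no second thread involved:)

	if (mkwrite) {
		entry = pte_mkyoung(*pte);
		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
		if (ptep_set_access_flags(vma, addr, pte, entry, 1))
			update_mmu_cache(vma, addr, pte); /* no race here */
	}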

Would you have time to add some instrumentation into update_mmu_cache()
(maybe a tracepoint) and see what effect this change has upon the
frequency at which update_mmu_cache() is called for a selection of
workloads?  And add this info to the changelog to set minds at ease?
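
(One way such instrumentation might look: a hypothetical wrapper, not an
existing kernel interface; the counter and wrapper names are made up for
illustration. The trace_printk() output could then be compared across
workloads with ftrace:)

	#include <linux/atomic.h>
	#include <linux/kernel.h>

	static atomic_long_t mmu_cache_update_count = ATOMIC_LONG_INIT(0);

	static inline void traced_update_mmu_cache(struct vm_area_struct *vma,
						   unsigned long addr,
						   pte_t *ptep)
	{
		/* Count every call and emit it to the trace buffer. */
		atomic_long_inc(&mmu_cache_update_count);
		trace_printk("update_mmu_cache: addr=%lx count=%ld\n", addr,
			     atomic_long_read(&mmu_cache_update_count));
		update_mmu_cache(vma, addr, ptep);
	}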
