Message-ID: <DA49DBBB-FFEE-4ACC-BB6C-364D07533C5E@vmware.com>
Date: Fri, 8 Oct 2021 06:06:34 +0000
From: Nadav Amit <namit@...are.com>
To: David Hildenbrand <david@...hat.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>, Peter Xu <peterx@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Andrew Cooper <andrew.cooper3@...rix.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Nick Piggin <npiggin@...il.com>,
"x86@...nel.org" <x86@...nel.org>
Subject: Re: [PATCH 2/2] mm/mprotect: do not flush on permission promotion
> On Oct 7, 2021, at 10:07 AM, David Hildenbrand <david@...hat.com> wrote:
>
> On 07.10.21 18:16, Nadav Amit wrote:
>>> On Oct 7, 2021, at 5:13 AM, David Hildenbrand <david@...hat.com> wrote:
>>>
>>> On 25.09.21 22:54, Nadav Amit wrote:
>>>> From: Nadav Amit <namit@...are.com>
>>>> Currently, using mprotect() or uffd to unprotect a memory region
>>>> causes a TLB flush. At least on x86, when protection is promoted,
>>>> no TLB flush is needed.
>>>> Add an arch-specific pte_may_need_flush() which tells whether a TLB
>>>> flush is needed based on the old PTE and the new one. Implement an x86
>>>> pte_may_need_flush().
>>>> For x86, PTE protection promotion or changes of software bits do not
>>>> require a flush; also add logic that considers the dirty-bit. Changes to
>>>> the access-bit do not trigger a TLB flush, although architecturally they
>>>> should, as Linux considers the access-bit a hint.
>>>
>>> Is the added LOC worth the benefit? IOW, do we have some benchmark that really benefits from that?
>> So you ask whether the added ~10 LOC (net) are worth the benefit?
>
> I read "3 files changed, 46 insertions(+), 1 deletion(-)" to optimize something without proof, so I naturally have to ask. So this is just a "usually we optimize and show numbers to proof" comment.
These numbers are misleading, as they count comments and other non-code.
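For a sense of the actual complexity, the core check is tiny. Below is a
minimal sketch of the idea (simplified, not the exact code in the patch;
it assumes the usual x86 PTE bit macros from asm/pgtable_types.h):

/*
 * Sketch only: decide whether changing a PTE from oldval to newval
 * may require a TLB flush on x86.
 */
static inline bool pte_flush_needed_sketch(pteval_t oldval, pteval_t newval)
{
	/* A non-present PTE cannot be cached in the TLB. */
	if (!(oldval & _PAGE_PRESENT))
		return false;

	/* Clearing the dirty bit must be made visible, so flush. */
	if ((oldval & _PAGE_DIRTY) && !(newval & _PAGE_DIRTY))
		return true;

	/* Linux treats the access bit as a hint; ignore changes to it. */
	oldval &= ~(pteval_t)_PAGE_ACCESSED;
	newval &= ~(pteval_t)_PAGE_ACCESSED;

	/* No relevant change: no flush. */
	if (oldval == newval)
		return false;

	/*
	 * Pure permission promotion (only _PAGE_RW gained, nothing lost):
	 * the stale TLB entry is strictly more restrictive, so a spurious
	 * fault will lazily refill it. No flush needed.
	 */
	if ((newval & ~oldval) == _PAGE_RW && !(oldval & ~newval))
		return false;

	/* Be conservative about anything else. */
	return true;
}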
[snip]
>
> Any numbers would be helpful.
>
>> If you want, I will write a microbenchmarks and give you numbers.
>> If you look for further optimizations (although you did not indicate
>> so), such as doing the TLB batching from do_mprotect_key(),
>> (i.e. batching across VMAs), we can discuss it and apply it on
>> top of these patches.
>
> I think this patch itself is sufficient if we can show a benefit; I do wonder whether existing benchmarks could already show a benefit. I feel like they should, if this makes a difference. Excessive mprotect() usage (protect<>unprotect) isn't something unusual.
I do not know of a concrete benchmark (other than my workload, which I cannot share right now) that does excessive mprotect() in a way that would be measurable in overall performance. I would argue that many, many kernel optimizations are such that they would not show a measurable benefit by themselves on common macrobenchmarks.
Anyhow, per your request I created a small micro-benchmark that runs mprotect(PROT_READ) and mprotect(PROT_READ|PROT_WRITE) in a loop and measured the time the latter took (where no TLB flush is needed, since protection is promoted). I ran the benchmark on a VM (guest) on top of KVM.
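The gist of the loop is the following (a simplified sketch, not the exact
code I ran; the iteration count and mapping size are arbitrary):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <x86intrin.h>	/* __rdtsc() */

int main(void)
{
	size_t len = 4096;			/* one 4KB page */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	uint64_t start, cycles = 0;
	int i, iters = 100000;

	if (p == MAP_FAILED)
		return 1;
	p[0] = 1;				/* fault the page in */

	for (i = 0; i < iters; i++) {
		mprotect(p, len, PROT_READ);			/* protect */
		start = __rdtsc();
		mprotect(p, len, PROT_READ | PROT_WRITE);	/* promote */
		cycles += __rdtsc() - start;	/* time only the promotion */
	}
	printf("avg cycles per mprotect(PROT_READ|PROT_WRITE): %" PRIu64 "\n",
	       cycles / iters);
	return 0;
}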
The cost (cycles) per mprotect(PROT_READ|PROT_WRITE) operation:
             1 thread    2 threads
             --------    ---------
w/ patch:        2496         2505
w/o patch:       5342        10458
[ The results for 1 thread might seem strange, as one would expect the overhead in this case to be no more than ~250 cycles, which is the time a TLB invalidation of a single PTE takes. Yet, this overhead is probably related to “page fracturing”, which happens when the VM memory is backed by 4KB pages. In such scenarios, a single PTE invalidation in the VM can cause, on Intel, a full TLB flush. The full flush is needed to ensure that if the invalidated address was mapped through a huge page in the VM, any relevant 4KB mapping that is cached in the TLB (after fracturing due to the 4KB GPA->HPA mapping) would be removed. ]
Let me know if you want me to share the micro-benchmark with you. I am not going to mention the results in the commit log, because I think the overhead of unnecessary TLB invalidation is well established.