Message-ID: <e01c732f-08c2-95db-dbb9-b643131b522c@redhat.com>
Date: Fri, 8 Oct 2021 09:35:29 +0200
From: David Hildenbrand <david@...hat.com>
To: Nadav Amit <namit@...are.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>, Peter Xu <peterx@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Andrew Cooper <andrew.cooper3@...rix.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Nick Piggin <npiggin@...il.com>,
"x86@...nel.org" <x86@...nel.org>
Subject: Re: [PATCH 2/2] mm/mprotect: do not flush on permission promotion
>>
>> Any numbers would be helpful.
>>
>>> If you want, I will write a micro-benchmark and give you numbers.
>>> If you look for further optimizations (although you did not indicate
>>> so), such as doing the TLB batching from do_mprotect_key(),
>>> (i.e. batching across VMAs), we can discuss it and apply it on
>>> top of these patches.
>>
>> I think this patch itself is sufficient if we can show a benefit. I do wonder if existing benchmarks could already show a benefit; I feel like they should if this makes a difference. Excessive mprotect() usage (protect<>unprotect) isn't unusual.
>
> I do not know of a concrete benchmark (other than my workload, which I cannot share right now) that does excessive mprotect() in a way that would be measurable in overall performance. I would argue that many optimizations in the kernel would not, by themselves, have a measurable benefit on common macrobenchmarks.
>
> Anyhow, per your request I created a small micro-benchmark that runs mprotect(PROT_READ) and mprotect(PROT_READ|PROT_WRITE) in a loop and measured the time the latter took (where a write-protect is not needed). I ran the benchmark in a VM (guest) on top of KVM.
>
> The cost (cycles) per mprotect(PROT_READ|PROT_WRITE) operation:
>
>              1 thread   2 threads
>              --------   ---------
> w/patch:         2496        2505
> w/o patch:       5342       10458
>
For my taste, the above numbers are sufficient, thanks!
> [ The results for 1 thread might seem strange, as one could expect the overhead in this case to be no more than ~250 cycles, which is the time a TLB invalidation of a single PTE takes. Yet, this overhead is probably related to "page fracturing", which happens when the VM memory is backed by 4KB pages. In such scenarios, a single PTE invalidation in the VM can cause a full TLB flush on Intel. The full flush is needed to ensure that if the invalidated address was mapped through a huge page in the VM, any relevant 4KB mapping that is cached in the TLB (after fracturing due to the 4KB GPA->HPA mapping) is removed. ]
Very nice analysis :)
>
> Let me know if you want me to share the micro-benchmark with you. I am not going to mention the results in the commit log, because I think the overhead of unnecessary TLB invalidation is well established.
Just let me clarify why I am asking at all. It could be that:
a) The optimization is effective and applicable to many workloads
b) The optimization is effective and applicable to some workloads
("micro benchmark")
c) The optimization is ineffective
d) The optimization is wrong
IMHO: We can rule out d) by review and tests. We can rule out c) by
simple benchmarks easily.
Maybe extend the patch description by something like:
"The benefit of this optimization can already be visible when doing
mprotect(PROT_READ) -> mprotect(PROT_READ|PROT_WRITE) on a single
thread, because we end up requiring basically no TLB flushes. The
optimization gets even more significant with more threads. See [1] for
simple micro benchmark results."
Something like that would be good enough for my taste.
--
Thanks,
David / dhildenb