Message-ID: <20220825112334.GA23298@hu-pkondeti-hyd.qualcomm.com>
Date: Thu, 25 Aug 2022 16:53:34 +0530
From: Pavan Kondeti <quic_pkondeti@...cinc.com>
To: Michel Lespinasse <michel@...pinasse.org>
CC: Linux-MM <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
<kernel-team@...com>, Laurent Dufour <ldufour@...ux.ibm.com>,
Jerome Glisse <jglisse@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>,
Davidlohr Bueso <dave@...olabs.net>,
Matthew Wilcox <willy@...radead.org>,
Liam Howlett <liam.howlett@...cle.com>,
Rik van Riel <riel@...riel.com>,
Paul McKenney <paulmck@...nel.org>,
Song Liu <songliubraving@...com>,
Suren Baghdasaryan <surenb@...gle.com>,
Minchan Kim <minchan@...gle.com>,
Joel Fernandes <joelaf@...gle.com>,
David Rientjes <rientjes@...gle.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Andy Lutomirski <luto@...nel.org>
Subject: Re: [PATCH v2 10/35] mm: add per-mm mmap sequence counter for
speculative page fault handling.
On Fri, Jan 28, 2022 at 05:09:41AM -0800, Michel Lespinasse wrote:
> The counter's write side is hooked into the existing mmap locking API:
> mmap_write_lock() increments the counter to the next (odd) value, and
> mmap_write_unlock() increments it again to the next (even) value.
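
Just to confirm my understanding of the write side: ignoring the config
ifdefs and whatever barriers/WRITE_ONCE the real patch needs around the
increments, the hooks reduce to something like the below (my paraphrase,
not the patch itself):

	static inline void mmap_write_lock(struct mm_struct *mm)
	{
		down_write(&mm->mmap_lock);
		/* serialized by mmap_lock; odd value => writer active */
		mm->mmap_seq++;
	}

	static inline void mmap_write_unlock(struct mm_struct *mm)
	{
		/* back to an even value => the writes are done */
		mm->mmap_seq++;
		up_write(&mm->mmap_lock);
	}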
>
> The counter's speculative read side is supposed to be used as follows:
>
> 	seq = mmap_seq_read_start(mm);
> 	if (seq & 1)
> 		goto fail;
> 	.... speculative handling here ....
> 	if (!mmap_seq_read_check(mm, seq))
> 		goto fail;
>
> This API guarantees that, if none of the "fail" tests abort
> speculative execution, the speculative code section did not run
> concurrently with any mmap writer.
>
> This is very similar to a seqlock, but both the writer and speculative
> readers are allowed to block. In the fail case, the speculative reader
> does not spin on the sequence counter; instead it should fall back to
> a different mechanism such as grabbing the mmap lock read side.
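
So a speculative fault would presumably follow the pattern below. The
do_fault_speculative()/do_fault_classic() helpers here are hypothetical,
only mmap_seq_read_start()/mmap_seq_read_check() are from the patch:

	static vm_fault_t do_fault_maybe_speculative(struct mm_struct *mm,
						     struct vm_fault *vmf)
	{
		unsigned long seq;
		vm_fault_t ret;

		seq = mmap_seq_read_start(mm);
		if (seq & 1)
			goto fallback;		/* a writer is in flight */

		ret = do_fault_speculative(vmf);

		if (mmap_seq_read_check(mm, seq))
			return ret;		/* no writer ran concurrently */

	fallback:
		/* no spinning: grab the mmap lock read side instead */
		mmap_read_lock(mm);
		ret = do_fault_classic(vmf);
		mmap_read_unlock(mm);
		return ret;
	}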
>
> Signed-off-by: Michel Lespinasse <michel@...pinasse.org>
> ---
> include/linux/mm_types.h | 4 +++
> include/linux/mmap_lock.h | 58 +++++++++++++++++++++++++++++++++++++--
> 2 files changed, 60 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 0ae3bf854aad..e4965a6f34f2 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -523,6 +523,10 @@ struct mm_struct {
> * cacheline.
> */
> struct rw_semaphore mmap_lock;
> +#ifdef CONFIG_SPECULATIVE_PAGE_FAULT
> + unsigned long mmap_seq;
> +#endif
> +
>
The previous version of these patches [1] maintained this sequence counter
per-vma, which is more granular. Can you please share the rationale behind
the switch to a per-mm counter? I guess it is more maintainable, since the
write side changes are not scattered around but are nicely hooked into the
mmap write lock API.
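For comparison, my mental model of the per-vma scheme in [1] is roughly
the below (a sketch from memory, not Laurent's actual code): the counter
lives in the vma, so a speculative reader only aborts when the vma it is
walking is being modified, not on every writer to the address space,
which presumably explains the extra classic faults in my numbers below.

	struct vm_area_struct {
		/* ... existing fields ... */
		seqcount_t vm_sequence;	/* bumped around changes to this vma */
	};

	static inline void vma_write_begin(struct vm_area_struct *vma)
	{
		write_seqcount_begin(&vma->vm_sequence);
	}

	static inline void vma_write_end(struct vm_area_struct *vma)
	{
		write_seqcount_end(&vma->vm_sequence);
	}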
I have tested ebizzy with both the per-mm and per-vma sequence counters on
x86 QEMU and aarch64 platforms. The results indicate that we take the
classic page fault route about 5% more often with the per-mm sequence
counter, but that did not show up in the end results (the time it takes to
do a fixed number of operations). So I am asking only to understand the
reasoning behind this change.
[1]
https://lore.kernel.org/lkml/1523975611-15978-9-git-send-email-ldufour@linux.vnet.ibm.com/