Message-ID: <20240129203904.7dcugltsjajldlea@revolver>
Date: Mon, 29 Jan 2024 15:39:04 -0500
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: Lokesh Gidra <lokeshgidra@...gle.com>
Cc: akpm@...ux-foundation.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
selinux@...r.kernel.org, surenb@...gle.com, kernel-team@...roid.com,
aarcange@...hat.com, peterx@...hat.com, david@...hat.com,
axelrasmussen@...gle.com, bgeffon@...gle.com, willy@...radead.org,
jannh@...gle.com, kaleshsingh@...gle.com, ngeoffray@...gle.com,
timmurray@...gle.com, rppt@...nel.org
Subject: Re: [PATCH v2 0/3] per-vma locks in userfaultfd
* Lokesh Gidra <lokeshgidra@...gle.com> [240129 14:35]:
> Performing userfaultfd operations (such as copy/move) inside the
> mmap_lock (read-mode) critical section causes significant contention
> on the lock when operations requiring it in write-mode are running
> concurrently. We can use per-vma locks instead to significantly
> reduce this contention.
Is this really an issue? I'm surprised there is enough userfaultfd work
happening to create contention. Can you share some numbers showing how
your patch set changes the performance?
>
> Changes since v1 [1]:
> - rebase patches on 'mm-unstable' branch
>
> [1] https://lore.kernel.org/all/20240126182647.2748949-1-lokeshgidra@google.com/
>
> Lokesh Gidra (3):
> userfaultfd: move userfaultfd_ctx struct to header file
> userfaultfd: protect mmap_changing with rw_sem in userfaulfd_ctx
> userfaultfd: use per-vma locks in userfaultfd operations
>
> fs/userfaultfd.c | 86 ++++---------
> include/linux/userfaultfd_k.h | 75 ++++++++---
> mm/userfaultfd.c | 229 ++++++++++++++++++++++------------
> 3 files changed, 229 insertions(+), 161 deletions(-)
>
> --
> 2.43.0.429.g432eaa2c6b-goog
>
>