Message-ID: <01417038-5456-2b93-2b63-d9ab5b93c95e@nvidia.com>
Date: Wed, 30 Sep 2020 14:15:40 -0700
From: John Hubbard <jhubbard@...dia.com>
To: Jann Horn <jannh@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>, <linux-mm@...ck.org>
CC: <linux-kernel@...r.kernel.org>,
"Eric W . Biederman" <ebiederm@...ssion.com>,
Michel Lespinasse <walken@...gle.com>,
"Mauro Carvalho Chehab" <mchehab@...nel.org>,
Sakari Ailus <sakari.ailus@...ux.intel.com>
Subject: Re: [PATCH 1/4] mm/gup_benchmark: Take the mmap lock around GUP
On 9/29/20 6:19 PM, Jann Horn wrote:
> To be safe against concurrent changes to the VMA tree, we must take the
> mmap lock around GUP operations (excluding the GUP-fast family of
> operations, which will take the mmap lock by themselves if necessary).
>
> This code is only for testing, and it's only reachable by root through
> debugfs, so this doesn't really have any impact; however, if we want to add
> lockdep asserts into the GUP path, we need to have clean locking here.
>
> Signed-off-by: Jann Horn <jannh@...gle.com>
> ---
> This series should go on top of the coredump locking series (in
> particular "mm/gup: Take mmap_lock in get_dump_page()"), which is
> already in the mm tree.
Looks good. This also merges properly and still has the right semantics,
even after my gup_benchmark overhaul [1]. (Just have to change
mm/gup_benchmark.c to mm/gup_test.c).
In other words, the new needs_mmap_lock rule here still does the right
thing even after adding the new DUMP_USER_PAGES_TEST (a non-fast variant).
Reviewed-by: John Hubbard <jhubbard@...dia.com>
[1] https://lore.kernel.org/r/20200929212747.251804-1-jhubbard@nvidia.com
thanks,
--
John Hubbard
NVIDIA
>
> mm/gup_benchmark.c | 15 ++++++++++++---
> 1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/mm/gup_benchmark.c b/mm/gup_benchmark.c
> index be690fa66a46..558595610650 100644
> --- a/mm/gup_benchmark.c
> +++ b/mm/gup_benchmark.c
> @@ -71,6 +71,8 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
>  	int nr;
>  	struct page **pages;
>  	int ret = 0;
> +	bool needs_mmap_lock =
> +		cmd != GUP_FAST_BENCHMARK && cmd != PIN_FAST_BENCHMARK;
>  
>  	if (gup->size > ULONG_MAX)
>  		return -EINVAL;
> @@ -80,6 +82,11 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
>  	if (!pages)
>  		return -ENOMEM;
>  
> +	if (needs_mmap_lock && mmap_read_lock_killable(current->mm)) {
> +		ret = -EINTR;
> +		goto free_pages;
> +	}
> +
>  	i = 0;
>  	nr = gup->nr_pages_per_call;
>  	start_time = ktime_get();
> @@ -119,9 +126,8 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
>  					    NULL);
>  		break;
>  	default:
> -		kvfree(pages);
>  		ret = -EINVAL;
> -		goto out;
> +		goto unlock;
>  	}
>  
>  	if (nr <= 0)
> @@ -149,8 +155,11 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
>  	end_time = ktime_get();
>  	gup->put_delta_usec = ktime_us_delta(end_time, start_time);
>  
> +unlock:
> +	if (needs_mmap_lock)
> +		mmap_read_unlock(current->mm);
> +free_pages:
>  	kvfree(pages);
> -out:
>  	return ret;
>  }
>
>
> base-commit: fb0155a09b0224a7147cb07a4ce6034c8d29667f
> prerequisite-patch-id: 08f97130a51898a5f6efddeeb5b42638577398c7
> prerequisite-patch-id: 577664d761cd23fe9031ffdb1d3c9ac313572c67
> prerequisite-patch-id: dc29a39716aa8689f80ba2767803d9df3709beaa
> prerequisite-patch-id: 42b1b546d33391ead2753621f541bcc408af1769
> prerequisite-patch-id: 2cbb839f57006f32e21f4229e099ae1bd782be24
> prerequisite-patch-id: 1b4daf01cf61654a5ec54b5c3f7c7508be7244ee
> prerequisite-patch-id: f46cc8c99f1909fe2a65fbc3cf1f6bc57489a086
>