Message-ID: <20200203135845.ymfbghs7rf67awex@box>
Date: Mon, 3 Feb 2020 16:58:45 +0300
From: "Kirill A. Shutemov" <kirill@...temov.name>
To: John Hubbard <jhubbard@...dia.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Al Viro <viro@...iv.linux.org.uk>,
Christoph Hellwig <hch@...radead.org>,
Dan Williams <dan.j.williams@...el.com>,
Dave Chinner <david@...morbit.com>,
Ira Weiny <ira.weiny@...el.com>, Jan Kara <jack@...e.cz>,
Jason Gunthorpe <jgg@...pe.ca>,
Jonathan Corbet <corbet@....net>,
Jérôme Glisse <jglisse@...hat.com>,
Michal Hocko <mhocko@...e.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Shuah Khan <shuah@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
Matthew Wilcox <willy@...radead.org>,
linux-doc@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kselftest@...r.kernel.org, linux-rdma@...r.kernel.org,
linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 11/12] mm/gup_benchmark: support pin_user_pages() and
related calls
On Fri, Jan 31, 2020 at 07:40:28PM -0800, John Hubbard wrote:
> Up until now, gup_benchmark supported testing of the
> following kernel functions:
>
> * get_user_pages(): via the '-U' command line option
> * get_user_pages_longterm(): via the '-L' command line option
> * get_user_pages_fast(): as the default (no options required)
>
> Add test coverage for the new corresponding pin_*() functions:
>
> * pin_user_pages_fast(): via the '-a' command line option
> * pin_user_pages(): via the '-b' command line option
>
> Also, add an option for clarity: '-u' for what is now (still) the
> default choice: get_user_pages_fast().
>
> Also, for the commands that set FOLL_PIN, verify that the pages
> really are dma-pinned, via the new page_maybe_dma_pinned() routine.
> Those commands are:
>
> PIN_FAST_BENCHMARK : calls pin_user_pages_fast()
> PIN_BENCHMARK : calls pin_user_pages()
>
> In between the calls to pin_*() and unpin_user_pages(),
> check each page: if page_maybe_dma_pinned() returns false, then
> WARN and return.
>
> Do this outside of the benchmark timestamps, so that it doesn't
> affect reported times.
>
> Reviewed-by: Ira Weiny <ira.weiny@...el.com>
> Signed-off-by: John Hubbard <jhubbard@...dia.com>
> ---
> mm/gup_benchmark.c | 71 ++++++++++++++++++++--
> tools/testing/selftests/vm/gup_benchmark.c | 15 ++++-
> 2 files changed, 80 insertions(+), 6 deletions(-)
>
> diff --git a/mm/gup_benchmark.c b/mm/gup_benchmark.c
> index 8dba38e79a9f..447628d0131f 100644
> --- a/mm/gup_benchmark.c
> +++ b/mm/gup_benchmark.c
> @@ -8,6 +8,8 @@
> #define GUP_FAST_BENCHMARK _IOWR('g', 1, struct gup_benchmark)
> #define GUP_LONGTERM_BENCHMARK _IOWR('g', 2, struct gup_benchmark)
> #define GUP_BENCHMARK _IOWR('g', 3, struct gup_benchmark)
> +#define PIN_FAST_BENCHMARK _IOWR('g', 4, struct gup_benchmark)
> +#define PIN_BENCHMARK _IOWR('g', 5, struct gup_benchmark)
>
> struct gup_benchmark {
> __u64 get_delta_usec;
> @@ -19,6 +21,48 @@ struct gup_benchmark {
> __u64 expansion[10]; /* For future use */
> };
>
> +static void put_back_pages(unsigned int cmd, struct page **pages,
> + unsigned long nr_pages)
> +{
> + int i;
> +
> + switch (cmd) {
> + case GUP_FAST_BENCHMARK:
> + case GUP_LONGTERM_BENCHMARK:
> + case GUP_BENCHMARK:
> + for (i = 0; i < nr_pages; i++)
'i' is 'int' and 'nr_pages' is 'unsigned long'.
There's space for trouble :P
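
Maybe just make the counter an unsigned long so it matches nr_pages?
Something along these lines (untested, just to illustrate):

	static void put_back_pages(unsigned int cmd, struct page **pages,
				   unsigned long nr_pages)
	{
		unsigned long i;	/* same type as nr_pages */

		switch (cmd) {
		case GUP_FAST_BENCHMARK:
		case GUP_LONGTERM_BENCHMARK:
		case GUP_BENCHMARK:
			for (i = 0; i < nr_pages; i++)
				put_page(pages[i]);
			break;

		case PIN_FAST_BENCHMARK:
		case PIN_BENCHMARK:
			unpin_user_pages(pages, nr_pages);
			break;
		}
	}

Same in verify_dma_pinned() below, where the WARN() format would then
want %lu rather than %d.
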
> + put_page(pages[i]);
> + break;
> +
> + case PIN_FAST_BENCHMARK:
> + case PIN_BENCHMARK:
> + unpin_user_pages(pages, nr_pages);
> + break;
> + }
> +}
> +
> +static void verify_dma_pinned(unsigned int cmd, struct page **pages,
> + unsigned long nr_pages)
> +{
> + int i;
> + struct page *page;
> +
> + switch (cmd) {
> + case PIN_FAST_BENCHMARK:
> + case PIN_BENCHMARK:
> + for (i = 0; i < nr_pages; i++) {
Ditto.
> + page = pages[i];
> + if (WARN(!page_maybe_dma_pinned(page),
> + "pages[%d] is NOT dma-pinned\n", i)) {
> +
> + dump_page(page, "gup_benchmark failure");
> + break;
> + }
> + }
> + break;
> + }
> +}
> +
> static int __gup_benchmark_ioctl(unsigned int cmd,
> struct gup_benchmark *gup)
> {
--
Kirill A. Shutemov