Message-ID: <20181012073521.GJ8537@350D>
Date: Fri, 12 Oct 2018 18:35:22 +1100
From: Balbir Singh <bsingharora@...il.com>
To: john.hubbard@...il.com
Cc: Matthew Wilcox <willy@...radead.org>,
Michal Hocko <mhocko@...nel.org>,
Christopher Lameter <cl@...ux.com>,
Jason Gunthorpe <jgg@...pe.ca>,
Dan Williams <dan.j.williams@...el.com>,
Jan Kara <jack@...e.cz>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-rdma <linux-rdma@...r.kernel.org>,
linux-fsdevel@...r.kernel.org, John Hubbard <jhubbard@...dia.com>,
Al Viro <viro@...iv.linux.org.uk>,
Jerome Glisse <jglisse@...hat.com>,
Christoph Hellwig <hch@...radead.org>,
Ralph Campbell <rcampbell@...dia.com>
Subject: Re: [PATCH 2/6] mm: introduce put_user_page*(), placeholder versions
On Thu, Oct 11, 2018 at 11:00:10PM -0700, john.hubbard@...il.com wrote:
> From: John Hubbard <jhubbard@...dia.com>
>
> Introduces put_user_page(), which simply calls put_page().
> This provides a way to update all get_user_pages*() callers,
> so that they call put_user_page(), instead of put_page().
>
> Also introduces put_user_pages(), and a few dirty/locked variations,
> as a replacement for release_pages(), and also as a replacement
> for open-coded loops that release multiple pages.
> These may be used for subsequent performance improvements,
> via batching of pages to be released.
>
> This is the first step of fixing the problem described in [1]. The steps
> are:
>
> 1) (This patch): provide put_user_page*() routines, intended to be used
> for releasing pages that were pinned via get_user_pages*().
>
> 2) Convert all of the call sites for get_user_pages*(), to
> invoke put_user_page*(), instead of put_page(). This involves dozens of
> call sites, and will take some time.
>
> 3) After (2) is complete, use get_user_pages*() and put_user_page*() to
> implement tracking of these pages. This tracking will be separate from
> the existing struct page refcounting.
>
> 4) Use the tracking and identification of these pages, to implement
> special handling (especially in writeback paths) when the pages are
> backed by a filesystem. Again, [1] provides details as to why that is
> desirable.
>
> [1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"
>
> CC: Matthew Wilcox <willy@...radead.org>
> CC: Michal Hocko <mhocko@...nel.org>
> CC: Christopher Lameter <cl@...ux.com>
> CC: Jason Gunthorpe <jgg@...pe.ca>
> CC: Dan Williams <dan.j.williams@...el.com>
> CC: Jan Kara <jack@...e.cz>
> CC: Al Viro <viro@...iv.linux.org.uk>
> CC: Jerome Glisse <jglisse@...hat.com>
> CC: Christoph Hellwig <hch@...radead.org>
> CC: Ralph Campbell <rcampbell@...dia.com>
>
> Reviewed-by: Jan Kara <jack@...e.cz>
> Signed-off-by: John Hubbard <jhubbard@...dia.com>
> ---
> include/linux/mm.h | 20 +++++++++++
> mm/swap.c | 83 ++++++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 103 insertions(+)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 0416a7204be3..76d18aada9f8 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -943,6 +943,26 @@ static inline void put_page(struct page *page)
> __put_page(page);
> }
>
> +/*
> + * put_user_page() - release a page that had previously been acquired via
> + * a call to one of the get_user_pages*() functions.
> + *
> + * Pages that were pinned via get_user_pages*() must be released via
> + * either put_user_page(), or one of the put_user_pages*() routines
> + * below. This is so that eventually, pages that are pinned via
> + * get_user_pages*() can be separately tracked and uniquely handled. In
> + * particular, interactions with RDMA and filesystems need special
> + * handling.
> + */
> +static inline void put_user_page(struct page *page)
> +{
> + put_page(page);
> +}
> +
> +void put_user_pages_dirty(struct page **pages, unsigned long npages);
> +void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
> +void put_user_pages(struct page **pages, unsigned long npages);
> +
> #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
> #define SECTION_IN_PAGE_FLAGS
> #endif
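Just to confirm my understanding of the step-2 conversions: an existing caller
would go from put_page()/release_pages() to the new helper roughly like the
untested sketch below (do_something_with() is a made-up placeholder, not from
this series; get_user_pages_fast() with write=1 is assumed):

	struct page *pages[NR];
	int i, n;

	n = get_user_pages_fast(start, NR, 1, pages);
	if (n <= 0)
		return n;

	for (i = 0; i < n; i++)
		do_something_with(pages[i]);

	/* old: for (i = 0; i < n; i++) put_page(pages[i]); */
	put_user_pages_dirty_lock(pages, n);

If that is the intent, the naming reads fine to me.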
> diff --git a/mm/swap.c b/mm/swap.c
> index 26fc9b5f1b6c..efab3a6b6f91 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -134,6 +134,89 @@ void put_pages_list(struct list_head *pages)
> }
> EXPORT_SYMBOL(put_pages_list);
>
> +/*
> + * put_user_pages_dirty() - for each page in the @pages array, make
> + * that page (or its head page, if a compound page) dirty, if it was
> + * previously listed as clean. Then, release the page using
> + * put_user_page().
> + *
> + * Please see the put_user_page() documentation for details.
> + *
> + * set_page_dirty(), which does not lock the page, is used here.
> + * Therefore, it is the caller's responsibility to ensure that this is
> + * safe. If not, then put_user_pages_dirty_lock() should be called instead.
> + *
> + * @pages: array of pages to be marked dirty and released.
> + * @npages: number of pages in the @pages array.
> + *
> + */
> +void put_user_pages_dirty(struct page **pages, unsigned long npages)
> +{
> + unsigned long index;
> +
> + for (index = 0; index < npages; index++) {
Do we need any sanity checks on npages, e.g. npages <= (1UL << (PUD_SHIFT - PAGE_SHIFT))?
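(Purely illustrative -- if a bound makes sense here at all, it could be
something like:

	if (WARN_ON_ONCE(npages > (1UL << (PUD_SHIFT - PAGE_SHIFT))))
		return;

at the top of the function.)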
> + struct page *page = compound_head(pages[index]);
> +
> + if (!PageDirty(page))
> + set_page_dirty(page);
> +
> + put_user_page(page);
> + }
> +}
> +EXPORT_SYMBOL(put_user_pages_dirty);
> +
> +/*
> + * put_user_pages_dirty_lock() - for each page in the @pages array, make
> + * that page (or its head page, if a compound page) dirty, if it was
> + * previously listed as clean. Then, release the page using
> + * put_user_page().
> + *
> + * Please see the put_user_page() documentation for details.
> + *
> + * This is just like put_user_pages_dirty(), except that it invokes
> + * set_page_dirty_lock(), instead of set_page_dirty().
> + *
> + * @pages: array of pages to be marked dirty and released.
> + * @npages: number of pages in the @pages array.
> + *
> + */
> +void put_user_pages_dirty_lock(struct page **pages, unsigned long npages)
> +{
> + unsigned long index;
> +
> + for (index = 0; index < npages; index++) {
> + struct page *page = compound_head(pages[index]);
> +
> + if (!PageDirty(page))
> + set_page_dirty_lock(page);
> +
> + put_user_page(page);
> + }
> +}
> +EXPORT_SYMBOL(put_user_pages_dirty_lock);
> +
This can be collapsed with put_user_pages_dirty(): a function-pointer indirection
could select set_page_dirty() vs. set_page_dirty_lock() for the unlocked/locked
cases, though I'm not sure how that indirection affects inlining/optimization.
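Something like the below rough, untested sketch (__put_user_pages_dirty and
set_dirty_func_t are made-up names, not part of this patch):

	typedef int (*set_dirty_func_t)(struct page *page);

	static void __put_user_pages_dirty(struct page **pages,
					   unsigned long npages,
					   set_dirty_func_t sdf)
	{
		unsigned long index;

		for (index = 0; index < npages; index++) {
			struct page *page = compound_head(pages[index]);

			/* only dirty pages that are not already dirty */
			if (!PageDirty(page))
				sdf(page);

			put_user_page(page);
		}
	}

	void put_user_pages_dirty(struct page **pages, unsigned long npages)
	{
		__put_user_pages_dirty(pages, npages, set_page_dirty);
	}

	void put_user_pages_dirty_lock(struct page **pages, unsigned long npages)
	{
		__put_user_pages_dirty(pages, npages, set_page_dirty_lock);
	}

That keeps a single copy of the loop.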
> +/*
> + * put_user_pages() - for each page in the @pages array, release the page
> + * using put_user_page().
> + *
> + * Please see the put_user_page() documentation for details.
> + *
> + * This is just like put_user_pages_dirty(), except that it invokes
> + * set_page_dirty_lock(), instead of set_page_dirty().
The comment is incorrect -- it looks copied from the dirty variants, but
put_user_pages() does not mark anything dirty (the @pages description should
drop "marked dirty" as well).
> + *
> + * @pages: array of pages to be marked dirty and released.
> + * @npages: number of pages in the @pages array.
> + *
> + */
> +void put_user_pages(struct page **pages, unsigned long npages)
> +{
> + unsigned long index;
> +
> + for (index = 0; index < npages; index++)
> + put_user_page(pages[index]);
> +}
Ditto in terms of code duplication. How about something like:

	for_each_page_index(index, npages) {
		/* do the dirty bits if needed */
		put_user_page(pages[index]);
	}

and then pass in what you want the page iterator to do?
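For concreteness, the iterator could be as simple as (illustrative only,
for_each_page_index() is not an existing helper):

	#define for_each_page_index(index, npages) \
		for ((index) = 0; (index) < (npages); (index)++)

	void put_user_pages(struct page **pages, unsigned long npages)
	{
		unsigned long index;

		for_each_page_index(index, npages)
			put_user_page(pages[index]);
	}

though admittedly that only hides the for-loop; the real saving is in sharing
the dirty/release body as above.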
> +EXPORT_SYMBOL(put_user_pages);
> +
> /*
> * get_kernel_pages() - pin kernel pages in memory
> * @kiov: An array of struct kvec structures
> --
> 2.19.1
>
Balbir Singh.