Date:   Tue, 19 Mar 2019 09:47:24 -0400
From:   Jerome Glisse <jglisse@...hat.com>
To:     "Kirill A. Shutemov" <kirill@...temov.name>
Cc:     john.hubbard@...il.com, Andrew Morton <akpm@...ux-foundation.org>,
        linux-mm@...ck.org, Al Viro <viro@...iv.linux.org.uk>,
        Christian Benvenuti <benve@...co.com>,
        Christoph Hellwig <hch@...radead.org>,
        Christopher Lameter <cl@...ux.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Dave Chinner <david@...morbit.com>,
        Dennis Dalessandro <dennis.dalessandro@...el.com>,
        Doug Ledford <dledford@...hat.com>,
        Ira Weiny <ira.weiny@...el.com>, Jan Kara <jack@...e.cz>,
        Jason Gunthorpe <jgg@...pe.ca>,
        Matthew Wilcox <willy@...radead.org>,
        Michal Hocko <mhocko@...nel.org>,
        Mike Rapoport <rppt@...ux.ibm.com>,
        Mike Marciniszyn <mike.marciniszyn@...el.com>,
        Ralph Campbell <rcampbell@...dia.com>,
        Tom Talpey <tom@...pey.com>,
        LKML <linux-kernel@...r.kernel.org>,
        linux-fsdevel@...r.kernel.org, John Hubbard <jhubbard@...dia.com>
Subject: Re: [PATCH v4 1/1] mm: introduce put_user_page*(), placeholder
 versions

On Tue, Mar 19, 2019 at 03:04:17PM +0300, Kirill A. Shutemov wrote:
> On Fri, Mar 08, 2019 at 01:36:33PM -0800, john.hubbard@...il.com wrote:
> > From: John Hubbard <jhubbard@...dia.com>

[...]

> > diff --git a/mm/gup.c b/mm/gup.c
> > index f84e22685aaa..37085b8163b1 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -28,6 +28,88 @@ struct follow_page_context {
> >  	unsigned int page_mask;
> >  };
> >  
> > +typedef int (*set_dirty_func_t)(struct page *page);
> > +
> > +static void __put_user_pages_dirty(struct page **pages,
> > +				   unsigned long npages,
> > +				   set_dirty_func_t sdf)
> > +{
> > +	unsigned long index;
> > +
> > +	for (index = 0; index < npages; index++) {
> > +		struct page *page = compound_head(pages[index]);
> > +
> > +		if (!PageDirty(page))
> > +			sdf(page);
> 
> How is this safe? What prevents the page from being cleared under you?
> 
> If it's safe to race with clear_page_dirty*(), it has to be stated
> explicitly, with a reason why. It's not very clear to me as it is.

The PageDirty() optimization above is fine to race with clearing of the
page flag: if it races, it is racing right after a page_mkclean() and the
GUP user is done with the page, so the page is about to be written back.
In other words, if (!PageDirty(page)) sees the page as dirty and skips the
sdf() call, and when TestClearPageDirty() happens a split second later, it
means the racing clear is about to write the page back, so all is fine
(the page was dirty and it is being cleaned for write back).

If it does call sdf() while racing with write back, then we just re-dirtied
the page, just like clear_page_dirty_for_io() would do if page_mkclean()
failed, so nothing harmful comes of that either. The page stays dirty
despite the write back; it just means the page might be written back twice
in a row.
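
To spell the two interleavings out (purely illustrative, not something
from the patch itself):

    CPU A: __put_user_pages_dirty()     CPU B: write back
    if (!PageDirty(page))
        sees Dirty, skips sdf()
                                        clear_page_dirty_for_io()
                                        -> page gets written back, the
                                           data GUP dirtied is not lost

    CPU A: __put_user_pages_dirty()     CPU B: write back
                                        clear_page_dirty_for_io()
    if (!PageDirty(page))
        sees Clean, calls sdf()
        -> page is dirty again, at
           worst it is written back
           a second time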

> > +
> > +		put_user_page(page);
> > +	}
> > +}
> > +
> > +/**
> > + * put_user_pages_dirty() - release and dirty an array of gup-pinned pages
> > + * @pages:  array of pages to be marked dirty and released.
> > + * @npages: number of pages in the @pages array.
> > + *
> > + * "gup-pinned page" refers to a page that has had one of the get_user_pages()
> > + * variants called on that page.
> > + *
> > + * For each page in the @pages array, make that page (or its head page, if a
> > + * compound page) dirty, if it was previously listed as clean. Then, release
> > + * the page using put_user_page().
> > + *
> > + * Please see the put_user_page() documentation for details.
> > + *
> > + * set_page_dirty(), which does not lock the page, is used here.
> > + * Therefore, it is the caller's responsibility to ensure that this is
> > + * safe. If not, then put_user_pages_dirty_lock() should be called instead.
> > + *
> > + */
> > +void put_user_pages_dirty(struct page **pages, unsigned long npages)
> > +{
> > +	__put_user_pages_dirty(pages, npages, set_page_dirty);
> 
> Have you checked if the compiler is clever enough to eliminate the indirect
> function call here? Maybe it's better to go with an open-coded approach and
> get rid of the callbacks?
> 

Good point, dunno if John checked that.
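
For reference, an open-coded variant along the lines Kirill suggests could
look something like the below (just a sketch, the bool parameter is made up
here and not something from the patch); with a constant argument at each
call site the branch and the indirect call can go away once it is inlined:

static void __put_user_pages_dirty(struct page **pages,
                                   unsigned long npages,
                                   bool lock)
{
        unsigned long index;

        for (index = 0; index < npages; index++) {
                struct page *page = compound_head(pages[index]);

                /* 'lock' is constant at both call sites, should fold away */
                if (!PageDirty(page)) {
                        if (lock)
                                set_page_dirty_lock(page);
                        else
                                set_page_dirty(page);
                }

                put_user_page(page);
        }
}

put_user_pages_dirty() would then pass false and put_user_pages_dirty_lock()
true, instead of passing the set_page_dirty*() function pointers.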

> 
> > +}
> > +EXPORT_SYMBOL(put_user_pages_dirty);
> > +
> > +/**
> > + * put_user_pages_dirty_lock() - release and dirty an array of gup-pinned pages
> > + * @pages:  array of pages to be marked dirty and released.
> > + * @npages: number of pages in the @pages array.
> > + *
> > + * For each page in the @pages array, make that page (or its head page, if a
> > + * compound page) dirty, if it was previously listed as clean. Then, release
> > + * the page using put_user_page().
> > + *
> > + * Please see the put_user_page() documentation for details.
> > + *
> > + * This is just like put_user_pages_dirty(), except that it invokes
> > + * set_page_dirty_lock(), instead of set_page_dirty().
> > + *
> > + */
> > +void put_user_pages_dirty_lock(struct page **pages, unsigned long npages)
> > +{
> > +	__put_user_pages_dirty(pages, npages, set_page_dirty_lock);
> > +}
> > +EXPORT_SYMBOL(put_user_pages_dirty_lock);
> > +
> > +/**
> > + * put_user_pages() - release an array of gup-pinned pages.
> > + * @pages:  array of pages to be marked dirty and released.
> > + * @npages: number of pages in the @pages array.
> > + *
> > + * For each page in the @pages array, release the page using put_user_page().
> > + *
> > + * Please see the put_user_page() documentation for details.
> > + */
> > +void put_user_pages(struct page **pages, unsigned long npages)
> > +{
> > +	unsigned long index;
> > +
> > +	for (index = 0; index < npages; index++)
> > +		put_user_page(pages[index]);
> 
> I believe there's room for improvement for compound pages.
> 
> If there are multiple consecutive pages in the array that belong to the
> same compound page, we can get away with a single atomic operation to
> handle them all.

Yes, maybe just add a comment about that for now and leave this kind of
optimization for later?
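
For the record, the batching Kirill describes could look roughly like the
below (purely illustrative, not part of the patch; it assumes put_user_page()
stays a plain put_page() as in this placeholder version):

void put_user_pages(struct page **pages, unsigned long npages)
{
        unsigned long index = 0;

        while (index < npages) {
                struct page *head = compound_head(pages[index]);
                unsigned int refs = 1;

                /* count consecutive entries sharing the same head page */
                while (index + refs < npages &&
                       compound_head(pages[index + refs]) == head)
                        refs++;

                /* one atomic sub instead of 'refs' separate decrements */
                if (page_ref_sub_and_test(head, refs))
                        __put_page(head);

                index += refs;
        }
}

For a THP covered by a long run of array entries this would collapse many
atomic decrements on the head page into a single one.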
