Date:   Thu, 24 Sep 2020 12:07:12 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     peterz@...radead.org
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Hugh Dickins <hughd@...gle.com>,
        Daisuke Nishimura <nishimura@....nes.nec.co.jp>,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org, npiggin@...il.com
Subject: Re: [PATCH] page_alloc: Fix freeing non-compound pages

On Thu, Sep 24, 2020 at 11:00:02AM +0200, peterz@...radead.org wrote:
> On Tue, Sep 22, 2020 at 03:00:17PM +0100, Matthew Wilcox (Oracle) wrote:
> > Here is a very rare race which leaks memory:
> > 
> > Page P0 is allocated to the page cache.
> > Page P1 is free.
> > 
> > Thread A		Thread B		Thread C
> > find_get_entry():
> > xas_load() returns P0
> > 						Removes P0 from page cache
> > 						Frees P0
> > 						P0 merged with its buddy P1
> > 			alloc_pages(GFP_KERNEL, 1) returns P0
> > 			P0 has refcount 1
> > page_cache_get_speculative(P0)
> > P0 has refcount 2
> > 			__free_pages(P0)
> > 			P0 has refcount 1
> > put_page(P0)
> > P1 is not freed
> > 
> > Fix this by freeing all the pages in __free_pages() that won't be freed
> > by the call to put_page().  It's usually not a good idea to split a page,
> > but this is a very unlikely scenario.
> > 
> > Fixes: e286781d5f2e ("mm: speculative page references")
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> > ---
> >  mm/page_alloc.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index fab5e97dc9ca..5db74797db39 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4943,10 +4943,19 @@ static inline void free_the_page(struct page *page, unsigned int order)
> >  		__free_pages_ok(page, order);
> >  }
> >  
> > +/*
> > + * If we free a non-compound allocation, another thread may have a
> > + * speculative reference to the first page.  It has no way of knowing
> > + * about the rest of the allocation, so we have to free all but the
> > + * first page here.
> > + */
> >  void __free_pages(struct page *page, unsigned int order)
> >  {
> >  	if (put_page_testzero(page))
> >  		free_the_page(page, order);
> > +	else if (!PageHead(page))
> > +		while (order-- > 0)
> > +			free_the_page(page + (1 << order), order);
> >  }
> >  EXPORT_SYMBOL(__free_pages);
> 
> So the obvious question I have here is why not teach put_page() to free
> the whole thing?

That's more complicated.  It looks like this:

    Fix this by converting P0 into a compound page if it is not freed by
    __free_pages().
    
    Fixes: e286781d5f2e ("mm: speculative page references")
    Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fab5e97dc9ca..3e9f6e6694e7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4943,10 +4943,25 @@ static inline void free_the_page(struct page *page, unsigned int order)
                __free_pages_ok(page, order);
 }
 
+/*
+ * Have to be careful when freeing a non-compound allocation in case somebody
+ * else takes a temporary reference on the first page and then calls put_page()
+ */
 void __free_pages(struct page *page, unsigned int order)
 {
-       if (put_page_testzero(page))
-               free_the_page(page, order);
+       if (likely(page_ref_freeze(page, 1)))
+               goto free;
+       if (likely(order == 0 || PageHead(page))) {
+               if (put_page_testzero(page))
+                       goto free;
+               return;
+       }
+
+       prep_compound_page(page, order);
+       put_page(page);
+       return;
+free:
+       free_the_page(page, order);
 }
 EXPORT_SYMBOL(__free_pages);
