Message-ID: <CADJK47M=4kU9SabcDsFD5qTQm-0rQdmage8eiFrV=LDMp7OCyQ@mail.gmail.com>
Date:   Fri, 23 Aug 2019 04:10:10 -0400
From:   Henry Burns <henrywolfeburns@...il.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     Sergey Senozhatsky <sergey.senozhatsky.work@...il.com>,
        Henry Burns <henryburns@...gle.com>,
        Minchan Kim <minchan@...nel.org>,
        Nitin Gupta <ngupta@...are.org>,
        Shakeel Butt <shakeelb@...gle.com>,
        Jonathan Adams <jwadams@...gle.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2 v2] mm/zsmalloc.c: Fix race condition in zs_destroy_pool

On Thu, Aug 22, 2019 at 7:23 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> On Tue, 20 Aug 2019 11:59:39 +0900 Sergey Senozhatsky <sergey.senozhatsky.work@...il.com> wrote:
>
> > On (08/09/19 11:17), Henry Burns wrote:
> > > In zs_destroy_pool() we call flush_work(&pool->free_work). However, we
> > > have no guarantee that migration isn't happening in the background
> > > at that time.
> > >
> > > Since migration can't directly free pages, it relies on free_work
> > > being scheduled to free the pages.  But there's nothing preventing an
> > > in-progress migration from queuing the work *after*
> > > zs_unregister_migration() has called flush_work(), which would leave
> > > pages still pointing at the inode when we free it.
> > >
> > > Since we know at destroy time all objects should be free, no new
> > > migrations can come in (since zs_page_isolate() fails for fully-free
> > > zspages).  This means it is sufficient to track a "# isolated zspages"
> > > count by class, and have the destroy logic ensure all such pages have
> > > drained before proceeding.  Keeping that state under the class
> > > spinlock keeps the logic straightforward.
> > >
> > > Fixes: 48b4800a1c6a ("zsmalloc: page migration support")
> > > Signed-off-by: Henry Burns <henryburns@...gle.com>
> >
> > Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@...il.com>
> >
>
> Thanks.  So we have a couple of races which result in memory leaks?  Do
> we feel this is serious enough to justify a -stable backport of the
> fixes?

In this case a memory leak could lead to an eventual crash if
compaction hits the leaked page. I don't know what a -stable
backport entails, but this crash would only occur if people change
their zswap backend at runtime (which is what eventually triggers
pool destruction).
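
To illustrate the drain-wait idea, here is a minimal userspace sketch:
a pthread-based analogue with invented names (inc_isolated(),
dec_isolated(), destroy_pool()), not the actual zsmalloc code.
Migration paths hold an "isolated" count while they work on a page,
and destruction waits under the lock until that count drains to zero:

/*
 * Userspace analogue only: the mutex stands in for the class spinlock,
 * and the condition variable stands in for the wait that destruction
 * performs until no isolated zspages remain.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t drained = PTHREAD_COND_INITIALIZER;
static int isolated;    /* "# isolated zspages" */
static int destroying;  /* set once destruction begins */

static int inc_isolated(void)
{
	int ok = 0;

	pthread_mutex_lock(&lock);
	if (!destroying) {    /* no new isolations once destroy starts */
		isolated++;
		ok = 1;
	}
	pthread_mutex_unlock(&lock);
	return ok;
}

static void dec_isolated(void)
{
	pthread_mutex_lock(&lock);
	if (--isolated == 0)
		pthread_cond_broadcast(&drained);  /* wake the destroyer */
	pthread_mutex_unlock(&lock);
}

static void *migrate(void *arg)
{
	if (inc_isolated()) {
		usleep(1000);     /* pretend to move the page */
		dec_isolated();   /* where free_work would be queued */
	}
	return NULL;
}

static void destroy_pool(void)
{
	pthread_mutex_lock(&lock);
	destroying = 1;
	while (isolated > 0)      /* wait for in-flight migrations */
		pthread_cond_wait(&drained, &lock);
	pthread_mutex_unlock(&lock);
	/* only now is it safe to flush free_work and free the inode */
}

int main(void)
{
	pthread_t t[4];

	for (int i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, migrate, NULL);
	destroy_pool();
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	puts("pool destroyed with no isolated pages in flight");
	return 0;
}

Built with "cc -pthread", the destroyer blocks until every in-flight
simulated migration has dropped its isolation count, analogous to the
per-class count kept under the class spinlock described above.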
