Message-ID: <CALvZod57CZ20SG0eYu95=PDqJ+adoiUErdgAmhc_+qxDo68GoA@mail.gmail.com>
Date:   Wed, 3 Jul 2019 13:14:34 -0700
From:   Shakeel Butt <shakeelb@...gle.com>
To:     Vitaly Wool <vitalywool@...il.com>
Cc:     Henry Burns <henryburns@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vitaly Vul <vitaly.vul@...y.com>,
        Mike Rapoport <rppt@...ux.vnet.ibm.com>,
        Xidong Wang <wangxidong_97@....com>,
        Jonathan Adams <jwadams@...gle.com>,
        Linux-MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm/z3fold: Fix z3fold_buddy_slots use after free

On Tue, Jul 2, 2019 at 11:03 PM Vitaly Wool <vitalywool@...il.com> wrote:
>
> On Tue, Jul 2, 2019 at 6:57 PM Henry Burns <henryburns@...gle.com> wrote:
> >
> > On Tue, Jul 2, 2019 at 12:45 AM Vitaly Wool <vitalywool@...il.com> wrote:
> > >
> > > Hi Henry,
> > >
> > > On Mon, Jul 1, 2019 at 8:31 PM Henry Burns <henryburns@...gle.com> wrote:
> > > >
> > > > Running z3fold stress testing with address sanitization
> > > > showed zhdr->slots was being used after it was freed.
> > > >
> > > > z3fold_free(z3fold_pool, handle)
> > > >   free_handle(handle)
> > > >     kmem_cache_free(pool->c_handle, zhdr->slots)
> > > >   release_z3fold_page_locked_list(kref)
> > > >     __release_z3fold_page(zhdr, true)
> > > >       zhdr_to_pool(zhdr)
> > > >         slots_to_pool(zhdr->slots)  *BOOM*
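> > > >
> > > > For illustration, the free path in question looks roughly like this
> > > > (a paraphrased sketch based on the trace above, not the exact
> > > > source; the key point is that the pool back-link lives in
> > > > zhdr->slots):
> > > >
> > > > static void z3fold_free(struct z3fold_pool *pool, unsigned long handle)
> > > > {
> > > > 	struct z3fold_header *zhdr = handle_to_z3fold_header(handle);
> > > >
> > > > 	/* may kmem_cache_free() zhdr->slots once all slots are clear */
> > > > 	free_handle(handle);
> > > >
> > > > 	/*
> > > > 	 * The release callback ends up in __release_z3fold_page(),
> > > > 	 * which calls zhdr_to_pool() -> slots_to_pool(zhdr->slots),
> > > > 	 * dereferencing the slots object freed just above.
> > > > 	 */
> > > > 	kref_put(&zhdr->refcount, release_z3fold_page_locked_list);
> > > > }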
> > >
> > > Thanks for looking into this. I'm not entirely sure I'm all for
> > > splitting free_handle() but let me think about it.
> > >
> > > > Instead, we split free_handle() into two functions: release_handle()
> > > > and free_slots(). We use release_handle() in place of free_handle(),
> > > > and call free_slots() to do the kmem_cache_free() after
> > > > __release_z3fold_page() is done.
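> > > >
> > > > Roughly, the split would look like this (a minimal sketch; the
> > > > bodies are paraphrased and the "all slots empty" bookkeeping is
> > > > elided):
> > > >
> > > > static void release_handle(unsigned long handle)
> > > > {
> > > > 	/* clear the slot entry, but deliberately do NOT free the
> > > > 	 * z3fold_buddy_slots object yet */
> > > > 	*(unsigned long *)handle = 0;
> > > > }
> > > >
> > > > static void free_slots(struct z3fold_pool *pool,
> > > > 		       struct z3fold_buddy_slots *slots)
> > > > {
> > > > 	/* safe now: __release_z3fold_page() no longer needs the
> > > > 	 * slots-based pool back-link */
> > > > 	kmem_cache_free(pool->c_handle, slots);
> > > > }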
> > >
> > > A less intrusive solution would be to move the pool back-link from
> > > slots back to z3fold_header. Keeping it in slots looks like it was a
> > > bad idea from the start.
> > >
> > > Best regards,
> > >    Vitaly
> >
> > We still want z3fold pages to be movable, though. Wouldn't moving
> > the pool back-link from slots to z3fold_header prevent us from
> > enabling migration?
>
> That is a valid point, but we can just add the pool pointer back to
> z3fold_header. The thing is, there's another patch in the pipeline
> that allows for better (inter-page) compaction, and it will
> complicate things somewhat, because sometimes slots will have to be
> released after the z3fold page is released (they will hold a handle
> to another z3fold page). I would prefer that we just add the pool
> pointer back to z3fold_header, change zhdr_to_pool() to simply
> return zhdr->pool, keep the compaction patch valid, and then come
> back to size optimization.
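>
> For clarity, the shape of what I mean (a hypothetical sketch; the
> exact field placement and the surrounding fields are assumptions,
> not a patch):
>
> struct z3fold_header {
> 	struct list_head buddy;
> 	spinlock_t page_lock;
> 	struct kref refcount;
> 	struct z3fold_buddy_slots *slots;
> 	struct z3fold_pool *pool;	/* re-added back-link */
> 	/* ... remaining fields unchanged ... */
> };
>
> static inline struct z3fold_pool *zhdr_to_pool(struct z3fold_header *zhdr)
> {
> 	/* no longer chases zhdr->slots, so this stays valid even
> 	 * after the slots object has been freed */
> 	return zhdr->pool;
> }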
>

By adding the pool pointer back to z3fold_header, will we still be
able to move/migrate/compact z3fold pages?
