Date:   Fri, 28 Apr 2017 08:16:38 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Kees Cook <keescook@...omium.org>
Cc:     Christoph Lameter <cl@...ux.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Linux-MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: Add additional consistency check

On Thu 27-04-17 18:11:28, Kees Cook wrote:
> On Tue, Apr 11, 2017 at 7:19 AM, Michal Hocko <mhocko@...nel.org> wrote:
> > I would do something like...
> > ---
> > diff --git a/mm/slab.c b/mm/slab.c
> > index bd63450a9b16..87c99a5e9e18 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -393,10 +393,15 @@ static inline void set_store_user_dirty(struct kmem_cache *cachep) {}
> >  static int slab_max_order = SLAB_MAX_ORDER_LO;
> >  static bool slab_max_order_set __initdata;
> >
> > +static inline struct kmem_cache *page_to_cache(struct page *page)
> > +{
> > +       return page->slab_cache;
> > +}
> > +
> >  static inline struct kmem_cache *virt_to_cache(const void *obj)
> >  {
> >         struct page *page = virt_to_head_page(obj);
> > -       return page->slab_cache;
> > +       return page_to_cache(page);
> >  }
> >
> >  static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
> > @@ -3813,14 +3818,18 @@ void kfree(const void *objp)
> >  {
> >         struct kmem_cache *c;
> >         unsigned long flags;
> > +       struct page *page;
> >
> >         trace_kfree(_RET_IP_, objp);
> >
> >         if (unlikely(ZERO_OR_NULL_PTR(objp)))
> >                 return;
> > +       page = virt_to_head_page(objp);
> > +       if (CHECK_DATA_CORRUPTION(!PageSlab(page), "kfree: bad pointer %p\n", objp))
> > +               return;
> >         local_irq_save(flags);
> >         kfree_debugcheck(objp);
> > -       c = virt_to_cache(objp);
> > +       c = page_to_cache(page);
> >         debug_check_no_locks_freed(objp, c->object_size);
> >
> >         debug_check_no_obj_freed(objp, c->object_size);
> 
> Sorry for the delay, I've finally had time to look at this again.
> 
> So, this only handles the kfree() case, not the kmem_cache_free() or
> kmem_cache_free_bulk() cases, which means it misses all the non-kmalloc
> allocations (and kfree() ultimately calls down to kmem_cache_free()).
> Similarly, my proposed patch missed the kfree() path. :P

yes

> As I work on a replacement, is the goal to avoid the checks while
> under local_irq_save()? (i.e. I can't just put the check in
> virt_to_cache(), etc.)

You would have to check all callers of virt_to_cache. I would simply
replace the BUG_ON(!PageSlab()) in cache_from_obj. kmem_cache_free already
handles a NULL cache; kmem_cache_free_bulk and build_detached_freelist can
be made to do so as well.
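
To make that a bit more concrete -- a rough, untested sketch (the real
cache_from_obj() has more checks around it than shown here, and the exact
callers differ between the slab allocators and kernel versions):

	/* mm/slab.h: fail gracefully instead of BUG()ing on a bad pointer */
	static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s,
							void *x)
	{
		struct page *page = virt_to_head_page(x);

		/* was: BUG_ON(!PageSlab(page)) */
		if (CHECK_DATA_CORRUPTION(!PageSlab(page),
					  "%s: %p is not a slab object\n",
					  __func__, x))
			return NULL;

		return page->slab_cache;
	}

	/* callers then just bail out on NULL, e.g. */
	void kmem_cache_free(struct kmem_cache *cachep, void *objp)
	{
		cachep = cache_from_obj(cachep, objp);
		if (!cachep)
			return;
		/* ... existing free path unchanged ... */
	}

kmem_cache_free_bulk() and build_detached_freelist() would need the same
NULL check added, which is the "can be made to do so" part above.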

-- 
Michal Hocko
SUSE Labs
