Message-ID: <CAGXu5j+vVn02Vsx5TzWPz3MS7Jow1gi+m3ojwMXrL-w6aaZhtw@mail.gmail.com>
Date:   Thu, 27 Apr 2017 18:11:28 -0700
From:   Kees Cook <keescook@...omium.org>
To:     Michal Hocko <mhocko@...nel.org>
Cc:     Christoph Lameter <cl@...ux.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Linux-MM <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm: Add additional consistency check

On Tue, Apr 11, 2017 at 7:19 AM, Michal Hocko <mhocko@...nel.org> wrote:
> I would do something like...
> ---
> diff --git a/mm/slab.c b/mm/slab.c
> index bd63450a9b16..87c99a5e9e18 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -393,10 +393,15 @@ static inline void set_store_user_dirty(struct kmem_cache *cachep) {}
>  static int slab_max_order = SLAB_MAX_ORDER_LO;
>  static bool slab_max_order_set __initdata;
>
> +static inline struct kmem_cache *page_to_cache(struct page *page)
> +{
> +       return page->slab_cache;
> +}
> +
>  static inline struct kmem_cache *virt_to_cache(const void *obj)
>  {
>         struct page *page = virt_to_head_page(obj);
> -       return page->slab_cache;
> +       return page_to_cache(page);
>  }
>
>  static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
> @@ -3813,14 +3818,18 @@ void kfree(const void *objp)
>  {
>         struct kmem_cache *c;
>         unsigned long flags;
> +       struct page *page;
>
>         trace_kfree(_RET_IP_, objp);
>
>         if (unlikely(ZERO_OR_NULL_PTR(objp)))
>                 return;
> +       page = virt_to_head_page(objp);
> +       if (CHECK_DATA_CORRUPTION(!PageSlab(page)))
> +               return;
>         local_irq_save(flags);
>         kfree_debugcheck(objp);
> -       c = virt_to_cache(objp);
> +       c = page_to_cache(page);
>         debug_check_no_locks_freed(objp, c->object_size);
>
>         debug_check_no_obj_freed(objp, c->object_size);

Sorry for the delay, I've finally had time to look at this again.

So, this only handles the kfree() case, not the kmem_cache_free() and
kmem_cache_free_bulk() cases, which means it misses all the non-kmalloc
allocations (and kfree() ultimately calls down to kmem_cache_free()).
Similarly, my proposed patch missed the kfree() path. :P

As I work on a replacement, is the goal to avoid the checks while
under local_irq_save()? (i.e. I can't just put the check in
virt_to_cache(), etc.)

-Kees

-- 
Kees Cook
Pixel Security
