Date:	Wed, 8 Jan 2014 18:24:33 -0800 (PST)
From:	David Rientjes <>
To:	Dave Hansen <>
Subject: Re: [PATCH] mm: slub: fix ALLOC_SLOWPATH stat

On Mon, 6 Jan 2014, Dave Hansen wrote:

> There used to be only one path out of __slab_alloc(), and
> ALLOC_SLOWPATH got bumped in that exit path.  Now there are two,
> and a bunch of gotos.  ALLOC_SLOWPATH can now get bumped more than
> once during a single call to __slab_alloc(), which is pretty bogus.
>
> Here's the sequence:
> 1. Enter __slab_alloc(), fall through all the way to the
>    stat(s, ALLOC_SLOWPATH);
> 2. hit 'if (!freelist)', and bump DEACTIVATE_BYPASS, jump to
>    new_slab (goto #1)
> 3. Hit 'if (c->partial)', bump CPU_PARTIAL_ALLOC, goto redo
>    (goto #2)
> 4. Fall through in the same path we did before all the way to
>    stat(s, ALLOC_SLOWPATH)
> 5. bump ALLOC_REFILL stat, then return
>
> Doing this is obviously bogus.  It keeps us from being able to
> accurately compare ALLOC_SLOWPATH vs. ALLOC_FASTPATH.  It also
> means that the total number of allocs always exceeds the total
> number of frees.
>
> This patch moves stat(s, ALLOC_SLOWPATH) to the same place that
> __slab_alloc() is called from.  This makes it much less likely
> that ALLOC_SLOWPATH will get botched again in the spaghetti-code
> inside __slab_alloc().
>
> Signed-off-by: Dave Hansen <>

Acked-by: David Rientjes <>