Message-ID: <alpine.DEB.2.11.1509171854480.5696@east.gentwo.org>
Date: Thu, 17 Sep 2015 18:57:17 -0500 (CDT)
From: Christoph Lameter <cl@...ux.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
cc: linux-mm@...ck.org, netdev@...r.kernel.org,
akpm@...ux-foundation.org,
Alexander Duyck <alexander.duyck@...il.com>,
iamjoonsoo.kim@....com
Subject: Re: Experiences with slub bulk use-case for network stack
On Thu, 17 Sep 2015, Jesper Dangaard Brouer wrote:
> What I'm proposing is keeping interrupts on, and then simply cmpxchg
> e.g. 2 slab pages out of the SLUB allocator (which the SLUB code calls
> freelists). The bulk call now owns these freelists, and returns them
> to the caller. The API caller gets some helpers/macros to access
> objects, to shield him from the details (of SLUB freelists).
>
> The pitfall with this API is we don't know how many objects are on a
> SLUB freelist. And we cannot walk the freelist and count them, because
> then we hit the problem of memory/cache stalls (that we are trying so
> hard to avoid).
If you get a fresh page from the page allocator then you know how many
objects are available in a slab page.
There is also a counter in each slab page for the objects allocated. The
number of free objects is page->objects - page->inuse.
This is only true for a locked cmpxchg. The unlocked cmpxchg used for the
per-cpu freelist does not use the counters in the page struct.