Message-ID: <20200117213721.GN2935@paulmck-ThinkPad-P72>
Date: Fri, 17 Jan 2020 13:37:21 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Joel Fernandes <joel@...lfernandes.org>
Cc: Uladzislau Rezki <urezki@...il.com>,
LKML <linux-kernel@...r.kernel.org>, RCU <rcu@...r.kernel.org>,
Steven Rostedt <rostedt@...dmis.org>,
Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: Re: [PATCH 1/1] rcu/tree: support kfree_bulk() interface in
kfree_rcu()
On Fri, Jan 17, 2020 at 01:57:32PM -0500, Joel Fernandes wrote:
> On Fri, Jan 17, 2020 at 06:52:17PM +0100, Uladzislau Rezki wrote:
> > > > > > But rcuperf uses a single block size, which turns into kfree_bulk() using
> > > > > > a single slab, which results in good locality of reference. So I have to
> > > > >
> > > > > You meant a "single cache" category when you say "single slab"? Just to
> > > > > mention, the number of slabs (in a single cache) when a large number of
> > > > > objects are allocated is more than one (not single). With current rcuperf, I
> > > > > see hundreds of slabs (each slab being one page) in the kmalloc-32 cache. Each
> > > > > slab contains around 128 objects of the kfree_rcu type (24-byte objects aligned
> > > > > to the 32-byte slab object size).
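> > > > >
> > > > > (Back-of-the-envelope, assuming the usual 4 KiB page size: 4096 / 32 = 128
> > > > > objects per slab, so tens of thousands of objects spread over hundreds of
> > > > > page-sized slabs, which lines up with the counts above.)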
> > > > >
> > > > I think that is about using different slab caches to break locality. It
> > > > makes sense, IMHO, because usually the system makes use of different slabs
> > > > due to different object sizes. On the other hand, I guess there are
> > > > test cases where only one slab gets used.
> > >
> > > I was wondering about "locality". A cache can be split into many slabs. Only
> > > the data on a page is local (contiguous). If there are a large number of
> > > objects, then allocation goes to a new slab (in the same cache). At least for
> > > the kmalloc slabs, there is only 1 slab per page. So for example, if the
> > > kmalloc-32 slab holds more than 128 objects, the rest go to a different
> > > slab / page. So how is there still locality?
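> > >
> > > (A toy user-space illustration of what I mean -- plain C, nothing
> > > kernel-specific; the 4096 / 32 layout is just assumed:)
> > >
> > > 	#include <stdio.h>
> > > 	#include <stdlib.h>
> > >
> > > 	#define PAGE_SZ 4096	/* one slab = one page */
> > > 	#define OBJ_SZ  32	/* kmalloc-32-style objects, 128 per page */
> > >
> > > 	int main(void)
> > > 	{
> > > 		char *slab0 = aligned_alloc(PAGE_SZ, PAGE_SZ);	/* first "slab" */
> > > 		char *slab1 = aligned_alloc(PAGE_SZ, PAGE_SZ);	/* next "slab" */
> > >
> > > 		if (!slab0 || !slab1)
> > > 			return 1;
> > > 		/* objects 0..127 sit back to back inside slab0... */
> > > 		printf("slab0: %p .. %p (contiguous)\n",
> > > 		       (void *)slab0, (void *)(slab0 + 127 * OBJ_SZ));
> > > 		/* ...but object 128 starts a new page that can be anywhere */
> > > 		printf("slab1: %p (possibly far away)\n", (void *)slab1);
> > > 		free(slab0);
> > > 		free(slab1);
> > > 		return 0;
> > > 	}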
> > >
> > Hmm.. On a high level:
> >
> > one slab cache manages a specific object size, i.e. the slab memory consists of
> > contiguous pages of memory (4096 bytes or so; when the slab grows, the new pages
> > are probably not contiguous with the old ones), each divided into equal-size
> > objects. For example, when kmalloc() gets called, the appropriate cache (the slab
> > cache that serves only that specific size) is selected and an object from it is returned.
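> >
> > Roughly the idea in code (a simplified sketch only, not the real kernel
> > logic in kmalloc_index() and friends; the sizes listed are the common
> > small kmalloc caches):
> >
> > 	#include <stddef.h>
> >
> > 	static const size_t kmalloc_sizes[] = { 8, 16, 32, 64, 96, 128, 192, 256 };
> >
> > 	/* round a request up to the smallest fixed-size cache that fits it */
> > 	static size_t cache_size_for(size_t request)
> > 	{
> > 		size_t i;
> >
> > 		for (i = 0; i < sizeof(kmalloc_sizes) / sizeof(kmalloc_sizes[0]); i++)
> > 			if (request <= kmalloc_sizes[i])
> > 				return kmalloc_sizes[i];
> > 		return 0;	/* bigger requests go to larger caches */
> > 	}
> >
> > 	/* e.g. cache_size_for(24) == 32, i.e. the kmalloc-32 cache above */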
> >
> > But that is theory; I have not deeply analyzed how the SLAB allocator works
> > internally, so I can be wrong :)
> >
> > You mentioned 128 objects per slab in the kmalloc-32 slab cache. But all of
> > them follow each other; I mean the layout is sequential, like a regular array. In
>
> Yes, for these 128 objects it is sequential. But the next 128 could be on
> some other page, is what I was saying. And we are allocating tens of thousands
> of objects in this test. (I believe pages are sequential only per slab and not
> for a different slab within the same cache.)
>
> > that sense, freeing can be beneficial, because when any object is accessed the
> > whole CPU cache-line is fetched (if it was not already), usually it is 64K.
>
> You mean the size of the whole L1 cache, right? Cache lines are on the order of bytes.
>
> > That is what I meant by "locality". In order to "break" it, I meant allocating
> > from different slabs to see how kfree_slub() behaves in that case, which is a
> > more realistic scenario and workload, I think.
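> >
> > Something like this is what I have in mind (an illustrative sketch only, not
> > an actual patch; the struct names are made up, the exact sizes do not matter,
> > and the usual <linux/slab.h>/<linux/rcupdate.h> includes are assumed -- the
> > point is only that the frees land in different kmalloc caches):
> >
> > 	struct obj_small  { struct rcu_head rh; char data[8];   };	/* ~kmalloc-32  */
> > 	struct obj_medium { struct rcu_head rh; char data[100]; };	/* ~kmalloc-128 */
> > 	struct obj_large  { struct rcu_head rh; char data[400]; };	/* ~kmalloc-512 */
> >
> > 	static void alloc_and_free_mixed(int iters)
> > 	{
> > 		int i;
> >
> > 		for (i = 0; i < iters; i++) {
> > 			struct obj_small *s = kmalloc(sizeof(*s), GFP_KERNEL);
> > 			struct obj_medium *m = kmalloc(sizeof(*m), GFP_KERNEL);
> > 			struct obj_large *l = kmalloc(sizeof(*l), GFP_KERNEL);
> >
> > 			/* queue each object for deferred freeing via kfree_rcu() */
> > 			if (s)
> > 				kfree_rcu(s, rh);
> > 			if (m)
> > 				kfree_rcu(m, rh);
> > 			if (l)
> > 				kfree_rcu(l, rh);
> > 		}
> > 	}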
>
> Ok, agreed.
> (BTW I do agree your patch is beneficial, just wanted to get the slab
> discussion right).

Thank you both!

Then I should be looking for an updated version of the patch with an upgraded
commit log? Or is there more investigation/testing/review in process?

Thanx, Paul