Message-ID: <20140515071650.GB32113@esperanza>
Date: Thu, 15 May 2014 11:16:52 +0400
From: Vladimir Davydov <vdavydov@...allels.com>
To: Christoph Lameter <cl@...ux.com>
CC: <hannes@...xchg.org>, <mhocko@...e.cz>,
<akpm@...ux-foundation.org>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>
Subject: Re: [PATCH RFC 3/3] slub: reparent memcg caches' slabs on memcg
offline
On Wed, May 14, 2014 at 11:20:51AM -0500, Christoph Lameter wrote:
> On Tue, 13 May 2014, Vladimir Davydov wrote:
>
> > Since the "slow" and the "normal" frees can't coexist, we must ensure
> > all conventional frees have finished before switching all further
> > frees to the "slow" mode and starting reparenting. To achieve that, a
> > percpu refcounter is used. It is taken and held during each "normal"
> > free. The refcounter is killed on memcg offline, and migration of the
> > cache's pages is initiated from the refcounter's release function. If
> > we fail to take a ref on kfree, it means all "normal" frees have been
> > completed and the cache is being reparented right now, so we should
> > free the object using the "slow" mode.
>
> Argh adding more code to the free path touching more cachelines in the
> process.
Actually, there is not that much active code added, IMO. It is only a
percpu ref get/put for per-memcg caches plus a couple of conditionals.
The "slow" mode code is meant to be executed very rarely, so we can move
it out of line into a separate function behind an unlikely() branch.
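To make that more concrete, the fast path would look roughly like the
sketch below. The field and helper names (s->memcg_refcnt,
is_memcg_cache(), __slab_free_slow()) are illustrative only, not the
ones used in the patch:

static __always_inline void memcg_slab_free(struct kmem_cache *s, void *x)
{
	/* Root caches are not affected at all. */
	if (!is_memcg_cache(s)) {
		slab_free(s, virt_to_head_page(x), x, _RET_IP_);
		return;
	}

	if (likely(percpu_ref_tryget(&s->memcg_refcnt))) {
		/* "Normal" free: the cache is not being reparented. */
		slab_free(s, virt_to_head_page(x), x, _RET_IP_);
		percpu_ref_put(&s->memcg_refcnt);
	} else {
		/*
		 * The ref was killed on memcg offline, so reparenting is
		 * in progress (or already done); take the rare "slow" path.
		 */
		__slab_free_slow(s, x);
	}
}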
I admit that's far from perfect, because kfree is a really hot path
where every byte of code matters, but unfortunately I don't see how we
can avoid this if we want slab reparenting.
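For reference, the offline side would be wired up roughly as follows.
Again, the memcg_refcnt and memcg_reparent_work fields and the helper
names are illustrative, and percpu_ref_init() is shown with its current
two-argument signature:

/* Release callback: runs once the last "normal" free drops its ref. */
static void memcg_cache_refcnt_release(struct percpu_ref *ref)
{
	struct kmem_cache *s = container_of(ref, struct kmem_cache,
					    memcg_refcnt);

	/* No "normal" frees are in flight any more: reparent the slabs. */
	schedule_work(&s->memcg_reparent_work);
}

/* Called when the per-memcg cache is created. */
int memcg_init_cache_refcnt(struct kmem_cache *s)
{
	return percpu_ref_init(&s->memcg_refcnt, memcg_cache_refcnt_release);
}

/* Called on memcg offline. */
void memcg_offline_cache(struct kmem_cache *s)
{
	/*
	 * From now on percpu_ref_tryget() in the free path fails, so all
	 * further frees go through the "slow" mode.
	 */
	percpu_ref_kill(&s->memcg_refcnt);
}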
Again, I'd like to hear from you whether there is any point in moving in
this direction, or whether I should give up and concentrate on some
other approach because you'll never accept it.
Thanks.