Message-ID: <4FC506E6.8030108@parallels.com>
Date: Tue, 29 May 2012 21:27:02 +0400
From: Glauber Costa <glommer@...allels.com>
To: Christoph Lameter <cl@...ux.com>
CC: <linux-kernel@...r.kernel.org>, <cgroups@...r.kernel.org>,
<linux-mm@...ck.org>, <kamezawa.hiroyu@...fujitsu.com>,
Tejun Heo <tj@...nel.org>, Li Zefan <lizefan@...wei.com>,
Greg Thelen <gthelen@...gle.com>,
Suleiman Souhlal <suleiman@...gle.com>,
Michal Hocko <mhocko@...e.cz>,
Johannes Weiner <hannes@...xchg.org>, <devel@...nvz.org>,
David Rientjes <rientjes@...gle.com>,
Pekka Enberg <penberg@...helsinki.fi>
Subject: Re: [PATCH v3 13/28] slub: create duplicate cache
On 05/29/2012 09:25 PM, Christoph Lameter wrote:
>> I will try to at least have the page accounting done in a consistent way. How
>> about that?
> Ok. What do you mean by "consistent"? Since objects and pages can be used
> in a shared way and since accounting in many areas of the kernel is
> intentionally fuzzy to avoid performance issues, there is not that much of a
> requirement for accuracy. We have problems even just defining what the
> exact set of pages belonging to a task/process is.
>
I mean consistent between allocators, done in a shared place.
Note that what we do here *is* still fuzzy to some degree. We can have a
level of sharing, and we always account an object to the first cgroup
that touches it; we don't try to track it beyond that because that would
be hell. We also don't carry a process' objects around when we move it
to a different cgroup. Those are all simplifications made for the sake
of performance and simplicity.
But we really need a page to be filled with objects from the same
cgroup, and the non-shared objects to be accounted to the right place.
Otherwise, I don't think we can meet even the lightest of isolation
guarantees.
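
To make the first-touch idea concrete, here is a minimal userspace
sketch of the scheme (illustrative only, not code from the patch
series; the struct and function names are made up): each cgroup gets
its own duplicate of the cache, so a page only ever holds objects from
one group, and the whole page is charged to whichever group allocates
from it first.

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE	4096
#define OBJ_SIZE	128
#define OBJS_PER_PAGE	(PAGE_SIZE / OBJ_SIZE)

struct cgroup {
	const char *name;
	size_t charged_bytes;		/* total pages charged to this group */
};

/* One duplicate cache per cgroup: pages never mix groups. */
struct kmem_cache_dup {
	struct cgroup *owner;
	char *cur_page;			/* current partially-filled page */
	int used;			/* objects handed out from cur_page */
};

static void *cache_alloc(struct kmem_cache_dup *c)
{
	if (!c->cur_page || c->used == OBJS_PER_PAGE) {
		/*
		 * New page: charge the whole page to the owning cgroup
		 * now ("first touch"), even if some objects on it end
		 * up shared with other groups later.
		 */
		c->cur_page = malloc(PAGE_SIZE);
		if (!c->cur_page)
			return NULL;
		c->used = 0;
		c->owner->charged_bytes += PAGE_SIZE;
	}
	return c->cur_page + OBJ_SIZE * c->used++;
}

int main(void)
{
	struct cgroup a = { "cg-a", 0 }, b = { "cg-b", 0 };
	struct kmem_cache_dup cache_a = { &a, NULL, 0 };
	struct kmem_cache_dup cache_b = { &b, NULL, 0 };
	int i;

	/*
	 * Each group allocates through its own duplicate cache, so no
	 * page ever holds objects from two different groups.
	 */
	for (i = 0; i < 40; i++)
		cache_alloc(&cache_a);
	for (i = 0; i < 5; i++)
		cache_alloc(&cache_b);

	printf("%s charged %zu bytes\n", a.name, a.charged_bytes);
	printf("%s charged %zu bytes\n", b.name, b.charged_bytes);
	return 0;	/* pages are deliberately leaked; it's a sketch */
}

Running this, cg-a is charged two pages for its 40 objects and cg-b one
page for its 5; neither charge is disturbed by the other group's
allocations, which is the isolation property argued for above.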