Message-Id: <1156181716.6479.2.camel@linuxchandra>
Date: Mon, 21 Aug 2006 10:35:16 -0700
From: Chandra Seetharaman <sekharan@...ibm.com>
To: Matt Helsley <matthltc@...ibm.com>
Cc: Kirill Korotaev <dev@...ru>, Rik van Riel <riel@...hat.com>,
CKRM-Tech <ckrm-tech@...ts.sourceforge.net>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andi Kleen <ak@...e.de>, Christoph Hellwig <hch@...radead.org>,
Andrey Savochkin <saw@...ru>, devel@...nvz.org,
hugh@...itas.com, Ingo Molnar <mingo@...e.hu>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
Pavel Emelianov <xemul@...nvz.org>,
Andrew Morton <akpm@...l.org>
Subject: Re: [ckrm-tech] [RFC][PATCH 2/7] UBC: core (structures, API)
On Fri, 2006-08-18 at 19:38 -0700, Matt Helsley wrote:
<snip>
> > >
> > >>+ for (p = ub; p != NULL; p = p->parent) {
> > >
> > >
> > > Seems rather expensive to walk up the tree for every charge. Especially
> > > if the administrator wants a fine degree of resource control and makes a
> > > tall tree. This would be a problem especially when it comes to resources
> > > that require frequent and fast allocation.
> > in hierarchical accounting you always have to update all the nodes :/
> > with flat UBC this doesn't introduce significant overhead.
>
> Except that you eventually have to lock ub0. Seems that the cache line
> for that spinlock could bounce quite a bit in such a hot path.
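
To make the cost concrete, a charge path of the shape quoted above looks
roughly like this (a simplified sketch, not the actual patch code; the
struct layout, field names, and UB_RESOURCES are illustrative, and the
usual kernel spinlock/errno definitions are assumed):

	#define UB_RESOURCES 10			/* illustrative */

	struct user_beancounter {
		struct user_beancounter *parent;	/* NULL for ub0 */
		spinlock_t lock;
		unsigned long held[UB_RESOURCES];
		unsigned long limit[UB_RESOURCES];
	};

	static int ub_charge(struct user_beancounter *ub, int res,
			unsigned long val)
	{
		struct user_beancounter *p, *q;

		/*
		 * Every charge walks all the way up to the root (ub0),
		 * taking each node's lock along the way; the root's
		 * lock cache line is therefore touched by every charge
		 * in the system.
		 */
		for (p = ub; p != NULL; p = p->parent) {
			spin_lock(&p->lock);
			if (p->held[res] + val > p->limit[res]) {
				spin_unlock(&p->lock);
				goto unroll;
			}
			p->held[res] += val;
			spin_unlock(&p->lock);
		}
		return 0;

	unroll:
		/* Undo the charges already taken at the lower levels. */
		for (q = ub; q != p; q = q->parent) {
			spin_lock(&q->lock);
			q->held[res] -= val;
			spin_unlock(&q->lock);
		}
		return -ENOMEM;
	}

The deeper the tree, the more locks each allocation takes, and every
one of those walks ends at ub0.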
>
> Chandra, doesn't Resource Groups avoid walking more than 1 level up the
> hierarchy in the "charge" paths?
Yes, charging happens at one level only (except when a group is over its
guarantee and has to borrow from its parent, in which case the charge
goes up the tree).
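
A rough sketch of that behavior (names and locking are illustrative, not
the actual Resource Groups code; the lock is dropped across the borrow
for simplicity, where the real code would have to re-check after
retaking it):

	struct resource_group {
		struct resource_group *parent;	/* NULL for the root */
		spinlock_t lock;
		unsigned long usage;
		unsigned long guarantee;	/* charges stay local below this */
		unsigned long limit;		/* hard cap */
	};

	static int rg_charge(struct resource_group *rg, unsigned long val)
	{
		int ret;

		spin_lock(&rg->lock);
		if (rg->usage + val > rg->limit) {
			spin_unlock(&rg->lock);
			return -ENOMEM;		/* hard limit: fail */
		}
		if (rg->usage + val <= rg->guarantee || rg->parent == NULL) {
			/* Common case: one level, one lock. */
			rg->usage += val;
			spin_unlock(&rg->lock);
			return 0;
		}
		spin_unlock(&rg->lock);

		/*
		 * Over the guarantee: borrow from the parent, which may
		 * in turn have to borrow from its own parent, and so on.
		 */
		ret = rg_charge(rg->parent, val);
		if (ret == 0) {
			spin_lock(&rg->lock);
			rg->usage += val;
			spin_unlock(&rg->lock);
		}
		return ret;
	}

So the walk up the tree (and the contention on the top-level lock) only
shows up in the borrow path, not on every charge.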
<snip>
--
----------------------------------------------------------------------
Chandra Seetharaman | Be careful what you choose....
- sekharan@...ibm.com | .......you may get it.
----------------------------------------------------------------------