Message-ID: <20120213223156.GG12117@google.com>
Date: Mon, 13 Feb 2012 14:31:56 -0800
From: Tejun Heo <tj@...nel.org>
To: Vivek Goyal <vgoyal@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>, avi@...hat.com,
	nate@...nel.net, cl@...ux-foundation.org, oleg@...hat.com,
	axboe@...nel.dk, linux-kernel@...r.kernel.org
Subject: Re: [PATCHSET] block, mempool, percpu: implement percpu mempool
	and fix blkcg percpu alloc deadlock

Hello, Vivek.

On Fri, Feb 10, 2012 at 11:26:58AM -0500, Vivek Goyal wrote:
> The only difference is that by putting this logic in per cpu counters,
> we make it somewhat generic so that other users who can't do GFP_KERNEL
> allocation of per cpu data can use it. I can live with that.

Also, it has a fallback mechanism for while the percpu data isn't there
yet, so the counts are guaranteed to be correct. That probably doesn't
matter all that much for blkcg stats.
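
For illustration only, here is a minimal sketch of that fallback idea
with made-up names (this is not the blkcg code or the proposed
percpu_counter change): keep an atomic64_t next to the percpu pointer,
bump the atomic until the percpu area has been allocated, and fold it
back in when reading.

	#include <linux/atomic.h>
	#include <linux/percpu.h>

	/*
	 * Sketch only; names are hypothetical.  The percpu area itself is
	 * still allocated later from a context that may use GFP_KERNEL
	 * (e.g. a work item) and then published into ->fast.
	 */
	struct fallback_counter {
		atomic64_t	 slow;	/* used until the percpu area exists */
		s64 __percpu	*fast;	/* NULL until allocated */
	};

	static void fbc_inc(struct fallback_counter *c)
	{
		s64 __percpu *fast = READ_ONCE(c->fast);

		if (fast)
			this_cpu_inc(*fast);	/* cheap path once allocated */
		else
			atomic64_inc(&c->slow);	/* slower but still correct */
	}

	static s64 fbc_read(struct fallback_counter *c)
	{
		s64 __percpu *fast = READ_ONCE(c->fast);
		s64 sum = atomic64_read(&c->slow);
		int cpu;

		if (fast)
			for_each_possible_cpu(cpu)
				sum += *per_cpu_ptr(fast, cpu);
		return sum;
	}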

> But if you don't think that fixing alloc_percpu() is possible in the long
> term and users should use per cpu counters for any kind of non-GFP_KERNEL
> needs, then it probably is fine to continue to develop this patch.

Updating alloc_percpu() to support the full @gfp_mask is rather complex
and likely to generate a lot of churn and fragility.
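
For context on why that matters: alloc_percpu() may have to extend
percpu chunks and can sleep, so today a caller in atomic context has to
punt the allocation to process context, roughly like the sketch below
(all names here are hypothetical, not the actual blk-throttle/cfq code):

	#include <linux/percpu.h>
	#include <linux/slab.h>
	#include <linux/workqueue.h>

	/* Hypothetical sketch: defer the sleeping percpu allocation to a worker. */
	struct stats_alloc_work {
		struct work_struct	work;
		struct my_group		*grp;	/* hypothetical owner */
	};

	static void stats_alloc_workfn(struct work_struct *work)
	{
		struct stats_alloc_work *saw =
			container_of(work, struct stats_alloc_work, work);

		/* GFP_KERNEL is fine here: worker context, no spinlocks held. */
		saw->grp->stats_cpu = alloc_percpu(struct my_group_stats_cpu);
		kfree(saw);
	}

The atomic-context path then only queues the work item and keeps using
the fallback until ->stats_cpu shows up.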

> Personally, I liked my old patch of restricting the worker-thread allocation
> logic to blk-throttle.c and cfq-iosched.c. If you don't have an objection
> to that approach, I can brush it up, fix a pending issue, and post it.

I don't know. If we're gonna do that, I think doing it in mempool is
better. Andrew, are you still dead against using mempool for percpu
pooling? That logic is going to live somewhere either way, and it's
probably better to put it somewhere common than to shove it into the
block cgroup code.
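
To make that concrete, here is a rough sketch of what such a common
helper could look like, built on the existing mempool callbacks. This is
only an illustration, not the interface from the patchset; the element
type and reserve size are made up, and the __percpu annotation on the
pooled pointers is glossed over.

	#include <linux/gfp.h>
	#include <linux/mempool.h>
	#include <linux/percpu.h>

	static mempool_t *pcpu_stats_pool;

	static void *pcpu_pool_alloc(gfp_t gfp, void *pool_data)
	{
		/*
		 * alloc_percpu() may sleep, so only attempt it for sleeping
		 * masks; otherwise return NULL and let mempool_alloc() dip
		 * into the pre-filled reserve.
		 */
		if (!(gfp & __GFP_WAIT))
			return NULL;
		return alloc_percpu(struct my_group_stats_cpu);	/* hypothetical type */
	}

	static void pcpu_pool_free(void *element, void *pool_data)
	{
		free_percpu(element);
	}

	static int __init pcpu_stats_pool_init(void)
	{
		pcpu_stats_pool = mempool_create(16, pcpu_pool_alloc,
						 pcpu_pool_free, NULL);
		return pcpu_stats_pool ? 0 : -ENOMEM;
	}

An atomic-context caller would then do
mempool_alloc(pcpu_stats_pool, GFP_NOWAIT) and get an element from the
reserve; the piece the patchset adds on top is refilling that reserve
asynchronously from a worker that can use GFP_KERNEL.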

Thanks.

--
tejun