Message-ID: <20140801012301.GB1967@htj.dyndns.org>
Date: Thu, 31 Jul 2014 21:23:01 -0400
From: Tejun Heo <tj@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: lkml <linux-kernel@...r.kernel.org>, Jens Axboe <axboe@...nel.dk>,
Vivek Goyal <vgoyal@...hat.com>
Subject: Re: [PATCH percpu/for-3.17 1/2] percpu: implement percpu_pool

Hello, Andrew.

On Thu, Jul 31, 2014 at 06:16:56PM -0700, Andrew Morton wrote:
> Yet nowhere in either the changelog or the code comments is it even
> mentioned that this allocator is unreliable and that callers *must*
> implement (and test!) fallback paths.

Hmmm, yeah, somehow the atomic behavior seemed obvious to me.  I'll
try to make it clear that this thing can and does fail.
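
Something like the following is the kind of tested fallback a caller
would need.  The foo_* names are made up and the percpu_pool_alloc()
signature is a guess; this is just to illustrate the shape:

struct foo_stats {
	u64 __percpu *pcpu_cnt;		/* fast path, may be NULL */
	atomic64_t fallback_cnt;	/* used when pool alloc failed */
};

static void foo_stats_init(struct foo_stats *st, struct percpu_pool *pool)
{
	/* opportunistic allocation: can and does fail under pressure */
	st->pcpu_cnt = percpu_pool_alloc(pool);
	atomic64_set(&st->fallback_cnt, 0);
}

static void foo_stats_inc(struct foo_stats *st)
{
	if (st->pcpu_cnt)
		this_cpu_inc(*st->pcpu_cnt);		/* percpu fast path */
	else
		atomic64_inc(&st->fallback_cnt);	/* shared fallback */
}
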
> > an obvious solution is adding a failure
> > injection for debugging, but really, except for being a bit ghetto,
> > this is just the atomic allocation for percpu areas.
>
> If it was a try-GFP_ATOMIC-then-fall-back-to-pool thing then it would
> work fairly well. But it's not even that - a caller could trivially
> chew through that pool in a single timeslice. Especially on !SMP.
> Especially squared with !PREEMPT or SCHED_FIFO.

Yeap, occasional pool depletion would be a normal thing to happen;
it isn't a correctness issue and most likely not even a performance
issue.
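
FWIW, the try-GFP_ATOMIC-then-fall-back-to-pool shape you describe
would look something like this.  alloc_percpu_gfp() doesn't exist
today (percpu has no atomic path, which is the whole problem), so
treat both names here as hypothetical:

static u64 __percpu *stat_alloc(struct percpu_pool *pool)
{
	u64 __percpu *p;

	/* try a real atomic percpu allocation first... */
	p = alloc_percpu_gfp(u64, GFP_ATOMIC);
	if (p)
		return p;
	/* ...and dip into the preallocated pool only on failure */
	return percpu_pool_alloc(pool);	/* may also fail: pool is finite */
}
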
> But please make very sure that this is how we position it. I don't
> know how to do this. Maybe prefix the names with "blk_" to signify
> that it is block-private (and won't even be there if !CONFIG_BLOCK).
>
> Or rename percpu_pool_alloc() to percpu_pool_try_alloc() - that should
> wake people up.

Sounds good to me. I'll rename it to percpu_pool_try_alloc() and make
it clear in the comment that the allocation is opportunistic.
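
Roughly like this for the header comment, so the failure mode is
impossible to miss (the signature is illustrative, not final):

/**
 * percpu_pool_try_alloc - opportunistically allocate from a percpu pool
 * @pool: percpu pool to allocate from
 *
 * Can be called from atomic context.  This allocator is unreliable: it
 * can and does fail whenever the pool is depleted, so callers must
 * implement (and test!) a fallback path.
 *
 * Returns a percpu pointer on success, %NULL on failure.
 */
void __percpu *percpu_pool_try_alloc(struct percpu_pool *pool);
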
Thanks.

--
tejun