Message-ID: <20130813210719.GB28996@mtj.dyndns.org>
Date: Tue, 13 Aug 2013 17:07:19 -0400
From: Tejun Heo <tj@...nel.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Chris Metcalf <cmetcalf@...era.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Thomas Gleixner <tglx@...utronix.de>,
Frederic Weisbecker <fweisbec@...il.com>,
Cody P Schafer <cody@...ux.vnet.ibm.com>
Subject: Re: [PATCH v4 2/2] mm: make lru_add_drain_all() selective
Hello,
On Tue, Aug 13, 2013 at 01:31:35PM -0700, Andrew Morton wrote:
> > the logical thing to do
> > would be pre-allocating per-cpu buffers instead of depending on
> > dynamic allocation. Do the invocations need to be stackable?
>
> schedule_on_each_cpu() calls should of course happen concurrently, and
> there's the question of whether we wish to permit async
> schedule_on_each_cpu(). Leaving the calling CPU twiddling thumbs until
> everyone has finished is pretty sad if the caller doesn't want that.
Oh, I meant the caller side, not schedule_on_each_cpu(). If this
particular caller is performance sensitive for some reason, it makes
sense to pre-allocate resources on the caller side, provided the caller
doesn't need to be reentrant or invoked concurrently.
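For example, here is a minimal sketch of that caller-side
pre-allocation, assuming the caller is serialized by a mutex so the
static per-cpu work items can never be in flight twice. The names are
illustrative only, not the actual mm/swap.c code:

  #include <linux/cpu.h>
  #include <linux/mutex.h>
  #include <linux/percpu.h>
  #include <linux/workqueue.h>

  /* per-cpu work items allocated at build time, no alloc at drain time */
  static DEFINE_PER_CPU(struct work_struct, drain_work);
  static DEFINE_MUTEX(drain_mutex);

  static void drain_local_pagevecs(struct work_struct *w)
  {
          /* drain this CPU's pagevecs; stand-in for lru_add_drain() */
  }

  void example_drain_all(void)
  {
          int cpu;

          mutex_lock(&drain_mutex);       /* serialize callers */
          get_online_cpus();

          for_each_online_cpu(cpu) {
                  struct work_struct *work = &per_cpu(drain_work, cpu);

                  INIT_WORK(work, drain_local_pagevecs);
                  schedule_work_on(cpu, work);
          }

          /* wait only for the works we actually queued */
          for_each_online_cpu(cpu)
                  flush_work(&per_cpu(drain_work, cpu));

          put_online_cpus();
          mutex_unlock(&drain_mutex);
  }

Because the work items live in static per-cpu storage, the drain path
no longer depends on a dynamic allocation succeeding at call time.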
> I don't recall seeing such abuse. It's a very common and powerful
> tool, and not implementing it because some dummy may abuse it weakens
> the API for all non-dummies. That allocation is simply unneeded.
More powerful and flexible doesn't always mean better; simplicity and
resistance to abuse are important characteristics for an API to have.
It feels a bit silly to me to push the API in that direction when doing
so doesn't even solve the allocation problem. It doesn't buy us much
while making the interface more complex.
Thanks.
--
tejun