Message-ID: <d82e647a0905121728m278b604bn9a0f5122b964978a@mail.gmail.com>
Date: Wed, 13 May 2009 08:28:15 +0800
From: Ming Lei <tom.leiming@...il.com>
To: Cornelia Huck <cornelia.huck@...ibm.com>
Cc: Frederic Weisbecker <fweisbec@...il.com>, arjan@...radead.org,
linux-kernel@...r.kernel.org, akpm@...ux-foundation.org
Subject: Re: [PATCH] kernel/async.c:introduce async_schedule*_atomic
2009/5/13 Cornelia Huck <cornelia.huck@...ibm.com>:
> On Tue, 12 May 2009 18:52:29 +0200,
> Frederic Weisbecker <fweisbec@...il.com> wrote:
>
>> This division would make more sense indeed.
>>
>> - async_schedule_inatomic() would be nosync() and would use
>> GFP_ATOMIC. I guess the case where we want to run
>> a job synchronously from atomic in case of async failure is too rare
>> (non-existent?).
>
> It would add complexity for those callers providing a function that is
> safe to be called in both contexts.
>
>> - async_schedule_nosync() would be only nosync() and would use
>> GFP_KERNEL
>>
>> I'm not sure the second case will ever be used though.
>
> It might make sense for the "just fail if we cannot get memory" case.
>
>>
>> Another alternative would be to define a single async_schedule_nosync()
>> which also takes a gfp flag.
>
> Wouldn't async_schedule() then need a gfp flag as well?
>
IMHO, we should call async_schedule*() explicitly from non-atomic contexts
and async_schedule_inatomic*() explicitly from atomic contexts, so that
async_schedule*() always uses GFP_KERNEL and async_schedule_inatomic*()
always uses GFP_ATOMIC. That simplifies the problem a lot.
Also, async_schedule*() can still run a job synchronously on out-of-memory
or other failure, which keeps the behaviour consistent with what we have
now.
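
To make the idea concrete, here is a rough sketch of the split (written
against the 2.6.30-era async_schedule() prototype; __async_schedule_gfp()
and async_next_cookie() are made-up internal helpers for the sketch only,
not anything from the posted patch):

#include <linux/async.h>
#include <linux/gfp.h>

/*
 * Hypothetical internals: __async_schedule_gfp() tries to allocate and
 * queue an async entry with the given gfp mask and returns 0 when it
 * cannot; async_next_cookie() just hands out the next cookie.  Both are
 * sketch-only names.
 */
extern async_cookie_t __async_schedule_gfp(async_func_ptr *ptr, void *data,
					   gfp_t gfp);
extern async_cookie_t async_next_cookie(void);

/*
 * Process context only: may sleep for the allocation, and falls back to
 * running the function synchronously, so existing callers see no change.
 */
async_cookie_t async_schedule(async_func_ptr *ptr, void *data)
{
	async_cookie_t cookie = __async_schedule_gfp(ptr, data, GFP_KERNEL);

	if (!cookie) {
		/* could not queue it: run synchronously, as today */
		cookie = async_next_cookie();
		ptr(data, cookie);
	}
	return cookie;
}

/*
 * Atomic context: GFP_ATOMIC, and never any synchronous fallback, since
 * the function may sleep.  A return of 0 means "could not schedule"
 * (assuming 0 is never handed out as a real cookie) and the caller has
 * to handle that itself.
 */
async_cookie_t async_schedule_inatomic(async_func_ptr *ptr, void *data)
{
	return __async_schedule_gfp(ptr, data, GFP_ATOMIC);
}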
Any suggestions or objections?
--
Lei Ming