Message-ID: <ac3eb2510911190625i7432644aod6e99829b8a03dc4@mail.gmail.com>
Date: Thu, 19 Nov 2009 15:25:59 +0100
From: Kay Sievers <kay.sievers@...y.org>
To: Matthew Garrett <mjg59@...f.ucam.org>
Cc: David Zeuthen <david@...ar.dk>, linux-kernel@...r.kernel.org,
axboe@...nel.dk, linux-hotplug@...r.kernel.org
Subject: Re: [PATCH] [RFC] Add support for uevents on block device idle changes
On Thu, Nov 19, 2009 at 15:16, Matthew Garrett <mjg59@...f.ucam.org> wrote:
> On Thu, Nov 19, 2009 at 02:29:29PM +0100, Kay Sievers wrote:
>> On Thu, Nov 19, 2009 at 14:01, Matthew Garrett <mjg@...hat.com> wrote:
>> > On Thu, Nov 19, 2009 at 12:09:30PM +0100, Kay Sievers wrote:
>> >> On Wed, Nov 18, 2009 at 22:33, Matthew Garrett <mjg59@...f.ucam.org> wrote:
>> >> > My use cases are on the order of a second.
>> >>
>> >> Ok, what's the specific use case, which should be triggered after a
>> >> second? I thought you were thinking about disk spindown or similar.
>> >
>> > The first is altering ALPM policy. ALPM will be initiated by the host if
>> > the number of queued requests hits zero - if there's no hysteresis
>> > implemented, then that can result in a significant performance hit. We
>> > don't need /much/ hysteresis, but it's the difference between a 50%
>> > performance hit and not having that.
>>
>> Can't that logic live entirely in the kernel, instead of being a
>> rather generic userspace event interface (with the current limitation
>> to a single user)?
>
> It could, but it seems a bit of a hack. It'd still also require the
> timer to be in the kernel, so we might as well expose that to userspace.
Sure, but a userspace-configurable policy for an in-kernel disk-idle
power management scheme sounds fine, compared to a single-subscriber
userspace-only disk-idle event interface. :)
Kay