Message-ID: <CALCETrWipKV-hmTYUL4itQ6Z0rtXuJMQ49VHv9BHc_3EM63jKA@mail.gmail.com>
Date: Fri, 16 Sep 2016 09:20:10 -0700
From: Andy Lutomirski <luto@...capital.net>
To: One Thousand Gnomes <gnomes@...rguk.ukuu.org.uk>
Cc: Andy Lutomirski <luto@...nel.org>,
"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
Keith Busch <keith.busch@...el.com>, Jens Axboe <axboe@...com>,
linux-nvme@...ts.infradead.org,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
J Freyensee <james_p_freyensee@...ux.intel.com>,
Christoph Hellwig <hch@....de>
Subject: Re: Should drivers like nvme let userspace control their latency via dev_pm_qos?

On Fri, Sep 16, 2016 at 8:54 AM, One Thousand Gnomes
<gnomes@...rguk.ukuu.org.uk> wrote:
> On Fri, 16 Sep 2016 08:26:03 -0700
> Andy Lutomirski <luto@...nel.org> wrote:
>
>> I'm adding power management to the nvme driver, and I'm exposing
>> exactly one knob via sysfs: the maximum permissible latency. This
>> isn't a power domain issue, and it has no dependencies -- it's
>> literally just the maximum latency that the driver may impose on I/O
>> for power saving purposes.
>
> Why is this in the driver? Surely the latency is a property of the
> request queue and the requests being made. Now it may well be that it's
> implemented as min(list-of-queues), but a device sysfs node seems a
> strange place to stick it.
>
I'm not sure what you mean. The whole device can be programmed to
take a nap when fully idle. The driver can limit how deep that nap is
and thus how long the next request can be delayed while the device
wakes back up.
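
To make this concrete, the generic plumbing already exists: dev_pm_qos
can expose a per-device latency tolerance file in sysfs and call back
into the driver whenever userspace changes it.  A minimal sketch of how
a driver might wire that up -- the "foo" names are placeholders, not
the actual nvme code:

#include <linux/device.h>
#include <linux/pm_qos.h>

/* Hypothetical driver state; "foo" is illustrative throughout. */
struct foo_ctrl {
	s32 max_wakeup_latency_us;	/* negative == no limit */
};

static void foo_reprogram_idle_states(struct foo_ctrl *ctrl)
{
	/* Pick the deepest nap whose wakeup time fits the limit. */
}

static void foo_set_latency_tolerance(struct device *dev, s32 val)
{
	struct foo_ctrl *ctrl = dev_get_drvdata(dev);

	/*
	 * val is in microseconds.  Both special values mean userspace
	 * doesn't care, so the deepest nap is fair game.
	 */
	if (val == PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT ||
	    val == PM_QOS_LATENCY_ANY)
		ctrl->max_wakeup_latency_us = -1;
	else
		ctrl->max_wakeup_latency_us = val;

	foo_reprogram_idle_states(ctrl);
}

static int foo_probe(struct device *dev)
{
	/* ...normal controller setup... */
	dev->power.set_latency_tolerance = foo_set_latency_tolerance;

	/* Creates power/pm_qos_latency_tolerance_us for this device. */
	return dev_pm_qos_expose_latency_tolerance(dev);
}
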
Unlike the very small number of in-tree users of this type of
mechanism that I can find, this one involves no buses or hard
tolerances. The only effect is a power consumption vs. I/O performance
tradeoff.
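
Concretely, the only thing userspace would see is a single file.  With
the dev_pm_qos hookup sketched above, usage would look something like
this (the sysfs path is illustrative):

  # allow the device to add up to 100 microseconds of wakeup latency
  echo 100 > /sys/class/nvme/nvme0/power/pm_qos_latency_tolerance_us

  # remove the constraint entirely
  echo any > /sys/class/nvme/nvme0/power/pm_qos_latency_tolerance_us
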
--Andy