Message-ID: <CALCETrXLhbiGTmeMDFjYAAwMi2PSK38cy1Z8UEPKxOWJz55wyQ@mail.gmail.com>
Date: Sat, 13 May 2017 05:27:55 -0700
From: Andy Lutomirski <luto@...nel.org>
To: Andy Lutomirski <luto@...nel.org>
Cc: Jens Axboe <axboe@...nel.dk>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Kai-Heng Feng <kai.heng.feng@...onical.com>,
linux-nvme <linux-nvme@...ts.infradead.org>,
Christoph Hellwig <hch@....de>,
Sagi Grimberg <sagi@...mberg.me>,
Keith Busch <keith.busch@...el.com>,
Mario Limonciello <mario_limonciello@...l.com>
Subject: Re: [PATCH] nvme: Change our APST table to be no more aggressive than
Intel RSTe
On Thu, May 11, 2017 at 9:06 PM, Andy Lutomirski <luto@...nel.org> wrote:
> It seems like RSTe is much more conservative with transition timing
> than we are. According to Mario, RSTe programs APST to transition from
> active states to the first idle state after 60 ms and, thereafter,
> after 1000 * the exit latency of the target state.
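For concreteness, the quoted heuristic would translate into an APST
table built roughly like this. This is only a sketch, not the in-tree
nvme driver code; struct ps_info, build_rste_like_apst, and the source
of the exit latencies are all illustrative:

#include <stdint.h>
#include <string.h>

#define NVME_MAX_PS 32

/* Illustrative per-state info; a real driver would read this from the
 * Identify Controller power state descriptors. */
struct ps_info {
	uint32_t exlat_us;		/* exit latency, microseconds */
	int      non_operational;	/* NOPS bit */
};

static void build_rste_like_apst(uint64_t entry[NVME_MAX_PS],
				 const struct ps_info *ps, int npss)
{
	int state, first_idle = -1;

	/* A zero entry means "no autonomous transition from this state". */
	memset(entry, 0, NVME_MAX_PS * sizeof(*entry));

	/* Find the shallowest non-operational (idle) state. */
	for (state = 0; state <= npss; state++) {
		if (ps[state].non_operational) {
			first_idle = state;
			break;
		}
	}
	if (first_idle < 0)
		return;			/* no idle states; leave APST off */

	/* The deepest state (npss) has nowhere to go, so skip it. */
	for (state = 0; state < npss; state++) {
		uint64_t target, itpt_ms;

		if (state < first_idle) {
			/* Operational states: go to the first idle
			 * state after 60 ms of idle time. */
			target = first_idle;
			itpt_ms = 60;
		} else {
			/* Idle states: go one state deeper after
			 * 1000 * the target's exit latency.  EXLAT is
			 * in microseconds, so this works out to
			 * exlat_us milliseconds (no clamping to the
			 * 24-bit ITPT field, for brevity). */
			target = state + 1;
			itpt_ms = 1000ULL * ps[state + 1].exlat_us / 1000;
		}

		/* APST entry layout per the NVMe spec: ITPS (target
		 * power state) in bits 7:3, ITPT (idle time in ms) in
		 * bits 31:8. */
		entry[state] = (target << 3) | (itpt_ms << 8);
	}
}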
Bad news, folks: the RSTe-style timing appears to be merely more stable, not all the way stable:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1678184/comments/65
I maintain my hypothesis that no one ever validated APST on these
disks and that the very conservative parameters set by RSTe merely
make the bug rare to trigger. But maybe something else is going on.