Message-ID: <20190513143741.GA25500@lst.de>
Date:   Mon, 13 May 2019 16:37:41 +0200
From:   Christoph Hellwig <hch@....de>
To:     Mario.Limonciello@...l.com
Cc:     keith.busch@...el.com, hch@....de, sagi@...mberg.me,
        linux-nvme@...ts.infradead.org, rafael@...nel.org,
        linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
        kai.heng.feng@...onical.com
Subject: Re: [PATCH] nvme/pci: Use host managed power state for suspend

On Mon, May 13, 2019 at 02:24:41PM +0000, Mario.Limonciello@...l.com wrote:
> I've received a report from one of my partners that this patch doesn't
> work properly and the platform doesn't go into a lower power state.

Well, it sounds like your partner's device does not work properly in
this case.  There is nothing in the NVMe spec that says queues should be
torn down for deep power states, and that whole idea seems rather
counterproductive for low-latency suspend/resume cycles.
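
To make that concrete, the flow being discussed looks roughly like the
sketch below (illustrative only - nvme_set_features() and the npss
field follow the NVMe core, but treat the details as a sketch, not the
actual patch):

#include "nvme.h"	/* struct nvme_ctrl, nvme_set_features() */

/*
 * Illustrative sketch: on suspend, keep the queues configured and ask
 * the device for its deepest non-operational power state via the Power
 * Management feature (FID 0x02), so resume is just another Set
 * Features back to the operational state.
 */
static int nvme_deep_state_suspend(struct nvme_ctrl *ctrl)
{
	/* npss: index of the deepest power state from Identify Controller */
	u32 ps = ctrl->npss;

	/* the selected power state goes in Dword 11 bits 4:0 */
	return nvme_set_features(ctrl, NVME_FEAT_POWER_MGMT, ps,
				 NULL, 0, NULL);
}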

> This was not a disk with HMB, but with regard to the HMB I believe it
> needs to be removed during s0ix so that there is no chance the SSD
> thinks it can access HMB memory in s0ix.

There is no mistake - the device is allowed to use the HMB from the
point that we give it the memory range until the point where we either
disable it or shut the controller down.  If something else requires that
the device not use the HMB after ->suspend is called, we need to disable
the HMB, and we had better have a good reason for that and document it
in the code.  Note that shutting down queues or issuing CPU memory
barriers is not going to help with any of that.
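
If something does end up requiring that, the shape of it would be an
explicit disable before entering the low power state, roughly like the
sketch below (nvme_quiesce_hmb() is a made-up name, and
nvme_set_host_mem() is modeled on the helper in
drivers/nvme/host/pci.c - consider the exact call illustrative):

/*
 * Illustrative sketch: clearing the enable bit via Set Features (Host
 * Memory Buffer) tells the controller it must stop using the buffers
 * we handed it, while the host-side allocation stays around so the
 * HMB can be re-enabled on resume.
 */
static int nvme_quiesce_hmb(struct nvme_dev *dev)
{
	return nvme_set_host_mem(dev, 0);
}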
