Date:   Thu, 28 Apr 2022 18:09:11 +0200
From:   Thomas Weißschuh <linux@...ssschuh.net>
To:     Christoph Hellwig <hch@....de>
Cc:     Keith Busch <kbusch@...nel.org>, Jens Axboe <axboe@...com>,
        Sagi Grimberg <sagi@...mberg.me>, linux-kernel@...r.kernel.org,
        linux-nvme@...ts.infradead.org
Subject: Re: [PATCH] nvme-pci: fix host memory buffer allocation size

On 2022-04-28 17:06+0200, Christoph Hellwig wrote:
> On Thu, Apr 28, 2022 at 04:44:47PM +0200, Thomas Weißschuh wrote:
> > Is the current code supposed to reach HMPRE? It does not for me.
> > 
> > The code tries to allocate memory for HMPRE in chunks.
> > The best allocation would be to allocate one chunk for all of HMPRE.
> > If this fails, we halve the chunk size on each iteration and try again.
> > 
> > On my hardware we start with a chunk_size of 4 MiB and only ever allocate
> > 8 (hmmaxd) * 4 MiB = 32 MiB, which is worse than 1 * 200 MiB.
> 
> And that is because the hardware only has a limited set of descriptors.

Wouldn't it make more sense, then, to allocate as much memory as possible
for each descriptor that is available?
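
Something like the following is what I have in mind (an untested sketch;
the DIV_ROUND_UP() sizing is my proposal, not existing code):

	/*
	 * Untested sketch: size each chunk so that the hmmaxd descriptors
	 * we may use can cover all of HMPRE, instead of capping every
	 * chunk at the 4 MiB allocation limit. hmmaxd == 0 means the
	 * device does not limit the descriptor count.
	 */
	u64 chunk_size = preferred;

	if (dev->ctrl.hmmaxd)
		chunk_size = DIV_ROUND_UP(preferred, dev->ctrl.hmmaxd);

	/* for hmpre = 200 MiB and hmmaxd = 8 this tries 25 MiB chunks */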

The comment in nvme_alloc_host_mem() says to "start big",
but the code actually starts with at most 4 MiB.
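
For reference, my reading of the sizing logic (paraphrased from
nvme_alloc_host_mem() in drivers/nvme/host/pci.c, so the details may be
slightly off):

	u64 min_chunk = min_t(u64, preferred, PAGE_SIZE * MAX_ORDER_NR_PAGES);
	u64 hmminds = max_t(u32, dev->ctrl.hmminds * 4096, PAGE_SIZE * 2);
	u64 chunk_size;

	/*
	 * "start big": on x86-64 PAGE_SIZE * MAX_ORDER_NR_PAGES is 4 MiB,
	 * so min_chunk is capped at 4 MiB no matter how large HMPRE is.
	 */
	for (chunk_size = min_chunk; chunk_size >= hmminds; chunk_size /= 2) {
		if (!__nvme_alloc_host_mem(dev, preferred, chunk_size))
			return 0;
	}
	return -ENOMEM;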

And on devices whose hmminds is larger than 4 MiB, the loop condition will
never succeed at all, so no HMB is used.
My fairly boring hardware already reports an hmminds of 3.3 MiB.
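
Spelled out with the 4 MiB cap from above (assuming x86-64 defaults,
PAGE_SIZE = 4 KiB and MAX_ORDER = 11; the numbers are mine):

	min_chunk = min(hmpre, 4 MiB) = 4 MiB
	hmminds = 3.3 MiB: 4 MiB >= 3.3 MiB, so one pass with 4 MiB chunks,
	                   at most hmmaxd * 4 MiB = 8 * 4 MiB = 32 MiB
	hmminds > 4 MiB:   4 MiB >= hmminds fails on the first check,
	                   the loop body never runs, no HMB is allocated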

> Is there any real problem you are fixing with this?  Do you actually
> see a performance difference on a relevant workload?

I don't have a concrete problem or performance issue.
During some debugging I stumbled upon
"nvme nvme0: allocated 32 MiB host memory buffer"
in my kernel logs and investigated why the value was so low.
