Message-ID: <20210628231934.GL4459@nvidia.com>
Date:   Mon, 28 Jun 2021 20:19:34 -0300
From:   Jason Gunthorpe <jgg@...dia.com>
To:     "Marciniszyn, Mike" <mike.marciniszyn@...nelisnetworks.com>
Cc:     "Dalessandro, Dennis" <dennis.dalessandro@...nelisnetworks.com>,
        Leon Romanovsky <leon@...nel.org>,
        Doug Ledford <dledford@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
        "Pine, Kevin" <kevin.pine@...nelisnetworks.com>
Subject: Re: [PATCH rdma-next] RDMA/rdmavt: Decouple QP and SGE lists
 allocations

On Mon, Jun 28, 2021 at 09:59:48PM +0000, Marciniszyn, Mike wrote:

> To answer some of the pending questions posed before, the mempolicy
> looks to be a process relative control and does not apply to our QP
> allocation where the struct rvt_qp is in the kernel.

I think mempolicy is per task (i.e. per thread), and it propagates
into kernel allocations made under that task's current.
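For illustration only (the node number and the exact extent to which
kernel-side allocations honor it are my assumptions, not something the
patch shows), a thread can scope this per-task before issuing its verbs
calls, roughly:

  /* build with: gcc -o setpol setpol.c -lnuma */
  #define _GNU_SOURCE
  #include <numaif.h>          /* set_mempolicy(), MPOL_PREFERRED */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
          /* Prefer NUMA node 1 for this task only; node 1 is arbitrary. */
          unsigned long nodemask = 1UL << 1;

          if (set_mempolicy(MPOL_PREFERRED, &nodemask,
                            sizeof(nodemask) * 8)) {
                  perror("set_mempolicy");
                  return EXIT_FAILURE;
          }

          /*
           * Any uverbs ioctl issued from this thread (QP creation etc.)
           * now runs with this task policy in effect for the duration
           * of the syscall.
           */
          return 0;
  }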

> It certainly does not apply to kernel ULPs such as those created by
> say Lustre, ipoib, SRP, iSer, and NFS RDMA.

These don't use uverbs, so a uverbs change is not relevant.
 
> We do support comp_vector stuff, but that distributes completion
> processing.  Completions are triggered in our receive processing but
> to a much lesser extent based on ULP choices and packet type.  From a
> strategy standpoint, the code assumes distribution of kernel receive
> interrupt processing is vectored either by irqbalance or by explicit
> user mode scripting to spread RC QP receive processing across CPUs
> on the local socket.

And there you go, it should be allocating the memory based on the NUMA
affinity of the IRQ that it is going to assign to touch the memory.
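
A sketch of that, assuming the driver already knows which IRQ will
service the QP (the function and parameter names here are made up, this
is not the rdmavt code):

  #include <linux/irq.h>        /* irq_get_affinity_mask() */
  #include <linux/slab.h>       /* kzalloc_node() */
  #include <linux/topology.h>   /* cpu_to_node() */

  static void *alloc_qp_near_irq(unsigned int irq, size_t qp_size)
  {
          const struct cpumask *mask = irq_get_affinity_mask(irq);
          int node = NUMA_NO_NODE;

          if (mask && !cpumask_empty(mask))
                  node = cpu_to_node(cpumask_first(mask));

          /* NUMA_NO_NODE falls back to the normal placement policy. */
          return kzalloc_node(qp_size, GFP_KERNEL, node);
  }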

And the CPU threads that are triggering this should be affine to the
same socket as well, otherwise you just get bouncing in another area.

Overall I think you get the same configuration if you just let the
normal policy stuff do its work, and it might be less fragile to boot.
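
In other words the choice is basically between these two (entirely
hypothetical names, not the actual rvt_qp allocation path):

  #include <linux/device.h>     /* dev_to_node() */
  #include <linux/slab.h>
  #include <rdma/ib_verbs.h>    /* struct ib_device */

  /* Stand-in for the real QP structure; the size doesn't matter here. */
  struct demo_qp { int placeholder; };

  /* Hardwired: always the device's home node, whoever touches it. */
  static struct demo_qp *alloc_pinned(struct ib_device *ibdev)
  {
          return kzalloc_node(sizeof(struct demo_qp), GFP_KERNEL,
                              dev_to_node(&ibdev->dev));
  }

  /* Default policy: the local node of the allocating CPU, or - if the
   * propagation I described above holds - the caller's mempolicy when
   * this runs in its syscall context. */
  static struct demo_qp *alloc_default(void)
  {
          return kzalloc(sizeof(struct demo_qp), GFP_KERNEL);
  }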

I certainly object to the idea that the driver assumes userspace will
never move its IRQs off the local socket, because it has wrongly
hardwired a NUMA locality to the wrong object.

Jason
