Message-ID: <20210514150237.GJ1002214@nvidia.com>
Date: Fri, 14 May 2021 12:02:37 -0300
From: Jason Gunthorpe <jgg@...dia.com>
To: "Marciniszyn, Mike" <mike.marciniszyn@...nelisnetworks.com>
Cc: "Dalessandro, Dennis" <dennis.dalessandro@...nelisnetworks.com>,
Leon Romanovsky <leon@...nel.org>,
Doug Ledford <dledford@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>
Subject: Re: [PATCH rdma-next] RDMA/rdmavt: Decouple QP and SGE lists allocations

On Fri, May 14, 2021 at 03:00:37PM +0000, Marciniszyn, Mike wrote:
> > The core stuff in ib_qp is not performance sensitive and has no obvious node
> > affinity since it relates primarily to simple control stuff.
> >
>
> The current rvt_qp "inherits" from ib_qp, so the fields in the
> "control" stuff are performance critical, especially for receive
> processing, and have historically lived in the same allocation.
This is why I said "core stuff in ib_qp". If drivers are adding
performance-critical fields to their own structs, then it is the
driver's responsibility to handle them.
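
As a rough sketch of the embedding pattern under discussion (simplified,
hypothetical names below, not the actual rdmavt definitions): the driver
QP embeds struct ib_qp as its first member, so core and driver fields
share one node-local allocation, and container_of() recovers the driver
struct from the core pointer.

#include <linux/kernel.h>
#include <linux/slab.h>
#include <rdma/ib_verbs.h>

/* Simplified stand-in for a driver QP that "inherits" from ib_qp. */
struct drv_qp {
	struct ib_qp ibqp;		/* core QP, kept as the first member */
	struct ib_sge *r_sg_list;	/* receive SGE list, hot in rx processing */
	/* ... other receive-path fields ... */
};

static struct drv_qp *drv_qp_alloc(int node)
{
	/* one node-local allocation covers both core and driver state */
	return kzalloc_node(sizeof(struct drv_qp), GFP_KERNEL, node);
}

/* recover the driver QP from the ib_qp pointer the core hands around */
static inline struct drv_qp *to_drv_qp(struct ib_qp *ibqp)
{
	return container_of(ibqp, struct drv_qp, ibqp);
}
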
> I would in no way call these fields "simple control stuff". The
> rvt_qp structure is tuned to optimize receive processing, and NUMA
> locality is part of that tuning.
>
> We could separate out the allocation, but that misses the opportunity
> to promote fields from rvt_qp that may indeed be common into the core.
>
> I know that we use the qpn from ib_qp and there may be other fields
> in the critical path.
I wouldn't worry about 32 bits when tuning for performance.
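
To make that concrete, a rough sketch of the decoupled side (hypothetical
helpers, not the actual patch): the control fields such as the u32 qp_num
stay in the core-facing ib_qp, while the driver allocates its hot receive
state, including the SGE list, separately on whatever node it prefers.

#include <linux/slab.h>
#include <rdma/ib_verbs.h>

/* Hypothetical driver-private receive state kept out of the core QP
 * allocation so it can be placed on a preferred NUMA node.
 */
struct drv_rq {
	struct ib_sge *sg_list;		/* receive SGE list */
	u32 size;			/* number of entries */
};

static struct drv_rq *drv_rq_alloc(u32 nsge, int node)
{
	struct drv_rq *rq;

	rq = kzalloc_node(sizeof(*rq), GFP_KERNEL, node);
	if (!rq)
		return NULL;

	rq->sg_list = kcalloc_node(nsge, sizeof(*rq->sg_list),
				   GFP_KERNEL, node);
	if (!rq->sg_list) {
		kfree(rq);
		return NULL;
	}
	rq->size = nsge;
	return rq;
}

/* The "32 bits" in question: the QPN already lives in the core struct. */
static inline u32 drv_qpn(const struct ib_qp *ibqp)
{
	return ibqp->qp_num;
}
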
Jason