Message-ID: <20210514130247.GA1002214@nvidia.com>
Date:   Fri, 14 May 2021 10:02:47 -0300
From:   Jason Gunthorpe <jgg@...dia.com>
To:     Dennis Dalessandro <dennis.dalessandro@...nelisnetworks.com>
Cc:     Leon Romanovsky <leon@...nel.org>,
        "Marciniszyn, Mike" <mike.marciniszyn@...nelisnetworks.com>,
        Doug Ledford <dledford@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>
Subject: Re: [PATCH rdma-next] RDMA/rdmavt: Decouple QP and SGE lists
 allocations

On Thu, May 13, 2021 at 03:31:48PM -0400, Dennis Dalessandro wrote:
> On 5/13/21 3:15 PM, Jason Gunthorpe wrote:
> > On Thu, May 13, 2021 at 03:03:43PM -0400, Dennis Dalessandro wrote:
> > > On 5/12/21 8:50 AM, Leon Romanovsky wrote:
> > > > On Wed, May 12, 2021 at 12:25:15PM +0000, Marciniszyn, Mike wrote:
> > > > > > > Thanks Leon, we'll get this put through our testing.
> > > > > > 
> > > > > > Thanks a lot.
> > > > > > 
> > > > > > > 
> > > > > 
> > > > > The patch as is passed all our functional testing.
> > > > 
> > > > Thanks Mike,
> > > > 
> > > > Can I ask you to perform a performance comparison between this patch and
> > > > the following?
> > > 
> > > We have years of performance data with the code the way it is. Please
> > > maintain the original functionality of the code when moving things into the
> > > core unless there is a compelling reason to change. That is not the case
> > > here.
> > 
> > Well, making the core do node allocations for metadata on every driver
> > is a pretty big thing to ask for with no data.
> 
> Can't you just make the call into the core take a flag for this? You are
> looking to make a change to key behavior without any clear reason that I can
> see for why it needs to be that way. If there is a good reason, please
> explain so we can understand.

The lifetime model of all this data is messed up, there are a bunch of
little bugs on the error paths, and we can't have a proper refcounted
lifetime model, which is what this code really wants.

IMHO if hfi1 has a performance need here it should chain a
sub-allocation, since promoting node awareness to the core code
doesn't look nice.

These are not supposed to be performance sensitive data structures,
they haven't even been organized for cache locality or anything.

> I would think the person authoring the patch should be responsible to prove
> their patch doesn't cause a regression.

I'm more interested in this argument as it applies to functional
regressions. Performance is always shifting around, and a win for a
node-specific allocation seems highly situational to me. I half wonder
if all the node allocation in this driver is just some copy and
paste.

Jason
