Date:   Thu, 22 Jul 2021 10:05:18 +0300
From:   Leon Romanovsky <leon@...nel.org>
To:     Dennis Dalessandro <dennis.dalessandro@...nelisnetworks.com>
Cc:     Gal Pressman <galpress@...zon.com>,
        Doug Ledford <dledford@...hat.com>,
        Jason Gunthorpe <jgg@...dia.com>,
        Adit Ranadive <aditr@...are.com>,
        Ariel Elior <aelior@...vell.com>,
        Bernard Metzler <bmt@...ich.ibm.com>,
        Christian Benvenuti <benve@...co.com>,
        linux-kernel@...r.kernel.org, linux-rdma@...r.kernel.org,
        Michal Kalderon <mkalderon@...vell.com>,
        Mike Marciniszyn <mike.marciniszyn@...nelisnetworks.com>,
        Mustafa Ismail <mustafa.ismail@...el.com>,
        Naresh Kumar PBS <nareshkumar.pbs@...adcom.com>,
        Nelson Escobar <neescoba@...co.com>,
        Potnuri Bharat Teja <bharat@...lsio.com>,
        Selvin Xavier <selvin.xavier@...adcom.com>,
        Shiraz Saleem <shiraz.saleem@...el.com>,
        Steve Wise <larrystevenwise@...il.com>,
        VMware PV-Drivers <pv-drivers@...are.com>,
        Weihang Li <liweihang@...wei.com>,
        Wenpeng Liang <liangwenpeng@...wei.com>,
        Yishai Hadas <yishaih@...dia.com>,
        Zhu Yanjun <zyjzyj2000@...il.com>
Subject: Re: [PATCH rdma-next 8/9] RDMA: Globally allocate and release QP
 memory

On Wed, Jul 21, 2021 at 02:05:34PM -0400, Dennis Dalessandro wrote:
> On 7/20/21 4:35 AM, Leon Romanovsky wrote:
> > On Mon, Jul 19, 2021 at 04:42:11PM +0300, Gal Pressman wrote:
> >> On 18/07/2021 15:00, Leon Romanovsky wrote:
> >>> From: Leon Romanovsky <leonro@...dia.com>
> >>>
> >>> Convert QP object to follow IB/core general allocation scheme.
> >>> That change allows us to make sure that restrack properly krefs
> >>> the memory.
> >>>
> >>> Signed-off-by: Leon Romanovsky <leonro@...dia.com>
> >>
> >> EFA and core parts look good to me.
> >> Reviewed-by: Gal Pressman <galpress@...zon.com>
> >> Tested-by: Gal Pressman <galpress@...zon.com>
> 
> Leon, I pulled your tree and tested, things look good so far.
> 
> For rdmavt and core:
> Reviewed-by: Dennis Dalessandro <dennis.dalessandro@...nelisnetworks.com>
> Tested-by: Dennis Dalessandro <dennis.dalessandro@...nelisnetworks.com>

Thanks

> 
> 
> > Thanks a lot.
> > 
> >>
> >>> +static inline void *rdma_zalloc_obj(struct ib_device *dev, size_t size,
> >>> +				    gfp_t gfp, bool is_numa_aware)
> >>> +{
> >>> +	if (is_numa_aware && dev->ops.get_numa_node)
> >>
> >> Honestly I think it's better to return an error if a numa aware allocation is
> >> requested and get_numa_node is not provided.
> > 
> > We don't want any driver to use and implement ".get_numa_node()" callback.
> > 
> > Initially, I thought about adding WARN_ON(driver_id != HFI && .get_numa_node)
> > to the device.c, but decided to stay with comment in ib_verbs.h only.
> 
> Maybe you could update that comment and add that it's for performance? This
> way it's clear we are different for a reason. I'd be fine adding a
> WARN_ON_ONCE like you mention here. I don't think we need to fail the call,
> but drawing attention to it would not necessarily be a bad thing. Either
> way, my RB/TB stands.

The thing is that the performance gain was achieved in a very specific,
artificial use case, while we can think of other scenarios where such an
allocation will give different results.

For the performance test, Mike forced the QP and the device to be on different
NUMA nodes, while in reality such a situation arises only when memory on the
local node is depleted, and in that case it is better to have a working system
than to have the node-bound request fail.

I don't want to give the wrong impression that NUMA-node-aware allocations
are better for performance.
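
If we do want to draw attention to it, a hypothetical sketch of the check
Dennis mentions could look roughly like the snippet below; the helper name,
its placement in device.c and the RDMA_DRIVER_HFI1 comparison are illustrative
assumptions, not part of the posted series:

/*
 * Hypothetical: warn once if any driver other than hfi1 wires up the
 * NUMA callback. Name and driver-id check are for illustration only.
 */
static void warn_on_unexpected_numa_cb(struct ib_device *dev)
{
	WARN_ON_ONCE(dev->ops.driver_id != RDMA_DRIVER_HFI1 &&
		     dev->ops.get_numa_node);
}
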

Thanks

> 
> -Denny
> 
> 
