Date:   Fri, 20 Dec 2019 11:41:16 +0100
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Ilias Apalodimas <ilias.apalodimas@...aro.org>
Cc:     netdev@...r.kernel.org, lirongqing@...du.com,
        linyunsheng@...wei.com, Saeed Mahameed <saeedm@...lanox.com>,
        mhocko@...nel.org, peterz@...radead.org,
        linux-kernel@...r.kernel.org, brouer@...hat.com
Subject: Re: [net-next v5 PATCH] page_pool: handle page recycle for
 NUMA_NO_NODE condition

On Fri, 20 Dec 2019 12:23:14 +0200
Ilias Apalodimas <ilias.apalodimas@...aro.org> wrote:

> Hi Jesper, 
> 
> I like the overall approach, since this moves the check out of the hotpath.
> @Saeed, since I have no hardware to test this on, would it be possible to
> check that it still works fine for mlx5?
> 
> [...]
> > +	struct ptr_ring *r = &pool->ring;
> > +	struct page *page;
> > +	int pref_nid; /* preferred NUMA node */
> > +
> > +	/* Quick fallback: avoid taking locks when the ring is empty */
> > +	if (__ptr_ring_empty(r))
> > +		return NULL;
> > +
> > +	/* Softirq context guarantees the CPU, and thus the NUMA node, is
> > +	 * stable. This assumes the CPU refilling the driver RX-ring will
> > +	 * also run RX-NAPI.
> > +	 */
> > +	pref_nid = (pool->p.nid == NUMA_NO_NODE) ? numa_mem_id() : pool->p.nid;  
> 
> One of the use cases for this is that during allocation we are not
> guaranteed to pick the correct NUMA node.
> This will get fixed automatically once the driver starts recycling packets.
> 
> I don't feel strongly about this, since I don't usually like hiding value
> changes from the user, but would it make sense to move this into
> __page_pool_alloc_pages_slow() and change pool->p.nid?
> 
> Since alloc_pages_node() will replace NUMA_NO_NODE with numa_mem_id()
> regardless, why not store the actual node in our page pool information?
> You can then skip this and check pool->p.nid == numa_mem_id(), regardless of
> what's configured. 
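
If I read the suggestion right, it amounts to resolving the node once in
the slow path, roughly like this (an untested sketch against this patch;
the exact placement inside __page_pool_alloc_pages_slow() is illustrative):

	/* Resolve NUMA_NO_NODE once at allocation time, so the recycle
	 * path can compare pool->p.nid to numa_mem_id() directly.
	 */
	if (pool->p.nid == NUMA_NO_NODE)
		pool->p.nid = numa_mem_id();

	page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);

That would remove the NUMA_NO_NODE special case from the refill path
entirely.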

This single line of code is what lets drivers control the nid
themselves.  AFAIK this feature is only used by mlx5.

I do think it is useful to allow the driver to "control" the nid, as
pinning/preferring the pages to come from the NUMA node that the PCIe
controller is installed in does have benefits.
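
For illustration, a driver that wants such pinning would pass the node
when creating the pool, something along these lines (a sketch only;
ring_size and pdev are placeholders, and dev_to_node() is just one way
to pick the node the PCIe device is attached to):

	struct page_pool_params pp_params = {
		.order		= 0,
		.flags		= PP_FLAG_DMA_MAP,
		.pool_size	= ring_size,
		.nid		= dev_to_node(&pdev->dev), /* pin to device node */
		.dev		= &pdev->dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	if (IS_ERR(pool))
		return PTR_ERR(pool);

Passing NUMA_NO_NODE instead keeps the dynamic behaviour from this
patch, where the preferred node follows the CPU running RX-NAPI.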

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
