[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20191220160649.GA26788@apalos.home>
Date: Fri, 20 Dec 2019 18:06:49 +0200
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: netdev@...r.kernel.org, lirongqing@...du.com,
linyunsheng@...wei.com, Saeed Mahameed <saeedm@...lanox.com>,
mhocko@...nel.org, peterz@...radead.org,
linux-kernel@...r.kernel.org
Subject: Re: [net-next v5 PATCH] page_pool: handle page recycle for
NUMA_NO_NODE condition
On Fri, Dec 20, 2019 at 04:22:54PM +0100, Jesper Dangaard Brouer wrote:
> On Fri, 20 Dec 2019 12:49:37 +0200
> Ilias Apalodimas <ilias.apalodimas@...aro.org> wrote:
>
> > On Fri, Dec 20, 2019 at 11:41:16AM +0100, Jesper Dangaard Brouer wrote:
> > > On Fri, 20 Dec 2019 12:23:14 +0200
> > > Ilias Apalodimas <ilias.apalodimas@...aro.org> wrote:
> > >
> > > > Hi Jesper,
> > > >
> > > > I like the overall approach since this moves the check out of the hotpath.
> > > > @Saeed, since i got no hardware to test this on, would it be possible to check
> > > > that it still works fine for mlx5?
> > > >
> > > > [...]
> > > > > + struct ptr_ring *r = &pool->ring;
> > > > > + struct page *page;
> > > > > + int pref_nid; /* preferred NUMA node */
> > > > > +
> > > > > + /* Quicker fallback, avoid locks when ring is empty */
> > > > > + if (__ptr_ring_empty(r))
> > > > > + return NULL;
> > > > > +
> > > > > + /* Softirq guarantee CPU and thus NUMA node is stable. This,
> > > > > + * assumes CPU refilling driver RX-ring will also run RX-NAPI.
> > > > > + */
> > > > > + pref_nid = (pool->p.nid == NUMA_NO_NODE) ? numa_mem_id() : pool->p.nid;
> > > >
> > > > One of the use cases for this is that during the allocation we are not
> > > > guaranteed to pick up the correct NUMA node.
> > > > This will get automatically fixed once the driver starts recycling packets.
> > > >
> > > > I don't feel strongly about this, since i don't usually like hiding value
> > > > changes from the user but, would it make sense to move this into
> > > > __page_pool_alloc_pages_slow() and change the pool->p.nid?
> > > >
> > > > Since alloc_pages_node() will replace NUMA_NO_NODE with numa_mem_id()
> > > > regardless, why not store the actual node in our page pool information?
> > > > You can then skip this and check pool->p.nid == numa_mem_id(), regardless of
> > > > what's configured.
> > >
> > > This single code line helps support that drivers can control the nid
> > > themselves. This is a feature that is only used my mlx5 AFAIK.
> > >
> > > I do think that is useful to allow the driver to "control" the nid, as
> > > pinning/preferring the pages to come from the NUMA node that matches
> > > the PCI-e controller hardware is installed in does have benefits.
> >
> > Sure you can keep the if statement as-is, it won't break anything.
> > Would we want to store the actual numa id in pool->p.nid if the user
> > selects 'NUMA_NO_NODE'?
>
> No. pool->p.nid should stay as NUMA_NO_NODE, because that makes it
> dynamic. If someone moves an RX IRQ to another CPU on another NUMA
> node, then this 'NUMA_NO_NODE' setting makes pages transitioned
> automatically.
Ok this assumed that drivers were going to use page_pool_nid_changed(), but with
the current code we don't have to force them to do that. Let's keep this as-is.
I'll be running a few more tests and wait in case Saeed gets a chance to test
it and send my reviewed-by
Cheers
/Ilias
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
>
Powered by blists - more mailing lists