Message-ID: <20191210150244.GB12702@apalos.home>
Date: Tue, 10 Dec 2019 17:02:44 +0200
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Saeed Mahameed <saeedm@...lanox.com>
Cc: "brouer@...hat.com" <brouer@...hat.com>,
"jonathan.lemon@...il.com" <jonathan.lemon@...il.com>,
"linyunsheng@...wei.com" <linyunsheng@...wei.com>,
Li Rongqing <lirongqing@...du.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH][v2] page_pool: handle page recycle for NUMA_NO_NODE condition
Hi Saeed,
> >
> > The patch description doesn't explain the problem very well.
> >
> > Let's first establish what the problem is. After I took a closer
> > look, I do think we have a real problem here...
> >
> > If function alloc_pages_node() is called with NUMA_NO_NODE (see below
> > signature), then the nid is re-assigned to numa_mem_id().
> >
> > Our current code checks: page_to_nid(page) == pool->p.nid which seems
> > bogus, as pool->p.nid=NUMA_NO_NODE while the page's NID will never be
> > NUMA_NO_NODE... as it was set to the locally detected numa node, right?
> >
>
> right.
>
> > So, we do need a fix... but the question is what semantics do we
> > want?
> >
>
> maybe assume that __page_pool_recycle_direct() is always called from
> the right node and change the current bogus check:
Is this a typo? pool_page_reusable() is called from __page_pool_put_page(),
which page_pool_put_page() and page_pool_recycle_direct() (no leading
underscores) call in turn.
Can we guarantee that those will always run on the correct cpu?
In the current code base, as long as they are only called under NAPI, this
might be true. With the page_pool skb recycling patches though (yes, we'll
eventually send those :)) this is called from kfree_skb().
I don't think we can get such a guarantee there, right?
Regards
/Ilias