Message-ID: <20191209093014.GA25360@apalos.home>
Date: Mon, 9 Dec 2019 11:30:14 +0200
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: "Li,Rongqing" <lirongqing@...du.com>
Cc: Yunsheng Lin <linyunsheng@...wei.com>,
Saeed Mahameed <saeedm@...lanox.com>,
"jonathan.lemon@...il.com" <jonathan.lemon@...il.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"brouer@...hat.com" <brouer@...hat.com>,
"ivan.khoronzhuk@...aro.org" <ivan.khoronzhuk@...aro.org>,
"grygorii.strashko@...com" <grygorii.strashko@...com>
Subject: Re: [PATCH][v2] page_pool: handle page recycle for NUMA_NO_NODE condition

On Mon, Dec 09, 2019 at 03:47:50AM +0000, Li,Rongqing wrote:
> > >
[...]
> > > Cc'ed Jesper, Ilias & Jonathan.
> > >
> > > I don't think it is correct to check that the page nid is the same as
> > > numa_mem_id() when the pool is NUMA_NO_NODE. In such a case we should
> > > allow all pages to recycle, because you can't assume where pages are
> > > allocated from and where they are being handled.
> > >
> > > I suggest the following:
> > >
> > > return !page_is_pfmemalloc(page) &&
> > >        (page_to_nid(page) == pool->p.nid || pool->p.nid == NUMA_NO_NODE);
> > >
> > > 1) never recycle emergency pages, regardless of pool nid.
> > > 2) always recycle if pool is NUMA_NO_NODE.
> >
> > As far as I can see, below are the cases where pool->p.nid could be
> > NUMA_NO_NODE:
> >
> > 1. kernel built with CONFIG_NUMA off.
> >
> > 2. kernel built with CONFIG_NUMA on, but the FW/BIOS does not provide
> > a valid node id through ACPI/DT; this has the below sub-cases:
> >
> > a). the hardware itself is a single numa node system, so maybe it is valid
> > to not provide a valid node for the device.
> > b). the hardware itself is a multi numa node system, and the FW/BIOS is
> > broken in that it does not provide a valid one.
> >
> > 3. kernel built with CONFIG_NUMA on, and the FW/BIOS does provide a
> > valid node id through ACPI/DT, but the driver does not pass the valid
> > node id when calling page_pool_init().
> >
> > I am not sure which case this patch is trying to fix, maybe Rongqing can help to
> > clarify.
> >
> > For case 1 and case 2 (a), I suppose checking pool->p.nid being
> > NUMA_NO_NODE is enough.
> >
> > For case 2 (b), there may be an argument that it should be fixed in the BIOS/FW,
> > but it can also be argued that the numa_mem_id() check is already done in
> > drivers that are not using the page pool yet when deciding whether to do
> > recycling, see [1]. If I understand correctly, recycling is normally done in
> > NAPI polling, which has the same affinity as the rx interrupt, and the rx
> > interrupt affinity is not changed very often. If it does change to a different
> > memory node, maybe it makes sense not to recycle old pages belonging to the
> > old node?
> >
> >
> > For case 3, I am not sure if any driver is doing that, or if the page pool API
> > even allows that?
> >
>
> I think pool_page_reusable should support NUMA_NO_NODE no matter which case applies.
>
Yes
>
> And recycling is normally done in NAPI polling; NUMA_NO_NODE is a hint to use the
> local node, unless memory usage is unbalanced, so I added the check that the page
> nid is the same as numa_mem_id() when the pool is NUMA_NO_NODE.
I am not sure I am following here.
Whether recycling is done at NAPI time or not should have nothing to do with NUMA.
What do you mean by 'memory usage is unbalanced'?
Thanks
/Ilias