Date:   Tue, 10 Dec 2019 22:10:32 +0200
From:   Ilias Apalodimas <ilias.apalodimas@...aro.org>
To:     Saeed Mahameed <saeedm@....mellanox.co.il>
Cc:     Saeed Mahameed <saeedm@...lanox.com>,
        "brouer@...hat.com" <brouer@...hat.com>,
        "jonathan.lemon@...il.com" <jonathan.lemon@...il.com>,
        "linyunsheng@...wei.com" <linyunsheng@...wei.com>,
        Li Rongqing <lirongqing@...du.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH][v2] page_pool: handle page recycle for NUMA_NO_NODE
 condition

Hi Saeed,

On Tue, Dec 10, 2019 at 12:02:12PM -0800, Saeed Mahameed wrote:
> On Tue, Dec 10, 2019 at 7:02 AM Ilias Apalodimas
> <ilias.apalodimas@...aro.org> wrote:
> >
> > Hi Saeed,
> >
> > > >
> > > > The patch description doesn't explain the problem very well.
> > > >
> > > > Let's first establish what the problem is.  After I took a closer
> > > > look, I do think we have a real problem here...
> > > >
> > > > If function alloc_pages_node() is called with NUMA_NO_NODE (see below
> > > > signature), then the nid is re-assigned to numa_mem_id().
> > > >
> > > > Our current code checks: page_to_nid(page) == pool->p.nid, which seems
> > > > bogus, as pool->p.nid is NUMA_NO_NODE and the page NID will not return
> > > > NUMA_NO_NODE... as it was set to the locally detected NUMA node, right?
> > > >
> > >
> > > right.
> > >
> > > > So, we do need a fix... but the question is what semantics do we
> > > > want?
> > > >
> > >
> > > maybe assume that __page_pool_recycle_direct() is always called from
> > > the right node and change the current bogus check:
> >
> > Is this a typo? pool_page_reusable() is called from __page_pool_put_page().
> >
> > page_pool_put_page() and page_pool_recycle_direct() (no leading underscores) call that.
> 
> Yes, a typo :), thanks for the correction.
> 
> > Can we guarantee that those will always run from the correct cpu?
> No, but we add a tool to correct any discrepancy: page_pool_nid_changed()
> 
> > In the current code base, if they are only called under NAPI, this might be true.
> > With the page_pool skb recycling patches though (yes, we'll eventually send those
> > :)) this is called from kfree_skb().
> > I don't think we can get such a guarantee there, right?
> >
> 
> Yes, but this has nothing to do with page recycling at the pool owner's
> level (driver NAPI).
> For SKB recycling we can use pool.nid to recycle, and not numa_mem_id().

Right, I responded to an email without the proper context!
Let me try again. You suggested changing the check
from page_to_nid(page) == pool->p.nid to page_to_nid(page) == numa_mem_id().
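
Just to make sure I read the suggestion right, that would make
pool_page_reusable() look roughly like this (untested sketch on my side):

static bool pool_page_reusable(struct page_pool *pool, struct page *page)
{
	/* old check was page_to_nid(page) == pool->p.nid, which is bogus
	 * when the pool was created with pool->p.nid == NUMA_NO_NODE
	 */
	return !page_is_pfmemalloc(page) &&
	       page_to_nid(page) == numa_mem_id();
}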

Since the skb recycling code is using page_pool_put_page(), won't that break the
recycling for that patchset?
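
To spell out the case I have in mind (hand-wavy, since that series is not
posted yet, so the call path below is only an assumption on my side):

kfree_skb()                              /* may run on any CPU/node */
  -> page_pool_put_page(pool, page, ...)
       -> __page_pool_put_page()
            -> pool_page_reusable()
                 /* page_to_nid(page) == numa_mem_id() can fail here even
                  * when the page does sit on the pool's node, simply
                  * because the freeing CPU is on a different node.
                  */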

Thanks
/Ilias
