Message-ID: <15be326d-1811-329c-424c-6dd22b0604a8@huawei.com>
Date:   Mon, 16 Dec 2019 09:51:29 +0800
From:   Yunsheng Lin <linyunsheng@...wei.com>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
CC:     "Li,Rongqing" <lirongqing@...du.com>,
        Saeed Mahameed <saeedm@...lanox.com>,
        "ilias.apalodimas@...aro.org" <ilias.apalodimas@...aro.org>,
        "jonathan.lemon@...il.com" <jonathan.lemon@...il.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "mhocko@...nel.org" <mhocko@...nel.org>,
        "peterz@...radead.org" <peterz@...radead.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        "bhelgaas@...gle.com" <bhelgaas@...gle.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Björn Töpel <bjorn.topel@...el.com>
Subject: Re: [PATCH][v2] page_pool: handle page recycle for NUMA_NO_NODE
 condition

On 2019/12/13 16:48, Jesper Dangaard Brouer wrote:
> You are basically saying that the NUMA check should be moved to
> allocation time, as it is running on the RX-CPU (NAPI).  And eventually,
> after some time, the pages will come from the correct NUMA node.
> 
> I think we can do that, and only affect the semi-fast-path.
> We just need to handle that recycled pages in the ptr_ring can be from
> the wrong NUMA node.  In __page_pool_get_cached(), when consuming pages
> from the ptr_ring (__ptr_ring_consume_batched), we can then evict pages
> from the wrong NUMA node.

Yes, that's workable.
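Something like the below untested sketch is what I understand, i.e. do
the node check when refilling pool->alloc.cache from the ptr_ring, and
evict mismatched pages back to the page allocator.  The function and
helper names here (page_pool_refill_alloc_cache(),
page_pool_return_page()) are only illustrative:

#include <net/page_pool.h>

/* Untested sketch: refill pool->alloc.cache from the ptr_ring, but
 * only with pages matching the preferred NUMA node.
 * page_pool_return_page() is an assumed name for the release helper.
 */
static struct page *page_pool_refill_alloc_cache(struct page_pool *pool)
{
	struct ptr_ring *r = &pool->ring;
	struct page *page;
	int pref_nid;

	/* In softirq the CPU, and thus the NUMA node, is stable, so
	 * NUMA_NO_NODE can be resolved to the local node at refill time.
	 */
	pref_nid = (pool->p.nid == NUMA_NO_NODE) ? numa_mem_id()
						 : pool->p.nid;

	spin_lock(&r->consumer_lock);

	/* Refill alloc.cache, but only on NUMA match */
	do {
		page = __ptr_ring_consume(r);
		if (unlikely(!page))
			break;

		if (likely(page_to_nid(page) == pref_nid)) {
			pool->alloc.cache[pool->alloc.count++] = page;
		} else {
			/* Wrong node: evict to the page allocator and
			 * stop, falling through to a fresh allocation.
			 */
			page_pool_return_page(pool, page);
			page = NULL;
			break;
		}
	} while (pool->alloc.count < PP_ALLOC_CACHE_REFILL);

	/* Hand back the most recently cached page, if any */
	if (likely(pool->alloc.count > 0))
		page = pool->alloc.cache[--pool->alloc.count];

	spin_unlock(&r->consumer_lock);
	return page;
}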

> 
> For the pool->alloc.cache, we either accept that it will eventually be
> emptied after some time (it is only in a 100% XDP_DROP workload that it
> will continue to reuse the same pages), or we simply clear the
> pool->alloc.cache when calling page_pool_update_nid().

Simply clearing the pool->alloc.cache when calling page_pool_update_nid()
seems better.
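Roughly like the below untested sketch: flush alloc.cache back to the
page allocator and let the refill path check the new node (again
assuming page_pool_return_page() as the release helper; the caller must
be serialized with the NAPI context that owns alloc.cache):

/* Untested sketch: flush the lockless alloc.cache when the preferred
 * node changes, so stale wrong-node pages are not recycled forever.
 */
void page_pool_update_nid(struct page_pool *pool, int new_nid)
{
	struct page *page;

	pool->p.nid = new_nid;

	/* Drain alloc.cache; subsequent refills from the ptr_ring
	 * will check against the new NUMA node.
	 */
	while (pool->alloc.count) {
		page = pool->alloc.cache[--pool->alloc.count];
		page_pool_return_page(pool, page);
	}
}

That way only page_pool_update_nid() pays the cost, and the fast path
stays untouched.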

