Message-ID: <20191219143535.6c7bc880@carbon>
Date: Thu, 19 Dec 2019 14:35:35 +0100
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Michal Hocko <mhocko@...nel.org>
Cc: netdev@...r.kernel.org, lirongqing@...du.com,
linyunsheng@...wei.com,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Saeed Mahameed <saeedm@...lanox.com>, peterz@...radead.org,
linux-kernel@...r.kernel.org, brouer@...hat.com
Subject: Re: [net-next v4 PATCH] page_pool: handle page recycle for
NUMA_NO_NODE condition
On Thu, 19 Dec 2019 13:09:25 +0100
Michal Hocko <mhocko@...nel.org> wrote:
> On Wed 18-12-19 09:01:35, Jesper Dangaard Brouer wrote:
> [...]
> > For the NUMA_NO_NODE case, when a NIC IRQ is moved to another NUMA
> > node, the ptr_ring will be emptied in chunks of 65
> > (PP_ALLOC_CACHE_REFILL+1) pages per allocation, and allocations fall
> > through to the real page-allocator with the new nid derived from
> > numa_mem_id(). We accept that transitioning the alloc cache doesn't
> > happen immediately.
Oh, I just realized that drivers usually refill several RX packet-pages
at once, which means this refill path is called N times, so during a
NUMA change this will result in N * 65 pages being returned.
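To make that concrete, here is a rough sketch of the refill path
(simplified; the helper name, gfp handling and locking are illustrative,
not the exact in-tree code):

  /* Sketch only: simplified refill path. On an empty alloc cache, up to
   * PP_ALLOC_CACHE_REFILL pages are moved from the ptr_ring into the
   * cache, and one more page is handed to the caller -- 65 pages per
   * refill event. If the ring is empty, we fall through to the real
   * page allocator with the nid from numa_mem_id().
   */
  static struct page *__page_pool_refill_sketch(struct page_pool *pool,
                                                gfp_t gfp)
  {
          struct page *page;
          int i;

          for (i = 0; i < PP_ALLOC_CACHE_REFILL; i++) {
                  page = ptr_ring_consume(&pool->ring);
                  if (!page)
                          break;
                  pool->alloc.cache[pool->alloc.count++] = page;
          }

          if (pool->alloc.count)
                  return pool->alloc.cache[--pool->alloc.count];

          /* NUMA_NO_NODE case: allocate on the local node */
          return alloc_pages_node(numa_mem_id(), gfp, pool->p.order);
  }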
> Could you explain what the expected semantics of NUMA_NO_NODE are in
> this case? Does it always imply the preferred locality? See my other
> email[1] on this matter.
I do think we want NUMA_NO_NODE to mean preferred locality. My code
allows the page to come from a remote NUMA node, but once it is returned
via the ptr_ring, we return pages that do not belong to the local NUMA
node (determined by the CPU processing RX packets from the driver's
RX-ring).
> [1] http://lkml.kernel.org/r/20191219115338.GC26945@dhcp22.suse.cz
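A rough sketch of that recycle-time check (simplified; DMA unmap and the
exact helpers are left out):

  /* Sketch only: when a page comes back for recycling, only keep it if
   * it matches the node we prefer. For NUMA_NO_NODE that is the local
   * node of the CPU processing RX packets (numa_mem_id()); a remote
   * page is released back to the page allocator instead.
   */
  static bool page_pool_node_ok_sketch(struct page_pool *pool,
                                       struct page *page)
  {
          if (pool->p.nid != NUMA_NO_NODE)
                  return page_to_nid(page) == pool->p.nid;

          return page_to_nid(page) == numa_mem_id();
  }

  /* recycle-side caller, heavily simplified: */
  if (page_pool_node_ok_sketch(pool, page) &&
      ptr_ring_produce(&pool->ring, page) == 0) {
          /* recycled into the pool */
  } else {
          put_page(page); /* give it back to the page allocator */
  }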
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer