Message-ID: <20240117174722.521c9fdf@kernel.org>
Date: Wed, 17 Jan 2024 17:47:22 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Lorenzo Bianconi <lorenzo@...nel.org>
Cc: Jesper Dangaard Brouer <hawk@...nel.org>, Paolo Abeni
 <pabeni@...hat.com>, Ilias Apalodimas <ilias.apalodimas@...aro.org>,
 netdev@...r.kernel.org, Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
 willemdebruijn.kernel@...il.com, toke@...hat.com, davem@...emloft.net,
 edumazet@...gle.com, bpf@...r.kernel.org, lorenzo.bianconi@...hat.com,
 sdf@...gle.com, jasowang@...hat.com
Subject: Re: [PATCH v5 net-next 1/3] net: introduce page_pool pointer in
 softnet_data percpu struct

On Wed, 17 Jan 2024 18:36:25 +0100 Lorenzo Bianconi wrote:
> I would like to resume this activity, and it seems to me there is no
> clear direction yet about where we should add the page_pool (in a
> per-CPU pointer or in the netdev_rx_queue struct), or whether we can
> rely on page_frag_cache instead.
> 
> @Jakub: what do you think? Should we add a page_pool in a per_cpu pointer?

Let's try to summarize. We want skb reallocation without linearization
for XDP generic. We need some fast-ish way to get pages for the payload.
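
To make the goal concrete, here is a rough sketch (mine, not from the
patch) of the kind of reallocation step we're talking about: copy
payload into a freshly allocated page and attach it as a frag, so XDP
generic never has to linearize. alloc_payload_page() is a placeholder
for whichever allocator we pick below:

	/* Sketch only: attach payload to an skb as a frag instead of
	 * growing the linear area. alloc_payload_page() is a stand-in
	 * for page_pool / page_frag_cache. */
	static int xdpgen_attach_payload(struct sk_buff *skb, void *data,
					 unsigned int len)
	{
		struct page *page = alloc_payload_page();

		if (!page)
			return -ENOMEM;
		memcpy(page_address(page), data, len);
		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
				0, len, PAGE_SIZE);
		return 0;
	}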

First, options for placing the allocator:
 - struct netdev_rx_queue
 - per-CPU

IMO per-CPU has better scaling properties - you're less likely to
increase the CPU count to infinity than to spawn extra netdev queues.
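
If we go per-CPU, a minimal sketch of what the placement could look
like (the field name and init path are my assumptions, not necessarily
what the patch does):

	struct softnet_data {
		/* ... existing members ... */
		struct page_pool *page_pool;	/* payload pages for XDP generic */
	};

	/* One small, "unbound" pool (no .dev, no DMA flags) per possible
	 * CPU, created once at init time (cleanup on error elided): */
	static int __init xdpgen_create_page_pools(void)
	{
		int cpu;

		for_each_possible_cpu(cpu) {
			struct page_pool_params pp_params = {
				.order		= 0,
				.pool_size	= 8,	/* small cache, see ad1 */
				.nid		= cpu_to_node(cpu),
			};
			struct page_pool *pp = page_pool_create(&pp_params);

			if (IS_ERR(pp))
				return PTR_ERR(pp);
			per_cpu(softnet_data, cpu).page_pool = pp;
		}
		return 0;
	}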

The second question is:
 - page_frag_cache
 - page_pool

I like page pool because we have a growing amount of infra for it, and
page pool is already used in veth, whose instance we could hopefully
de-duplicate one day if we have a per-CPU one. But I do agree that
it's not a perfect fit.
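
For reference, the two allocation paths side by side, as I understand
the current APIs (error handling elided):

	/* page_frag_cache: hands out sub-page chunks, amortizing one
	 * (possibly higher-order) page across many frags. Must be
	 * zero-initialized before first use. */
	struct page_frag_cache nc = {};
	void *frag = page_frag_alloc(&nc, fragsz, GFP_ATOMIC);

	/* page_pool: hands out whole pages (or sub-page frags via the
	 * _frag helper), recycled through the pool's cache. */
	struct page *page = page_pool_dev_alloc_pages(pool);
	/* or: page_pool_dev_alloc_frag(pool, &offset, fragsz); */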

To answer Jesper's questions:
 ad1. cache size - we can lower the cache to match page_frag_cache,
      so I think 8 entries? page_frag_cache can give us bigger frags
      and therefore a lower frag count, so that's a minus for using
      page pool
 ad2. nl API - we can extend netlink to dump unbound page pools fairly
      easily, I didn't want to do it without a clear use case, but I
      don't think there are any blockers
 ad3. locking - a bit independent of the allocator, but fair point; we
      assume XDP generic or the Rx path for now, so softirq context /
      BHs locked out (see the sketch after this list)
 ad4. right, well, I don't know what real workloads need, or whether
      XDP generic should be optimized at all... I personally lean
      towards "no"
 
Sorry if I haven't helped much to clarify the direction :)
I have no strong preference on question #2; on question #1 I would
prefer not to add per-queue state for something that's in no way tied
to the device, so per-CPU.

You did a good perf analysis of the options - could you share it here
again?
