Message-ID: <20220520161627.6d587791@kernel.org>
Date: Fri, 20 May 2022 16:16:27 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Pablo Neira Ayuso <pablo@...filter.org>
Cc: netfilter-devel@...r.kernel.org, davem@...emloft.net,
netdev@...r.kernel.org, pabeni@...hat.com,
Felix Fietkau <nbd@....name>, Oz Shlomo <ozsh@...dia.com>,
paulb@...dia.com, vladbu@...dia.com
Subject: Re: [PATCH net-next 06/11] netfilter: nf_flow_table: count and
limit hw offloaded entries
On Sat, 21 May 2022 00:17:32 +0200 Pablo Neira Ayuso wrote:
> Policy can also throttle down the maximum number of entries in the
> hardware, but policy is complementary to the hard cap.
>
> Once the hw cap is reached, the implementation falls back to the
> software flowtable datapath.
Understood.
> Regarding the "magic number", it would be good if devices can expose
> these properties through interface, maybe FLOW_BLOCK_PROBE to fetch
> device properties and capabilities.
Fingers crossed, however, if the device is multi-user, getting an exact
cap may be pretty much impossible. Then again, the user is supposed to
be able to pull the cap for sysfs out of a hat, so I'm confused.
What I was thinking of was pausing offload requests for a jiffy if we
get ENOSPC 3 times in a row, or some such.
> In general, I would also prefer a netlink interface for this, but for
> tc ct, this would need to expose the existing flowtable objects via a
> new netlink command. Then, I assume such cap would be per ct zone
> (there is internally one flowtable per conntrack zone).
>
> BTW, Cc'ing Oz, Paul and Vlad.
Ah, thanks, I added Felix just in case but didn't check whether the
authors were already on CC :S
> Meanwhile, what do you want me to do, toss this patchset?
Yeah, if you don't mind... We're too close to the merge window to
tentatively take stuff that's still under discussion.