Message-ID: <433890308.243090207.1640188193310.JavaMail.zimbra@uliege.be>
Date: Wed, 22 Dec 2021 16:49:53 +0100 (CET)
From: Justin Iurman <justin.iurman@...ege.be>
To: Vladimir Oltean <olteanv@...il.com>
Cc: Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
davem@...emloft.net, dsahern@...nel.org, yoshfuji@...ux-ipv6.org,
linux-mm@...ck.org, cl@...ux.com, penberg@...nel.org,
rientjes@...gle.com, Joonsoo Kim <iamjoonsoo.kim@....com>,
akpm@...ux-foundation.org, vbabka@...e.cz,
Roopa Prabhu <roopa@...dia.com>,
Nikolay Aleksandrov <nikolay@...dia.com>,
Andrew Lunn <andrew@...n.ch>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Florian Fainelli <f.fainelli@...il.com>,
Florian Westphal <fw@...len.de>,
Paolo Abeni <pabeni@...hat.com>
Subject: Re: [RFC net-next 2/2] ipv6: ioam: Support for Buffer occupancy
data field
On Dec 21, 2021, at 6:23 PM, Vladimir Oltean olteanv@...il.com wrote:
> I know nothing about OAM and therefore did not want to comment, but I
NP, all opinions are more than welcome.
> think the point raised about the metric you propose being irrelevant in
> the context of offloaded data paths is quite important. The "devlink-sb"
> proposal was dismissed very quickly on grounds of requiring sleepable
> context, is that a deal breaker, and if it is, why? Not only offloaded
We can't sleep in the datapath: IOAM data fields are filled in per packet
from the packet-processing (softirq) context, so any API that takes
sleeping locks is off the table there.
> interfaces like switches/routers can report buffer occupancy. Plain NICs
> also have buffer pools, DMA RX/TX rings, MAC FIFOs, etc, that could
> indicate congestion or otherwise high load. Maybe slab information could
Indeed. Is there an API to retrieve such metrics? In any case, that would
probably require (again) a sleepable context.
> be relevant, for lack of a better option, on virtual interfaces, but if
> they're physical, why limit ourselves on reporting that? The IETF draft
> you present says "This field indicates the current status of the
> occupancy of the common buffer pool used by a set of queues." It appears
> to me that we could try to get a reporting that has better granularity
> (per interface, per queue) than just something based on
> skbuff_head_cache. What if someone will need that finer granularity in
> the future.
I think we all agree (Jakub, you, and I) on this point. The question is:
what would a better, generic solution that still makes sense look like,
as opposed to having nothing at all? And is it actually feasible?