Message-ID: <20230216142846.bjura4mf2f64tmcr@skbuf>
Date: Thu, 16 Feb 2023 16:28:46 +0200
From: Vladimir Oltean <vladimir.oltean@....com>
To: Ferenc Fejes <fejes@....elte.hu>
Cc: netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
Paolo Abeni <pabeni@...hat.com>,
Claudiu Manoil <claudiu.manoil@....com>,
Vinicius Costa Gomes <vinicius.gomes@...el.com>,
Kurt Kanzenbach <kurt@...utronix.de>,
Jacob Keller <jacob.e.keller@...el.com>,
Jamal Hadi Salim <jhs@...atatu.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>,
Simon Horman <simon.horman@...igine.com>
Subject: Re: [PATCH v6 net-next 02/13] net/sched: mqprio: refactor offloading
and unoffloading to dedicated functions
Hi Ferenc,
On Thu, Feb 16, 2023 at 02:05:22PM +0100, Ferenc Fejes wrote:
> Is this patch just code refactoring, or does it modify the default
> behavior of the offloading too? I'm asking in regard to the veth
> interface. When you configure mqprio, the "hw" parameter is mandatory.
> By default, it tries to configure it with "hw 1". However, as a
> result, veth spits back an "Invalid argument" error (before your
> patches). The same happens after this patch too, right?
Yup. iproute2 has a default queue configuration built in, which it uses
if nothing else is specified, and that default has "hw 1":
https://git.kernel.org/pub/scm/network/iproute2/iproute2.git/tree/tc/q_mqprio.c#n36
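In other words, a bare "tc qdisc add dev veth0 root mqprio" effectively
expands to something like the command below (an illustrative
reconstruction, not copied verbatim from q_mqprio.c), and veth rejects
it with -EINVAL, presumably because it implements no ndo_setup_tc:

tc qdisc add dev veth0 root mqprio num_tc 8 \
	map 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 \
	queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 hw 1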
> For veth, hardware offloading makes no sense, but having to give the
> "hw 0" argument explicitly as an mqprio parameter might be
> counterintuitive.
Agreed, giving the right nlattrs to mqprio and trying to slalom through
their validation is a frustrating minesweeper game. I have some patches
which add netlink extack messages to make this a bit less sour. I'm
regression-testing those together with some other mqprio changes, and I
hope to send them soon.
OTOH, "hw 1" is mandatory with the "mode", "shaper", "min_rate" and
"max_rate" options. This is logical when you think about it (driver has
to act upon them), but indeed it makes mqprio difficult to configure.
With veth, you need to create a multi-queue device to make use of
mqprio/taprio; have you done that?
ip link add veth0 numtxqueues 8 numrxqueues 8 type veth peer name veth1
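followed by something like this to attach a software-only mqprio on top
(an untested sketch: 8 TCs with one queue each, matching the 8 TX
queues created above):

tc qdisc add dev veth0 root mqprio num_tc 8 \
	map 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 \
	queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 hw 0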