Message-ID: <6618baf56c815_3a050208f4@john.notmuch>
Date: Thu, 11 Apr 2024 21:39:17 -0700
From: John Fastabend <john.fastabend@...il.com>
To: Vinicius Costa Gomes <vinicius.gomes@...el.com>,
Simon Horman <horms@...nel.org>,
netdev@...r.kernel.org
Cc: Jakub Kicinski <kuba@...nel.org>,
Jiri Pirko <jiri@...nulli.us>,
Madhu Chittim <madhu.chittim@...el.com>,
Sridhar Samudrala <sridhar.samudrala@...el.com>,
Paolo Abeni <pabeni@...hat.com>
Subject: Re: [RFC] HW TX Rate Limiting Driver API

Vinicius Costa Gomes wrote:
> Hi,
>
> Simon Horman <horms@...nel.org> writes:
>
> > Hi,
> >
> > This is a follow-up to the ongoing discussion started by Intel to extend the
> > support for TX shaping H/W offload [1].
> >
> > The goal is to allow user-space to configure TX shaping offload on a
> > per-queue basis, with a min guaranteed B/W, a max B/W limit and a burst
> > size, on a VF device.
> >
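> > Spelled out as a struct, the per-queue knobs would be roughly the
> > following (purely illustrative, not the proposed API; field names
> > and units are made up here):
> >
> > 	struct queue_tx_shaper {	/* hypothetical */
> > 		u64 bw_min;	/* guaranteed bandwidth, bits/s */
> > 		u64 bw_max;	/* bandwidth limit, bits/s */
> > 		u32 burst;	/* burst size, bytes */
> > 	};
> >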
>
> What about non-VF cases? Would those be out of scope?
>
> >
> > In the past few months several different solutions were attempted and
> > discussed, without finding a perfect fit:
> >
> > - devlink_rate APIs are not appropriate to control TX shaping on netdevs
> > - No existing TC qdisc offload covers the required feature set
> > - HTB does not allow direct queue configuration
> > - MQPRIO imposes constraints on the maximum number of TX queues
> > - TBF does not support max B/W limit
> > - ndo_set_tx_maxrate() only controls the max B/W limit (sketched below)
> >
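> > For comparison, the last item above is today a single knob in struct
> > net_device_ops (signature from memory, rate in Mbps):
> >
> > 	int (*ndo_set_tx_maxrate)(struct net_device *dev,
> > 				  int queue_index, u32 maxrate);
> >
> > There is no room there for a min guarantee or a burst size.
> >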
>
> Another question: how would different shaper algorithms "plug in"? For
> example, the TSN world defines the Credit Based Shaper (IEEE 802.1Q-2018
> Annex L gives a good overview), which tries to be accurate over
> sub-millisecond intervals.
>
> (sooner or later, some NIC with lots of queues will appear with TSN
> features, and I guess some people would like to know that they are using
> the expected shaper)
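>
> (For reference, this is roughly how drivers see offloaded CBS today
> via ndo_setup_tc(TC_SETUP_QDISC_CBS), quoting include/net/pkt_sched.h
> from memory:
>
> 	struct tc_cbs_qopt_offload {
> 		u8 enable;
> 		s32 queue;
> 		s32 hicredit;
> 		s32 locredit;
> 		s32 idleslope;
> 		s32 sendslope;
> 	};
>
> a new shaper hierarchy would need room for algorithm-specific
> parameters like these.)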
>
> > A new H/W offload API is needed, but offload API proliferation should be
> > avoided.
> >
> > The following proposal intends to cover the requirements specified above
> > and provide a possible base to unify all the shaping offload APIs
> > mentioned.
> >
> > The following only defines the in-kernel interface between the core and
> > drivers. The intention is to expose the feature to user-space via Netlink.
> > Hopefully the latter part should be straightforward after agreement
> > on the in-kernel interface.
> >
>
> Another thing that MQPRIO (indirectly) gives is the ability for userspace
> applications to have some amount of control over which queue their
> packets will end up in, via skb->priority.

You can already attach a BPF program to set the queue_mapping. So one
way to do this would be a simple BPF program that maps priority to a
set of queues. We could likely include it somewhere in the source or
tooling to make it easily available for folks.

I agree having a way to map applications/packets to QoS is useful. The
nice bit about letting BPF do this is that you don't have to figure out
how to set the priority on the socket, and you can use whatever policy
makes sense.
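
Roughly, the program could look like this untested sketch (the 3-bit
priority mask and the queues-per-priority layout are made up for
illustration; it assumes a 5.19+ kernel, where a queue_mapping set at
tc egress is used directly to pick the TX queue):

	// SPDX-License-Identifier: GPL-2.0
	#include <linux/bpf.h>
	#include <linux/pkt_cls.h>
	#include <bpf/bpf_helpers.h>

	#define QUEUES_PER_PRIO 4	/* example layout */

	SEC("tc")
	int prio_to_queue(struct __sk_buff *skb)
	{
		__u32 prio = skb->priority & 0x7;

		/* skb->queue_mapping is writable from tc egress
		 * programs since kernel 5.1.
		 */
		skb->queue_mapping = prio * QUEUES_PER_PRIO;

		return TC_ACT_OK;
	}

	char _license[] SEC("license") = "GPL";

and be attached with the usual clsact hooks:

	tc qdisc add dev eth0 clsact
	tc filter add dev eth0 egress bpf da obj prio_to_queue.o sec tc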
>
> Would this new shaper hierarchy have something that would fill this
> role? (if this is for VF-only use cases, then the answer would be "no" I
> guess)
>
> (I tried to read the whole thread, sorry if I missed something)
>