Message-ID: <20210129135702.0f8cf702@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Date: Fri, 29 Jan 2021 13:57:02 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Vinicius Costa Gomes <vinicius.gomes@...el.com>
Cc: Vladimir Oltean <vladimir.oltean@....com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"jhs@...atatu.com" <jhs@...atatu.com>,
"xiyou.wangcong@...il.com" <xiyou.wangcong@...il.com>,
"jiri@...nulli.us" <jiri@...nulli.us>,
"Jose.Abreu@...opsys.com" <Jose.Abreu@...opsys.com>,
Po Liu <po.liu@....com>,
"intel-wired-lan@...ts.osuosl.org" <intel-wired-lan@...ts.osuosl.org>,
"anthony.l.nguyen@...el.com" <anthony.l.nguyen@...el.com>,
"mkubecek@...e.cz" <mkubecek@...e.cz>
Subject: Re: [PATCH net-next v3 2/8] taprio: Add support for frame
preemption offload
On Fri, 29 Jan 2021 13:13:24 -0800 Vinicius Costa Gomes wrote:
> Vladimir Oltean <vladimir.oltean@....com> writes:
>
> > On Fri, Jan 22, 2021 at 02:44:47PM -0800, Vinicius Costa Gomes wrote:
> >> + /* It's valid to enable frame preemption without any kind of
> >> + * offloading being enabled, so keep it separated.
> >> + */
> >> + if (tb[TCA_TAPRIO_ATTR_PREEMPT_TCS]) {
> >> + u32 preempt = nla_get_u32(tb[TCA_TAPRIO_ATTR_PREEMPT_TCS]);
> >> + struct tc_preempt_qopt_offload qopt = { };
> >> +
> >> + if (preempt == U32_MAX) {
> >> + NL_SET_ERR_MSG(extack, "At least one queue must not be preemptible");
> >> + err = -EINVAL;
> >> + goto free_sched;
> >> + }
> >> +
> >> + qopt.preemptible_queues = tc_map_to_queue_mask(dev, preempt);
> >> +
> >> + err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_PREEMPT,
> >> + &qopt);
> >> + if (err)
> >> + goto free_sched;
> >> +
> >> + q->preemptible_tcs = preempt;
> >> + }
> >> +
> >
> > First I'm interested in the means: why check for preempt == U32_MAX to
> > determine that all traffic classes are preemptible? What if the netdev
> > uses fewer than 32 traffic classes? The check will be bypassed, won't
> > it?
>
> Good catch :-)
>
> I wanted to have this (at least one express queue) handled in a
> centralized way, but perhaps this is best handled per driver.
Centralized is good. It's much easier than reviewing N drivers to make
sure they all behave the same, and correctly.
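
Something along these lines would keep the validation centralized in
taprio (untested sketch, not the actual patch; it assumes the same
context as the quoted hunk, i.e. preempt, extack and the free_sched
label are in scope, and that mqprio has already set up the device's
traffic classes):

	int num_tc = netdev_get_num_tc(dev);
	/* Mask of the active traffic classes; fall back to the full
	 * 32-bit mask if no TCs are configured.
	 */
	u32 all_tcs = num_tc ? GENMASK(num_tc - 1, 0) : U32_MAX;

	/* Reject a config where every active TC is preemptible:
	 * at least one TC must stay express.
	 */
	if ((preempt & all_tcs) == all_tcs) {
		NL_SET_ERR_MSG(extack, "At least one traffic class must not be preemptible");
		err = -EINVAL;
		goto free_sched;
	}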