Message-ID: <95d6873c-256c-0462-60f7-56dbffb8221b@mojatatu.com>
Date: Tue, 27 Jul 2021 12:13:37 -0400
From: Jamal Hadi Salim <jhs@...atatu.com>
To: Vlad Buslov <vladbu@...dia.com>,
Simon Horman <simon.horman@...igine.com>
Cc: David Miller <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...lanox.com>, netdev@...r.kernel.org,
oss-drivers@...igine.com, Baowen Zheng <baowen.zheng@...igine.com>,
Louis Peens <louis.peens@...igine.com>
Subject: Re: [PATCH net-next 1/3] flow_offload: allow user to offload tc
action to net device
On 2021-07-27 10:38 a.m., Vlad Buslov wrote:
>
> On Tue 27 Jul 2021 at 16:04, Simon Horman <simon.horman@...igine.com> wrote:
>>>
>>> Also showing a tc command line in the cover letter on how one would
>>> ask for a specific action to be offloaded.
>>
>> In practice actions are offloaded when a flow using them is offloaded.
>> So I think we need to consider what the meaning of IN_HW is.
>>
>> Is it that:
>>
>> * The driver (and potentially hardware, though not in our current
>> implementation) has accepted the action for offload;
>> * That a classifier that uses the action has been offloaded;
>> * Or something else?
>
> I think we have the same issue with filters - they might not be in
> hardware after the driver callback returned "success" (due to the neigh
> state being invalid for tunnel_key encap, for example).
>
Sounds like we need another state for this. Otherwise, how do you debug
that something is sitting in the driver and not in hardware after you
issued a command to offload it? How do I tell today?
Also, knowing the reason why something is sitting in the driver would be
helpful.
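For example, a tunnel_key encap filter along these lines (illustrative
addresses and devices, not from this thread) can be accepted by the
driver yet only land in hardware once the tunnel neighbour resolves:

tc filter add dev eth0 ingress protocol ip flower skip_sw dst_ip 192.0.2.1 \
   action tunnel_key set src_ip 10.0.0.1 dst_ip 10.0.0.2 id 100 dst_port 4789 \
   action mirred egress redirect dev vxlan0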
>> With regards to a counter, I'm not quite sure what this would be:
>>
>> * The number of devices where the action has been offloaded (which ties
>> into the question of what we mean by IN_HW)
>> * The number of offloaded classifier instances using the action
>> * Something else
>
> I would prefer to have semantics similar to filters:
>
> 1. Count number of driver callbacks that returned "success".
>
> 2. If count > 0, then set in_hw flag.
>
> 3. Set in_hw_count to success count.
>
> This would allow the user to immediately determine whether the action
> passed driver validation.
>
I didn't follow this:
Are we referring to the "block" semantics (where a filter, for
example, applies to multiple devices)?
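If I follow, the accounting would be roughly as in this sketch (the
struct and function names are hypothetical, for illustration only, not
the current kernel API):

#include <stdbool.h>

/* Hypothetical bookkeeping for action offload, mirroring the filter
 * semantics described above. */
struct act_hw_state {
        unsigned int in_hw_count;  /* driver callbacks that returned success */
        bool in_hw;                /* set once any driver accepted the action */
};

static void act_offload_cb_done(struct act_hw_state *st, int cb_err)
{
        if (cb_err == 0)           /* 1. count callbacks returning "success" */
                st->in_hw_count++;
        st->in_hw = st->in_hw_count > 0;  /* 2. set in_hw flag if count > 0 */
        /* 3. in_hw_count itself is what gets reported to the user */
}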
>>
>> Regarding a flag to control offload:
>>
>> * For classifiers (at least the flower classifier) there is the skip_sw and
>> skip_hw flags, which allow control of placement of a classifier in SW and
>> HW.
>> * We could add similar flags for actions, which at least in my
>> world view would have the net effect of controlling which classifiers can
>> be added to sw and hw - e.g. a classifier that uses an action marked
>> skip_hw could not be added to HW.
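For reference, the existing classifier flags are used like this
(illustrative match values):

tc filter add dev eth0 ingress protocol ip flower skip_sw \
   dst_ip 198.51.100.1 action drop
tc filter add dev eth0 ingress protocol ip flower skip_hw \
   dst_ip 198.51.100.2 action drop

The first filter must be placed in hardware (the command fails
otherwise); the second stays in software only.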
I guess it depends on the hardware implementation.
In S/W we have two modes:
Approach A: 1) create an action and then 2) bind it to a filter.
Approach B: create a filter and then bind it to an action.
And step 2 of approach A can be repeated multiple times for the same
action (which requires some index as a reference for the action).
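Roughly, with illustrative device and handle values:

Approach A:
tc actions add action drop index 15
tc filter add dev eth0 parent ffff: protocol ip u32 \
   match ... flowid 1:10 action gact index 15

Approach B:
tc filter add dev eth0 parent ffff: protocol ip u32 \
   match ... flowid 1:10 action drop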
To Simon's comment above, that would mean allowing "a classifier that
uses an action marked skip_hw" to be added to HW, i.e. some hardware
is capable of doing both options #A and #B.
Today's offload assumes #B, in which both filter and action are assumed
offloaded.
I am hoping whatever approach we end up agreeing on doesn't limit
either mode.
>> * Doing so would add some extra complexity and it's not immediately
>> apparent to me what the use-case would be, given that there are already
>> flags for classifiers.
> Yeah, adding such a flag for action offload seems to complicate things.
> Also, "skip_sw" flag doesn't even make much sense for actions. I thought
> that a "skip_hw" flag would be nice to have for users that would like to
> avoid "spamming" their NIC drivers (potentially causing higher latency
> and resource consumption) for filters/actions they have no intention to
> offload to hardware, but I'm not sure how useful that option really is.
Hold on, Vlad.
So you are looking at this mostly as an optimization to speed up h/w
control updates? ;->
I was looking at it more as a (currently missing) feature improvement.
We already have a use case that is implemented in s/w today. The feature
mimics it in h/w.
At a minimum, all existing NICs should be able to support the counters
mapped to simple actions like drop. I understand if some, for example,
can't support separately offloading tunnels.
So the syntax is something along the lines of:
tc actions add action drop index 15 skip_sw
tc filter add dev ...parent ... protocol ip prio X ..\
u32/flower skip_sw match ... flowid 1:10 action gact index 15
You get an error if counter index 15 is not offloaded or
if skip_sw was left out.
And then later on, if you support sharing of actions:
tc filter add dev ...parent ... protocol ip prio X2 ..\
u32/flower skip_sw match ... flowid 1:10 action gact index 15
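The shared action's counters could then be checked with something like
(assuming index-based lookup works as it does in s/w today):

tc -s actions get action gact index 15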
cheers,
jamal