Date:   Wed, 28 Jul 2021 09:46:28 +0200
From:   Simon Horman <simon.horman@...igine.com>
To:     Vlad Buslov <vladbu@...dia.com>
Cc:     Jamal Hadi Salim <jhs@...atatu.com>,
        David Miller <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Cong Wang <xiyou.wangcong@...il.com>,
        Jiri Pirko <jiri@...lanox.com>, netdev@...r.kernel.org,
        oss-drivers@...igine.com, Baowen Zheng <baowen.zheng@...igine.com>,
        Louis Peens <louis.peens@...igine.com>
Subject: Re: [PATCH net-next 1/3] flow_offload: allow user to offload tc
 action to net device

On Tue, Jul 27, 2021 at 07:47:43PM +0300, Vlad Buslov wrote:
> On Tue 27 Jul 2021 at 19:13, Jamal Hadi Salim <jhs@...atatu.com> wrote:
> > On 2021-07-27 10:38 a.m., Vlad Buslov wrote:
> >> On Tue 27 Jul 2021 at 16:04, Simon Horman <simon.horman@...igine.com> wrote:
> >
> >>>>
> >>>> Also showing a tc command line in the cover letter on how one would
> >>>> ask for a specific action to be offloaded.
> >>>
> >>> In practice actions are offloaded when a flow using them is offloaded.
> >>> So I think we need to consider what the meaning of IN_HW is.
> >>>
> >>> Is it that:
> >>>
> >>> * The driver (and potentially hardware, though not in our current
> >>>    implementation) has accepted the action for offload;
> >>> * That a classifier that uses the action has been offloaded;
> >>> * Or something else?
> >> I think we have the same issue with filters - they might not be in
> >> hardware after the driver callback returned "success" (due to the neigh
> >> state being invalid for tunnel_key encap, for example).
> >> 
> >
> > Sounds like we need another state for this. Otherwise, how do you debug
> > that something is sitting in the driver and not in hardware after you
> > issued a command to offload it? How do I tell today?
> > Also, knowing the reason why something is sitting in the driver would be
> > helpful.
> 
> It is not about just adding another state. The issue is that there is no
> way for drivers to change the state of a software filter dynamically.

I think it might be worth considering enhancing things at some point.
But I agree that it's more than a matter of adding an extra flag. And
I think it's reasonable to implement something similar to the classifiers'
current IN_HW offload handling now and consider enhancements separately.
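
For reference, the classifier behaviour I have in mind is what flower
already reports when dumping filters: a rule accepted by at least one
driver callback is marked in_hw (with an in_hw_count) in the output of,
for example (device name here is just an illustration):

  tc -s filter show dev eth0 ingress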

> >>> With regards to a counter, I'm not quite sure what this would be:
> >>>
> >>> * The number of devices where the action has been offloaded (which ties
> >>>    into the question of what we mean by IN_HW)
> >>> * The number of offloaded classifier instances using the action
> >>> * Something else
> >> I would prefer to have semantics similar to filters:
> >> 1. Count the number of driver callbacks that returned "success".
> >> 2. If count > 0, then set the in_hw flag.
> >> 3. Set in_hw_count to the success count.
> >> This would allow the user to immediately determine whether the action
> >> passed driver validation.

Thanks, that makes sense to me.
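
Something along these lines from the user's point of view, assuming the
action dump grows the same in_hw / in_hw_count markers that flower
filters have today (the index and the markers are illustrative, not
final UAPI):

  # create the action, then check whether any driver accepted it
  tc actions add action drop index 15
  tc -s actions get action gact index 15
  # expectation: the dump shows in_hw (and in_hw_count N) once at least
  # one driver callback has returned success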

> > I didn't follow this:
> > Are we referring to the "block" semantics (where a filter, for
> > example, applies to multiple devices)?
> 
> This uses the indirect offload infrastructure, which means all drivers
> in flow_block_indr_dev_list will receive action offload requests.
> 
> >>> Regarding a flag to control offload:
> >>>
> >>> * For classifiers (at least the flower classifier) there are the skip_sw and
> >>>    skip_hw flags, which allow control of placement of a classifier in SW and
> >>>    HW.
> >>> * We could add similar flags for actions, which at least in my
> >>>    world view would have the net effect of controlling which classifiers can
> >>>    be added to sw and hw - e.g. a classifier that uses an action marked
> >>>    skip_hw could not be added to HW.
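
To make that concrete, this is the existing per-filter control I am
referring to (flower shown; device and match are just examples):

  # keep this rule out of hardware entirely
  tc filter add dev eth0 ingress protocol ip flower skip_hw \
     dst_ip 192.0.2.1 action drop

  # or require it to be offloaded to hardware only
  tc filter add dev eth0 ingress protocol ip flower skip_sw \
     dst_ip 192.0.2.1 action drop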
> >
> > I guess it depends on the hardware implementation.
> > In S/W we have two modes:
> > Approach A: 1) create an action and then 2) bind it to a filter.
> > Approach B: create a filter and then bind it to an action.
> >
> > And #2A can be repeated multiple times for the same action
> > (it would require some index as a reference for the action).
> > To Simon's comment above, that would mean allowing
> > "a classifier that uses an action marked skip_hw to be added to HW",
> > i.e. some hardware is capable of doing both option #A and #B.
> >
> > Today's offload assumes #B - in which both the filter and the action are
> > assumed offloaded.
> >
> > I am hoping whatever approach we end up agreeing on doesn't limit
> > either mode.
> >
> >>> * Doing so would add some extra complexity and it's not immediately apparent
> >>>    to me what the use-case would be given that there are already flags for
> >>>    classifiers.
> >> Yeah, adding such a flag for action offload seems to complicate things.
> >> Also, a "skip_sw" flag doesn't even make much sense for actions. I thought
> >> that a "skip_hw" flag would be nice to have for users that would like to
> >> avoid "spamming" their NIC drivers (potentially causing higher latency
> >> and resource consumption) for filters/actions they have no intention of
> >> offloading to hardware, but I'm not sure how useful that option really
> >> is.
> >
> > Hold on Vlad.
> > So you are looking at this mostly as an optimization to speed up h/w
> > control updates? ;->
> 
> No. How would adding more flags improve the h/w update rate? I was just
> thinking that it is strange that users who are not interested in
> offloads would suddenly have higher memory usage for their actions just
> because they happen to have an offload-capable driver loaded. But it is not
> a major concern for me.

In that case can we rely on the global tc-offload on/off flag
provided by ethtool? (I understand it's not the same, but perhaps
it is sufficient in practice.)
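
For reference, I mean the existing per-device feature flag, e.g.
(device name is just an example):

  # globally disable (or re-enable) tc offload for a device
  ethtool -K eth0 hw-tc-offload off
  ethtool -K eth0 hw-tc-offload on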

> > I was looking at it more as a (currently missing) feature improvement.
> > We already have a use case that is implemented by s/w today. The feature
> > mimics it in h/w.
> >
> > At a minimum, all existing NICs should be able to support the counters
> > as mapped to simple actions like drop. I understand, for example, if some
> > can't support separately offloading tunnels.
> > So the syntax is something along the lines of:
> >
> > tc actions add action drop index 15 skip_sw
> > tc filter add dev ...parent ... protocol ip prio X ..\
> > u32/flower skip_sw match ... flowid 1:10 action gact index 15
> >
> > You get an error if counter index 15 is not offloaded or
> > if skip_sw was left out.
> >
> > And then later on, if you support sharing of actions:
> > tc filter add dev ...parent ... protocol ip prio X2 ..\
> > u32/flower skip_sw match ... flowid 1:10 action gact index 15

Right, I understand that makes sense and is internally consistent.
But I think that in practice it only makes a difference for "Approach B"
implementations, none of which currently exist.

I would suggest we add this when the need arises, rather than
speculatively without hw/driver support. It's not precluded by the current
model AFAIK.
