Message-ID: <20190524104436.35bf913b@cakuba.netronome.com>
Date:   Fri, 24 May 2019 10:44:36 -0700
From:   Jakub Kicinski <jakub.kicinski@...ronome.com>
To:     Edward Cree <ecree@...arflare.com>
Cc:     Jamal Hadi Salim <jhs@...atatu.com>, Jiri Pirko <jiri@...nulli.us>,
        "Pablo Neira Ayuso" <pablo@...filter.org>,
        David Miller <davem@...emloft.net>,
        netdev <netdev@...r.kernel.org>,
        Cong Wang <xiyou.wangcong@...il.com>,
        "Andy Gospodarek" <andy@...yhouse.net>,
        Michael Chan <michael.chan@...adcom.com>,
        Vishal Kulkarni <vishal@...lsio.com>
Subject: Re: [PATCH v3 net-next 0/3] flow_offload: Re-add per-action
 statistics

On Fri, 24 May 2019 18:27:39 +0100, Edward Cree wrote:
> On 24/05/2019 18:03, Jakub Kicinski wrote:
> > On Fri, 24 May 2019 14:57:24 +0100, Edward Cree wrote:  
> >> Argh, there's a problem: an action doesn't have a (directly) associated
> >>  block, and all the TC offload machinery nowadays is built around blocks.
> >> Since this action might have been used in _any_ block (and afaik there's
> >>  no way, from the action, to find which) we'd have to make callbacks on
> >>  _every_ block in the system, which sounds like it'd perform even worse
> >>  than the rule-dumping approach.
> >> Any ideas?  
> > Simplest would be to keep a list of offloaders per action, but maybe
> > something more clever would appear as one rummages through the code.  
> Problem with that is where to put the list heads; you'd need something that
>  was allocated per action x block, for those blocks on which at least one
>  offloader handled the rule (in_hw_count > 0).

I was thinking of having the list per action, but I haven't looked at
the code TBH.  The driver would then request to be added to each
action's list...
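
Roughly, something like the sketch below (completely untested; every
name in it, the offload_list field on struct tc_action, the helper,
the struct, is invented for illustration and doesn't exist in the
tree):

/* Sketch: per-action list of offloading drivers.  Assumes a new
 * list_head, act->offload_list, initialized at action creation.
 */
struct tc_action_offload {
	struct list_head	node;	/* chained on act->offload_list */
	struct net_device	*dev;	/* driver holding the offload */
};

/* Driver asks to be added when it accepts a rule that uses the
 * action (i.e. whenever in_hw_count for that rule goes up).
 */
static int tcf_action_offload_add(struct tc_action *act,
				  struct net_device *dev)
{
	struct tc_action_offload *off;

	off = kzalloc(sizeof(*off), GFP_KERNEL);
	if (!off)
		return -ENOMEM;
	off->dev = dev;
	list_add(&off->node, &act->offload_list);
	return 0;
}

Stats collection would then only have to walk act->offload_list
instead of every block in the system.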

> Then you'd also have to update that when a driver bound/unbound from a
>  block (fl_reoffload() time).
> Best I can think of is keeping the cls_flower.rule allocated in
>  fl_hw_replace_filter() around instead of immediately freeing it, and
>  having a list_head in each flow_action_entry.  But that really looks like
>  an overcomplicated mess.
> TBH I'm starting to wonder if just calling all tc blocks in existence is
>  really all that bad.  Is there a plausible use case with huge numbers of
>  bound blocks?

Once per RTM_GETACTION?  The simplicity of that has its allure...
It doesn't give you an upstream user for a cookie, though :S
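
For comparison, the "call every block" variant would look roughly
like this (pseudocode: for_each_tcf_block(), TC_SETUP_ACT and the
stats struct are all invented names; only tc_setup_cb_call() is the
existing block callback entry point):

/* On RTM_GETACTION, walk all blocks in the netns and let every
 * bound driver contribute its counters for this action.
 */
static void tcf_action_hw_stats_update(struct net *net,
				       struct tc_action *act)
{
	struct tcf_block *block;

	for_each_tcf_block(net, block) {		/* invented iterator */
		struct tc_act_stats_offload cls = {	/* invented type */
			.command = TC_ACT_STATS,	/* invented command */
			.cookie  = (unsigned long)act,
		};

		/* err_stop=false: let every driver add its stats */
		tc_setup_cb_call(block, TC_SETUP_ACT, &cls, false);
	}
}

Simple, but it hits every block in the system whether or not it ever
saw the action, which is exactly the cost being debated above.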
