Date:   Tue, 26 May 2020 17:17:01 +0100
From:   Edward Cree <ecree@...arflare.com>
To:     Paul Blakey <paulb@...lanox.com>, Jiri Pirko <jiri@...nulli.us>
CC:     Saeed Mahameed <saeedm@...lanox.com>,
        Oz Shlomo <ozsh@...lanox.com>,
        "Jakub Kicinski" <jakub.kicinski@...ronome.com>,
        Vlad Buslov <vladbu@...lanox.com>,
        David Miller <davem@...emloft.net>, <netdev@...r.kernel.org>,
        Jiri Pirko <jiri@...lanox.com>, Roi Dayan <roid@...lanox.com>
Subject: Re: [PATCH net-next 0/3] net/sched: act_ct: Add support for
 specifying tuple offload policy

On 26/05/2020 10:25, Paul Blakey wrote:
> On 5/18/2020 9:02 PM, Edward Cree wrote:
>> Well, really it's related as much to flower ct_state as to act_ct: the
>>  policy numbers control when a conntrack rule (from the zone) gets
>>  offloaded into drivers, thus determining whether a packet (which has
>>  been through an act_ct to make it +trk) is ±est.
> It doesn't affect when a connection will become established (+est),
> just the offloading of such connections.
Yes; what I meant was that the packet wouldn't match a +trk+est rule in
 hardware if the connection hadn't been offloaded (it would instead go
 to software, where it would match).
So the policy is in a sense controlling ct_state offload, as much as it
 is act_ct offload.
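
(To make that concrete, here's the usual two-chain shape I have in
 mind; this is standard act_ct/flower syntax, but the device names and
 chain numbers are just placeholders:)

    # chain 0: run packets through conntrack in zone 1 (makes them +trk)
    tc filter add dev eth0 ingress chain 0 proto ip flower \
        action ct zone 1 pipe action goto chain 1
    # chain 1: matches in hardware only once the connection has been
    # offloaded into the zone's flow table; otherwise the packet falls
    # back to software, where it matches as usual
    tc filter add dev eth0 ingress chain 1 proto ip flower \
        ct_state +trk+est action mirred egress redirect dev eth1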

> I'm not sure how well it will sit with the flow table having a device while
> the filter has a tc block, which can have multiple devices.
>
> And then we have the single IPS_OFFLOAD_BIT, so a flow can't currently be
> shared between different flow tables that will be created for different devices.
> We will need to do an atomic lookup/insert into each table.
I see; it does get a bit complicated.  I agree that the 'policy per
 device' use case is too speculative to justify all that work.

But I think I can still make my dream happen with the global policy,
 just by putting the "we used this CT zone in an action, we need to
 offload it" smarts into TC rather than all the drivers.  Which I
 suppose is orthogonal to the TC CT policy uAPI question; but I still
 prefer the 'explicit CT object' approach.  To do those smarts, TC
 would need a CT object internally anyway, so we may as well expose
 it and get a clean policy API into the bargain.
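
(Sketch of the flow I'm picturing; the filter command is ordinary
 act_ct syntax, while the behaviour in the comments is the proposal,
 not current mainline:)

    # userspace installs an ordinary act_ct filter referencing zone 1
    tc filter add dev eth0 ingress proto ip flower \
        action ct zone 1 pipe action goto chain 1
    # proposal: TC's internal CT object for zone 1 takes a reference
    # here and drives offloading the zone's connections into the flow
    # table, so individual drivers no longer need that logic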

> What you are suggesting will require new userspace and kernel (built-in)
> tc netlink API to manage conntrack zone/nf flow table policies.
Either way there's new API.  Yours may be a smaller change (just adding
 new attributes hung off the act_ct), but you still need to update
 userspace to take advantage of it, else you get the default policy.
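
(Presumably something like the following, with the attribute name
 invented purely for illustration:)

    # hypothetical: offload policy hung directly off the act_ct action
    tc filter add dev eth0 ingress proto ip flower \
        action ct zone 1 policy_timeout 120 pipe
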
And `tc ct add zone 1 policy_timeout 120` (global policy) can be done
 backwards-compatibly, too, as I described in my message before last.

(Policy-per-device would be much more of a pain new-uAPI-wise; again,
 I agree to drop that idea.)

-ed
