Message-ID: <73cf369e-80bc-b7d2-b3f5-106633c3c617@mellanox.com>
Date: Tue, 26 May 2020 12:25:00 +0300
From: Paul Blakey <paulb@...lanox.com>
To: Edward Cree <ecree@...arflare.com>, Jiri Pirko <jiri@...nulli.us>
Cc: Saeed Mahameed <saeedm@...lanox.com>,
Oz Shlomo <ozsh@...lanox.com>,
Jakub Kicinski <jakub.kicinski@...ronome.com>,
Vlad Buslov <vladbu@...lanox.com>,
David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
Jiri Pirko <jiri@...lanox.com>, Roi Dayan <roid@...lanox.com>
Subject: Re: [PATCH net-next 0/3] net/sched: act_ct: Add support for
specifying tuple offload policy
On 5/18/2020 9:02 PM, Edward Cree wrote:
> On 18/05/2020 18:25, Jiri Pirko wrote:
>> Is it worth to have an object just for this particular purpose? In the
>> past I was trying to push a tc block object that could be added/removed
>> and being used to insert filters w/o being attached to any qdisc. This
>> was frowned upon and refused because the existence of block does not
>> have any meaning w/o being attached.
> A tc action doesn't have any meaning either until it is attached to a
> filter. Is the consensus that the 'tc action' API/command set was a
> mistake, or am I misunderstanding the objection?
>
>> What you suggest with zone sounds quite similar. More to that, it is
>> related only to act_ct. Is it a good idea to have a common object in TC
>> which is actually used as internal part of act_ct only?
> Well, really it's related as much to flower ct_state as to act_ct: the
> policy numbers control when a conntrack rule (from the zone) gets
> offloaded into drivers, thus determining whether a packet (which has
> been through an act_ct to make it +trk) is ±est.
It doesn't affect when a connection becomes established (+est),
only whether such connections are offloaded.
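To make the trk/est distinction above concrete, here is a hedged sketch of a
typical act_ct + flower pipeline (the device name and chain numbers are
assumed for illustration, not taken from the thread): act_ct sends a packet
through conntrack, making it +trk, and a later chain can then match +est
whether or not the connection was offloaded to a flow table.

```shell
# Chain 0: untracked packets go through conntrack, then jump to chain 1.
tc qdisc add dev eth0 ingress
tc filter add dev eth0 ingress chain 0 proto ip flower \
    ct_state -trk action ct pipe action goto chain 1

# Chain 1: packets belonging to established connections are accepted.
tc filter add dev eth0 ingress chain 1 proto ip flower \
    ct_state +trk+est action pass
```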
> It's because it has a scope broader than a single ct action that I'm
> resistant to hanging it off act_ct in this way.
>
> Also it still somewhat bothers me that this policy isn't scoped to the
> device; I realise that the current implementation of a single flow
> table shared by all offloading devices is what forces that, but it
> just doesn't seem semantically right that the policy on when to
> offload a connection is global across devices with potentially
> differing capabilities (e.g. size of their conntrack tables) that
> might want different policies.
> (And a 'tc ct add dev eth0 zone 1 policy_blah...' would conveniently
> give a hook point for callback (1) from my offtopic ramble, that the
> driver could use to register for connections in the zone and start
> offloading them to hardware, rather than doing it the first time it
> sees that zone show up in an act_ct it's offloading. You don't
> really want to do the same in the non-device-qualified case because
> that could use up HW table space for connections in a zone you're
> not offloading any rules for.)
>
> Basically I'm just dreaming of a world where TC does a lot more with
> explicit objects that it creates and then references, rather than
> drivers having to implicitly create HW objects for things the first
> time a rule tries to reference them.
> "Is it worth" all these extra objects? Really that depends on how
> much simpler the drivers can become as a result; this is the control
> path, so programmer time is worth more than machine time, and space
> in the programmer's head is worth more than machine RAM ;-)
>
> -ed
I see what you mean here, but this is only used to control act_ct behavior,
and we don't expect it to be used or referenced by other actions/filters.
What you are suggesting would require a new userspace and kernel (built-in)
tc netlink API to manage conntrack zone/nf flow table policies.
I'm not sure how well that would sit with the flow table being tied to a
single device while the filter is attached to a tc block, which can span
multiple devices.
And then there is the single IPS_OFFLOAD_BIT, so a flow currently can't be
shared between the different flow tables that would be created for different
devices; we would need an atomic lookup/insert into each table.
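The single-offload-bit constraint can be sketched as follows. This is a
hypothetical model, not kernel code: the class and method names are invented
for illustration, and the one boolean simply stands in for IPS_OFFLOAD_BIT,
showing why the first flow table to claim a connection locks out every other
table.

```python
class Conn:
    def __init__(self):
        self.offloaded = False  # stands in for the single IPS_OFFLOAD_BIT

class FlowTable:
    def __init__(self, name):
        self.name = name
        self.entries = set()

    def try_offload(self, conn):
        # The real kernel would need this test-and-set to be atomic; this
        # single-threaded sketch only shows the ownership semantics.
        if conn.offloaded:
            return False  # another flow table already owns this flow
        conn.offloaded = True
        self.entries.add(conn)
        return True

# Two per-device flow tables competing for the same connection.
eth0_table = FlowTable("eth0")
eth1_table = FlowTable("eth1")

conn = Conn()
print(eth0_table.try_offload(conn))  # first table claims the flow
print(eth1_table.try_offload(conn))  # second table is locked out
```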
So this would need a lot of work, and I think it might be overkill until we
have more use cases besides per-device policy, which can still be achieved,
if needed, with different conntrack zones.
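The zone-based workaround mentioned above could look roughly like this
(device names and zone numbers are assumed for illustration): giving each
device's filters a distinct ct zone yields a separate conntrack table, and
hence room for a separate offload policy, per device.

```shell
# eth0's connections are tracked in zone 1, eth1's in zone 2.
tc filter add dev eth0 ingress proto ip flower ct_state -trk \
    action ct zone 1 pipe action goto chain 1
tc filter add dev eth1 ingress proto ip flower ct_state -trk \
    action ct zone 2 pipe action goto chain 1
```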