Message-ID: <83bfbc34-6a3e-1f31-4546-1511c5dcddf5@ucloud.cn>
Date:   Wed, 4 Mar 2020 20:54:25 +0800
From:   wenxu <wenxu@...oud.cn>
To:     Pablo Neira Ayuso <pablo@...filter.org>
Cc:     netfilter-devel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH nf-next v5 0/4] netfilter: flowtable: add indr-block
 offload


On 2020/3/4 5:53, Pablo Neira Ayuso wrote:
> Hi,
>
> On Mon, Feb 24, 2020 at 01:22:51PM +0800, wenxu@...oud.cn wrote:
>> From: wenxu <wenxu@...oud.cn>
>>
>> This patch provide tunnel offload based on route lwtunnel. 
>> The first two patches support indr callback setup
>> Then add tunnel match and action offload.
>>
>> This version modify the second patch: make the dev can bind with different 
>> flowtable and check the NF_FLOWTABLE_HW_OFFLOAD flags in 
>> nf_flow_table_indr_block_cb_cmd. 
> I found some time to look at this indirect block infrastructure that
> you have added to net/core/flow_offload.c
>
> This is _complex_ code, I don't understand why it is so complex.
> Frontend calls walk into the driver through a callback, then it gets
> back to the front-end code again through another callback to come
> back... this is hard to follow.
>
> Then, we still have problem with the existing approach that you
> propose, since there is 1:N mapping between the indirect block and the
> net_device.

The indirect block infrastructure was designed by the driver developers. The
callbacks are used to establish and tear down the relationship between the
tunnel device and the hardware devices, for example when a tunnel device or a
hardware device appears or goes away. The relationship between the tunnel
device and the hardware devices is quite subtle.

> Probably not a requirement in your case, but the same net_device might
> be used in several flowtables. Your patch is flawed there and I don't
> see an easy way to fix this.

The same tunnel device can only be added to one offloaded flowtable. The
tunnel device builds its relationship with the hardware devices only once in
the driver; this is protected by flow_block_cb_is_busy() and the driver's
xxx_indr_block_cb_priv.
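The busy check can be modelled in a few lines. Again a hypothetical,
simplified user-space sketch (flowtable_bind_dev and the binding table are
made-up names): a device may be owned by at most one offloaded flowtable, and
a second bind attempt fails with an EBUSY-style error, which is roughly what
flow_block_cb_is_busy() enforces per callback in the kernel:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical model of the flow_block_cb_is_busy()-style protection:
 * each device may be bound to at most one offloaded flowtable. */

#define MAX_BINDINGS 8
#define ERR_BUSY (-16) /* stand-in for -EBUSY */

struct binding {
	const char *dev;
	const char *flowtable;
};

static struct binding bindings[MAX_BINDINGS];
static int nbindings;

/* returns 0 on success, ERR_BUSY if dev is already owned elsewhere */
static int flowtable_bind_dev(const char *flowtable, const char *dev)
{
	for (int i = 0; i < nbindings; i++)
		if (strcmp(bindings[i].dev, dev) == 0)
			return ERR_BUSY; /* already bound: refuse */
	if (nbindings >= MAX_BINDINGS)
		return -1;
	bindings[nbindings].dev = dev;
	bindings[nbindings].flowtable = flowtable;
	nbindings++;
	return 0;
}
```

So the 1:N mapping concern is bounded in practice: the driver-side private
state and the busy check together keep one tunnel device from being set up
twice.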


>
> I know there is no way to use ->ndo_setup_tc for tunnel devices, but
> you could have just make it work making it look consistent to the
> ->ndo_setup_tc logic.

I think the difficulty is in how to find the right hardware device for a
tunnel device, so that the rule can be programmed into the hardware.

>
> I'm inclined to apply this patch though, in the hope that this all can
> be revisited later to get it in line with the ->ndo_setup_tc approach.
> However, probably I'm hoping for too much.
>
> Thank you.
>
