lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20200617083817.GA1744@salvia>
Date:   Wed, 17 Jun 2020 10:38:17 +0200
From:   Pablo Neira Ayuso <pablo@...filter.org>
To:     wenxu <wenxu@...oud.cn>
Cc:     Simon Horman <simon.horman@...ronome.com>, netdev@...r.kernel.org,
        davem@...emloft.net, vladbu@...lanox.com
Subject: Re: [PATCH net v3 2/4] flow_offload: fix incorrect cb_priv check for
 flow_block_cb

On Wed, Jun 17, 2020 at 11:36:19AM +0800, wenxu wrote:
> 
> On 6/17/2020 4:38 AM, Pablo Neira Ayuso wrote:
> > On Tue, Jun 16, 2020 at 05:47:17PM +0200, Simon Horman wrote:
> >> On Tue, Jun 16, 2020 at 11:18:16PM +0800, wenxu wrote:
> >>> 在 2020/6/16 22:34, Simon Horman 写道:
> >>>> On Tue, Jun 16, 2020 at 10:20:46PM +0800, wenxu wrote:
> >>>>> 在 2020/6/16 18:51, Simon Horman 写道:
> >>>>>> On Tue, Jun 16, 2020 at 11:19:38AM +0800, wenxu@...oud.cn wrote:
> >>>>>>> From: wenxu <wenxu@...oud.cn>
> >>>>>>>
> >>>>>>> In the function __flow_block_indr_cleanup, The match stataments
> >>>>>>> this->cb_priv == cb_priv is always false, the flow_block_cb->cb_priv
> >>>>>>> is totally different data with the flow_indr_dev->cb_priv.
> >>>>>>>
> >>>>>>> Store the representor cb_priv to the flow_block_cb->indr.cb_priv in
> >>>>>>> the driver.
> >>>>>>>
> >>>>>>> Fixes: 1fac52da5942 ("net: flow_offload: consolidate indirect flow_block infrastructure")
> >>>>>>> Signed-off-by: wenxu <wenxu@...oud.cn>
> >>>>>> Hi Wenxu,
> >>>>>>
> >>>>>> I wonder if this can be resolved by using the cb_ident field of struct
> >>>>>> flow_block_cb.
> >>>>>>
> >>>>>> I observe that mlx5e_rep_indr_setup_block() seems to be the only call-site
> >>>>>> where the value of the cb_ident parameter of flow_block_cb_alloc() is
> >>>>>> per-block rather than per-device. So part of my proposal is to change
> >>>>>> that.
> >>>>> I check all the xxdriver_indr_setup_block. It seems all the cb_ident parameter of
> >>>>>
> >>>>> flow_block_cb_alloc is per-block. Both in the nfp_flower_setup_indr_tc_block
> >>>>>
> >>>>> and bnxt_tc_setup_indr_block.
> >>>>>
> >>>>>
> >>>>> nfp_flower_setup_indr_tc_block:
> >>>>>
> >>>>> struct nfp_flower_indr_block_cb_priv *cb_priv;
> >>>>>
> >>>>> block_cb = flow_block_cb_alloc(nfp_flower_setup_indr_block_cb,
> >>>>>                                                cb_priv, cb_priv,
> >>>>>                                                nfp_flower_setup_indr_tc_release);
> >>>>>
> >>>>>
> >>>>> bnxt_tc_setup_indr_block:
> >>>>>
> >>>>> struct bnxt_flower_indr_block_cb_priv *cb_priv;
> >>>>>
> >>>>> block_cb = flow_block_cb_alloc(bnxt_tc_setup_indr_block_cb,
> >>>>>                                                cb_priv, cb_priv,
> >>>>>                                                bnxt_tc_setup_indr_rel);
> >>>>>
> >>>>>
> >>>>> And the function flow_block_cb_is_busy called in most place. Pass the
> >>>>>
> >>>>> parameter as cb_priv but not cb_indent .
> >>>> Thanks, I see that now. But I still think it would be useful to understand
> >>>> the purpose of cb_ident. It feels like it would lead to a clean solution
> >>>> to the problem you have highlighted.
> >>> I think The cb_ident means identify.  It is used to identify the each flow block cb.
> >>>
> >>> In the both flow_block_cb_is_busy and flow_block_cb_lookup function check
> >>>
> >>> the block_cb->cb_ident == cb_ident.
> >> Thanks, I think that I now see what you mean about the different scope of
> >> cb_ident and your proposal to allow cleanup by flow_indr_dev_unregister().
> >>
> >> I do, however, still wonder if there is a nicer way than reaching into
> >> the structure and manually setting block_cb->indr.cb_priv
> >> at each call-site.
> >>
> >> Perhaps a variant of flow_block_cb_alloc() for indirect blocks
> >> would be nicer?
> > A follow up patch to add this new variant would be good. Probably
> > __flow_block_indr_binding() can go away with this new variant to set
> > up the indirect flow block.
> 
> 
> Maybe __flow_block_indr_binding() can't go away. The data and cleanup callback which should
> init the flow_block_indr is only in the flow_indr_dev_setup_offload. This can't be gotten in
> flow_indr_block_cb_alloc.

Probably flow_indr_block_bind_cb_t can be updated to include the data
and the cleanup callback.

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ