Message-ID: <20200710171327.GA15481@salvia>
Date:   Fri, 10 Jul 2020 19:13:27 +0200
From:   Pablo Neira Ayuso <pablo@...filter.org>
To:     Petr Machata <petrm@...lanox.com>
Cc:     Ido Schimmel <idosch@...sch.org>, netdev@...r.kernel.org,
        davem@...emloft.net, kuba@...nel.org, jiri@...lanox.com,
        mlxsw@...lanox.com, michael.chan@...adcom.com, saeedm@...lanox.com,
        leon@...nel.org, kadlec@...filter.org, fw@...len.de,
        jhs@...atatu.com, xiyou.wangcong@...il.com,
        simon.horman@...ronome.com, Ido Schimmel <idosch@...lanox.com>
Subject: Re: [PATCH net-next 01/13] net: sched: Pass qdisc reference in
 struct flow_block_offload

On Fri, Jul 10, 2020 at 06:22:03PM +0200, Petr Machata wrote:
> 
> Pablo Neira Ayuso <pablo@...filter.org> writes:
> 
> >> >> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
> >> >> index eefeb1cdc2ee..4fc42c1955ff 100644
> >> >> --- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
> >> >> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/tc.c
> >> >> @@ -404,7 +404,7 @@ static void mlx5e_rep_indr_block_unbind(void *cb_priv)
> >> >>  static LIST_HEAD(mlx5e_block_cb_list);
> >> >>
> >> >>  static int
> >> >> -mlx5e_rep_indr_setup_block(struct net_device *netdev,
> >> >> +mlx5e_rep_indr_setup_block(struct Qdisc *sch,
> >> >>  			   struct mlx5e_rep_priv *rpriv,
> >> >>  			   struct flow_block_offload *f,
> >> >>  			   flow_setup_cb_t *setup_cb,
> >> >> @@ -412,6 +412,7 @@ mlx5e_rep_indr_setup_block(struct net_device *netdev,
> >> >>  			   void (*cleanup)(struct flow_block_cb *block_cb))
> >> >>  {
> >> >>  	struct mlx5e_priv *priv = netdev_priv(rpriv->netdev);
> >> >> +	struct net_device *netdev = sch->dev_queue->dev;
> >> >
> >> > This breaks indirect block support for netfilter since the driver
> >> > is assuming a Qdisc object.
> >>
> >> Sorry, I don't follow. You mean the mlx5 driver? What does it mean to
> >> "assume a qdisc object"?
> >>
> >> Is it incorrect to rely on the fact that the netdevice can be deduced
> >> from a qdisc, or that there is always a qdisc associated with a block
> >> binding point?
> >
> > The drivers assume that the xyz_indr_setup_block() always gets a sch
> > object, which is not always true. Are you really sure this will work
> > for the TC CT offload?
> 
> I tested indirect blocks in general, but not CT offload.
> 
> > --- a/net/netfilter/nf_flow_table_offload.c
> > +++ b/net/netfilter/nf_flow_table_offload.c
> > @@ -928,26 +928,27 @@ static int nf_flow_table_block_setup(struct nf_flowtable *flowtable,
> >  }
> >
> >  static void nf_flow_table_block_offload_init(struct flow_block_offload *bo,
> > -                                            struct net *net,
> > +                                            struct net_device *dev,
> >                                              enum flow_block_command cmd,
> >                                              struct nf_flowtable *flowtable,
> >                                              struct netlink_ext_ack *extack)
> >  {
> >         memset(bo, 0, sizeof(*bo));
> > -       bo->net         = net;
> > +       bo->net         = dev_net(dev);
> >         bo->block       = &flowtable->flow_block;
> >         bo->command     = cmd;
> >         bo->binder_type = FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS;
> >         bo->extack      = extack;
> > +       bo->sch         = dev_ingress_queue(dev)->qdisc_sleeping;
> >         INIT_LIST_HEAD(&bo->cb_list);
> >  }
> >
> >  static void nf_flow_table_indr_cleanup(struct flow_block_cb *block_cb)
> >  {
> >         struct nf_flowtable *flowtable = block_cb->indr.data;
> > -       struct net_device *dev = block_cb->indr.dev;
> > +       struct Qdisc *sch = block_cb->indr.sch;
> >
> > -       nf_flow_table_gc_cleanup(flowtable, dev);
> > +       nf_flow_table_gc_cleanup(flowtable, sch->dev_queue->dev);
> >         down_write(&flowtable->flow_block_lock);
> >         list_del(&block_cb->list);
> >         list_del(&block_cb->driver_list);
> >
> > Moreover, the flow_offload infrastructure should also remain
> > independent of the front-end, whether tc, netfilter or ethtool; this is
> > pulling tc-specific stuff into it, e.g.
> 
> Hmm, OK, so I should not have assumed there is always a qdisc associated
> with a block.
> 
> I'm not sure how strong your objection to pulling in TC is. Would it be
> OK, instead of replacing the device with a qdisc in flow_block_indr, to
> put in both? The qdisc can be NULL for the "normal" binder types,
> because there the block is uniquely identified just by the type. For the
> "non-normal" ones it would be obvious how to initialize it.

Adding an extra field to flow_block_indr instead of replacing struct
net_device should be OK for your new qevent use-case, right? This new
Qdisc field would be NULL for the existing use-cases; that is what you
mean, correct?
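
Something along these lines, I suppose (just a sketch to make sure we
mean the same thing; the field layout is illustrative, not a tested
patch):

	struct flow_block_indr {
		struct list_head		list;
		struct net_device		*dev;
		struct Qdisc			*sch;	/* NULL for existing front-ends */
		enum flow_block_binder_type	binder_type;
		void				*data;
		void				*cb_priv;
		void				(*cleanup)(struct flow_block_cb *block_cb);
	};

The existing front-ends keep using ->dev and simply leave ->sch as
NULL; only the qdisc-backed binder types would fill it in.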

I still have not had a look at this new qevent infrastructure, so I
don't have a better proposal right now.

Thanks.
