Message-ID: <20190611150302.smeuvloq7vvtcccp@breakpoint.cc>
Date: Tue, 11 Jun 2019 17:03:02 +0200
From: Florian Westphal <fw@...len.de>
To: John Hurley <john.hurley@...ronome.com>
Cc: Florian Westphal <fw@...len.de>,
David Miller <davem@...emloft.net>,
Linux Netdev List <netdev@...r.kernel.org>,
Simon Horman <simon.horman@...ronome.com>,
Jakub Kicinski <jakub.kicinski@...ronome.com>,
Jamal Hadi Salim <jhs@...atatu.com>,
oss-drivers@...ronome.com
Subject: Re: [RFC net-next v2 1/1] net: sched: protect against loops in TC
filter hooks
John Hurley <john.hurley@...ronome.com> wrote:
> On Thu, Jun 6, 2019 at 8:52 PM Florian Westphal <fw@...len.de> wrote:
> >
> > David Miller <davem@...emloft.net> wrote:
> > > From: Florian Westphal <fw@...len.de>
> > > Date: Thu, 6 Jun 2019 14:58:18 +0200
> > >
> > > >> @@ -827,6 +828,7 @@ struct sk_buff {
> > > >> __u8 tc_at_ingress:1;
> > > >> __u8 tc_redirected:1;
> > > >> __u8 tc_from_ingress:1;
> > > >> + __u8 tc_hop_count:2;
> > > >
> > > > I dislike this, why can't we just use a pcpu counter?
> > >
> > > I understand that it's because the only precise context is per-SKB,
> > > not the per-CPU packet-processing context. This has been discussed before.
> >
> > I don't think it's worth it, and it won't work with physical-world
> > loops (e.g. a bridge setup with no spanning tree and a closed loop).
> >
> > Also I fear that if we start to do this for tc, we will also have to
> > follow up later with more L2 hop counts for other users, e.g. veth,
> > bridge, ovs, and so on.
>
> Hi David/Florian,
> Moving forward with this, should we treat the looping and the recursion as
> two separate issues and at least prevent the potential stack-overflow
> panics caused by the recursion?
> The pcpu counter should protect against this.
As outlined above, I think they are different issues.
> Are there context-specific issues that we may miss by doing this?
I can't think of any.
> If not I will respin with the pcpu counter in act_mirred.
Sounds good to me, thanks.
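
For reference, a minimal sketch of the kind of per-CPU recursion guard
being discussed for act_mirred (the limit value, the mirred_rec_level
name and the elided function body are illustrative assumptions, not
taken from an actual patch):

/* Sketch: cap redirect recursion depth per CPU in act_mirred. */
#define MIRRED_RECURSION_LIMIT	4
static DEFINE_PER_CPU(unsigned int, mirred_rec_level);

static int tcf_mirred_act(struct sk_buff *skb, const struct tc_action *a,
			  struct tcf_result *res)
{
	unsigned int rec_level;
	int ret;

	rec_level = __this_cpu_inc_return(mirred_rec_level);
	if (unlikely(rec_level > MIRRED_RECURSION_LIMIT)) {
		/* Redirect chain on this CPU is too deep: drop the
		 * packet rather than risk overflowing the stack.
		 */
		net_warn_ratelimited("mirred: recursion limit reached on dev %s\n",
				     netdev_name(skb->dev));
		__this_cpu_dec(mirred_rec_level);
		return TC_ACT_SHOT;
	}

	/* ... existing mirror/redirect logic elided ... */
	ret = TC_ACT_PIPE;

	__this_cpu_dec(mirred_rec_level);
	return ret;
}

Since mirred redirects re-enter the stack synchronously on the same CPU,
a per-CPU depth counter is enough to bound the recursion; it does not
(and is not meant to) catch the physical-world loops mentioned above.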