Message-ID: <20170628213701.32krfuipzngsmt4k@ast-mbp>
Date: Wed, 28 Jun 2017 14:37:03 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Daniel Borkmann <daniel@...earbox.net>
Cc: Edward Cree <ecree@...arflare.com>, davem@...emloft.net,
Alexei Starovoitov <ast@...com>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org,
iovisor-dev <iovisor-dev@...ts.iovisor.org>
Subject: Re: [PATCH v3 net-next 00/12] bpf: rewrite value tracking in verifier
On Wed, Jun 28, 2017 at 10:38:02PM +0200, Daniel Borkmann wrote:
> On 06/28/2017 04:11 PM, Edward Cree wrote:
> > On 28/06/17 14:50, Daniel Borkmann wrote:
> > > Hi Edward,
> > >
> > > Did you also have a chance in the meantime to look at reducing complexity
> > > along with your unification? I ran the cilium test suite with your
> > > latest set from here, and the current worst-case number of processed
> > > insns the verifier has to go through for cilium progs increases from
> > > the ~53k we have right now to ~76k. I'm a bit worried that this
> > > quickly gets us close to the upper ~98k max limit and starts rejecting
> > > programs again. An alternative is to bump the complexity limit again
> > > in the near future once we run into it, but preferably there's a way
> > > to optimize it along with the rewrite? Do you see any possibilities
> > > worth exploring?
> > The trouble, I think, is that as we're now tracking more information about
> > each register value, we're less able to prune branches. But often that
> > information is not actually being used in reaching the exit state. So it
>
> Agree.
>
> > seems like the way to tackle this would be to track what information is
> > used — or at least, which registers are read from (including e.g. writing
> > through them or passing them to helper calls) — in reaching a safe state.
> > Then only registers which are used are required to match for pruning.
> > But that tracking would presumably have to propagate backwards through the
> > verifier stack, and I'm not sure how easily that could be done. Someone
> > (was it you?) was talking about replacing the current DAG walking and
> > pruning with some kind of basic-block thing, which would help with this.
> > Summary: I think it could be done, but I haven't looked into the details
> > of implementation yet; if it's not actually breaking your programs (yet),
> > maybe leave it for a followup patch series?
>
> Could we perhaps adapt the limit to 128k as part of this set,
> given we know that we're tracking more metadata here anyway?
Increasing the limit is a must-have, since pruning suffered so much.
Going from 53k to 76k is pretty substantial.
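For reference, assuming that just means bumping the existing knob in
kernel/bpf/verifier.c, it would be a one-liner along the lines of:

	/* sketch only: raise the processed-insn cap from ~98k to 128k */
	#define BPF_COMPLEXITY_LIMIT_INSNS	131072	/* was 98304 */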
What is the % increase for the tests in selftests/?
I think we need to pinpoint the exact reason.
Saying we just track more data is not enough.
We've tried v2 set on our load balancer and also saw ~20% increase.
I don't remember the absolute numbers.
These jumps don't make me comfortable with this extra tracking.
Can you try to roll back ptr&const and full negative/positive tracking
and see whether it gets back to what we had before?
I agree that long term it's better to do proper basic block based
liveness, but we need to understand what's causing the increase today.
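Just to illustrate the liveness direction (made-up names and structure,
not actual verifier code): mark which registers get read on the way to
a safe exit, and let pruning ignore the rest.

#include <stdbool.h>
#include <stdint.h>

/* Sketch only: record, per register, whether it was read on some path
 * from this state to a safe exit, and let state pruning skip registers
 * that were never read.
 */
enum reg_liveness {
	REG_LIVE_NONE = 0,	/* never read after this point */
	REG_LIVE_READ,		/* read before being overwritten */
};

struct reg_state_sketch {
	uint64_t min_value;	/* stand-in for the real tracked state */
	uint64_t max_value;
	enum reg_liveness live;
};

/* An old explored state prunes the current path only if every register
 * that was actually read when the old state reached an exit also
 * matches in the current state; unread registers are ignored.
 */
static bool prune_ok(const struct reg_state_sketch *old,
		     const struct reg_state_sketch *cur, int nregs)
{
	for (int i = 0; i < nregs; i++) {
		if (old[i].live == REG_LIVE_NONE)
			continue;
		if (old[i].min_value != cur[i].min_value ||
		    old[i].max_value != cur[i].max_value)
			return false;
	}
	return true;
}

With something like that, states that differ only in registers nobody
reads afterwards would still prune.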
If tnum is causing it, that would be a reasonable trade-off to make,
but if it's the full neg/pos tracking, which today has no use other
than making the whole thing cleaner, then I would rather drop it.
We can always come back to it later once pruning issues are solved.
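For context, the tnum tracking in question is, as I read the series, a
(value, mask) pair per register where set mask bits are unknown. A
paraphrased sketch (not a verbatim copy of the patches):

#include <stdint.h>

/* Paraphrased sketch of the tristate-number idea: bits set in 'mask'
 * are unknown; bits clear in 'mask' are known and given by 'value'.
 */
struct tnum_sketch {
	uint64_t value;		/* value of the known bits */
	uint64_t mask;		/* set bits are unknown */
};

/* a fully known constant: no unknown bits */
static struct tnum_sketch tnum_const(uint64_t v)
{
	struct tnum_sketch t = { .value = v, .mask = 0 };
	return t;
}

/* bitwise AND of two partially known values: a result bit is known set
 * only if known set on both sides, and can only possibly be set if it
 * is possibly set on both sides.
 */
static struct tnum_sketch tnum_and(struct tnum_sketch a,
				   struct tnum_sketch b)
{
	uint64_t alpha = a.value | a.mask;	/* bits possibly set in a */
	uint64_t beta = b.value | b.mask;	/* bits possibly set in b */
	uint64_t v = a.value & b.value;		/* bits known set in both */
	struct tnum_sketch t = { .value = v, .mask = (alpha & beta) & ~v };
	return t;
}

E.g. ANDing a register with tnum_const(0xff) leaves the upper bits known
zero and preserves whatever was known about the low byte.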