Message-ID: <20140610115045.GA6019@amd.pavel.ucw.cz>
Date: Tue, 10 Jun 2014 13:50:45 +0200
From: Pavel Machek <pavel@....cz>
To: Jiri Kosina <jkosina@...e.cz>
Cc: Daniel Vetter <daniel.vetter@...ll.ch>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
Linux-pm mailing list <linux-pm@...ts.osdl.org>,
kernel list <linux-kernel@...r.kernel.org>,
Chris Wilson <chris@...is-wilson.co.uk>,
intel-gfx <intel-gfx@...ts.freedesktop.org>
Subject: Bisecting the heisenbugs (was Re: 3.15-rc: regression in suspend)
On Mon 2014-06-09 13:03:31, Jiri Kosina wrote:
> On Mon, 9 Jun 2014, Pavel Machek wrote:
>
> > > > Strange. It seems 3.15 with the patch reverted only boots in 30% or so
> > > > cases... And I've seen resume failure, too, so maybe I was just lucky
> > > > that it worked for a while.
> > >
> > > git bisect really likes 25f397a429dfa43f22c278d0119a60 - you're about
> > > the 5th report or so that claims this is the culprit but it's
> > > something else. The above code is definitely not used in i915 so bogus
> > > bisect result.
> >
> > Note I did not do the bisect, I only attempted revert and test.
> >
> > And did three boots of successful s2ram.. only to find out that it
> > does not really fix s2ram, I was just lucky :-(.
> >
> > Unfortunately, this means my s2ram problem will be tricky/impossible
> > to bisect :-(.
>
> Welcome to the situation I have been in for past several months.
I attempted to do some analysis. It should be possible to bisect when
tests are not reliable, but it will take time and it will be almost
necessary to have the bisection automated.
How long does a single test run take for you, given that it only
catches the bug about 50% of the time? It seems to be about one minute
here.
The trivial strategy is to repeat each test enough times to reach 99%
reliability before declaring a kernel good. That should make the whole
bisection about 2x longer.
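With a 0.5 false-success rate, a kernel has to pass about seven
consecutive runs before you can call it good with ~99% confidence
(0.5^7 is roughly 0.008); with a 6 minute build and a 1 minute test
that roughly doubles the cost of each bisect step, which matches the
numbers below. A rough sketch of a "git bisect run" helper using this
strategy (the test command and repeat count here are my assumptions,
not taken from the attached script):

    #!/usr/bin/env python
    # Hypothetical helper for "git bisect run": a single failure is
    # definitive ("bad"), but a success is only trusted after several
    # consecutive passes, because a bad kernel still suspends fine
    # about half the time.
    import subprocess
    import sys

    RUNS = 7                             # 0.5**7 ~= 0.008 -> ~99% confidence
    TEST = ["/usr/local/bin/try-s2ram"]  # stand-in for the real suspend test

    for _ in range(RUNS):
        if subprocess.call(TEST) != 0:
            sys.exit(1)                  # bug reproduced -> mark kernel "bad"
    sys.exit(0)                          # all runs passed -> mark it "good"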
Other strategies are possible -- like selecting bisect points closer
to the "bad" end, or the tricky "let's compute probabilities for each
point" approach (sketched below) -- that work well for some parameter
settings. There is probably an even better strategy possible... if you
have an idea, you can try it below.
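For the probabilities idea, a minimal sketch of the update step might
look like this (my own illustration, not code from the attached
script):

    def update(post, m, test_failed, p_false_success=0.5):
        # post[i] = P(the bug first appeared in version i), given all
        # results so far. A test was just run on version m; test_failed
        # says whether the bug showed up.
        new = []
        for i, p in enumerate(post):
            if m >= i:          # version m already contains the bug
                like = (1.0 - p_false_success) if test_failed else p_false_success
            else:               # version m predates the bug, never fails
                like = 0.0 if test_failed else 1.0
            new.append(p * like)
        total = sum(new)
        return [p / total for p in new]

The next version to build could then be chosen from this distribution,
for example its median, instead of the plain midpoint.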
A Monte Carlo simulation is attached.
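For readers who do not open the attachment, a much reduced sketch of
the kind of harness it implements could look like the following; the
names and the exact strategy are my simplification, and the results
pasted below come from the attached tricky_bisect.py, not from this
sketch:

    # Hypothetical, much reduced simulation: bisect n versions where the
    # bug only reproduces with probability (1 - p_false_success) per run
    # on a bad kernel, repeat each test a fixed number of times, and
    # count how long the whole bisection takes on average.
    import random

    def simulate(n=1024, p_false_success=0.5, runs_per_point=7,
                 compile_min=6, test_min=1, tries=30000):
        total_cost = 0.0
        total_tests = 0
        for _ in range(tries):
            first_bad = random.randrange(1, n)   # version 0 is known good
            lo, hi = 0, n - 1                    # invariant: lo good, hi bad
            cost = 0
            tests = 0
            while hi - lo > 1:
                mid = (lo + hi) // 2
                cost += compile_min
                reproduced = False
                for _ in range(runs_per_point):
                    tests += 1
                    cost += test_min
                    if mid >= first_bad and random.random() >= p_false_success:
                        reproduced = True        # bug showed up, stop retesting
                        break
                if reproduced:
                    hi = mid
                else:
                    lo = mid                     # wrong ~0.8% of the time
            total_cost += cost
            total_tests += tests
        print("Average cost %s minutes" % (total_cost / tries))
        print("Average tests %s" % (total_tests / float(tries)))

    simulate()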
Bisector on reliable bug
-----
1024 versions bug with probability of 0 false success, monte carlo
of 30000 tries
Assume compilation takes 6 minutes and test takes 1 minutes
Average cost 71.0522 minutes
Average tests 9.99793333333
Bisector
-----
1024 versions bug with probability of 0.5 false success, monte carlo
of 30000 tries
Assume compilation takes 6 minutes and test takes 1 minutes
Average cost 143.393933333 minutes
Average tests 44.5374666667
Trisector
-----
1024 versions bug with probability of 0.5 false success, monte carlo
of 30000 tries
Assume compilation takes 6 minutes and test takes 1 minutes
Average cost 160.554 minutes
Average tests 39.9552666667
Strange
-----
1024 versions bug with probability of 0.5 false success, monte carlo
of 3000 tries
Assume compilation takes 6 minutes and test takes 1 minutes
Average cost 246.658 minutes
Average tests 38.412
pavel@amd:~/WWW$
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
[Attachment: tricky_bisect.py (text/x-python, 4322 bytes)]