Message-ID: <20130626093713.GA27385@gmail.com>
Date: Wed, 26 Jun 2013 11:37:13 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Mike Galbraith <bitbucket@...ine.de>
Cc: Dave Chiluk <chiluk@...onical.com>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org
Subject: Re: Scheduler accounting inflated for io bound processes.
* Mike Galbraith <bitbucket@...ine.de> wrote:
> On Tue, 2013-06-25 at 18:01 +0200, Mike Galbraith wrote:
> > On Thu, 2013-06-20 at 14:46 -0500, Dave Chiluk wrote:
> > > Running the below testcase shows each process consuming 41-43% of its
> > > respective cpu while per core idle numbers show 63-65%, a disparity of
> > > roughly 4-8%. Is this a bug, known behaviour, or consequence of the
> > > process being io bound?
> >
> > All three I suppose.
>
> P.S.
>
> perf top --sort=comm -C 3 -d 5 -F 250 (my tick freq)
> 56.65% netserver
> 43.35% pert
>
> perf top --sort=comm -C 3 -d 5
> 67.16% netserver
> 32.84% pert
>
> If you sample a high freq signal (netperf TCP_RR) at low freq (tick),
> then try to reproduce the original signal, (very familiar) distortion
> results. Perf doesn't even care about softirq yada yada, so it seems
> to be a pure sample rate thing.
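
To make the aliasing Mike describes concrete, here is a minimal
user-space sketch (the 1000.03us burst period, 40% duty cycle and
5-second windows are made-up illustration values, not taken from the
testcase): a steady ~40% load whose burst period sits just off the tick
makes fixed-rate samples beat against it, so per-window utilization
swings between 0% and 100% even though the load never changes.

/*
 * Aliasing sketch: a task is on-CPU for the first 40% of every
 * 1000.03us burst period (a constant ~40% true utilization), sampled
 * at a fixed 1ms tick.  The tiny offset between period and tick makes
 * the sampling phase drift slowly, so successive 5-second windows see
 * the task as almost always running or almost never running.
 */
#include <stdio.h>
#include <math.h>

#define PERIOD_US   1000.03   /* assumed burst period, just off the tick */
#define TICK_US     1000.0    /* fixed 1 kHz sampling, like the tick */
#define WIN_SAMPLES 5000      /* 5-second reporting window */

static int task_running(double t_us)
{
    /* on-CPU for the first 40% of every period: a steady 40% load */
    return fmod(t_us, PERIOD_US) < 0.40 * PERIOD_US;
}

int main(void)
{
    for (int w = 0; w < 8; w++) {
        long hits = 0;

        for (long i = 0; i < WIN_SAMPLES; i++)
            hits += task_running((w * (long)WIN_SAMPLES + i) * TICK_US);

        printf("window %d: sampled %5.1f%% (true 40.0%%)\n",
               w, 100.0 * hits / WIN_SAMPLES);
    }
    return 0;
}
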
It would be very nice to randomize the sampling rate by randomizing the
intervals within a 1% range or so - perf tooling will probably recognize
the different sample weights.
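
A rough sketch of that idea in the same toy setup (the interval-weighted
accumulation and the specific jitter draw are illustrative assumptions,
not an existing perf or kernel knob): with the burst period phase-locked
exactly to the tick - the worst case, where fixed-rate sampling reports
100% forever - jittering each interval within 1% lets the sampling phase
drift across the burst, and the cumulative estimate drifts toward the
true 40%.

/*
 * Randomized-interval sketch: same steady 40% load, but the burst
 * period equals the tick exactly, so fixed-rate samples always land at
 * the same phase and would report 100%.  Each interval is jittered
 * within +/-1% of the nominal tick and each sample is weighted by the
 * interval that preceded it.
 */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PERIOD_US 1000.0    /* burst period phase-locked to the tick */
#define TICK_US   1000.0    /* nominal 1 kHz sampling rate */

static int task_running(double t_us)
{
    /* on-CPU for the first 40% of every period: a steady 40% load */
    return fmod(t_us, PERIOD_US) < 0.40 * PERIOD_US;
}

int main(void)
{
    double t = 0.0, busy = 0.0, total = 0.0;

    srand(1);
    for (long i = 1; i <= 300000; i++) {
        /* draw the next interval within +/-1% of the nominal tick */
        double dt = TICK_US * (1.0 + 0.01 * (2.0 * rand() / RAND_MAX - 1.0));

        t += dt;
        total += dt;
        if (task_running(t))
            busy += dt;     /* weight the sample by its interval */

        if (i % 60000 == 0) /* report once per simulated minute */
            printf("after %3lds: estimated %5.1f%% (true 40.0%%, fixed tick: 100%%)\n",
                   i / 1000, 100.0 * busy / total);
    }
    return 0;
}
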
Thanks,
Ingo