Message-ID: <20120104145602.GB8333@linux.vnet.ibm.com>
Date: Wed, 4 Jan 2012 20:26:02 +0530
From: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
To: Avi Kivity <avi@...hat.com>
Cc: Nikunj A Dadhania <nikunj@...ux.vnet.ibm.com>,
Rik van Riel <riel@...hat.com>, Ingo Molnar <mingo@...e.hu>,
peterz@...radead.org, linux-kernel@...r.kernel.org,
bharata@...ux.vnet.ibm.com
Subject: Re: [RFC PATCH 0/4] Gang scheduling in CFS
* Avi Kivity <avi@...hat.com> [2012-01-04 16:41:58]:
> > Here are some observations related to the Baseline-only (8 VM) case:
> >
> >               | ple_gap=128 | ple_gap=64 | ple_gap=256 | ple_window=2048
> > --------------+-------------+------------+-------------+----------------
> > EbzyRecords/s |     2247.50 |    2132.75 |     2086.25 |         1835.62
> > PauseExits    |  7928154.00 | 6696342.00 |  7365999.00 |     50319582.00
> >
> > With ple_window=2048, PauseExits is more than 6 times that of the default case
>
> So it looks like the default is optimal, at least wrt the cases you
> tested and your test workload.
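
(As a side note for anyone reproducing the table above: ple_gap and
ple_window are kvm_intel module parameters, so a minimal sketch of
varying them between runs, assuming an Intel host with PLE support,
would be:

	# Reload kvm_intel with non-default PLE settings.
	# (Upstream defaults at this point: ple_gap=128, ple_window=4096.)
	rmmod kvm_intel
	modprobe kvm_intel ple_gap=64 ple_window=4096

with the guests restarted after each reload.)
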
The default case still lags considerably behind the results we are seeing
with gang scheduling. One more interesting data point would be how many
PLE exits occur while a vcpu is spinning in flush_tlb_others_ipi(). Is
there any easy way to determine that?
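
One rough sketch that comes to mind (unverified) is to filter the
kvm_exit tracepoint on the PAUSE exit reason on the host, and sample
guest stacks in parallel:

	# Host side: count PAUSE-induced exits via the kvm_exit tracepoint
	# (on VMX, exit_reason 40 is EXIT_REASON_PAUSE_INSTRUCTION).
	cd /sys/kernel/debug/tracing
	echo 'exit_reason == 40' > events/kvm/kvm_exit/filter
	echo 1 > events/kvm/kvm_exit/enable
	cat trace_pipe > /tmp/ple-exits.log &

	# Guest side: sample where vcpus spend their cycles, to see what
	# fraction of samples land in flush_tlb_others_ipi().
	perf record -a -g -- sleep 10
	perf report

but correlating the two by hand is clumsy, so a cleaner way would be
welcome.
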
- vatsa