Message-ID: <20120220081416.GE30810@elte.hu>
Date: Mon, 20 Feb 2012 09:14:16 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Nikunj A Dadhania <nikunj@...ux.vnet.ibm.com>
Cc: Avi Kivity <avi@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
Rik van Riel <riel@...hat.com>, linux-kernel@...r.kernel.org,
vatsa@...ux.vnet.ibm.com, bharata@...ux.vnet.ibm.com
Subject: Re: [RFC PATCH 0/4] Gang scheduling in CFS

* Nikunj A Dadhania <nikunj@...ux.vnet.ibm.com> wrote:

> > Here it would massively improve performance - without
> > regressing the scheduler code massively.
>
> I tried an experiment with flush_tlb_others_ipi(). This
> depends on Raghu's "kvm : Paravirt-spinlock support for KVM
> guests" (https://lkml.org/lkml/2012/1/14/66), which adds a
> new hypercall for kicking another vcpu out of halt.
>
> Here are the results from non-PLE hardware, running the
> ebizzy workload inside the VMs. The table shows the ebizzy
> score in records/sec (higher is better).
>
> 8-CPU Intel Xeon, HT disabled, 64-bit VMs (8 vcpu, 1G RAM each)
>
> +--------+------------+------------+-------------+
> | | baseline | gang | pv_flush |
> +--------+------------+------------+-------------+
> | 2VM | 3979.50 | 8818.00 | 11002.50 |
> | 4VM | 1817.50 | 6236.50 | 6196.75 |
> | 8VM | 922.12 | 4043.00 | 4001.38 |
> +--------+------------+------------+-------------+

Very nice results!

Seems like the PV approach is massively faster on 2 VMs than
even the gang scheduling hack, because it attacks the problem
at its root, not just the symptom.
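
For context, a minimal sketch of the pv_flush idea being
measured above, assuming a guest-visible vcpu run state and the
kick hypercall from Raghu's series. pv_vcpu_running(),
pv_flush_state, PV_TLB_FLUSH_PENDING and
kvm_hypercall_kick_vcpu() are illustrative names, not the
actual interfaces from the patches:

#include <linux/bitops.h>
#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/percpu.h>
#include <linux/mm_types.h>
#include <asm/tlbflush.h>

#define PV_TLB_FLUSH_PENDING	0
static DEFINE_PER_CPU(unsigned long, pv_flush_state);

/* Hypothetical helpers - assumptions, not the patch's API: */
static bool pv_vcpu_running(int cpu);		/* guest-visible run state */
static void kvm_hypercall_kick_vcpu(int cpu);	/* kick a halted vcpu */

/*
 * Instead of IPI-ing every target vcpu and spinning until a
 * possibly-preempted vcpu responds, only IPI the vcpus that are
 * actually running; for the rest, mark the flush pending and
 * kick them, so they flush before re-entering guest mode and
 * the sender never spins on a descheduled vcpu.
 */
static void pv_flush_tlb_others(const struct cpumask *cpumask,
				struct mm_struct *mm, unsigned long va)
{
	cpumask_var_t ipi_mask;
	int cpu;

	if (!alloc_cpumask_var(&ipi_mask, GFP_ATOMIC)) {
		/* Fall back to the normal IPI path. */
		native_flush_tlb_others(cpumask, mm, va);
		return;
	}

	cpumask_clear(ipi_mask);
	for_each_cpu(cpu, cpumask) {
		if (pv_vcpu_running(cpu)) {
			/* Running vcpus ack the IPI promptly. */
			cpumask_set_cpu(cpu, ipi_mask);
		} else {
			set_bit(PV_TLB_FLUSH_PENDING,
				&per_cpu(pv_flush_state, cpu));
			kvm_hypercall_kick_vcpu(cpu);
		}
	}

	if (!cpumask_empty(ipi_mask))
		native_flush_tlb_others(ipi_mask, mm, va);

	free_cpumask_var(ipi_mask);
}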

The patch is also an order of magnitude simpler. Gang
scheduling, R.I.P.

Thanks,
Ingo