Message-ID: <87r4ziqu8f.fsf@abhimanyu.in.ibm.com>
Date: Mon, 02 Jan 2012 15:52:40 +0530
From: Nikunj A Dadhania <nikunj@...ux.vnet.ibm.com>
To: Avi Kivity <avi@...hat.com>
Cc: Ingo Molnar <mingo@...e.hu>, peterz@...radead.org,
linux-kernel@...r.kernel.org, vatsa@...ux.vnet.ibm.com,
bharata@...ux.vnet.ibm.com
Subject: Re: [RFC PATCH 0/4] Gang scheduling in CFS
On Mon, 02 Jan 2012 11:39:00 +0200, Avi Kivity <avi@...hat.com> wrote:
> On 01/02/2012 06:20 AM, Nikunj A Dadhania wrote:
[...]
> > > non-PLE - Test Setup:
> > >
> > > dbench 8vm (degraded -30%)
> > > | dbench| 2.01 | 1.38 | -30 |
> >
> >
> > Baseline:
> > 57.75% init [kernel.kallsyms] [k] native_safe_halt
> > 40.88% swapper [kernel.kallsyms] [k] native_safe_halt
> >
> > Gang V2:
> > 56.25% init [kernel.kallsyms] [k] native_safe_halt
> > 42.84% swapper [kernel.kallsyms] [k] native_safe_halt
> >
> > Similar comparison here.
> >
>
> Weird, looks like a mismeasurement...
>
I am getting similar numbers across different runs and reboots with dbench.
> what happens if you add a bash
> busy loop?
>
Perf output for bash busy loops inside the guest:
9.93% sh libc-2.12.so [.] _int_free
8.37% sh libc-2.12.so [.] _int_malloc
6.14% sh libc-2.12.so [.] __GI___libc_malloc
6.03% sh bash [.] 0x480e6
loop.sh
----------------------------------
#!/bin/bash
# Spawn 8 busy-looping background jobs, let them run for 60 seconds,
# then kill them off.
for i in `seq 1 8`
do
        while :; do :; done &
        pid[$i]=$!
done
sleep 60
for i in `seq 1 8`
do
        kill -9 ${pid[$i]}
done
----------------------------------
Used the following command to capture the perf events inside the guest:
ssh root@....168.123.11 'perf record -a -o loop-perf.out -- /root/loop.sh'
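
For the 8-VM dbench setup, the same capture could be driven from the host
across all guests. A minimal sketch follows; the guest hostnames
(guest1..guest8) and output file names are placeholders for illustration,
and it assumes passwordless root ssh into each guest:

----------------------------------
#!/bin/bash
# Sketch: run loop.sh under perf inside each guest in parallel, then
# copy the per-guest perf data back to the host for comparison.
GUESTS="guest1 guest2 guest3 guest4 guest5 guest6 guest7 guest8"

for g in $GUESTS
do
        ssh root@$g 'perf record -a -o /root/loop-perf.out -- /root/loop.sh' &
done
wait    # all guests run the 60s busy-loop workload concurrently

for g in $GUESTS
do
        scp root@$g:/root/loop-perf.out ./loop-perf-$g.out
done
----------------------------------

Each capture can then be inspected with 'perf report -i loop-perf-<guest>.out'.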
Regards,
Nikunj