Message-Id: <1257299745.16282.49.camel@ymzhang>
Date: Wed, 04 Nov 2009 09:55:45 +0800
From: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: alex.shi@...el.com, linux-kernel@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mike Galbraith <efault@....de>
Subject: Re: UDP-U stream performance regression on 32-rc1 kernel
On Tue, 2009-11-03 at 18:45 +0100, Ingo Molnar wrote:
> * Zhang, Yanmin <yanmin_zhang@...ux.intel.com> wrote:
>
> > On Tue, 2009-11-03 at 11:47 +0800, Alex Shi wrote:
> > > We found that the UDP-U 1k/4k streams of the netperf benchmark show a
> > > performance regression of 10% to 20% on our Tulsa and some NHM
> > > machines.
> > perf events show that the function find_busiest_group consumes about
> > 4.5% of CPU time with the patch, while it consumes only 0.5% without
> > it.
> >
> > The communication between the netperf client and netserver is very
> > fast. When netserver receives a message and no new message is
> > available, it goes to sleep and the scheduler calls idle_balance =>
> > load_balance_newidle. load_balance_newidle spends too much time, and a
> > new message arrives quickly, before load_balance_newidle ends.
> >
> > As the comments in the patch say that hackbench benefits from it, I
> > tested hackbench on Nehalem and Core2 machines. hackbench does benefit
> > from it, by about 6% on the Nehalem machines, but doesn't benefit on
> > the Core2 machines.
>
> Can you confirm that -tip:
>
> http://people.redhat.com/mingo/tip.git/README
>
> has it fixed (or at least improved)?
The latest -tip improves the netperf loopback result, but doesn't fix it
completely. For example, on a Nehalem machine netperf UDP-U-1k has about
a 25% regression; with the -tip kernel the regression drops to less than
10%.
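
For reference, the receive side of this workload is essentially a tight
loop around a blocking recvfrom() on a loopback UDP socket: whenever the
socket buffer briefly runs dry the task sleeps, the CPU goes idle, and
idle_balance() / load_balance_newidle() runs before the sender's next
datagram arrives. A minimal userspace sketch of that pattern (not the
actual netserver code; the port number is arbitrary and only for
illustration) looks like:

/*
 * Minimal sketch of the UDP receive loop described above.  Each time
 * recvfrom() finds no pending datagram the task sleeps, and the newly
 * idle CPU enters the newidle balance path before the next message
 * arrives.
 */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];			/* matches the 1k/4k message sizes */
	struct sockaddr_in addr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = htons(12345);	/* arbitrary port for this sketch */

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		return 1;
	}

	for (;;) {
		/* Blocks (task sleeps) whenever no datagram is queued. */
		ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
		if (n < 0) {
			perror("recvfrom");
			break;
		}
	}

	close(fd);
	return 0;
}
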
yanmin