Message-ID: <20131026103651.GA21294@gmail.com>
Date: Sat, 26 Oct 2013 12:36:52 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Don Zickus <dzickus@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andi Kleen <ak@...ux.intel.com>, dave.hansen@...ux.intel.com,
Stephane Eranian <eranian@...gle.com>, jmario@...hat.com,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Arnaldo Carvalho de Melo <acme@...radead.org>
Subject: Re: [PATCH] perf, x86: Optimize intel_pmu_pebs_fixup_ip()

* Don Zickus <dzickus@...hat.com> wrote:

> On Thu, Oct 24, 2013 at 12:52:06PM +0200, Peter Zijlstra wrote:
> > On Wed, Oct 23, 2013 at 10:48:38PM +0200, Peter Zijlstra wrote:
> > > I'll also make sure to test we actually hit the fault path
> > > by concurrently running something like:
> > >
> > > while :; do echo 1 > /proc/sys/vm/drop_caches; done
> > >
> > > while doing perf top or so..
> >
> > So the below appears to work; I've ran:
> >
> > while :; do echo 1 > /proc/sys/vm/drop_caches; sleep 1; done &
> > while :; do make O=defconfig-build/ clean; perf record -a -g fp -e cycles:pp make O=defconfig-build/ -s -j64; done
> >
> > And verified that the if (in_nmi()) trace_printk() was visible in the
> > trace output verifying we indeed took the fault from the NMI code.
> >
> > I've had this running for ~ 30 minutes or so and the machine is still
> > healthy.
> >
> > Don, can you give this stuff a spin on your system?
>
> Hi Peter,
>
> I finally had a chance to run this on my machine. From my
> testing, it looks good: better performance numbers. I think my
> longest latency went from 300K cycles down to 150K cycles, and there
> are very few of those (most are under 100K cycles).
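
(For reference, the debug check Peter mentions above amounts to
something like the following in the fault path that the NMI-time user
copy can hit. This is only a sketch for illustration, not the actual
debug hunk; in_nmi() and trace_printk() are the usual kernel helpers
and both are usable from NMI context:

	if (in_nmi())
		trace_printk("took the fault from NMI context\n");

Seeing that message in the trace output is what confirms the fault was
really taken while running under the NMI handler.)
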
Btw., do we know where those ~100k-150k cycles are spent
specifically? 100k cycles is still an awful lot of time to spend in
NMI context ...
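
One crude way to find out would be to timestamp the fixup and dump the
slow cases into the trace buffer, roughly along these lines. This is
just a sketch: the wrapper name, the 100k threshold and the assumption
that the fixup returns an int are placeholders, while get_cycles() and
trace_printk() are the standard helpers and both work in NMI context:

	#include <linux/kernel.h>	/* trace_printk() */
	#include <linux/ptrace.h>	/* struct pt_regs */
	#include <asm/timex.h>		/* get_cycles(), cycles_t */

	/*
	 * Illustrative wrapper: read the cycle counter around the PEBS
	 * fixup and report anything slower than ~100k cycles.
	 */
	static int pebs_fixup_ip_timed(struct pt_regs *regs)
	{
		cycles_t t0 = get_cycles();
		int ret = intel_pmu_pebs_fixup_ip(regs);
		cycles_t delta = get_cycles() - t0;

		if (delta > 100000)
			trace_printk("PEBS fixup took %llu cycles\n",
				     (unsigned long long)delta);
		return ret;
	}

Splitting the timestamps further, between the copy_from_user_nmi()
calls and the instruction decoding loop, would narrow it down.
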
Thanks,
Ingo