Message-ID: <CABPqkBT9ExD4unz0VZ5G1d+oGpkepiwXskgBpv5GTavs=tYAKA@mail.gmail.com>
Date: Thu, 5 Jun 2014 15:42:14 +0200
From: Stephane Eranian <eranian@...gle.com>
To: Borislav Petkov <bp@...en8.de>
Cc: Matt Fleming <matt@...sole-pimps.org>,
LKML <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
"mingo@...e.hu" <mingo@...e.hu>,
"ak@...ux.intel.com" <ak@...ux.intel.com>,
Jiri Olsa <jolsa@...hat.com>,
"Yan, Zheng" <zheng.z.yan@...el.com>,
Maria Dimakopoulou <maria.n.dimakopoulou@...il.com>
Subject: Re: [PATCH 9/9] perf/x86: add sysfs entry to disable HT bug workaround
On Thu, Jun 5, 2014 at 3:27 PM, Borislav Petkov <bp@...en8.de> wrote:
> On Thu, Jun 05, 2014 at 02:02:51PM +0200, Stephane Eranian wrote:
>> It is enabled by default. Nothing is done to try and disable it later
>> even once the kernel is fully booted. So this is mostly for testing
>> and power-users.
>
> You keep saying "power-users". What is the disadvantage for power users
> running with the workaround disabled? I.e., why would anyone want to
> disable it at all, what is the use case for that?
>
I gave a test case earlier:
# echo 0 >/proc/sys/kernel/nmi_watchdog
# run_my_uniform_workload_on_all_cpus &
# perf stat -a -e r81d0,r01d1,r08d0,r20d1 sleep 5
That run gives the correct aggregate counts.
If I look at just the CPU0/CPU4 sibling pair:
CPU0, counter0 leaks N counts to CPU4, counter0,
but at the same time CPU4, counter0 leaks N counts
back to CPU0, counter0. The two leaks cancel out
because we have the same event in the same counter
AND the workload is uniform, meaning the event
(here, loads retired) occurs at the same rate
on both siblings.
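A toy numeric model of the cancellation described above (all numbers
made up for illustration; the real leak mechanism is in the PMU
hardware, not software):

```python
# Toy model of the HT counter-leak cancellation. 'true_self' is the real
# event count on a sibling; 'leak_out'/'leak_in' are counts that bleed
# between the identical counter indexes on the two hyperthreads.

def observed(true_self, leak_out, leak_in):
    """Counts read from a counter: its own events, minus what leaked
    away to the sibling, plus what leaked in from the sibling."""
    return true_self - leak_out + leak_in

true_cpu0 = true_cpu4 = 1_000_000  # uniform workload: same rate on both
leak = 50_000                      # N counts leak each way

# Same event in the same counter on both siblings: leaks cancel, and the
# system-wide measurement looks correct.
assert observed(true_cpu0, leak, leak) == true_cpu0
assert observed(true_cpu4, leak, leak) == true_cpu4

# Measuring only one HT breaks the symmetry: no compensating leak flows
# back in, so the count is off by N.
assert observed(true_cpu0, leak, 0) == true_cpu0 - leak
```

This is only a sketch of why the uniform-workload case masks the bug;
the `-C0` run below exposes the same asymmetry in practice.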
You can verify this by measuring on only one HT:
# perf stat -a -C0 -e r81d0,r01d1,r08d0,r20d1 sleep 5
Note that some events leak more than they count.
Again, this is really for experts. The average user
should not have to deal with this, so we can drop
the sysfs entry.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/