Message-ID: <20151118144811.GA1043@nuc-i3427.alporthouse.com>
Date: Wed, 18 Nov 2015 14:48:11 +0000
From: Chris Wilson <chris@...is-wilson.co.uk>
To: Chuck Ebbert <cebbert.lkml@...il.com>
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
Tejun Heo <tj@...nel.org>,
Andy Lutomirski <luto@...capital.net>,
Rusty Russell <rusty@...tcorp.com.au>,
Peter Zijlstra <peterz@...radead.org>,
Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>,
Jiri Kosina <jkosina@...e.cz>,
"H. Peter Anvin" <hpa@...ux.intel.com>,
Steven Rostedt <rostedt@...dmis.org>,
Jason Baron <jbaron@...mai.com>, yrl.pp-manager.tt@...achi.com,
Borislav Petkov <bpetkov@...e.de>,
Ingo Molnar <mingo@...nel.org>,
Daniel Vetter <daniel.vetter@...ll.ch>
Subject: Re: i915.ko WC writes are slow after ea8596bb2d8d379
On Wed, Oct 08, 2014 at 05:10:59AM -0500, Chuck Ebbert wrote:
> On Wed, 8 Oct 2014 10:03:36 +0100
> Chris Wilson <chris@...is-wilson.co.uk> wrote:
>
> >
> > I ran into a problem on a Sandybridge i5-2500s whilst measuring the
> > performance of GTT write-combining access. I found subsequent runs were
> > about 10-40x slower than the first. For example,
> >
> > igt/gem_gtt_speed:
> >
> > Time to read 16k through a GTT map: 325.285µs
> > Time to write 16k through a GTT map: 4.729µs
> > Time to clear 16k through a GTT map: 4.584µs
> > Time to clear 16k through a cached GTT map: 1.342µs
> >
> > on the second run became:
> >
> > Time to read 16k through a GTT map: 332.148µs
> > Time to write 16k through a GTT map: 209.411µs
> > Time to clear 16k through a GTT map: 56.460µs
> > Time to clear 16k through a cached GTT map: 50.897µs
> >
> > Naively I would say that we lost the write-combining (WC) attribute on
> > our ioremap; however, /sys/kernel/debug/x86/pat_memtype_list remained
> > the same across repeated runs.
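[Not part of the original report, just an illustration: the GTT write-combining
timing that igt/gem_gtt_speed performs boils down to something like the sketch
below. It is my own simplification, not the actual test; it assumes libdrm's
copy of the i915 uapi header and /dev/dri/card0, and error handling is minimal.]

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <time.h>
#include <libdrm/i915_drm.h>	/* i915 GEM uapi structs/ioctls from libdrm */

#define OBJ_SIZE 16384

int main(void)
{
	struct drm_i915_gem_create create = { .size = OBJ_SIZE };
	struct drm_i915_gem_mmap_gtt map = { 0 };
	struct timespec t0, t1;
	void *ptr;
	int fd;

	fd = open("/dev/dri/card0", O_RDWR);
	if (fd < 0 || ioctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create))
		return 1;

	/* Ask the kernel for a fake offset to mmap the object through the GTT. */
	map.handle = create.handle;
	if (ioctl(fd, DRM_IOCTL_I915_GEM_MMAP_GTT, &map))
		return 1;

	/* This mapping is expected to be write-combining. */
	ptr = mmap(NULL, OBJ_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
		   fd, map.offset);
	if (ptr == MAP_FAILED)
		return 1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	memset(ptr, 0, OBJ_SIZE);	/* one "clear 16k through a GTT map" pass */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("clear 16k through a GTT map: %.3fus\n",
	       (t1.tv_sec - t0.tv_sec) * 1e6 +
	       (t1.tv_nsec - t0.tv_nsec) / 1e3);
	return 0;
}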
> >
> > A bisection pointed to
> >
> > commit ea8596bb2d8d37957f3e92db9511c50801689180
> > Author: Masami Hiramatsu <masami.hiramatsu.pt@...achi.com>
> > Date: Thu Jul 18 20:47:53 2013 +0900
> >
> > kprobes/x86: Remove unused text_poke_smp() and text_poke_smp_batch() functions
> >
> > of which the active ingredient was just
> >
> > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > index b32ebf9..f4001e0 100644
> > --- a/arch/x86/Kconfig
> > +++ b/arch/x86/Kconfig
> > @@ -2334,7 +2334,6 @@ config HAVE_ATOMIC_IOMAP
> >  
> >  config HAVE_TEXT_POKE_SMP
> >  	bool
> > -	select STOP_MACHINE if SMP
> >  
> >  config X86_DEV_DMA_OPS
> >  	bool
> >
> > and adding that back into the current build, e.g.
>
> Hmm, set_mtrr() uses stop_machine(). I wonder if your MTRRs are out of
> sync and your results depend on which CPU the test runs on?
(From the other reply, the MTRRs did indeed end up out of sync, and the
all-CPU stop_machine() is still required.)
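[For context, a rough paraphrase of arch/x86/kernel/cpu/mtrr/main.c follows;
it is simplified and may differ in detail from any particular kernel version.
The point is that set_mtrr() relies on stop_machine() running the rendezvous
handler on every online CPU so that they all reprogram their MTRRs together.]

struct set_mtrr_data {
	unsigned long	smp_base;
	unsigned long	smp_size;
	unsigned int	smp_reg;
	mtrr_type	smp_type;
};

/* Runs on each CPU inside stop_machine(): every CPU writes its own MTRRs. */
static int mtrr_rendezvous_handler(void *info)
{
	struct set_mtrr_data *data = info;

	mtrr_if->set(data->smp_reg, data->smp_base,
		     data->smp_size, data->smp_type);
	return 0;
}

static void set_mtrr(unsigned int reg, unsigned long base,
		     unsigned long size, mtrr_type type)
{
	struct set_mtrr_data data = {
		.smp_reg  = reg,
		.smp_base = base,
		.smp_size = size,
		.smp_type = type,
	};

	/* If this degenerates to a local-only call, only this CPU is updated. */
	stop_machine(mtrr_rendezvous_handler, &data, cpu_online_mask);
}

If stop_machine() only runs the handler on the local CPU, the other CPUs keep
their old MTRR contents, which matches the benchmark result depending on which
CPU the test lands on.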
I have run into other issues where stop_machine() only runs an irq-disabled
callback on the local CPU, as opposed to halting all CPUs and running the
callback on every one of them.
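[To make that concrete, the fallback that include/linux/stop_machine.h provides
when the real implementation is compiled out looks roughly like this; this is a
paraphrase from memory and the exact code varies by kernel version. fn() only
ever runs on the calling CPU, so an MTRR update issued through this path never
reaches the other CPUs.]

static inline int __stop_machine(int (*fn)(void *), void *data,
				 const struct cpumask *cpus)
{
	unsigned long flags;
	int ret;

	/* "Freeze the machine" degenerates to: run fn() locally, irqs off. */
	local_irq_save(flags);
	ret = fn(data);
	local_irq_restore(flags);
	return ret;
}

static inline int stop_machine(int (*fn)(void *), void *data,
			       const struct cpumask *cpus)
{
	return __stop_machine(fn, data, cpus);
}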
My understanding is that the root cause is the STOP_MACHINE Kconfig dependency
itself: with the select gone, an SMP kernel built without MODULE_UNLOAD and
without HOTPLUG_CPU no longer enables CONFIG_STOP_MACHINE, so stop_machine()
silently degrades to that irq-disabled local fallback and set_mtrr() only ever
reprograms the MTRRs on the CPU it happens to run on, leaving the others stale.
A minimal fix would be:
diff --git a/init/Kconfig b/init/Kconfig
index af09b4f..8235e0b 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1993,8 +1993,7 @@ config INIT_ALL_POSSIBLE
 
 config STOP_MACHINE
 	bool
-	default y
-	depends on (SMP && MODULE_UNLOAD) || HOTPLUG_CPU
+	default y if SMP || HOTPLUG_CPU
 	help
 	  Need stop_machine() primitive.
 
That said, the larger cleanup below, which removes CONFIG_STOP_MACHINE entirely
and keys the code off SMP || HOTPLUG_CPU instead,
diff --git a/include/linux/stop_machine.h b/include/linux/stop_machine.h
index d2abbdb..ff4f029 100644
--- a/include/linux/stop_machine.h
+++ b/include/linux/stop_machine.h
@@ -97,7 +97,7 @@ static inline int try_stop_cpus(const struct cpumask *cpumask,
  * grabbing every spinlock (and more). So the "read" side to such a
  * lock is anything which disables preemption.
  */
-#if defined(CONFIG_STOP_MACHINE) && defined(CONFIG_SMP)
+#if defined(CONFIG_SMP) || defined(CONFIG_HOTPLUG_CPU)
 
 /**
  * stop_machine: freeze the machine on all CPUs and run this function
@@ -128,7 +128,7 @@ int __stop_machine(int (*fn)(void *), void *data, const struct cpumask *cpus);
 int stop_machine_from_inactive_cpu(int (*fn)(void *), void *data,
 				   const struct cpumask *cpus);
 
-#else	/* CONFIG_STOP_MACHINE && CONFIG_SMP */
+#else	/* CONFIG_SMP */
 
 static inline int __stop_machine(int (*fn)(void *), void *data,
 				 const struct cpumask *cpus)
@@ -153,5 +153,5 @@ static inline int stop_machine_from_inactive_cpu(int (*fn)(void *), void *data,
 	return __stop_machine(fn, data, cpus);
 }
 
-#endif	/* CONFIG_STOP_MACHINE && CONFIG_SMP */
+#endif	/* CONFIG_SMP || CONFIG_HOTPLUG_CPU */
 #endif	/* _LINUX_STOP_MACHINE */
diff --git a/init/Kconfig b/init/Kconfig
index af09b4f..44600a8 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1991,13 +1991,6 @@ config INIT_ALL_POSSIBLE
 	  it was better to provide this option than to break all the archs
 	  and have several arch maintainers pursuing me down dark alleys.
 
-config STOP_MACHINE
-	bool
-	default y
-	depends on (SMP && MODULE_UNLOAD) || HOTPLUG_CPU
-	help
-	  Need stop_machine() primitive.
-
 source "block/Kconfig"
 
 config PREEMPT_NOTIFIERS
diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index fd643d8..2dd1f306 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -513,7 +513,7 @@ static int __init cpu_stop_init(void)
 }
 early_initcall(cpu_stop_init);
 
-#ifdef CONFIG_STOP_MACHINE
+#if defined(CONFIG_SMP) || defined(CONFIG_HOTPLUG_CPU)
 
 int __stop_machine(int (*fn)(void *), void *data, const struct cpumask *cpus)
 {
@@ -613,4 +613,4 @@ int stop_machine_from_inactive_cpu(int (*fn)(void *), void *data,
 	return ret ?: done.ret;
 }
 
-#endif	/* CONFIG_STOP_MACHINE */
+#endif	/* CONFIG_SMP || CONFIG_HOTPLUG_CPU */
may be more apt.
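[Independent of which of the two patches goes in, one way to confirm on an
affected machine that the MTRRs really have drifted apart between CPUs is to
compare the variable-range MTRR MSRs from userspace. The sketch below is mine,
not from the thread; it assumes the msr driver is loaded (modprobe msr), needs
root, and only walks the first eight variable ranges (MSRs 0x200-0x20f;
IA32_MTRRCAP at MSR 0xfe reports the real count, and reading past it fails
with EIO).]

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

/* Read MSR 'reg' on 'cpu' via /dev/cpu/<cpu>/msr (the offset is the MSR index). */
static int read_msr(int cpu, uint32_t reg, uint64_t *val)
{
	char path[64];
	int fd, ok;

	snprintf(path, sizeof(path), "/dev/cpu/%d/msr", cpu);
	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;
	ok = pread(fd, val, sizeof(*val), reg) == sizeof(*val);
	close(fd);
	return ok ? 0 : -1;
}

int main(int argc, char **argv)
{
	int cpu_a = argc > 1 ? atoi(argv[1]) : 0;
	int cpu_b = argc > 2 ? atoi(argv[2]) : 1;
	uint32_t reg;

	for (reg = 0x200; reg <= 0x20f; reg++) {	/* IA32_MTRR_PHYSBASE0.. */
		uint64_t a, b;

		if (read_msr(cpu_a, reg, &a) || read_msr(cpu_b, reg, &b)) {
			perror("read_msr");
			return 1;
		}
		printf("msr %#05x: cpu%d=%#018llx cpu%d=%#018llx%s\n", reg,
		       cpu_a, (unsigned long long)a,
		       cpu_b, (unsigned long long)b,
		       a == b ? "" : "   <-- out of sync");
	}
	return 0;
}

Run as e.g. "./mtrr-cmp 0 3" before and after a slow run; any mismatching pair
would confirm the out-of-sync hypothesis above.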
-Chris
--
Chris Wilson, Intel Open Source Technology Centre