Message-ID: <20171223203741.q4hwq6i33cjf3dyg@shells.gnugeneration.com>
Date: Sat, 23 Dec 2017 12:37:41 -0800
From: vcaputo@...garu.com
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Pavel Machek <pavel@....cz>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: thinkpad x60: sound problems in 4.15-rc1 was Re: thinkpad x60:
sound problems in 4.14.0-next-20171114
On Fri, Dec 22, 2017 at 09:37:01PM -0800, vcaputo@...garu.com wrote:
> On Wed, Dec 20, 2017 at 01:33:45AM +0100, Thomas Gleixner wrote:
> > On Tue, 19 Dec 2017, vcaputo@...garu.com wrote:
> > > On Wed, Dec 20, 2017 at 12:22:12AM +0100, Pavel Machek wrote:
> > > > You forgot to mention commit id :-).
> > > >
> > >
> > > That is very strange, anyhow:
> > >
> > > commit fdba46ffb4c203b6e6794163493fd310f98bb4be
> > > Author: Thomas Gleixner <tglx@...utronix.de>
> > > Date: Wed Sep 13 23:29:27 2017 +0200
> > >
> > > x86/apic: Get rid of multi CPU affinity
> > >
> > >
> > > Will try reverting soon, just a bit busy today out in the desert and the sun
> > > is going down so my solar panel is useless.
> >
> > The revert is not trivial.
> >
> > What is the exact problem and how do you reproduce that?
> >
> > Thanks,
> >
>
> So I had some time today to poke at this some more. Since it looks to
> be easily reproduced by simply pulling the AC power while playing music
> or doing IO, and dmesg clearly reports using mwait, I tried booting with
> idle=nomwait to see if that made any difference. It didn't; the same
> thing still occurs.
>
> In trying to make sense of this totally unfamiliar apic code and better
> understand these changes, I came across this comment which seemed a bit
> telling:
>
> void flat_vector_allocation_domain(int cpu, struct cpumask *retmask,
> 				   const struct cpumask *mask)
> {
> 	/*
> 	 * Careful. Some cpus do not strictly honor the set of cpus
> 	 * specified in the interrupt destination when using lowest
> 	 * priority interrupt delivery mode.
> 	 *
> 	 * In particular there was a hyperthreading cpu observed to
> 	 * deliver interrupts to the wrong hyperthread when only one
> 	 * hyperthread was specified in the interrupt desitination.
> 	 */
> 	cpumask_clear(retmask);
> 	cpumask_bits(retmask)[0] = APIC_ALL_CPUS;
> }
>
> It's this allocation domain mask hook which has been bypassed by the
> offending commit. The existing approach is more robust in the face of
> relaxed adherence to destination cpumasks since it's all-inclusive,
> whereas the new code is exclusive to a specific cpu.
>
> Is it possible what I'm observing is just another manifestation of
> what's being described in that comment? This is a core 2 duo, so not
> hyper-threaded. But maybe something funny happens when switching
> cstates in response to interrupts - like maybe the wrong cpu can be used
> if it can save power vs. powering up another? Just thinking out loud
> here.
>
> In any case, 4.15-rc4 is quite unusable on my machine because of this.
>
Some more food for thought:
Added the following instrumentation:
diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
index 93edc2236282..7034eda4d494 100644
--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -228,6 +228,9 @@ static int __assign_irq_vector(int irq, struct apic_chip_data *d,
 	cpumask_and(vector_searchmask, vector_searchmask, mask);
 	BUG_ON(apic->cpu_mask_to_apicid(vector_searchmask, irqdata,
 					&d->cfg.dest_apicid));
+
+	printk("allocated vector=%i maskfirst=%i\n", d->cfg.vector, cpumask_first(vector_searchmask));
+
 	return 0;
 }
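
(For extra visibility, it might also help to dump which CPUs currently
have anything installed for a given vector in the per-CPU vector_irq[]
table.  A rough, untested sketch; the helper name is made up:)

static void debug_dump_vector(unsigned int vector)
{
	unsigned int cpu;

	/* vector_irq[] holds the irq_desc installed for each vector on
	 * each CPU; error/NULL entries mean "no handler here". */
	for_each_online_cpu(cpu) {
		struct irq_desc *desc = per_cpu(vector_irq, cpu)[vector];

		if (!IS_ERR_OR_NULL(desc))
			printk("vector %u has a handler on CPU %u\n",
			       vector, cpu);
	}
}
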
This is what I see:
Upon playing a song in cmus (on AC power since boot):
Dec 22 22:26:52 iridesce kernel: allocated vector=35 maskfirst=1
Yank AC:
Dec 22 22:27:14 iridesce kernel: allocated vector=51 maskfirst=1
Dec 22 22:27:15 iridesce kernel: do_IRQ: 0.35 No irq handler for vector
So CPU 0 got an interrupt on vector 35 even though, as seen in the added
printk, vector 35 was allocated with maskfirst=1.
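
(For reference, that "No irq handler for vector" message comes from
do_IRQ() reading the per-CPU vector table and finding nothing installed
for the vector it was handed.  Roughly, simplified from
arch/x86/kernel/irq.c, not a verbatim quote:)

	struct irq_desc *desc = __this_cpu_read(vector_irq[vector]);

	if (IS_ERR_OR_NULL(desc))
		pr_emerg_ratelimited("%s: %d.%d No irq handler for vector\n",
				     __func__, smp_processor_id(), vector);

In other words, CPU 0's vector_irq[35] slot was never populated.
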
It seems like the affinity changes assume strict adherence to the CPU
mask, while the underlying hardware treats it more as a hint.  Perhaps
handlers still need to be maintained on all CPUs in a given apic domain,
regardless of how the masks are configured, to cover these situations.
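
If so, one purely illustrative, untested direction would be to keep the
per-CPU vector_irq[] entries populated for the vector on every CPU in
the domain rather than just the targeted one, which is effectively what
the old APIC_ALL_CPUS allocation domain gave us.  Something along these
lines (hypothetical helper, made-up name):

static void install_vector_on_domain(struct irq_desc *desc,
				     unsigned int vector,
				     const struct cpumask *domain)
{
	unsigned int cpu;

	/* Install the handler on every CPU in the domain so a
	 * "misdirected" interrupt still finds it. */
	for_each_cpu_and(cpu, domain, cpu_online_mask)
		per_cpu(vector_irq, cpu)[vector] = desc;
}
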
Regards,
Vito Caputo