Message-ID: <Y0b3ZsTRHWG6jGK8@zx2c4.com>
Date: Wed, 12 Oct 2022 11:20:38 -0600
From: "Jason A. Donenfeld" <Jason@...c4.com>
To: Guenter Roeck <linux@...ck-us.net>
Cc: Michael Ellerman <mpe@...erman.id.au>,
Linus Torvalds <torvalds@...ux-foundation.org>,
ajd@...ux.ibm.com, aneesh.kumar@...ux.ibm.com,
atrajeev@...ux.vnet.ibm.com, christophe.leroy@...roup.eu,
cuigaosheng1@...wei.com, david@...hat.com, farosas@...ux.ibm.com,
geoff@...radead.org, gustavoars@...nel.org, haren@...ux.ibm.com,
hbathini@...ux.ibm.com, joel@....id.au, lihuafei1@...wei.com,
linux-kernel@...r.kernel.org, linuxppc-dev@...ts.ozlabs.org,
lukas.bulwahn@...il.com, mikey@...ling.org, nathan@...nel.org,
nathanl@...ux.ibm.com, nicholas@...ux.ibm.com, npiggin@...il.com,
pali@...nel.org, paul@...l-moore.com, rmclure@...ux.ibm.com,
ruscur@...sell.cc, windhl@....com,
wsa+renesas@...g-engineering.com, ye.xingchen@....com.cn,
yuanjilin@...rlc.com, zhengyongjun3@...wei.com
Subject: Re: [GIT PULL] Please pull powerpc/linux.git powerpc-6.1-1 tag
On Wed, Oct 12, 2022 at 09:44:52AM -0700, Guenter Roeck wrote:
> On Wed, Oct 12, 2022 at 09:49:26AM -0600, Jason A. Donenfeld wrote:
> > On Wed, Oct 12, 2022 at 07:18:27AM -0700, Guenter Roeck wrote:
> > > NIP [c000000000031630] .replay_soft_interrupts+0x60/0x300
> > > LR [c000000000031964] .arch_local_irq_restore+0x94/0x1c0
> > > Call Trace:
> > > [c000000007df3870] [c000000000031964] .arch_local_irq_restore+0x94/0x1c0 (unreliable)
> > > [c000000007df38f0] [c000000000f8a444] .__schedule+0x664/0xa50
> > > [c000000007df39d0] [c000000000f8a8b0] .schedule+0x80/0x140
> > > [c000000007df3a50] [c00000000092f0dc] .try_to_generate_entropy+0x118/0x174
> > > [c000000007df3b40] [c00000000092e2e4] .urandom_read_iter+0x74/0x140
> > > [c000000007df3bc0] [c0000000003b0044] .vfs_read+0x284/0x2d0
> > > [c000000007df3cd0] [c0000000003b0d2c] .ksys_read+0xdc/0x130
> > > [c000000007df3d80] [c00000000002a88c] .system_call_exception+0x19c/0x330
> > > [c000000007df3e10] [c00000000000c1d4] system_call_common+0xf4/0x258
> >
> > Obviously the first couple lines of this concern me a bit. But I think
> > actually this might just be a catalyst for another bug. You could view
> > that function as basically just:
> >
> > 	while (something)
> > 		schedule();
> >
> > And I guess in the process of calling the scheduler a lot, which toggles
> > interrupts a lot, something got wedged.
> >
> > Curious, though, I did try to reproduce this, to no avail. My .config is
> > https://xn--4db.cc/rBvHWfDZ . What's yours?
> >
>
> Attached. My qemu command line is
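
(For reference, the loop I was glossing over above is roughly this,
condensed from try_to_generate_entropy() in drivers/char/random.c, with
the trial-sampling setup elided:)

	while (!crng_ready() && !signal_pending(current)) {
		if (!timer_pending(&stack.timer) &&
		    try_to_del_timer_sync(&stack.timer) >= 0)
			mod_timer(&stack.timer, jiffies + 1);
		mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
		schedule();
		stack.entropy = random_get_entropy();
	}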
Okay, thanks, I reproduced it. In this case, I suspect
try_to_generate_entropy() is just the messenger. There's an earlier
problem:
BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
caller is .__flush_tlb_pending+0x40/0xf0
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.0.0-28380-gde492c83cae0-dirty #4
Hardware name: PowerMac3,1 PPC970FX 0x3c0301 PowerMac
Call Trace:
[c0000000044c3540] [c000000000f93ef0] .dump_stack_lvl+0x7c/0xc4 (unreliable)
[c0000000044c35d0] [c000000000fc9550] .check_preemption_disabled+0x140/0x150
[c0000000044c3660] [c000000000073dd0] .__flush_tlb_pending+0x40/0xf0
[c0000000044c36f0] [c000000000334434] .__apply_to_page_range+0x764/0xa30
[c0000000044c3840] [c00000000006cad0] .change_memory_attr+0xf0/0x160
[c0000000044c38d0] [c0000000002a1d70] .bpf_prog_select_runtime+0x150/0x230
[c0000000044c3970] [c000000000d405d4] .bpf_prepare_filter+0x504/0x6f0
[c0000000044c3a30] [c000000000d4085c] .bpf_prog_create+0x9c/0x140
[c0000000044c3ac0] [c000000002051d9c] .ptp_classifier_init+0x44/0x78
[c0000000044c3b50] [c000000002050f3c] .sock_init+0xe0/0x100
[c0000000044c3bd0] [c000000000010bd4] .do_one_initcall+0xa4/0x438
[c0000000044c3cc0] [c000000002005008] .kernel_init_freeable+0x378/0x428
[c0000000044c3da0] [c0000000000113d8] .kernel_init+0x28/0x1a0
[c0000000044c3e10] [c00000000000ca3c] .ret_from_kernel_thread+0x58/0x60
This in turn is because __flush_tlb_pending() calls:
static inline int mm_is_thread_local(struct mm_struct *mm)
{
	return cpumask_equal(mm_cpumask(mm),
			     cpumask_of(smp_processor_id()));
}
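
With CONFIG_DEBUG_PREEMPT, smp_processor_id() complains in preemptible
context because the task can migrate and the answer is immediately
stale. Just to spell out the invariant (hypothetical helper, not a
proposed fix - pinning here would only paper over the caller's problem):

	/* Hypothetical illustration only: get_cpu() makes the read safe,
	 * but the result can go stale as soon as preemption is re-enabled,
	 * so the real requirement is that the *caller* already be in a
	 * non-preemptible region. */
	static inline int mm_is_thread_local_pinned(struct mm_struct *mm)
	{
		int cpu = get_cpu();	/* disables preemption */
		int local = cpumask_equal(mm_cpumask(mm), cpumask_of(cpu));

		put_cpu();		/* re-enables preemption */
		return local;
	}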
__flush_tlb_pending() has a comment about this:
 * Must be called from within some kind of spinlock/non-preempt region...
 */
void __flush_tlb_pending(struct ppc64_tlb_batch *batch)
So I guess that didn't happen for some reason? Maybe this is indicative
of some lock imbalance that then gets hit later?
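
If it's the lazy-MMU batching path that ends up in __flush_tlb_pending()
here (per the trace, via __apply_to_page_range()), one way to satisfy
that comment would be to make the whole batch window non-preemptible.
Rough, untested sketch against the hash batch helpers, just to show
where a preempt_disable()/preempt_enable() pair could go (the real
helpers also bail out early for radix, which I've elided):

	static inline void arch_enter_lazy_mmu_mode(void)
	{
		struct ppc64_tlb_batch *batch;

		preempt_disable();	/* stay on one CPU for the whole batch */
		batch = this_cpu_ptr(&ppc64_tlb_batch);
		batch->active = 1;
	}

	static inline void arch_leave_lazy_mmu_mode(void)
	{
		struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch);

		if (batch->index)
			__flush_tlb_pending(batch);
		batch->active = 0;
		preempt_enable();
	}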
I've also managed to not hit this bug a few times. When it triggers,
after "kprobes: kprobe jump-optimization is enabled. All kprobes are
optimized if possible.", there's a long hang - tens of seconds - before it
continues. When it doesn't trigger, there's no hang at that point in the
boot process.
Jason