Date:   Thu, 13 Oct 2022 15:14:08 +1000
From:   "Nicholas Piggin" <npiggin@...il.com>
To:     "Guenter Roeck" <linux@...ck-us.net>,
        "Jason A. Donenfeld" <Jason@...c4.com>
Cc:     "Michael Ellerman" <mpe@...erman.id.au>,
        "Linus Torvalds" <torvalds@...ux-foundation.org>,
        <ajd@...ux.ibm.com>, <aneesh.kumar@...ux.ibm.com>,
        <atrajeev@...ux.vnet.ibm.com>, <christophe.leroy@...roup.eu>,
        <cuigaosheng1@...wei.com>, <david@...hat.com>,
        <farosas@...ux.ibm.com>, <geoff@...radead.org>,
        <gustavoars@...nel.org>, <haren@...ux.ibm.com>,
        <hbathini@...ux.ibm.com>, <joel@....id.au>, <lihuafei1@...wei.com>,
        <linux-kernel@...r.kernel.org>, <linuxppc-dev@...ts.ozlabs.org>,
        <lukas.bulwahn@...il.com>, <mikey@...ling.org>,
        <nathan@...nel.org>, <nathanl@...ux.ibm.com>,
        <nicholas@...ux.ibm.com>, <pali@...nel.org>, <paul@...l-moore.com>,
        <rmclure@...ux.ibm.com>, <ruscur@...sell.cc>, <windhl@....com>,
        <wsa+renesas@...g-engineering.com>, <ye.xingchen@....com.cn>,
        <yuanjilin@...rlc.com>, <zhengyongjun3@...wei.com>,
        "Peter Zijlstra" <peterz@...radead.org>,
        "Ingo Molnar" <mingo@...nel.org>
Subject: Re: [GIT PULL] Please pull powerpc/linux.git powerpc-6.1-1 tag

On Thu Oct 13, 2022 at 2:43 PM AEST, Guenter Roeck wrote:
> On 10/12/22 10:20, Jason A. Donenfeld wrote:
> > On Wed, Oct 12, 2022 at 09:44:52AM -0700, Guenter Roeck wrote:
> >> On Wed, Oct 12, 2022 at 09:49:26AM -0600, Jason A. Donenfeld wrote:
> >>> On Wed, Oct 12, 2022 at 07:18:27AM -0700, Guenter Roeck wrote:
> >>>> NIP [c000000000031630] .replay_soft_interrupts+0x60/0x300
> >>>> LR [c000000000031964] .arch_local_irq_restore+0x94/0x1c0
> >>>> Call Trace:
> >>>> [c000000007df3870] [c000000000031964] .arch_local_irq_restore+0x94/0x1c0 (unreliable)
> >>>> [c000000007df38f0] [c000000000f8a444] .__schedule+0x664/0xa50
> >>>> [c000000007df39d0] [c000000000f8a8b0] .schedule+0x80/0x140
> >>>> [c000000007df3a50] [c00000000092f0dc] .try_to_generate_entropy+0x118/0x174
> >>>> [c000000007df3b40] [c00000000092e2e4] .urandom_read_iter+0x74/0x140
> >>>> [c000000007df3bc0] [c0000000003b0044] .vfs_read+0x284/0x2d0
> >>>> [c000000007df3cd0] [c0000000003b0d2c] .ksys_read+0xdc/0x130
> >>>> [c000000007df3d80] [c00000000002a88c] .system_call_exception+0x19c/0x330
> >>>> [c000000007df3e10] [c00000000000c1d4] system_call_common+0xf4/0x258
> >>>
> >>> Obviously the first couple of lines of this concern me a bit. But I
> >>> think this might actually just be a catalyst for another bug. You
> >>> could view that function as basically just:
> >>>
> >>>      while (something)
> >>>      	schedule();
> >>>
> >>> And I guess in the process of calling the scheduler a lot, which toggles
> >>> interrupts a lot, something got wedged.
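
For reference, the loop in question looks roughly like this,
paraphrased from try_to_generate_entropy() in drivers/char/random.c
and heavily trimmed, so treat it as a sketch rather than the exact
code:

	/*
	 * Spin until the CRNG is seeded, feeding timer jitter into the
	 * pool. Each iteration goes through schedule(), which is what
	 * exercises the powerpc irq soft-mask replay path shown above.
	 */
	while (!crng_ready() && !signal_pending(current)) {
		if (!timer_pending(&stack.timer))
			mod_timer(&stack.timer, jiffies);
		mix_pool_bytes(&stack.entropy, sizeof(stack.entropy));
		schedule();
		stack.entropy = random_get_entropy();
	}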
> >>>
> >>> Curious, though, I did try to reproduce this, to no avail. My .config is
> >>> https://xn--4db.cc/rBvHWfDZ . What's yours?
> >>>
> >>
> >> Attached. My qemu command line is
> > 
> > Okay, thanks, I reproduced it. In this case, I suspect
> > try_to_generate_entropy() is just the messenger. There's an earlier
> > problem:
> > 
> > BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
> > caller is .__flush_tlb_pending+0x40/0xf0
> > CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.0.0-28380-gde492c83cae0-dirty #4
> > Hardware name: PowerMac3,1 PPC970FX 0x3c0301 PowerMac
> > Call Trace:
> > [c0000000044c3540] [c000000000f93ef0] .dump_stack_lvl+0x7c/0xc4 (unreliable)
> > [c0000000044c35d0] [c000000000fc9550] .check_preemption_disabled+0x140/0x150
> > [c0000000044c3660] [c000000000073dd0] .__flush_tlb_pending+0x40/0xf0
> > [c0000000044c36f0] [c000000000334434] .__apply_to_page_range+0x764/0xa30
> > [c0000000044c3840] [c00000000006cad0] .change_memory_attr+0xf0/0x160
> > [c0000000044c38d0] [c0000000002a1d70] .bpf_prog_select_runtime+0x150/0x230
> > [c0000000044c3970] [c000000000d405d4] .bpf_prepare_filter+0x504/0x6f0
> > [c0000000044c3a30] [c000000000d4085c] .bpf_prog_create+0x9c/0x140
> > [c0000000044c3ac0] [c000000002051d9c] .ptp_classifier_init+0x44/0x78
> > [c0000000044c3b50] [c000000002050f3c] .sock_init+0xe0/0x100
> > [c0000000044c3bd0] [c000000000010bd4] .do_one_initcall+0xa4/0x438
> > [c0000000044c3cc0] [c000000002005008] .kernel_init_freeable+0x378/0x428
> > [c0000000044c3da0] [c0000000000113d8] .kernel_init+0x28/0x1a0
> > [c0000000044c3e10] [c00000000000ca3c] .ret_from_kernel_thread+0x58/0x60
> > 
> > This in turn is because __flush_tlb_pending() calls:
> > 
> > static inline int mm_is_thread_local(struct mm_struct *mm)
> > {
> >          return cpumask_equal(mm_cpumask(mm),
> >                                cpumask_of(smp_processor_id()));
> > }
> > 
> > __flush_tlb_pending() has a comment about this:
> > 
> >   * Must be called from within some kind of spinlock/non-preempt region...
> >   */
> > void __flush_tlb_pending(struct ppc64_tlb_batch *batch)
> > 
> > So I guess that didn't happen for some reason? Maybe this is indicative
> > of some lock imbalance that then gets hit later?
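
For context, the path in that trace is change_memory_attr() ->
apply_to_page_range() -> apply_to_pte_range(), which brackets the PTE
updates with arch_enter_lazy_mmu_mode() / arch_leave_lazy_mmu_mode().
On 64s hash the leave side is what drains the per-CPU batch, roughly
like this (from tlbflush-hash.h, trimmed):

	static inline void arch_leave_lazy_mmu_mode(void)
	{
		struct ppc64_tlb_batch *batch;

		if (radix_enabled())
			return;
		batch = this_cpu_ptr(&ppc64_tlb_batch);

		if (batch->index)
			__flush_tlb_pending(batch);	/* smp_processor_id() here */
		batch->active = 0;
	}

change_memory_attr() is called here from a plain initcall with no
lock held and preemption enabled, so under CONFIG_DEBUG_PREEMPT the
smp_processor_id() in mm_is_thread_local() fires as above.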
>
> I managed to bisect that problem. Unfortunately, it points to the
> scheduler merge. No idea what to do about that. Any ideas?
> I am copying Peter and Ingo for comments.
>

> # first bad commit: [30c999937f69abf935b0228b8411713737377d9e] Merge tag 'sched-core-2022-10-07' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

This might be a red herring, because I can reproduce the problem
without that merge. I think we can fix this with some preempt
critical sections; the affected paths don't look too much of a
problem. See the sketch below.
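
As an untested sketch, one option is to make the hash lazy MMU batch
a preempt critical section, so the task cannot migrate between
filling the per-CPU batch and flushing it:

	static inline void arch_enter_lazy_mmu_mode(void)
	{
		struct ppc64_tlb_batch *batch;

		if (radix_enabled())
			return;
		/* Pin this CPU until the batch is drained. */
		preempt_disable();
		batch = this_cpu_ptr(&ppc64_tlb_batch);
		batch->active = 1;
	}

with a matching preempt_enable() at the end of
arch_leave_lazy_mmu_mode(). Exact placement to be decided.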

I don't know why this isn't showing up earlier than this release;
I'll look into it a bit more.

Thanks,
Nick
