Message-ID: <856b5cc2a3d4eb673743b52956bf1e60dcdf87a1.camel@gmx.de>
Date: Mon, 30 Nov 2020 15:42:46 +0100
From: Mike Galbraith <efault@....de>
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Oleksandr Natalenko <oleksandr@...alenko.name>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-rt-users@...r.kernel.org
Subject: Re: scheduling while atomic in z3fold
On Mon, 2020-11-30 at 14:20 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-11-29 12:41:14 [+0100], Mike Galbraith wrote:
> > On Sun, 2020-11-29 at 12:29 +0100, Oleksandr Natalenko wrote:
> > >
> > > Ummm so do compressors explode under non-rt kernel in your tests as
> > > well, or it is just -rt that triggers this?
> >
> > I only tested a non-rt kernel with z3fold, which worked just fine.
>
> I tried this and it did not explode yet. Mike, can you please
> confirm?
This explodes in write_unlock(), as mine did. Oleksandr's local_lock()
variant explodes in the lock he added (ew, corruption).
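
For reference, a minimal sketch of the shape such a local_lock() variant
takes (the API is include/linux/local_lock.h); the per-CPU struct and the
names below are illustrative guesses, not Oleksandr's actual patch:

#include <linux/init.h>
#include <linux/list.h>
#include <linux/local_lock.h>
#include <linux/percpu.h>

/* Illustrative stand-in for z3fold's per-CPU unbuddied lists. */
struct pcp_unbuddied {
	local_lock_t lock;
	struct list_head list;
};

static DEFINE_PER_CPU(struct pcp_unbuddied, pcp_unbuddied) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static int __init pcp_unbuddied_init(void)
{
	int cpu;

	/* List heads must be initialized per CPU at runtime. */
	for_each_possible_cpu(cpu)
		INIT_LIST_HEAD(&per_cpu(pcp_unbuddied, cpu).list);
	return 0;
}

static void stash_entry(struct list_head *entry)
{
	/*
	 * On !RT this disables preemption, much like get_cpu_ptr().
	 * On RT it takes a per-CPU sleeping lock instead, so the
	 * section stays preemptible and lockdep can see the lock.
	 */
	local_lock(&pcp_unbuddied.lock);
	list_add(entry, this_cpu_ptr(&pcp_unbuddied.list));
	local_unlock(&pcp_unbuddied.lock);
}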
I think I'll try a stable-rt tree. This master tree _should_ be fine
given it seems to work just peachy for everything else, but my box is
the only one making boom... and also NOT making boom with the zbud
compressor. Hrmph.
crash.rt> bt -sx
PID: 27961 TASK: ffff8f6ad5344500 CPU: 4 COMMAND: "oom01"
#0 [ffffa439d4747708] machine_kexec+0x16e at ffffffffbb05eace
#1 [ffffa439d4747750] __crash_kexec+0x5a at ffffffffbb145e5a
#2 [ffffa439d4747810] crash_kexec+0x24 at ffffffffbb147004
#3 [ffffa439d4747818] oops_end+0x93 at ffffffffbb025c03
#4 [ffffa439d4747838] exc_page_fault+0x6b at ffffffffbb9011bb
#5 [ffffa439d4747860] asm_exc_page_fault+0x1e at ffffffffbba00ace
#6 [ffffa439d47478e8] mark_wakeup_next_waiter+0x51 at ffffffffbb0e6781
#7 [ffffa439d4747950] rt_mutex_futex_unlock+0x50 at ffffffffbb90bae0
#8 [ffffa439d4747990] z3fold_free+0x4a8 at ffffffffbb2bf8a8
#9 [ffffa439d47479f0] zswap_free_entry+0x82 at ffffffffbb285dd2
#10 [ffffa439d4747a08] zswap_frontswap_invalidate_page+0x8c at ffffffffbb285edc
#11 [ffffa439d4747a30] __frontswap_invalidate_page+0x4e at ffffffffbb28548e
#12 [ffffa439d4747a58] swap_range_free.constprop.0+0x9e at ffffffffbb27fd4e
#13 [ffffa439d4747a78] swapcache_free_entries+0x10d at ffffffffbb28101d
#14 [ffffa439d4747ac0] free_swap_slot+0xac at ffffffffbb284d8c
#15 [ffffa439d4747ae0] __swap_entry_free+0x8f at ffffffffbb2808ff
#16 [ffffa439d4747b08] free_swap_and_cache+0x3b at ffffffffbb2829db
#17 [ffffa439d4747b38] zap_pte_range+0x164 at ffffffffbb258004
#18 [ffffa439d4747bc0] unmap_page_range+0x1dd at ffffffffbb258b6d
#19 [ffffa439d4747c38] __oom_reap_task_mm+0xd5 at ffffffffbb2235c5
#20 [ffffa439d4747d08] exit_mmap+0x154 at ffffffffbb264084
#21 [ffffa439d4747da0] mmput+0x4e at ffffffffbb07e66e
#22 [ffffa439d4747db0] exit_mm+0x172 at ffffffffbb088372
#23 [ffffa439d4747df0] do_exit+0x1a8 at ffffffffbb088588
#24 [ffffa439d4747e20] do_group_exit+0x39 at ffffffffbb0888a9
#25 [ffffa439d4747e48] get_signal+0x155 at ffffffffbb096ef5
#26 [ffffa439d4747e98] arch_do_signal+0x1a at ffffffffbb0224ba
#27 [ffffa439d4747f18] exit_to_user_mode_loop+0xc7 at ffffffffbb11c037
#28 [ffffa439d4747f38] exit_to_user_mode_prepare+0x6a at ffffffffbb11c12a
#29 [ffffa439d4747f48] irqentry_exit_to_user_mode+0x5 at ffffffffbb901ae5
#30 [ffffa439d4747f50] asm_exc_page_fault+0x1e at ffffffffbba00ace
RIP: 0000000000414300 RSP: 00007f193033bec0 RFLAGS: 00010206
RAX: 0000000000001000 RBX: 00000000c0000000 RCX: 00000000a6e6a000
RDX: 00007f19159a4000 RSI: 00000000c0000000 RDI: 0000000000000000
RBP: 00007f186eb3a000 R8: ffffffffffffffff R9: 0000000000000000
R10: 0000000000000022 R11: 0000000000000246 R12: 0000000000001000
R13: 0000000000000001 R14: 0000000000000001 R15: 00007ffd0ffb96d0
ORIG_RAX: ffffffffffffffff CS: 0033 SS: 002b
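
FWIW, one speculative reading of frames #6-#8: on PREEMPT_RT, spinlock_t
and rwlock_t are rtmutex-based, so a plain unlock lands in
rt_mutex_futex_unlock() -> mark_wakeup_next_waiter(). If the lock is
embedded in memory that has already been freed, that wakeup walk faults.
Generic shape of the hazard (names hypothetical, not z3fold code):

struct obj {
	spinlock_t lock;
	/* ... payload ... */
};

static void obj_put_last(struct obj *o)
{
	spin_lock(&o->lock);
	/* ... last reference dropped; another path frees *o here ... */
	spin_unlock(&o->lock);	/* RT: rtmutex wakeup walks freed memory */
}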