Message-ID: <f1c39a0504310a97e42b667fc4d458af4a86d97a.camel@gmx.de>
Date: Sun, 29 Nov 2020 10:21:33 +0100
From: Mike Galbraith <efault@....de>
To: Oleksandr Natalenko <oleksandr@...alenko.name>,
linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-rt-users@...r.kernel.org
Subject: Re: scheduling while atomic in z3fold
On Sun, 2020-11-29 at 08:48 +0100, Mike Galbraith wrote:
> On Sun, 2020-11-29 at 07:41 +0100, Mike Galbraith wrote:
> > On Sat, 2020-11-28 at 15:27 +0100, Oleksandr Natalenko wrote:
> > >
> > > > > Shouldn't the list manipulation be protected with
> > > > > local_lock+this_cpu_ptr instead of get_cpu_ptr+spin_lock?
> > >
> > > Totally untested:
> >
> > Hrm, the thing doesn't seem to care deeply about preemption being
> > disabled, so adding another lock may be overkill. It looks like you
> > could get the job done via migrate_disable()+this_cpu_ptr().
>
> There is however an ever so tiny chance that I'm wrong about that :)
Or not, your local_lock+this_cpu_ptr version exploded too.
Perhaps there's a bit of non-RT-related raciness going on in the zswap
thingy that makes swap an even less wonderful idea for RT than usual.
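For reference, an untested sketch of the three patterns being kicked
around above: the get_cpu_ptr() pattern, local_lock+this_cpu_ptr, and
migrate_disable()+this_cpu_ptr().  The struct and helper names below are
made up for illustration, this is not the real z3fold code, and as noted
above the local_lock variant still blew up here, so the actual bug most
likely lives elsewhere.

/* Hypothetical per-CPU list, standing in for z3fold's unbuddied lists. */
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/local_lock.h>
#include <linux/percpu.h>
#include <linux/preempt.h>

struct unbuddied_pcpu {
	local_lock_t	 lock;	/* only used by the local_lock variant */
	struct list_head list;
};

static DEFINE_PER_CPU(struct unbuddied_pcpu, unbuddied);

static int __init unbuddied_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct unbuddied_pcpu *u = per_cpu_ptr(&unbuddied, cpu);

		local_lock_init(&u->lock);
		INIT_LIST_HEAD(&u->list);
	}
	return 0;
}

/* Current pattern: get_cpu_ptr() pins the CPU by disabling preemption,
 * which is what turns any sleeping lock taken underneath into a
 * "scheduling while atomic" splat on PREEMPT_RT. */
static void add_entry_get_cpu(struct list_head *entry)
{
	struct unbuddied_pcpu *u = get_cpu_ptr(&unbuddied);

	list_add(entry, &u->list);
	put_cpu_ptr(&unbuddied);
}

/* local_lock + this_cpu_ptr: preempt_disable() on !RT, a per-CPU
 * sleeping lock on RT, so the section stays preemptible there. */
static void add_entry_local_lock(struct list_head *entry)
{
	struct unbuddied_pcpu *u;

	local_lock(&unbuddied.lock);
	u = this_cpu_ptr(&unbuddied);
	list_add(entry, &u->list);
	local_unlock(&unbuddied.lock);
}

/* migrate_disable() + this_cpu_ptr: keeps the task on this CPU without
 * disabling preemption; only safe if the list manipulation is already
 * serialized against other tasks on the same CPU. */
static void add_entry_migrate_disable(struct list_head *entry)
{
	struct unbuddied_pcpu *u;

	migrate_disable();
	u = this_cpu_ptr(&unbuddied);
	list_add(entry, &u->list);
	migrate_enable();
}

On !RT both get_cpu_ptr() and local_lock() boil down to a
preempt-disabled section; the difference only shows up once RT turns the
inner locks into sleeping locks.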
crash.rt> bt -s
PID: 32399 TASK: ffff8e4528cd8000 CPU: 4 COMMAND: "cc1"
#0 [ffff9f0f1228f488] machine_kexec+366 at ffffffff8c05f87e
#1 [ffff9f0f1228f4d0] __crash_kexec+210 at ffffffff8c14c052
#2 [ffff9f0f1228f590] crash_kexec+48 at ffffffff8c14d240
#3 [ffff9f0f1228f5a0] oops_end+202 at ffffffff8c02680a
#4 [ffff9f0f1228f5c0] exc_general_protection+403 at ffffffff8c8be413
#5 [ffff9f0f1228f650] asm_exc_general_protection+30 at ffffffff8ca00a0e
#6 [ffff9f0f1228f6d8] __z3fold_alloc+118 at ffffffff8c2b4ea6
#7 [ffff9f0f1228f760] z3fold_zpool_malloc+115 at ffffffff8c2b56c3
#8 [ffff9f0f1228f7c8] zswap_frontswap_store+789 at ffffffff8c27d335
#9 [ffff9f0f1228f828] __frontswap_store+110 at ffffffff8c27bafe
#10 [ffff9f0f1228f858] swap_writepage+55 at ffffffff8c273b17
#11 [ffff9f0f1228f870] shmem_writepage+612 at ffffffff8c232964
#12 [ffff9f0f1228f8a8] pageout+210 at ffffffff8c225f12
#13 [ffff9f0f1228f928] shrink_page_list+2428 at ffffffff8c22744c
#14 [ffff9f0f1228f9c0] shrink_inactive_list+534 at ffffffff8c229746
#15 [ffff9f0f1228fa68] shrink_lruvec+927 at ffffffff8c22a35f
#16 [ffff9f0f1228fb78] shrink_node+567 at ffffffff8c22a7d7
#17 [ffff9f0f1228fbf8] do_try_to_free_pages+185 at ffffffff8c22ad39
#18 [ffff9f0f1228fc40] try_to_free_pages+201 at ffffffff8c22c239
#19 [ffff9f0f1228fcd0] __alloc_pages_slowpath.constprop.111+1056 at ffffffff8c26eb70
#20 [ffff9f0f1228fda8] __alloc_pages_nodemask+786 at ffffffff8c26f7e2
#21 [ffff9f0f1228fe00] alloc_pages_vma+309 at ffffffff8c288f15
#22 [ffff9f0f1228fe40] handle_mm_fault+1687 at ffffffff8c24ee97
#23 [ffff9f0f1228fef8] exc_page_fault+821 at ffffffff8c8c1be5
#24 [ffff9f0f1228ff50] asm_exc_page_fault+30 at ffffffff8ca00ace
RIP: 00000000010fea3b RSP: 00007ffc88ad5a50 RFLAGS: 00010206
RAX: 00007f4a548d1638 RBX: 00007f4a5c0c5000 RCX: 0000000000000000
RDX: 0000000000020000 RSI: 000000000000000f RDI: 0000000000018001
RBP: 00007f4a5547a000 R8: 0000000000018000 R9: 0000000000240000
R10: 00007f4a5523a000 R11: 0000000000098628 R12: 00007f4a548d1638
R13: 00007f4a54ab1478 R14: 000000005002ac29 R15: 0000000000000000
ORIG_RAX: ffffffffffffffff CS: 0033 SS: 002b
crash.rt>