Message-ID: <79ee43026efe5aaa560953ea8fe29a826ac4e855.camel@gmx.de>
Date: Sun, 29 Nov 2020 08:48:55 +0100
From: Mike Galbraith <efault@....de>
To: Oleksandr Natalenko <oleksandr@...alenko.name>,
linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-rt-users@...r.kernel.org
Subject: Re: scheduling while atomic in z3fold
On Sun, 2020-11-29 at 07:41 +0100, Mike Galbraith wrote:
> On Sat, 2020-11-28 at 15:27 +0100, Oleksandr Natalenko wrote:
> >
> > > > Shouldn't the list manipulation be protected with
> > > > local_lock+this_cpu_ptr instead of get_cpu_ptr+spin_lock?
> >
> > Totally untested:
>
> Hrm, the thing doesn't seem to care deeply about preemption being
> disabled, so adding another lock may be overkill. It looks like you
> could get the job done via migrate_disable()+this_cpu_ptr().
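
IOW, roughly something like the below for add_to_unbuddied() (untested
sketch only, not the snipped patch; num_free_chunks() and the per-CPU
unbuddied lists are z3fold's own, and the other get_cpu_ptr() call site
in __z3fold_alloc() would presumably want the same treatment):

static inline void add_to_unbuddied(struct z3fold_pool *pool,
				struct z3fold_header *zhdr)
{
	if (zhdr->first_chunks == 0 || zhdr->last_chunks == 0 ||
			zhdr->middle_chunks == 0) {
		struct list_head *unbuddied;
		int freechunks = num_free_chunks(zhdr);

		/* stay on this CPU, but remain preemptible */
		migrate_disable();
		unbuddied = this_cpu_ptr(pool->unbuddied);

		/* pool->lock is a sleeping lock on RT, which is fine here */
		spin_lock(&pool->lock);
		list_add(&zhdr->buddy, &unbuddied[freechunks]);
		spin_unlock(&pool->lock);
		zhdr->cpu = smp_processor_id();

		migrate_enable();
	}
}
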
There is however an ever so tiny chance that I'm wrong about that :)
crash.rt> bt -s
PID: 6699 TASK: ffff913c464b5640 CPU: 0 COMMAND: "oom01"
#0 [ffffb6b94adff6f0] machine_kexec+366 at ffffffffbd05f87e
#1 [ffffb6b94adff738] __crash_kexec+210 at ffffffffbd14c052
#2 [ffffb6b94adff7f8] crash_kexec+48 at ffffffffbd14d240
#3 [ffffb6b94adff808] oops_end+202 at ffffffffbd02680a
#4 [ffffb6b94adff828] no_context+333 at ffffffffbd06d7ed
#5 [ffffb6b94adff888] exc_page_fault+696 at ffffffffbd8c0b68
#6 [ffffb6b94adff8e0] asm_exc_page_fault+30 at ffffffffbda00ace
#7 [ffffb6b94adff968] mark_wakeup_next_waiter+81 at ffffffffbd0ea1e1
#8 [ffffb6b94adff9c8] rt_mutex_futex_unlock+79 at ffffffffbd8cc3cf
#9 [ffffb6b94adffa08] z3fold_zpool_free+1319 at ffffffffbd2b6b17
#10 [ffffb6b94adffa68] zswap_free_entry+67 at ffffffffbd27c6f3
#11 [ffffb6b94adffa78] zswap_frontswap_invalidate_page+138 at ffffffffbd27c7fa
#12 [ffffb6b94adffaa0] __frontswap_invalidate_page+72 at ffffffffbd27bee8
#13 [ffffb6b94adffac8] swapcache_free_entries+494 at ffffffffbd276e1e
#14 [ffffb6b94adffb10] free_swap_slot+173 at ffffffffbd27b7dd
#15 [ffffb6b94adffb30] __swap_entry_free+112 at ffffffffbd2768d0
#16 [ffffb6b94adffb58] free_swap_and_cache+57 at ffffffffbd278939
#17 [ffffb6b94adffb80] unmap_page_range+1485 at ffffffffbd24c52d
#18 [ffffb6b94adffc40] __oom_reap_task_mm+178 at ffffffffbd218f02
#19 [ffffb6b94adffd10] exit_mmap+339 at ffffffffbd257da3
#20 [ffffb6b94adffdb0] mmput+78 at ffffffffbd07fe7e
#21 [ffffb6b94adffdc0] do_exit+822 at ffffffffbd089bc6
#22 [ffffb6b94adffe28] do_group_exit+71 at ffffffffbd08a547
#23 [ffffb6b94adffe50] get_signal+319 at ffffffffbd0979ff
#24 [ffffb6b94adffe98] arch_do_signal+30 at ffffffffbd022cbe
#25 [ffffb6b94adfff28] exit_to_user_mode_prepare+293 at ffffffffbd1223e5
#26 [ffffb6b94adfff48] irqentry_exit_to_user_mode+5 at ffffffffbd8c1675
#27 [ffffb6b94adfff50] asm_exc_page_fault+30 at ffffffffbda00ace
RIP: 0000000000414300 RSP: 00007f5ddf065ec0 RFLAGS: 00010206
RAX: 0000000000001000 RBX: 00000000c0000000 RCX: 00000000adf28000
RDX: 00007f5d0bf8d000 RSI: 00000000c0000000 RDI: 0000000000000000
RBP: 00007f5c5e065000 R8: ffffffffffffffff R9: 0000000000000000
R10: 0000000000000022 R11: 0000000000000246 R12: 0000000000001000
R13: 0000000000000001 R14: 0000000000000001 R15: 00007ffc953ebcd0
ORIG_RAX: ffffffffffffffff CS: 0033 SS: 002b
crash.rt>
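
For reference, the local_lock+this_cpu_ptr flavor quoted at the top would
look roughly like the below (equally untested sketch; only the lock/unlock
pair in the add_to_unbuddied() body above changes, and the per-CPU lock
plus its name are made up here for illustration):

#include <linux/local_lock.h>

/* hypothetical per-CPU lock guarding the per-CPU unbuddied lists */
static DEFINE_PER_CPU(local_lock_t, z3fold_unbuddied_lock) =
	INIT_LOCAL_LOCK(z3fold_unbuddied_lock);

		...
		/* instead of migrate_disable()/get_cpu_ptr() */
		local_lock(&z3fold_unbuddied_lock);
		unbuddied = this_cpu_ptr(pool->unbuddied);

		spin_lock(&pool->lock);
		list_add(&zhdr->buddy, &unbuddied[freechunks]);
		spin_unlock(&pool->lock);
		zhdr->cpu = smp_processor_id();

		local_unlock(&z3fold_unbuddied_lock);
		...

Unlike migrate_disable(), the local_lock also excludes other tasks on the
same CPU; on RT that is an actual per-CPU lock rather than just disabled
migration.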