Message-ID: <CAM4kBBL5+xNWq6DWHY6nQjwDTj8PZKem-rGuFvimi7jekjA+Xw@mail.gmail.com>
Date: Mon, 7 Dec 2020 12:52:47 +0100
From: Vitaly Wool <vitaly.wool@...sulko.com>
To: Mike Galbraith <efault@....de>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Oleksandr Natalenko <oleksandr@...alenko.name>,
LKML <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Steven Rostedt <rostedt@...dmis.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-rt-users@...r.kernel.org
Subject: Re: scheduling while atomic in z3fold
On Mon, Dec 7, 2020 at 3:18 AM Mike Galbraith <efault@....de> wrote:
>
> On Mon, 2020-12-07 at 02:05 +0100, Vitaly Wool wrote:
> >
> > Could you please try the following patch in your setup:
>
> crash> gdb list *z3fold_zpool_free+0x527
> 0xffffffffc0e14487 is in z3fold_zpool_free (mm/z3fold.c:341).
> 336			if (slots->slot[i]) {
> 337				is_free = false;
> 338				break;
> 339			}
> 340		}
> 341		write_unlock(&slots->lock); <== boom
> 342
> 343		if (is_free) {
> 344			struct z3fold_pool *pool = slots_to_pool(slots);
> 345
> crash> z3fold_buddy_slots -x ffff99a3287b8780
> struct z3fold_buddy_slots {
>   slot = {0xdeadbeef, 0xdeadbeef, 0xdeadbeef, 0xdeadbeef},
>   pool = 0xffff99a3146b8400,
>   lock = {
>     rtmutex = {
>       wait_lock = {
>         raw_lock = {
>           {
>             val = {
>               counter = 0x1
>             },
>             {
>               locked = 0x1,
>               pending = 0x0
>             },
>             {
>               locked_pending = 0x1,
>               tail = 0x0
>             }
>           }
>         }
>       },
>       waiters = {
>         rb_root = {
>           rb_node = 0xffff99a3287b8e00
>         },
>         rb_leftmost = 0x0
>       },
>       owner = 0xffff99a355c24500,
>       save_state = 0x1
>     },
>     readers = {
>       counter = 0x80000000
>     }
>   }
> }
Thanks. This trace beats me; I don't quite see how it could have happened.
Hitting the write_unlock at line 341 would mean that the HANDLES_ORPHANED
bit is set, but obviously it isn't.

Could you please comment out the ".shrink = z3fold_zpool_shrink" line
and retry? Reclaim is the trickiest part here, since I have to drop the
page lock while reclaiming.
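
To be clear, the change I mean is just commenting the shrink callback out
of the zpool driver ops, something like the below (untested, context lines
approximate against current mm/z3fold.c):

diff --git a/mm/z3fold.c b/mm/z3fold.c
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ static struct zpool_driver z3fold_zpool_driver = {
 	.malloc =	z3fold_zpool_malloc,
 	.free =		z3fold_zpool_free,
-	.shrink =	z3fold_zpool_shrink,
+	/* .shrink =	z3fold_zpool_shrink, */
 	.map =		z3fold_zpool_map,

With .shrink unset, zpool_shrink() should just return -EINVAL for z3fold
(if I read the zpool glue right), so no z3fold reclaim will run at all.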
Thanks,
Vitaly
> > diff --git a/mm/z3fold.c b/mm/z3fold.c
> > index 18feaa0bc537..efe9a012643d 100644
> > --- a/mm/z3fold.c
> > +++ b/mm/z3fold.c
> > @@ -544,12 +544,17 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
> >  			break;
> >  		}
> >  	}
> > -	if (!is_free)
> > +	if (!is_free) {
> >  		set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
> > -	read_unlock(&zhdr->slots->lock);
> > -
> > -	if (is_free)
> > +		read_unlock(&zhdr->slots->lock);
> > +	} else {
> > +		zhdr->slots->slot[0] =
> > +			zhdr->slots->slot[1] =
> > +			zhdr->slots->slot[2] =
> > +			zhdr->slots->slot[3] = 0xdeadbeef;
> > +		read_unlock(&zhdr->slots->lock);
> >  		kmem_cache_free(pool->c_handle, zhdr->slots);
> > +	}
> >
> >  	if (locked)
> >  		z3fold_page_unlock(zhdr);