Message-ID: <20140919093550.GE25400@atomlin.usersys.redhat.com>
Date: Fri, 19 Sep 2014 10:35:50 +0100
From: Aaron Tomlin <atomlin@...hat.com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: linux-fsdevel@...r.kernel.org, viro@...iv.linux.org.uk,
david@...morbit.com, bmr@...hat.com, jcastillo@...hat.com,
mguzik@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] fs: Use a separate wq for do_sync_work() to avoid a
potential deadlock
On Wed, Sep 17, 2014 at 11:42:09PM +0200, Oleg Nesterov wrote:
> > Hopefully this helps:
> >
> > "umount" "events/1"
> >
> > sys_umount sysrq_handle_sync
> > deactivate_super(sb) emergency_sync
> > { schedule_work(work)
> > ... queue_work(system_wq, work)
> > down_write(&s->s_umount) do_sync_work(work)
> > ... sync_filesystems(0)
> > kill_block_super(s) ...
> > generic_shutdown_super(sb) down_read(&sb->s_umount)
> > // sop->put_super(sb)
> > ext4_put_super(sb)
> > invalidate_bdev(sb->s_bdev)
> > lru_add_drain_all()
> > for_each_online_cpu(cpu) {
> > schedule_work_on(cpu, work)
> > queue_work_on(cpu, system_wq, work)
> > ...
> > }
> > }
> >
> > - Both lru_add_drain and do_sync_work work items are added to
> > the same global system_wq
>
> Aha. Perhaps you hit this bug under the older kernel?
I did. Sorry for the noise.
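
For reference, the RFC's idea was only to stop sharing system_wq with the
emergency sync work, i.e. queue it on a dedicated workqueue instead of
going through schedule_work(). Roughly like the sketch below (the shape of
it, not the actual diff; the "fs_sync" name and the WQ_UNBOUND flag are
just my choices here, and the kmalloc/INIT_WORK pattern is assumed from
fs/sync.c of that era):

    /* fs/sync.c-style sketch; needs <linux/workqueue.h>, <linux/slab.h> */
    static struct workqueue_struct *fs_sync_wq;

    void emergency_sync(void)
    {
            struct work_struct *work;

            work = kmalloc(sizeof(*work), GFP_ATOMIC);
            if (work) {
                    INIT_WORK(work, do_sync_work);
                    /* queue on a private wq rather than system_wq */
                    queue_work(fs_sync_wq, work);
            }
    }

    static int __init fs_sync_wq_init(void)
    {
            fs_sync_wq = alloc_workqueue("fs_sync", WQ_UNBOUND, 0);
            return fs_sync_wq ? 0 : -ENOMEM;
    }
    fs_initcall(fs_sync_wq_init);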
> "same workqueue" doesn't mean "same worker thread" today, every CPU can
> run up to ->max_active works. And for system_wq uses max_active = 256.
>
> > - The current work fn on the system_wq is do_sync_work and is
> > blocked waiting to acquire an sb's s_umount for reading
>
> OK,
>
> > - The umount task is the current owner of the s_umount in
> > question, but it is waiting for the work it queued to run, and
> > that work is stuck behind do_sync_work. Thus we hit a deadlock.
>
> I don't think this can happen; another worker thread from the worker_pool
> can handle this work.
Understood.
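
To convince myself, I pictured something like the toy module below
(illustrative only, nothing from the patch): it puts a blocking work item
and a second work item on system_wq on the same CPU; the second one still
runs because the pool simply uses another worker, up to max_active (256
for system_wq), so the old "one stuck work blocks the whole CPU's queue"
situation no longer applies:

    #include <linux/module.h>
    #include <linux/workqueue.h>
    #include <linux/completion.h>

    static DECLARE_COMPLETION(release_blocker);

    static void blocker_fn(struct work_struct *w)
    {
            pr_info("blocker: parked, waiting for the second work\n");
            wait_for_completion(&release_blocker);
            pr_info("blocker: released\n");
    }

    static void second_fn(struct work_struct *w)
    {
            /* Runs in another worker while blocker_fn is still asleep. */
            pr_info("second: running, releasing the blocker\n");
            complete(&release_blocker);
    }

    static DECLARE_WORK(blocker_work, blocker_fn);
    static DECLARE_WORK(second_work, second_fn);

    static int __init wq_demo_init(void)
    {
            /* Both go to system_wq on CPU 0, like the old scenario. */
            schedule_work_on(0, &blocker_work);
            schedule_work_on(0, &second_work);
            return 0;
    }

    static void __exit wq_demo_exit(void)
    {
            flush_work(&blocker_work);
            flush_work(&second_work);
    }

    module_init(wq_demo_init);
    module_exit(wq_demo_exit);
    MODULE_LICENSE("GPL");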
--
Aaron Tomlin