Message-ID: <20200917124838.GT5449@casper.infradead.org>
Date: Thu, 17 Sep 2020 13:48:38 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Boaz Harrosh <boaz@...xistor.com>, Hou Tao <houtao1@...wei.com>,
peterz@...radead.org, Ingo Molnar <mingo@...hat.com>,
Will Deacon <will@...nel.org>, Dennis Zhou <dennis@...nel.org>,
Tejun Heo <tj@...nel.org>, Christoph Lameter <cl@...ux.com>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Jan Kara <jack@...e.cz>
Subject: Re: [RFC PATCH] locking/percpu-rwsem: use this_cpu_{inc|dec}() for
read_count

On Thu, Sep 17, 2020 at 02:01:33PM +0200, Oleg Nesterov wrote:
> IIUC, file_end_write() was never IRQ safe (at least if !CONFIG_SMP), even
> before 8129ed2964 ("change sb_writers to use percpu_rw_semaphore"), but this
> doesn't matter...
>
> Perhaps we can change aio.c, io_uring.c and fs/overlayfs/file.c to avoid
> file_end_write() in IRQ context, but I am not sure it's worth the trouble.
If we change bio_endio to invoke the ->bi_end_io callbacks in softirq
context instead of hardirq context, we can change the pagecache to take
BH-safe locks instead of IRQ-safe locks. I believe the only reason the
lock needs to be IRQ-safe is for the benefit of paths like:
mpage_end_io
page_endio
end_page_writeback
test_clear_page_writeback
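
For context, the locking difference at stake looks roughly like this (kernel-style pseudocode, not a proposed patch; the lock is stood in by a generic spinlock and the bodies are elided):

```c
/* Today: end-of-writeback bookkeeping may be reached from hard-IRQ
 * context via bio_endio() -> ->bi_end_io, so the lock must disable
 * interrupts entirely: */
spin_lock_irqsave(&lock, flags);
/* ... clear writeback state ... */
spin_unlock_irqrestore(&lock, flags);

/* If ->bi_end_io only ever ran in softirq context, disabling bottom
 * halves would suffice, which is cheaper on the common user-context
 * paths: */
spin_lock_bh(&lock);
/* ... clear writeback state ... */
spin_unlock_bh(&lock);
```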
Admittedly, I haven't audited all the places that call end_page_writeback;
there might be others called from non-BIO contexts (network filesystems?).
That was the point where I gave up my investigation of why we use an
IRQ-safe spinlock when basically all page cache operations are done
from user context.