Message-ID: <e25a3354-04e4-54e9-a45f-7305bfd1f2bb@plexistor.com>
Date: Thu, 17 Sep 2020 16:46:38 +0300
From: Boaz Harrosh <boaz@...xistor.com>
To: Matthew Wilcox <willy@...radead.org>,
Oleg Nesterov <oleg@...hat.com>
Cc: Hou Tao <houtao1@...wei.com>, peterz@...radead.org,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Dennis Zhou <dennis@...nel.org>, Tejun Heo <tj@...nel.org>,
Christoph Lameter <cl@...ux.com>, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, Jan Kara <jack@...e.cz>
Subject: Re: [RFC PATCH] locking/percpu-rwsem: use this_cpu_{inc|dec}() for
read_count
On 17/09/2020 15:48, Matthew Wilcox wrote:
> On Thu, Sep 17, 2020 at 02:01:33PM +0200, Oleg Nesterov wrote:
<>
>
> If we change bio_endio to invoke the ->bi_end_io callbacks in softirq
> context instead of hardirq context, we can change the pagecache to take
> BH-safe locks instead of IRQ-safe locks. I believe the only reason the
> lock needs to be IRQ-safe is for the benefit of paths like:
>
From my totally subjective experience on the filesystem side (as a user
of bio_endio), all the HW block drivers I have used, including NVMe,
iSCSI, SATA, etc., end up calling bio_endio in softirq context. The big
exception is the vdX (virtio-blk) drivers under KVM, which is rather
ironic to me.
I wish we could make all drivers uniform in this regard.
But maybe I'm just speaking crap. It's only from my limited debugging
experience.
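
For reference, the lock-type difference Willy is pointing at looks
roughly like the sketch below. This is a minimal, hypothetical example
(demo_lock and the demo_update_* functions are made-up names, not
pagecache code): an IRQ-safe lock has to disable hard interrupts because
the ->bi_end_io callback may run in hardirq context, while a BH-safe
lock only needs to disable softirqs once completions are guaranteed to
run in softirq (BH) context.

/*
 * Minimal sketch with hypothetical names, not actual pagecache code.
 */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);	/* hypothetical lock */

/* Today: the end_io callback may fire in hardirq context. */
static void demo_update_irqsafe(void)
{
	unsigned long flags;

	spin_lock_irqsave(&demo_lock, flags);	/* disables hard IRQs */
	/* ... touch state also touched from ->bi_end_io ... */
	spin_unlock_irqrestore(&demo_lock, flags);
}

/* If bio_endio only ran callbacks in softirq, BH-safe would suffice. */
static void demo_update_bhsafe(void)
{
	spin_lock_bh(&demo_lock);	/* only disables softirqs */
	/* ... same state, now only racing with softirq completion ... */
	spin_unlock_bh(&demo_lock);
}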
Thanks
Boaz