Date:   Wed, 24 Feb 2021 18:44:11 +0100
From:   Jan Kara <jack@...e.cz>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     Jan Kara <jack@...e.cz>, linux-fsdevel@...r.kernel.org,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Christoph Hellwig <hch@....de>,
        Kent Overstreet <kent.overstreet@...il.com>
Subject: Re: [RFC] Better page cache error handling

On Wed 24-02-21 13:41:15, Matthew Wilcox wrote:
> On Wed, Feb 24, 2021 at 01:38:48PM +0100, Jan Kara wrote:
> > > We allocate a page and try to read it.  29 threads pile up waiting
> > > for the page lock in filemap_update_page().  The error returned by the
> > > original I/O is shared between all 29 waiters as well as being returned
> > > to the requesting thread.  The next request for index.html will send
> > > another I/O, and more waiters will pile up trying to get the page lock,
> > > but at no time will more than 30 threads be waiting for the I/O to fail.
> > 
> > Interesting idea. It certainly improves current behavior. I just wonder
> > whether this isn't just a partial solution and whether a full solution
> > wouldn't have to go in a different direction? I mean, it just seems
> > wrong that each reader (let's assume they just won't overlap) has to retry
> > the failed IO and wait for the HW to figure out it's not going to work.
> > Shouldn't we cache the error state with the page? And I understand that we
> > then also have to deal with the problem of how to invalidate the error state
> > when the block might eventually become readable (for stuff like temporary
> > IO failures). That would need some signalling from the driver to the page
> > cache, maybe in the form of an error recovery sequence counter or something
> > like that. For stuff like iSCSI, multipath, or NBD it could be doable I
> > believe...
> 
> That felt like a larger change than I wanted to make.  I already have
> a few big projects on my plate!

I can understand that ;)
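
For illustration, a minimal sketch of the error recovery sequence counter
idea from the quoted paragraph above. Every name in it (struct
blk_error_seq, blk_error_recovered(), ...) is hypothetical; nothing like
this exists in the kernel today:

#include <linux/atomic.h>
#include <linux/types.h>

/*
 * Hypothetical: the driver keeps a per-device error recovery sequence
 * number and bumps it whenever it has reason to believe previously
 * failing blocks may be readable again (path restored for iSCSI,
 * multipath or NBD, device reset, ...).  A page cache error recorded
 * under an older sequence number is stale and the read gets retried.
 */
struct blk_error_seq {
	atomic_t seq;
};

/* Driver side: signal "it may be worth retrying reads now". */
static inline void blk_error_recovered(struct blk_error_seq *es)
{
	atomic_inc(&es->seq);
}

/* Page cache side: is an error recorded under 'recorded' still current? */
static inline bool cached_error_current(struct blk_error_seq *es, int recorded)
{
	return atomic_read(&es->seq) == recorded;
}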

> Also, it's not clear to me that the host can necessarily figure out when
> a device has fixed an error -- certainly for the three cases you list
> it can be done.  I think we'd want a timer to indicate that it's worth
> retrying instead of returning the error.
> 
> Anyway, that seems like a lot of data to cram into a struct page.  So I
> think my proposal is still worth pursuing while waiting for someone to
> come up with a perfect solution.

Yes, a timer could be a fallback. Or we could just schedule work to discard
all 'error' pages in the fs in an hour or so. Not perfect but more or less
workable I'd say. Also I don't think we need to cram this directly into
struct page - I think it is perfectly fine to kmalloc() the structure we
need for caching when we hit an error and just not cache if the allocation
fails. Then we might just reference it from the appropriate place... I
haven't put too much thought into this...
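
Purely as a sketch of that kmalloc() idea (struct pagecache_err and
pagecache_err_record() are made-up names; where the pointer would actually
live, mapping or page, is left open as above):

#include <linux/jiffies.h>
#include <linux/slab.h>

/* Hypothetical structure, allocated only when a read actually fails. */
struct pagecache_err {
	int		error;		/* e.g. -EIO from the failed read */
	int		recovery_seq;	/* device recovery seq when recorded */
	unsigned long	expires;	/* retry anyway after this (jiffies) */
};

static struct pagecache_err *pagecache_err_record(int error, int recovery_seq)
{
	struct pagecache_err *pe;

	pe = kmalloc(sizeof(*pe), GFP_NOWAIT);
	if (!pe)
		return NULL;	/* allocation failed: simply don't cache */

	pe->error = error;
	pe->recovery_seq = recovery_seq;
	pe->expires = jiffies + 60 * 60 * HZ;	/* "an hour or so" */
	return pe;
}

A reader that finds such a structure, sees it unexpired and sees the
recovery sequence unchanged could then return pe->error right away instead
of reissuing the I/O.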

								Honza
-- 
Jan Kara <jack@...e.com>
SUSE Labs, CR
