Date:	Wed, 13 Aug 2008 12:56:04 +0200
From:	Jan Kara <jack@...e.cz>
To:	Mingming Cao <cmm@...ibm.com>
Cc:	Chris Mason <chris.mason@...cle.com>,
	Hisashi Hifumi <hifumi.hisashi@....ntt.co.jp>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-ext4@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	Zach Brown <zach.brown@...cle.com>
Subject: Re: [PATCH] jbd jbd2: fix dio write returning EIO when
	try_to_release_page fails

On Tue 12-08-08 13:06:39, Mingming Cao wrote:
> On Tue, 2008-08-12 at 09:28 -0400, Chris Mason wrote:
> > On Mon, 2008-08-11 at 15:25 +0900, Hisashi Hifumi wrote:
> > > >> >> >I am wondering why we need stronger invalidate guarantees for DIO->
> > > >> >> >invalidate_inode_pages2_range(), which forces the page to be removed from
> > > >> >> >the page cache. In case the bh is busy due to ext3 writeout,
> > > >> >> >journal_try_to_free_buffers() could return a different error number (EBUSY)
> > > >> >> >to try_to_release_page() (instead of EIO).  In that case, could we just
> > > >> >> >leave the page in the cache, clear PageUptodate() (to force a later buffered
> > > >> >> >read to read from disk), and then have invalidate_complete_page2() return
> > > >> >> >successfully? Any issue with this approach?
> > > >> >> 
> > > >> >> My idea is that journal_try_to_free_buffers() returns EBUSY if it fails
> > > >> >> because a bh is busy, and the dio write falls back to a buffered write.
> > > >> >> This is easy to fix.
> > > >> >> 
> > > >> >> 
> > > >> >
> > > >> >What about the invalidates done after the DIO has already run
> > > >> >non-buffered?
> > > >> 
> > > >> Dio write falls back to buffered IO when writing to a hole on ext3, I
> > > >> think. I want to apply this mechanism to fix this issue. When
> > > >> try_to_release_page fails on a page due to a busy bh, the dio write does a
> > > >> buffered write, sync_page_range, and wait_on_page_writeback, then
> > > >> invalidates the page cache to preserve dio semantics. Even if the page
> > > >> invalidation that is carried out after wait_on_page_writeback fails,
> > > >> there is no inconsistency between the HDD and the page cache.
> > > >> 
> > > >
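
For readers following the thread, the fallback flow Hifumi describes above
would look roughly like the sketch below. It is an illustration only, not
the posted patch; dio_fallback_to_buffered() and do_buffered_write() are
hypothetical names, while sync_page_range() and
invalidate_inode_pages2_range() are the real kernel functions of that era.

	/*
	 * Sketch only: fall back when try_to_release_page() fails because
	 * jbd still holds a buffer reference.  Redo the range as a buffered
	 * write, flush it and wait for writeback, then try to drop the
	 * now-clean pages so O_DIRECT semantics are preserved as far as
	 * possible.
	 */
	static ssize_t dio_fallback_to_buffered(struct kiocb *iocb,
						const struct iovec *iov,
						unsigned long nr_segs,
						loff_t pos)
	{
		struct address_space *mapping = iocb->ki_filp->f_mapping;
		struct inode *inode = mapping->host;
		ssize_t written;

		/* 1. Redo the same range as an ordinary buffered write
		 *    (do_buffered_write() is a stand-in, not a real helper). */
		written = do_buffered_write(iocb, iov, nr_segs, pos);
		if (written <= 0)
			return written;

		/* 2. Flush the dirty pages and wait for writeback. */
		sync_page_range(inode, mapping, pos, written);

		/* 3. Try again to drop the now-clean pages.  Even if this
		 *    fails, disk and page cache agree, so nothing is
		 *    inconsistent. */
		if (mapping->nrpages)
			invalidate_inode_pages2_range(mapping,
					pos >> PAGE_CACHE_SHIFT,
					(pos + written - 1) >> PAGE_CACHE_SHIFT);
		return written;
	}
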
> > > >Sorry, I'm sure I wasn't very clear, I was referencing this code from
> > > >mm/filemap.c:
> > > >
> > > >        written = mapping->a_ops->direct_IO(WRITE, iocb, iov, pos, *nr_segs);
> > > >
> > > >        /*
> > > >         * Finally, try again to invalidate clean pages which might have been
> > > >         * cached by non-direct readahead, or faulted in by get_user_pages()
> > > >         * if the source of the write was an mmap'ed region of the file
> > > >         * we're writing.  Either one is a pretty crazy thing to do,
> > > >         * so we don't support it 100%.  If this invalidation
> > > >         * fails, tough, the write still worked...
> > > >         */
> > > >        if (mapping->nrpages) {
> > > >                invalidate_inode_pages2_range(mapping,
> > > >                                              pos >> PAGE_CACHE_SHIFT, end);
> > > >        }
> > > >
> > > >If this second invalidate fails during a DIO write, we'll have up-to-date
> > > >pages in the cache that don't match the data on disk.  It is unlikely
> > > >to fail because the conditions that make jbd unable to free a buffer are
> > > >rare, but it can still happen with the right combination of mmap usage.
> > > >
> > > >The good news is the second invalidate doesn't make O_DIRECT return
> > > >-EIO.  But, it sounds like fixing do_launder_page to always call into
> > > >the FS can fix all of these problems.  Am I missing something?
> > > >
> > > 
> > > My approach is not to implement do_launder_page for ext3.
> > > It requires modifying the VFS.
> > > 
> > > My patch is as follows:
> > 
> > Sorry, I'm still not sure why the do_launder_page implementation is a
> > bad idea.  Clearly Mingming spent quite some time on it in the past, but
> > given that it could provide a hook for the FS to do expensive operations
> > to make the page really go away, why not do it?
> > 
> 
> > As far as I can tell, the only current users are afs, nfs and fuse.  Pushing
> > down the PageDirty check to those filesystems should be trivial.
> > 
> > 
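
As a rough illustration of the direction Chris is suggesting, the VFS would
stop short-circuiting on !PageDirty and leave that check to each
filesystem's ->launder_page.  The sketch below is hypothetical, loosely
modelled on the 2.6-era do_launder_page() in mm/truncate.c;
ext3_launder_page() does not exist in the tree.

	/*
	 * Hypothetical: drop the early return for clean pages so that a
	 * filesystem can do expensive work (e.g. getting the journal to
	 * release buffer references) even when the page itself is clean.
	 */
	static int do_launder_page(struct address_space *mapping,
				   struct page *page)
	{
		if (page->mapping != mapping ||
		    mapping->a_ops->launder_page == NULL)
			return 0;
		/* No PageDirty() check any more: always call into the FS. */
		return mapping->a_ops->launder_page(page);
	}

	/* What an ext3 hook might then look like (purely illustrative). */
	static int ext3_launder_page(struct page *page)
	{
		/*
		 * Even for a clean page, ask the FS/jbd to drop any buffer
		 * references it still holds; report -EBUSY so the caller
		 * can decide whether to wait or fall back.
		 */
		if (PagePrivate(page) &&
		    !try_to_release_page(page, GFP_KERNEL))
			return -EBUSY;
		return 0;
	}
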
> 
> I thought about your suggestion before; there should be no problem with
> pushing the PageDirty check down to the underlying fs.
> 
> My concern is that even if we wait in launder_page() for the page
> writeback (from ext3_ordered_writepage()) to clear (a wait that was
> actually already done by the earlier DIO filemap_write_and_wait()),
> ext3_ordered_writepage() may still hold a ref to the bh, and a later
> journal_try_to_free_buffers() could still fail because of that.
  Yes, how to properly wait for writepage() to finish is a different matter
and doing it in launder_page() does not help. The only thing is that in
launder_page() we can do more expensive things, because it is going to be
called only before DIO, not for ordinary page freeing under memory pressure.

> >        ->ext3_ordered_writepage()
> >          walk_page_buffers() <- take a bh ref
> >          block_write_full_page() <- unlock_page
> >               : <- end_page_writeback
> >                 : <- race! (dio write->try_to_release_page fails)
> 
>  here is the  window.
> >                  walk_page_buffers() <-release a bh ref
> 
> And we need some way for ext3_ordered_writepage() to notify the DIO code
> that it is done with those buffers. That's the hard way, as Jan
> mentioned.
  Well, we can always introduce something like a per-sb waitqueue where
processes waiting for references to some buffer to be released would dwell.
We would wake up processes in this queue after writepage drops all its
references; we could even use the same mechanism for waiting until the
commit code releases those references... But returning EBUSY and falling
back to buffered writes is definitely easier to do (modulo what I wrote to
Chris about hiding possible problems).
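
The per-sb waitqueue idea could look something like the sketch below.  All
of the names are hypothetical; only wait_event()/wake_up() and the
buffer_head refcount are real kernel API.

	#include <linux/wait.h>
	#include <linux/buffer_head.h>

	/* Hypothetical per-sb state; no such field exists today. */
	struct ext3_sb_info {
		/* ... existing fields ... */
		wait_queue_head_t s_buffer_wait;
	};

	/* Writepage/commit would call this after dropping their last
	 * reference to a buffer. */
	static void ext3_buffer_refs_dropped(struct ext3_sb_info *sbi)
	{
		wake_up(&sbi->s_buffer_wait);
	}

	/* The DIO path would call this instead of failing with -EBUSY,
	 * sleeping until everyone else has let go of the buffer. */
	static void ext3_wait_for_buffer_refs(struct ext3_sb_info *sbi,
					      struct buffer_head *bh)
	{
		wait_event(sbi->s_buffer_wait,
			   atomic_read(&bh->b_count) <= 1);
	}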

> > With that said, I don't have strong feelings against falling back to
> > buffered IO when the invalidate fails.  
>  
> It seems a little odd that we have to fall back to buffered IO in this case.
> The pages are all flushed; DIO just wants to make sure the buffers are
> removed from the lists of the journal transactions that still hold them.
> It did that; the only reason DIO still fails is that someone else
> hasn't released the bh.
> 
> The current code enforces that all the buffers have to be freed and the
> pages removed from the page cache, in order to force later reads to come
> from disk.  I am not sure why we can't just leave the page in the cache and
> clear its uptodate flag, without dropping the page ref count?  I think DIO
> should proceed with its IO in this case...
  The problem with clearing page uptodate is described in commit
84209e02de48d72289650cc5a7ae8dd18223620f. The page may currently be in a
pipe, and clearing the uptodate bit under it makes the pipe code unhappy
(returning errors or so). So either one has to change the pipe handling or
we have to cope without clearing the page uptodate bit.
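
For completeness, the "leave the page in the cache and just clear uptodate"
alternative discussed above would amount to something like the sketch below.
It is hypothetical, not proposed code, and it shows exactly the step that
runs into the pipe problem Jan mentions.

	/*
	 * Hypothetical variant of page invalidation for the DIO path: keep
	 * the page and its refcount, but force the next buffered read to go
	 * to disk.  As noted above, a page already sitting in a pipe may
	 * see its uptodate bit cleared underneath it, which the pipe code
	 * does not tolerate.
	 */
	static int invalidate_page_keep_in_cache(struct address_space *mapping,
						 struct page *page)
	{
		if (page->mapping != mapping)
			return 0;

		/* Ask the FS/jbd to drop private state, but tolerate failure. */
		if (PagePrivate(page))
			try_to_release_page(page, GFP_KERNEL);

		/* Page stays in the radix tree; only mark it not uptodate. */
		ClearPageUptodate(page);
		return 1;	/* report success to the DIO caller */
	}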

								Honza
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR