Message-ID: <20200911120529.GZ6583@casper.infradead.org>
Date:   Fri, 11 Sep 2020 13:05:29 +0100
From:   Matthew Wilcox <willy@...radead.org>
To:     "yukuai (C)" <yukuai3@...wei.com>
Cc:     hch@...radead.org, darrick.wong@...cle.com, david@...morbit.com,
        linux-xfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-kernel@...r.kernel.org, yi.zhang@...wei.com
Subject: Re: [RFC PATCH V4] iomap: add support to track dirty state of sub pages

On Fri, Sep 11, 2020 at 04:27:08PM +0800, yukuai (C) wrote:
> Since your THP iomap patches were reviewed, I made some modifications
> based on these patches.

Thanks!  This looks good to me!

> +static void
> +iomap_set_range_dirty(struct page *page, unsigned int off,
> +		unsigned int len)
> +{
> +	struct inode *inode = page->mapping->host;
> +	unsigned int blocks_per_page = i_blocks_per_page(inode, page);

I might call this nr_blocks.

> +	unsigned int first = (off >> inode->i_blkbits) + blocks_per_page;
> +	unsigned int last = ((off + len - 1) >> inode->i_blkbits) + blocks_per_page;
> +	unsigned long flags;
> +	struct iomap_page *iop;
> +
> +	if (PageError(page))
> +		return;

I think this is wrong.  If PageError is set then we've seen an I/O error
on this page (today either a read or a write, although I intend to
restrict that to writes soon).  But I don't see a reason for declining to
make a page dirty if we've seen an error -- indeed, that seems likely to
lead to further data loss, as we then won't even try to write parts of
the page back to storage.

> +	if (len)
> +		iomap_set_page_dirty(page);
> +
> +	if (!page_has_private(page))
> +		return;
> +
> +	iop = to_iomap_page(page);

We usually do the last three lines as:

	iop = to_iomap_page(page);
	if (!iop)
		return;

(actually, we usually call to_iomap_page() at the start of the function
and then check it here)
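
Putting those together (nr_blocks, to_iomap_page() up front, no PageError
check), I'd imagine something roughly like this -- untested, and I'm
guessing the state_lock/bitmap_set body from your clear_range_dirty
counterpart below:

	static void
	iomap_set_range_dirty(struct page *page, unsigned int off,
			unsigned int len)
	{
		struct inode *inode = page->mapping->host;
		struct iomap_page *iop = to_iomap_page(page);
		unsigned int nr_blocks = i_blocks_per_page(inode, page);
		/* dirty bits sit after the uptodate bits in iop->state */
		unsigned int first = (off >> inode->i_blkbits) + nr_blocks;
		unsigned int last = ((off + len - 1) >> inode->i_blkbits) +
					nr_blocks;
		unsigned long flags;

		if (len)
			iomap_set_page_dirty(page);
		if (!iop)
			return;

		spin_lock_irqsave(&iop->state_lock, flags);
		bitmap_set(iop->state, first, last - first + 1);
		spin_unlock_irqrestore(&iop->state_lock, flags);
	}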

> +static void
> +iomap_clear_range_dirty(struct page *page, unsigned int off,
> +		unsigned int len)
> +{
> +	struct inode *inode = page->mapping->host;
> +	unsigned int blocks_per_page = i_blocks_per_page(inode, page);
> +	unsigned int first = (off >> inode->i_blkbits) + blocks_per_page;
> +	unsigned int last = ((off + len - 1) >> inode->i_blkbits) + blocks_per_page;
> +	unsigned long flags;
> +	struct iomap_page *iop;
> +
> +	if (PageError(page))
> +		return;

It does make sense to avoid clearing the dirty state of the page here;
if the write failed, then the page is still dirty, and it'd be nice to
retry writes until they succeed.  So if you take out the PageError check
in set_range_dirty, don't take it out here!

> +	if (!page_has_private(page))
> +		return;
> +
> +	iop = to_iomap_page(page);
> +	spin_lock_irqsave(&iop->state_lock, flags);
> +	bitmap_clear(iop->state, first, last - first + 1);
> +	spin_unlock_irqrestore(&iop->state_lock, flags);
> +}
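
FWIW, with the same restructuring as above and the PageError check kept,
this one would come out as something like (again untested, just a sketch):

	static void
	iomap_clear_range_dirty(struct page *page, unsigned int off,
			unsigned int len)
	{
		struct inode *inode = page->mapping->host;
		struct iomap_page *iop = to_iomap_page(page);
		unsigned int nr_blocks = i_blocks_per_page(inode, page);
		unsigned int first = (off >> inode->i_blkbits) + nr_blocks;
		unsigned int last = ((off + len - 1) >> inode->i_blkbits) +
					nr_blocks;
		unsigned long flags;

		/* a failed write leaves the page dirty; keep this check */
		if (PageError(page))
			return;
		if (!iop)
			return;

		spin_lock_irqsave(&iop->state_lock, flags);
		bitmap_clear(iop->state, first, last - first + 1);
		spin_unlock_irqrestore(&iop->state_lock, flags);
	}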
> +
>  static void
>  iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
>  {
> @@ -148,11 +201,11 @@ iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
>  	unsigned last = (off + len - 1) >> inode->i_blkbits;
>  	unsigned long flags;
> 
> -	spin_lock_irqsave(&iop->uptodate_lock, flags);
> -	bitmap_set(iop->uptodate, first, last - first + 1);
> -	if (bitmap_full(iop->uptodate, i_blocks_per_page(inode, page)))
> +	spin_lock_irqsave(&iop->state_lock, flags);
> +	bitmap_set(iop->state, first, last - first + 1);
> +	if (bitmap_full(iop->state, i_blocks_per_page(inode, page)))
>  		SetPageUptodate(page);
> -	spin_unlock_irqrestore(&iop->uptodate_lock, flags);
> +	spin_unlock_irqrestore(&iop->state_lock, flags);
>  }
> 
>  static void
> @@ -445,7 +498,7 @@ iomap_is_partially_uptodate(struct page *page, unsigned long from,
> 
>  	if (iop) {
>  		for (i = first; i <= last; i++)
> -			if (!test_bit(i, iop->uptodate))
> +			if (!test_bit(i, iop->state))
>  				return 0;
>  		return 1;
>  	}
> @@ -683,7 +736,7 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len,
>  	if (unlikely(copied < len && !PageUptodate(page)))
>  		return 0;
>  	iomap_set_range_uptodate(page, offset_in_page(pos), len);
> -	iomap_set_page_dirty(page);
> +	iomap_set_range_dirty(page, offset_in_page(pos), len);

I _think_ the call to set_range_uptodate here is now unnecessary.  The
blocks should already have been set uptodate in write_begin.  But please
check!
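
If that turns out to be true, these lines would presumably shrink to just:

	if (unlikely(copied < len && !PageUptodate(page)))
		return 0;
	iomap_set_range_dirty(page, offset_in_page(pos), len);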

> @@ -1007,7 +1059,7 @@ vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops)
>  {
>  	struct page *page = vmf->page;
>  	struct inode *inode = file_inode(vmf->vma->vm_file);
> -	unsigned long length;
> +	unsigned int length, dirty_bits;

This is dirty_bytes, surely?
