Message-ID: <20200818155305.GR17456@casper.infradead.org>
Date: Tue, 18 Aug 2020 16:53:05 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Yu Kuai <yukuai3@...wei.com>
Cc: hch@...radead.org, darrick.wong@...cle.com, david@...morbit.com,
linux-xfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, yi.zhang@...wei.com
Subject: Re: [RFC PATCH V2] iomap: add support to track dirty state of sub pages

On Tue, Aug 18, 2020 at 09:46:18PM +0800, Yu Kuai wrote:
> changes from v1:
> - separate set dirty and clear dirty functions
> - don't test uptodate bit in iomap_writepage_map()
> - use one bitmap array for uptodate and dirty.
This looks much better.
> + spinlock_t state_lock;
> + /*
> + * The first half bits are used to track sub-page uptodate status,
> + * the second half bits are for dirty status.
> + */
> + DECLARE_BITMAP(state, PAGE_SIZE / 256);
It would be better to use the same expression here as in bitmap_zero() below:
> + bitmap_zero(iop->state, PAGE_SIZE * 2 / SECTOR_SIZE);
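That is:

	DECLARE_BITMAP(state, PAGE_SIZE * 2 / SECTOR_SIZE);

The two sizes are identical (PAGE_SIZE * 2 / SECTOR_SIZE == PAGE_SIZE / 256
with 512-byte sectors), but spelling it this way makes the two-halves
layout of the bitmap obvious.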
[...]
> +static void
> +iomap_iop_set_range_dirty(struct page *page, unsigned int off,
> + unsigned int len)
> +{
> + struct iomap_page *iop = to_iomap_page(page);
> + struct inode *inode = page->mapping->host;
> + unsigned int total = PAGE_SIZE / SECTOR_SIZE;
> + unsigned int first = off >> inode->i_blkbits;
> + unsigned int last = (off + len - 1) >> inode->i_blkbits;
> + unsigned long flags;
> + unsigned int i;
> +
> + spin_lock_irqsave(&iop->state_lock, flags);
> + for (i = first; i <= last; i++)
> + set_bit(i + total, iop->state);
> + spin_unlock_irqrestore(&iop->state_lock, flags);
> +}
How about:
- unsigned int total = PAGE_SIZE / SECTOR_SIZE;
...
+ first += PAGE_SIZE / SECTOR_SIZE;
+ last += PAGE_SIZE / SECTOR_SIZE;
...
for (i = first; i <= last; i++)
- set_bit(i + total, iop->state);
+ set_bit(i, iop->state);
We might want
#define DIRTY_BITS(x) ((x) + PAGE_SIZE / SECTOR_SIZE)
and then we could do:
+ unsigned int last = DIRTY_BITS((off + len - 1) >> inode->i_blkbits);
That might be overthinking things a bit though.
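Putting the pieces together, the helper would read something like this
(untested sketch):

	static void
	iomap_iop_set_range_dirty(struct page *page, unsigned int off,
			unsigned int len)
	{
		struct iomap_page *iop = to_iomap_page(page);
		struct inode *inode = page->mapping->host;
		unsigned int first = DIRTY_BITS(off >> inode->i_blkbits);
		unsigned int last = DIRTY_BITS((off + len - 1) >> inode->i_blkbits);
		unsigned long flags;
		unsigned int i;

		spin_lock_irqsave(&iop->state_lock, flags);
		for (i = first; i <= last; i++)
			set_bit(i, iop->state);
		spin_unlock_irqrestore(&iop->state_lock, flags);
	}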
> @@ -705,6 +767,7 @@ __iomap_write_end(struct inode *inode, loff_t pos, unsigned len,
> if (unlikely(copied < len && !PageUptodate(page)))
> return 0;
> iomap_set_range_uptodate(page, offset_in_page(pos), len);
> + iomap_set_range_dirty(page, offset_in_page(pos), len);
> iomap_set_page_dirty(page);
I would move the call to iomap_set_page_dirty() into
iomap_set_range_dirty() to parallel iomap_set_range_uptodate() more closely.
We don't want a future change to add a call to iomap_set_range_dirty()
and miss the call to iomap_set_page_dirty().
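ie something like this (a sketch, assuming the v2 wrapper mirrors the
shape of iomap_set_range_uptodate()):

	static void
	iomap_set_range_dirty(struct page *page, unsigned off, unsigned len)
	{
		if (page_has_private(page))
			iomap_iop_set_range_dirty(page, off, len);
		iomap_set_page_dirty(page);
	}

Then __iomap_write_end() only needs the one call.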
> return copied;
> }
> @@ -1030,6 +1093,7 @@ iomap_page_mkwrite_actor(struct inode *inode, loff_t pos, loff_t length,
> WARN_ON_ONCE(!PageUptodate(page));
> iomap_page_create(inode, page);
> set_page_dirty(page);
> + iomap_set_range_dirty(page, offset_in_page(pos), length);
I would move all this from iomap_page_mkwrite_actor() to
iomap_page_mkwrite() and call it once with (0, PAGE_SIZE) rather than
once per extent in the page.
> @@ -1435,6 +1500,8 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
> */
> set_page_writeback_keepwrite(page);
> } else {
> + iomap_clear_range_dirty(page, 0,
> + end_offset - page_offset(page) + 1);
> clear_page_dirty_for_io(page);
> set_page_writeback(page);
I'm not sure this calculation is worth doing.  Would it be better to
just clear the dirty bits for the entire page?  Opinions?
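ie simply:

	} else {
		iomap_clear_range_dirty(page, 0, PAGE_SIZE);
		clear_page_dirty_for_io(page);
		set_page_writeback(page);
	}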