Message-ID: <20131015103900.GB12428@quack.suse.cz>
Date:	Tue, 15 Oct 2013 12:39:00 +0200
From:	Jan Kara <jack@...e.cz>
To:	Ming Lei <ming.lei@...onical.com>
Cc:	Jan Kara <jack@...e.cz>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Ted Tso <tytso@....edu>, linux-ext4@...r.kernel.org,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	Ming Lei <tom.leiming@...il.com>
Subject: Re: [PATCH] ext4: fix checking on nr_to_write

On Tue 15-10-13 10:25:53, Ming Lei wrote:
> Looks like it makes sense, so how about the change below?
> 
> --
> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index 32c04ab..c32b599 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -2294,7 +2294,7 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
>  {
>  	struct address_space *mapping = mpd->inode->i_mapping;
>  	struct pagevec pvec;
> -	unsigned int nr_pages;
> +	unsigned int nr_pages, nr_added = 0;
>  	pgoff_t index = mpd->first_page;
>  	pgoff_t end = mpd->last_page;
>  	int tag;
> @@ -2330,6 +2330,18 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
>  			if (page->index > end)
>  				goto out;
>  
> +			/*
> +			 * Accumulated enough dirty pages? This doesn't apply
> +			 * to WB_SYNC_ALL mode. For integrity sync we have to
> +			 * keep going because someone may be concurrently
> +			 * dirtying pages, and we might have synced a lot of
> +			 * newly appeared dirty pages, but have not synced all
> +			 * of the old dirty pages.
> +			 */
> +			if (mpd->wbc->sync_mode == WB_SYNC_NONE &&
> +					nr_added >= mpd->wbc->nr_to_write)
> +				goto out;
> +
  This won't quite work because, if the page is fully mapped,
mpage_process_page_bufs() will immediately submit the page and decrease
nr_to_write. So in some cases you would end up writing fewer pages than you
were asked for. The attached patch should do what's needed. Can you try
whether it fixes the problem for you? (It seems to work OK in my testing.)
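  To make the double counting concrete, here is a rough userspace sketch
(not the attached patch and not kernel code; the page pattern and the
numbers are made up) of what happens when nr_added is compared against a
nr_to_write that the immediately submitted pages are shrinking at the same
time:

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
	const int requested = 8;	/* what the caller asked for */
	int nr_to_write = requested;	/* shrinks when a page is submitted */
	int nr_added = 0;		/* ticks for every page we look at */
	int submitted = 0;		/* "fully mapped" pages written at once */
	int accumulated = 0;		/* pages collected for later mapping */

	for (int page = 0; ; page++) {
		bool fully_mapped = (page % 2 == 0);	/* every other page */

		if (fully_mapped) {
			/* stand-in for mpage_process_page_bufs() submitting
			 * the page right away and consuming nr_to_write */
			submitted++;
			nr_to_write--;
		} else {
			accumulated++;
		}
		nr_added++;	/* the proposed counter ticks either way */

		/* proposed check: nr_added against an already-shrunk
		 * nr_to_write */
		if (nr_added >= nr_to_write)
			break;
	}

	/* the accumulated pages get mapped and written afterwards */
	printf("requested %d, wrote %d (%d immediate + %d after mapping)\n",
	       requested, submitted + accumulated, submitted, accumulated);
	return 0;
}

With every other page fully mapped this stops after writing 5 of the 8
requested pages.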

								Honza

>  			/* If we can't merge this page, we are done. */
>  			if (mpd->map.m_len > 0 && mpd->next_page != page->index)
>  				goto out;
> @@ -2364,19 +2376,7 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
>  			if (err <= 0)
>  				goto out;
>  			err = 0;
> -
> -			/*
> -			 * Accumulated enough dirty pages? This doesn't apply
> -			 * to WB_SYNC_ALL mode. For integrity sync we have to
> -			 * keep going because someone may be concurrently
> -			 * dirtying pages, and we might have synced a lot of
> -			 * newly appeared dirty pages, but have not synced all
> -			 * of the old dirty pages.
> -			 */
> -			if (mpd->wbc->sync_mode == WB_SYNC_NONE &&
> -			    mpd->next_page - mpd->first_page >=
> -							mpd->wbc->nr_to_write)
> -				goto out;
> +			nr_added++;
>  		}
>  		pagevec_release(&pvec);
>  		cond_resched();
> 
> 
> 
> Thanks,
> -- 
> Ming Lei
-- 
Jan Kara <jack@...e.cz>
SUSE Labs, CR

View attachment "0001-ext4-Fix-performance-regression-in-ext4_writepages.patch" of type "text/x-patch" (3105 bytes)
