Message-ID: <20201117153947.GL29991@casper.infradead.org>
Date:   Tue, 17 Nov 2020 15:39:47 +0000
From:   Matthew Wilcox <willy@...radead.org>
To:     Hugh Dickins <hughd@...gle.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>, Jan Kara <jack@...e.cz>,
        William Kucharski <william.kucharski@...cle.com>,
        linux-fsdevel@...r.kernel.org, linux-mm@...ck.org, hch@....de,
        hannes@...xchg.org, yang.shi@...ux.alibaba.com,
        dchinner@...hat.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v4 00/16] Overhaul multi-page lookups for THP

On Mon, Nov 16, 2020 at 02:34:34AM -0800, Hugh Dickins wrote:
> Fix to [PATCH v4 15/16] mm/truncate,shmem: Handle truncates that split THPs.
> One machine ran fine, swapping and building in ext4 on loop0 on huge tmpfs;
> one machine got occasional pages of zeros in its .os; one machine couldn't
> get started because of ext4_find_dest_de errors on the newly mkfs'ed fs.
> The partial_end case was decided by PAGE_SIZE, when there might be a THP
> there.  The below patch has run well (for not very long), but I could
> easily have got it slightly wrong, off-by-one or whatever; and I have
> not looked into the similar code in mm/truncate.c, maybe that will need
> a similar fix or maybe not.

Thank you for the explanation in your later email!  There is indeed an
off-by-one, although in the safe direction.

> --- 5103w/mm/shmem.c	2020-11-12 15:46:21.075254036 -0800
> +++ 5103wh/mm/shmem.c	2020-11-16 01:09:35.431677308 -0800
> @@ -874,7 +874,7 @@ static void shmem_undo_range(struct inod
>  	long nr_swaps_freed = 0;
>  	pgoff_t index;
>  	int i;
> -	bool partial_end;
> +	bool same_page;
>  
>  	if (lend == -1)
>  		end = -1;	/* unsigned, so actually very big */
> @@ -907,16 +907,12 @@ static void shmem_undo_range(struct inod
>  		index++;
>  	}
>  
> -	partial_end = ((lend + 1) % PAGE_SIZE) > 0;
> +	same_page = (lstart >> PAGE_SHIFT) == end;

'end' is exclusive, so this is always false.  Maybe something "obvious":

	same_page = (lstart >> PAGE_SHIFT) == (lend >> PAGE_SHIFT);

(lend is inclusive, so lend values 0-4095 all fall on the same page)
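
To make the inclusive/exclusive distinction concrete, a throwaway
userspace sketch (4KiB pages and end = (lend + 1) >> PAGE_SHIFT assumed
for illustration; not the kernel code itself):

	#include <assert.h>
	#include <stdio.h>

	#define PAGE_SHIFT 12	/* assume 4KiB pages */

	int main(void)
	{
		unsigned long lstart = 0, lend = 4095;		/* inclusive byte range */
		unsigned long end = (lend + 1) >> PAGE_SHIFT;	/* exclusive page index */

		/* lend is inclusive: bytes 0-4095 all live on page 0 */
		assert((lstart >> PAGE_SHIFT) == (lend >> PAGE_SHIFT));

		/*
		 * 'end' is already the first page *past* the range, so
		 * comparing lstart's page index against it reports
		 * "different pages" even though lstart and lend are on
		 * the same page.
		 */
		assert((lstart >> PAGE_SHIFT) != end);

		printf("page(lstart)=%lu page(lend)=%lu end=%lu\n",
		       lstart >> PAGE_SHIFT, lend >> PAGE_SHIFT, end);
		return 0;
	}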

>  	page = NULL;
>  	shmem_getpage(inode, lstart >> PAGE_SHIFT, &page, SGP_READ);
>  	if (page) {
> -		bool same_page;
> -
>  		page = thp_head(page);
>  		same_page = lend < page_offset(page) + thp_size(page);
> -		if (same_page)
> -			partial_end = false;
>  		set_page_dirty(page);
>  		if (!truncate_inode_partial_page(page, lstart, lend)) {
>  			start = page->index + thp_nr_pages(page);
> @@ -928,7 +924,7 @@ static void shmem_undo_range(struct inod
>  		page = NULL;
>  	}
>  
> -	if (partial_end)
> +	if (!same_page)
>  		shmem_getpage(inode, end, &page, SGP_READ);
>  	if (page) {
>  		page = thp_head(page);
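
And for the THP case the page-index comparison is only a first cut:
once shmem_getpage() has handed back the actual page, same_page is
recomputed from thp_size() (as in the hunk above), because a huge page
can span both ends of the range even when the 4KiB page indices differ.
A sketch with made-up numbers (one 2MB THP backing file offset 0;
illustration only, not the kernel helpers):

	#include <assert.h>

	#define PAGE_SHIFT 12	/* assume 4KiB pages */

	int main(void)
	{
		/* hypothetical layout: one 2MB THP at file offset 0 */
		unsigned long thp_offset = 0, thp_size = 2UL << 20;
		unsigned long lstart = 4096, lend = (1UL << 20) - 1;

		/* the 4KiB page indices differ ... */
		assert((lstart >> PAGE_SHIFT) != (lend >> PAGE_SHIFT));

		/*
		 * ... but both offsets fall inside the one compound
		 * page, i.e. lend < page_offset(page) + thp_size(page)
		 * in the patch above, so the second lookup at 'end'
		 * can be skipped.
		 */
		assert(lend < thp_offset + thp_size);
		return 0;
	}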
