Open Source and information security mailing list archives
 
Date:   Fri, 14 Feb 2020 16:52:48 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 13/25] fs: Add zero_user_large

On Tue, Feb 11, 2020 at 08:18:33PM -0800, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
> 
> We can't kmap() a THP, so add a wrapper around zero_user() for large
> pages.

I would rather address it closer to the root: make zero_user_segments()
handle compound pages.
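The chunking that the suggested zero_user_segments() change would need can be
modeled in userspace. This is only a hedged sketch of the loop structure, not
kernel code: pages[] stands in for the sub-pages of a compound page, memset()
stands in for the per-page kmap()+memset that the kernel helper would do, and
zero_range_compound() is a hypothetical name.

```c
#include <string.h>

#define PAGE_SIZE 4096u

/* Userspace model: zero 'size' bytes starting at byte offset 'start'
 * of a compound page made of nr_pages sub-pages. Each iteration
 * handles the portion of the range that falls in one sub-page. */
static void zero_range_compound(unsigned char pages[][PAGE_SIZE],
				unsigned int nr_pages,
				unsigned int start, unsigned int size)
{
	unsigned int i = start / PAGE_SIZE;	/* first affected sub-page */
	unsigned int off = start % PAGE_SIZE;	/* offset within it */

	for (; size && i < nr_pages; i++) {
		unsigned int chunk = PAGE_SIZE - off;

		if (chunk > size)
			chunk = size;
		/* in the kernel this would be kmap() + memset() + kunmap() */
		memset(&pages[i][off], 0, chunk);
		size -= chunk;
		off = 0;	/* later sub-pages start at offset 0 */
	}
}
```

Computing the first sub-page and intra-page offset up front avoids walking
(and conditionally skipping) every sub-page before the range begins, which is
what the posted loop does.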

> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
> ---
>  include/linux/highmem.h | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
> 
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index ea5cdbd8c2c3..4465b8784353 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -245,6 +245,28 @@ static inline void zero_user(struct page *page,
>  	zero_user_segments(page, start, start + size, 0, 0);
>  }
>  
> +static inline void zero_user_large(struct page *page,
> +		unsigned start, unsigned size)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < thp_order(page); i++) {
> +		if (start > PAGE_SIZE) {

Off-by-one? >= ?
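The boundary case can be checked with a small userspace model of the posted
skip logic (first_zero_call() is a hypothetical name, and only the
page/offset/length selection is modeled): when start == PAGE_SIZE, the range
begins exactly at the next sub-page, so the skip branch should be taken; with
">" it is not, and the first zero_user() call lands on the preceding sub-page
with a degenerate offset of PAGE_SIZE and length 0.

```c
#define PAGE_SIZE 4096u

/* Model of the posted loop: record the (page index, offset, length)
 * that the first zero_user() call would receive. 'ge' selects the
 * ">=" comparison instead of the posted ">". */
static void first_zero_call(unsigned int start, unsigned int size, int ge,
			    unsigned int *page, unsigned int *off,
			    unsigned int *len)
{
	unsigned int i;

	for (i = 0; ; i++) {
		if (ge ? start >= PAGE_SIZE : start > PAGE_SIZE) {
			start -= PAGE_SIZE;	/* range starts past this sub-page */
		} else {
			*page = i;
			*off = start;
			*len = size < PAGE_SIZE - start ? size
							: PAGE_SIZE - start;
			return;
		}
	}
}
```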

> +			start -= PAGE_SIZE;
> +		} else {
> +			unsigned this_size = size;
> +
> +			if (size > (PAGE_SIZE - start))
> +				this_size = PAGE_SIZE - start;
> +			zero_user(page + i, start, this_size);
> +			start = 0;
> +			size -= this_size;
> +			if (!size)
> +				break;
> +		}
> +	}
> +}
> +
>  #ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE
>  
>  static inline void copy_user_highpage(struct page *to, struct page *from,
> -- 
> 2.25.0
> 
> 

-- 
 Kirill A. Shutemov
