Message-ID: <20201014152845.GA1424414@gmail.com>
Date:   Wed, 14 Oct 2020 17:28:45 +0200
From:   Ingo Molnar <mingo@...nel.org>
To:     Ankur Arora <ankur.a.arora@...cle.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        kirill@...temov.name, mhocko@...nel.org,
        boris.ostrovsky@...cle.com, konrad.wilk@...cle.com,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 6/8] mm, clear_huge_page: use clear_page_uncached() for
 gigantic pages


* Ankur Arora <ankur.a.arora@...cle.com> wrote:

> Uncached writes are suitable for circumstances where the region written to
> is not expected to be read again soon, or the region written to is large
> enough that there's no expectation that we will find the writes in the
> cache.
> 
> Accordingly switch to using clear_page_uncached() for gigantic pages.
> 
> Signed-off-by: Ankur Arora <ankur.a.arora@...cle.com>
> ---
>  mm/memory.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index eeae590e526a..4d2c58f83ab1 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -5092,7 +5092,7 @@ static void clear_gigantic_page(struct page *page,
>  	for (i = 0; i < pages_per_huge_page;
>  	     i++, p = mem_map_next(p, page, i)) {
>  		cond_resched();
> -		clear_user_highpage(p, addr + i * PAGE_SIZE);
> +		clear_user_highpage_uncached(p, addr + i * PAGE_SIZE);
>  	}
>  }

So this does the clearing in 4K chunks, and your measurements suggest 
that clearing in such short chunks is not as efficient, right?

I'm wondering whether it would make sense to do 2MB chunked clearing on 
64-bit CPUs, instead of 512x 4K clearing? Both 2MB and GB pages are 
contiguous in memory, so they could be cleared with these instructions 
in a single tight loop.

Thanks,

	Ingo
