Date: Wed, 5 Jun 2024 18:04:19 +0800
From: Alex Shi <seakeel@...il.com>
To: alexs@...nel.org, Andrew Morton <akpm@...ux-foundation.org>,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org,
 izik.eidus@...ellosystems.com, willy@...radead.org, aarcange@...hat.com,
 chrisw@...s-sol.org, hughd@...gle.com, david@...hat.com
Subject: Re: [RFC 3/3] mm/ksm: move flush_anon_page before checksum
 calculation

Let me withdraw this patch. The flush_anon_page() calls came in with the first KSM patchset, with no explanation for them.
Although there is no guarantee against concurrent page writes while flushing, I'll give up on this flush optimization anyway.

Sorry for the disturbance.

Alex

On 6/5/24 5:53 PM, alexs@...nel.org wrote:
> From: "Alex Shi (tencent)" <alexs@...nel.org>
> 
> commit 6020dff09252 ("[ARM] Resolve fuse and direct-IO failures due to missing cache flushes")
> explains that the aim of flush_anon_page() is to keep the cache and memory
> contents synced. Also, as David Hildenbrand pointed out, flushing a page
> without reading its contents here is meaningless, so let's move the flush
> to just before the page contents are read, e.g. by calc_checksum(), instead
> of flushing a page right after finding it with no clear purpose. This should
> save some flush operations while still keeping the page contents safely synced.
> 
> BTW, write_protect_page() does another kind of flush before pages_identical().
> 
> Signed-off-by: Alex Shi (tencent) <alexs@...nel.org>
> ---
>  mm/ksm.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index ef335ee508d3..77e8c1ded9bb 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -784,10 +784,7 @@ static struct page *get_mergeable_page(struct ksm_rmap_item *rmap_item)
>  		goto out;
>  	if (is_zone_device_page(page))
>  		goto out_putpage;
> -	if (PageAnon(page)) {
> -		flush_anon_page(vma, page, addr);
> -		flush_dcache_page(page);
> -	} else {
> +	if (!PageAnon(page)) {
>  out_putpage:
>  		put_page(page);
>  out:
> @@ -2378,7 +2375,12 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
>  		mmap_read_unlock(mm);
>  		return;
>  	}
> +
> +	/* flush page contents before calculating the checksum */
> +	flush_anon_page(vma, page, rmap_item->address);
> +	flush_dcache_page(page);
>  	checksum = calc_checksum(page);
> +
>  	if (rmap_item->oldchecksum != checksum) {
>  		rmap_item->oldchecksum = checksum;
>  		mmap_read_unlock(mm);
> @@ -2662,8 +2664,6 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>  			if (is_zone_device_page(*page))
>  				goto next_page;
>  			if (PageAnon(*page)) {
> -				flush_anon_page(vma, *page, ksm_scan.address);
> -				flush_dcache_page(*page);
>  				rmap_item = get_next_rmap_item(mm_slot,
>  					ksm_scan.rmap_list, ksm_scan.address);
>  				if (rmap_item) {
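
For context, on most architectures flush_anon_page() compiles to nothing; only
architectures with aliasing caches (such as the ARM case in commit 6020dff09252)
provide a real implementation, so the flush calls touched by this (withdrawn)
patch only do real work there. A minimal sketch of the generic fallback, loosely
based on include/linux/highmem.h:

#ifndef ARCH_HAS_FLUSH_ANON_PAGE
/*
 * Generic no-op; aliasing-cache architectures define
 * ARCH_HAS_FLUSH_ANON_PAGE and supply their own version.
 */
static inline void flush_anon_page(struct vm_area_struct *vma,
				   struct page *page, unsigned long vmaddr)
{
}
#endif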
