Date: Tue, 4 Jun 2024 10:12:12 +0200
From: David Hildenbrand <david@...hat.com>
To: alexs@...nel.org, Andrew Morton <akpm@...ux-foundation.org>,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org,
 izik.eidus@...ellosystems.com, willy@...radead.org, aarcange@...hat.com,
 chrisw@...s-sol.org, hughd@...gle.com
Subject: Re: [PATCH 02/10] mm/ksm: skip subpages of compound pages

On 04.06.24 06:24, alexs@...nel.org wrote:
> From: "Alex Shi (tencent)" <alexs@...nel.org>
> 
> When a folio isn't fit for KSM, its subpages are unlikely to be
> either, so let's skip checking the remaining pages to save some work.
> 
> Signed-off-by: Alex Shi (tencent) <alexs@...nel.org>
> ---
>   mm/ksm.c | 9 +++++++--
>   1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 97e5b41f8c4b..e2fdb9dd98e2 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2644,6 +2644,8 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>   		goto no_vmas;
>   
>   	for_each_vma(vmi, vma) {
> +		int nr = 1;
> +
>   		if (!(vma->vm_flags & VM_MERGEABLE))
>   			continue;
>   		if (ksm_scan.address < vma->vm_start)
> @@ -2660,6 +2662,9 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>   				cond_resched();
>   				continue;
>   			}
> +
> +			VM_WARN_ON(PageTail(*page));
> +			nr = compound_nr(*page);
>   			if (is_zone_device_page(*page))
>   				goto next_page;
>   			if (PageAnon(*page)) {
> @@ -2672,7 +2677,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>   					if (should_skip_rmap_item(*page, rmap_item))
>   						goto next_page;
>   
> -					ksm_scan.address += PAGE_SIZE;
> +					ksm_scan.address += nr * PAGE_SIZE;
>   				} else
>   					put_page(*page);
>   				mmap_read_unlock(mm);
> @@ -2680,7 +2685,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>   			}
>   next_page:
>   			put_page(*page);
> -			ksm_scan.address += PAGE_SIZE;
> +			ksm_scan.address += nr * PAGE_SIZE;
>   			cond_resched();
>   		}
>   	}

You might be jumping over pages that don't belong to that folio. What
you would actually want to do is somehow use folio_pte_batch() to
verify that the PTEs really point at the same folio, so you can safely
skip them. But that's not that easy when using follow_page() ...
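
For illustration, a rough sketch of the batching idea. This assumes
the 6.10-era folio_pte_batch() from mm/internal.h (its signature has
changed between releases); the helper and its parameters are made up
for illustration, none of this is existing KSM code:

#include <linux/mm.h>
#include <linux/pgtable.h>
#include "internal.h"	/* folio_pte_batch() is mm-internal */

/*
 * Hypothetical helper: given the PTE that maps @folio at @addr,
 * return the address just past the last consecutive PTE mapping
 * the same folio, so the scan cursor can advance in one step.
 */
static unsigned long ksm_skip_folio_ptes(struct folio *folio,
		unsigned long addr, unsigned long end,
		pte_t *ptep, pte_t pte)
{
	int max_nr = (end - addr) >> PAGE_SHIFT;
	int nr;

	/*
	 * Unlike advancing by compound_nr(), this skips only PTEs
	 * that provably map consecutive pages of this folio, so
	 * unrelated pages mapped in between are never jumped over.
	 */
	nr = folio_pte_batch(folio, addr, ptep, pte, max_nr, 0,
			     NULL, NULL, NULL);
	return addr + nr * PAGE_SIZE;
}

The catch is that scan_get_next_rmap_item() finds its page via
follow_page(), which never hands back the PTE, so there's no ptep/pte
to feed into folio_pte_batch() without reworking the lookup as a
page-table walk.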

So I suggest dropping this change for now.

-- 
Cheers,

David / dhildenb

