Date:   Wed, 17 Aug 2022 16:09:02 +0000
From:   SeongJae Park <sj@...nel.org>
To:     Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc:     sj@...nel.org, akpm@...ux-foundation.org, damon@...ts.linux.dev,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/damon: Validate if the pmd entry is present before accessing

Hi Baolin,


Thank you, as always, for your great patch!

On Wed, 17 Aug 2022 14:21:12 +0800 Baolin Wang <baolin.wang@...ux.alibaba.com> wrote:

> pmd_huge() is used to check whether the pmd entry is mapped by a huge
> page, and it also covers non-present (migration or hwpoisoned) pmd
> entries on the arm64 and x86 architectures. Thus we should check that
> the pmd entry is present before making it old or reading its young
> state; otherwise we cannot get the correct corresponding page.

Maybe I'm missing something, but... I'm unsure whether the page being present
or not really matters from the perspective of access checking.  In that case,
DAMON could simply report the page as accessed once, for the first check after
the page became non-present, if it really was accessed before, and then report
the page as not accessed from that point on, which is true.

Please let me know if I'm missing something.


Thanks,
SJ

> 
> Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
> ---
>  mm/damon/vaddr.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
> index 3c7b9d6..1d16c6c 100644
> --- a/mm/damon/vaddr.c
> +++ b/mm/damon/vaddr.c
> @@ -304,6 +304,11 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
>  
>  	if (pmd_huge(*pmd)) {
>  		ptl = pmd_lock(walk->mm, pmd);
> +		if (!pmd_present(*pmd)) {
> +			spin_unlock(ptl);
> +			return 0;
> +		}
> +
>  		if (pmd_huge(*pmd)) {
>  			damon_pmdp_mkold(pmd, walk->mm, addr);
>  			spin_unlock(ptl);
> @@ -431,6 +436,11 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	if (pmd_huge(*pmd)) {
>  		ptl = pmd_lock(walk->mm, pmd);
> +		if (!pmd_present(*pmd)) {
> +			spin_unlock(ptl);
> +			return 0;
> +		}
> +
>  		if (!pmd_huge(*pmd)) {
>  			spin_unlock(ptl);
>  			goto regular_page;
> -- 
> 1.8.3.1
