Open Source and information security mailing list archives
Message-Id: <20110509164034.164C.A69D9226@jp.fujitsu.com>
Date:	Mon,  9 May 2011 16:38:49 +0900 (JST)
From:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To:	Stephen Wilson <wilsons@...rt.ca>
Cc:	kosaki.motohiro@...fujitsu.com,
	Andrew Morton <akpm@...ux-foundation.org>,
	Alexander Viro <viro@...iv.linux.org.uk>,
	Hugh Dickins <hughd@...gle.com>,
	David Rientjes <rientjes@...gle.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/8] mm: use walk_page_range() instead of custom page table walking code

Hello,

Sorry for the long delay.

> In the specific case of show_numa_map(), the custom page table walking
> logic implemented in mempolicy.c does not provide any special service
> beyond that provided by walk_page_range().
> 
> Also, converting show_numa_map() to use the generic routine decouples
> the function from mempolicy.c, allowing it to be moved out of the mm
> subsystem and into fs/proc.
> 
> Signed-off-by: Stephen Wilson <wilsons@...rt.ca>
> ---
>  mm/mempolicy.c |   53 ++++++++++++++++++++++++++++++++++++++++++++++-------
>  1 files changed, 46 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 5bfb03e..dfe27e3 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2568,6 +2568,22 @@ static void gather_stats(struct page *page, void *private, int pte_dirty)
>  	md->node[page_to_nid(page)]++;
>  }
>  
> +static int gather_pte_stats(pte_t *pte, unsigned long addr,
> +		unsigned long pte_size, struct mm_walk *walk)
> +{
> +	struct page *page;
> +
> +	if (pte_none(*pte))
> +		return 0;
> +
> +	page = pte_page(*pte);
> +	if (!page)
> +		return 0;

The original check_pte_range() has the following logic:

        orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
        do {
                struct page *page;
                int nid;

                if (!pte_present(*pte))
                        continue;
                page = vm_normal_page(vma, addr, *pte);
                if (!page)
                        continue;
                /*
                 * vm_normal_page() filters out zero pages, but there might
                 * still be PageReserved pages to skip, perhaps in a VDSO.
                 * And we cannot move PageKsm pages sensibly or safely yet.
                 */
                if (PageReserved(page) || PageKsm(page))
                        continue;
                gather_stats(page, private, pte_dirty(*pte));

Why did you drop so many of these checks? Is it safe?
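For comparison, a version of the new callback that keeps those checks might look like the sketch below. This is only an illustration, not the patch under review: it assumes walk->private is pointed at a small (hypothetical) struct bundling the stats buffer and the vma, since vm_normal_page() needs the vma but struct mm_walk carries only a single private pointer. It also glosses over the pte locking that the original pte_offset_map_lock() loop provided.

	/*
	 * Sketch only. "struct gather_args" is hypothetical; the caller
	 * would set walk.private to one of these before walk_page_range().
	 */
	struct gather_args {
		struct numa_maps *md;
		struct vm_area_struct *vma;
	};

	static int gather_pte_stats(pte_t *pte, unsigned long addr,
			unsigned long pte_size, struct mm_walk *walk)
	{
		struct gather_args *args = walk->private;
		struct page *page;

		if (!pte_present(*pte))
			return 0;

		/* vm_normal_page() filters out zero pages, as before */
		page = vm_normal_page(args->vma, addr, *pte);
		if (!page)
			return 0;

		/* skip PageReserved (e.g. VDSO) and PageKsm, as before */
		if (PageReserved(page) || PageKsm(page))
			return 0;

		gather_stats(page, args->md, pte_dirty(*pte));
		return 0;
	}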

The other parts look good to me.


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
