Date:	Fri, 1 Feb 2013 16:08:23 -0800
From:	Andrew Morton <akpm@...ux-foundation.org>
To:	Michel Lespinasse <walken@...gle.com>
Cc:	Andrea Arcangeli <aarcange@...hat.com>,
	Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
	Hugh Dickins <hughd@...gle.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] mm: accelerate mm_populate() treatment of THP pages

On Wed, 30 Jan 2013 16:26:19 -0800
Michel Lespinasse <walken@...gle.com> wrote:

> This change adds a page_mask argument to follow_page.
> 
> follow_page sets *page_mask to HPAGE_PMD_NR - 1 when it encounters a THP page,
> and to 0 in other cases.
> 
> __get_user_pages() makes use of this in order to accelerate populating
> THP ranges - that is, when both the pages and vmas arrays are NULL,
> we don't need to iterate HPAGE_PMD_NR times to cover a single THP page
> (and we also avoid taking mm->page_table_lock that many times).
> 
> Other follow_page() call sites can safely ignore the value returned in
> *page_mask.
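
For context, the stride described above might look roughly like this
inside the __get_user_pages() loop.  This is an illustrative sketch,
not code from the patch; names like page_increm are invented here:

	struct page *page;
	long page_mask;
	unsigned long page_increm;

	/* *page_mask comes back as HPAGE_PMD_NR - 1 for a THP, 0 otherwise */
	page = follow_page(vma, start, foll_flags, &page_mask);
	...
	if (!pages && !vmas) {
		/* small pages left in this huge page, current one included */
		page_increm = 1 + (~(start >> PAGE_SHIFT) & page_mask);
		if (page_increm > nr_pages)
			page_increm = nr_pages;
		i += page_increm;
		start += page_increm * PAGE_SIZE;
		nr_pages -= page_increm;
		continue;
	}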

Seems sensible, but the implementation is rather ungainly.

If we're going to add this unused page_mask local to many functions
then let's at least minimize the scope of that variable and explain why
it's there.  For example,

--- a/mm/ksm.c~mm-accelerate-mm_populate-treatment-of-thp-pages-fix
+++ a/mm/ksm.c
@@ -356,9 +356,10 @@ static int break_ksm(struct vm_area_stru
 {
 	struct page *page;
 	int ret = 0;
-	long page_mask;
 
 	do {
+		long page_mask;		/* unused */
+
 		cond_resched();
 		page = follow_page(vma, addr, FOLL_GET, &page_mask);
 		if (IS_ERR_OR_NULL(page))
@@ -454,7 +455,7 @@ static struct page *get_mergeable_page(s
 	unsigned long addr = rmap_item->address;
 	struct vm_area_struct *vma;
 	struct page *page;
-	long page_mask;
+	long page_mask;		/* unused */
 
 	down_read(&mm->mmap_sem);
 	vma = find_mergeable_vma(mm, addr);
@@ -1525,7 +1526,6 @@ static struct rmap_item *scan_get_next_r
 	struct mm_slot *slot;
 	struct vm_area_struct *vma;
 	struct rmap_item *rmap_item;
-	long page_mask;
 	int nid;
 
 	if (list_empty(&ksm_mm_head.mm_list))
@@ -1600,6 +1600,8 @@ next_mm:
 			ksm_scan.address = vma->vm_end;
 
 		while (ksm_scan.address < vma->vm_end) {
+			long page_mask;		/* unused */
+
 			if (ksm_test_exit(mm))
 				break;
 			*page = follow_page(vma, ksm_scan.address, FOLL_GET,



Alternatively, we could permit passing page_mask==NULL, which is a
common convention in the get_user_pages() area.  Possibly that's a
tad more expensive, but it would save a stack slot.
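
Sketched, that variant would look something like this (the extra
branch per store is the "tad more expensive" part):

	/* callers that don't care simply pass NULL ... */
	page = follow_page(vma, addr, FOLL_GET, NULL);

	/* ... and follow_page() guards each store accordingly */
	if (page_mask)
		*page_mask = HPAGE_PMD_NR - 1;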


However, I think the best approach here is to just leave the
follow_page() interface alone.  Rename the present implementation to
follow_page_mask() or whatever and add

static inline struct page *follow_page(args)	/* inline? */
{
	long page_mask;		/* unused */

	return follow_page_mask(args, &page_mask);
}
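
That way every existing caller keeps today's three-argument call and
never sees the mask; only the __get_user_pages() path needs to call
follow_page_mask() directly.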

Also, I see no sense in permitting negative page_mask values, and I
doubt that we'll need 64-bit values here.  Make it `unsigned int'?


Finally, this version of the patch surely breaks the nommu build?
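
Presumably a stub along these lines would be needed in mm/nommu.c
alongside the rename (assuming the nommu follow_page() keeps returning
NULL as it does today; the mask type follows the patch's `long'):

	struct page *follow_page_mask(struct vm_area_struct *vma,
				      unsigned long address, unsigned int flags,
				      long *page_mask)
	{
		*page_mask = 0;		/* no THP without an MMU */
		return NULL;
	}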