Message-Id: <1343385965-7738-2-git-send-email-mgorman@suse.de>
Date:	Fri, 27 Jul 2012 11:46:04 +0100
From:	Mel Gorman <mgorman@...e.de>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Hugh Dickins <hughd@...gle.com>,
	Rik van Riel <riel@...hat.com>,
	Larry Woodman <lwoodman@...hat.com>,
	Michal Hocko <mhocko@...e.cz>, Ken Chen <kenchen@...gle.com>,
	Cong Wang <xiyou.wangcong@...il.com>,
	Linux-MM <linux-mm@...ck.org>,
	LKML <linux-kernel@...r.kernel.org>, Mel Gorman <mgorman@...e.de>
Subject: [PATCH 1/2] Revert "hugetlb: avoid taking i_mmap_mutex in unmap_single_vma() for hugetlb"

This reverts the patch "hugetlb: avoid taking i_mmap_mutex in
unmap_single_vma() for hugetlb" from mmotm.

This patch is possibly a mistake and it blocks the merging of a hugetlb fix
for a case where page tables can get corrupted (https://lkml.org/lkml/2012/7/24/93).
The motivation for the patch appears to be two-fold.

First, it assumes that the i_mmap_mutex is there to protect against list
corruption of page->lru, but that is not quite accurate. The
i_mmap_mutex for shared page tables is meant to protect against races
when sharing and unsharing the page tables. For example, an important
use of i_mmap_mutex is to stabilise the page_count of the PMD page
during huge_pmd_unshare.
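
To illustrate, here is a minimal sketch, not code from this series, of
the locking that huge_pmd_unshare() depends on. huge_pmd_unshare() and
the i_mmap_mutex field are used as in the current mm code; the
surrounding page table walk is reduced to the bare pattern and the
helper name is made up for the example.

/*
 * Sketch only: the caller holds mapping->i_mmap_mutex, so a concurrent
 * huge_pmd_share() cannot change the refcount of the PMD page while
 * huge_pmd_unshare() decides whether this mapping was the last user
 * of the shared page table.
 */
static void sketch_unshare_one(struct vm_area_struct *vma,
			       unsigned long addr, pte_t *ptep)
{
	struct address_space *mapping = vma->vm_file->f_mapping;

	mutex_lock(&mapping->i_mmap_mutex);
	if (huge_pmd_unshare(vma->vm_mm, &addr, ptep)) {
		/* PMD was shared; our reference is dropped, pud cleared */
	} else {
		/* not shared: tear down the huge pte for addr here */
	}
	mutex_unlock(&mapping->i_mmap_mutex);
}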

Second, it is protecting against a potential deadlock when
unmap_single_vma() is called from unmap_mapping_range(). However, hugetlbfs
should never be in this path: it has its own setattr and truncate handlers,
which are the paths that would otherwise use unmap_mapping_range().
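
For reference, a condensed sketch, assuming a hugetlbfs-style truncate
handler (the helper name sketch_hugetlb_vmtruncate is hypothetical and
the VMA walk is elided to a comment); the point is that the filesystem
takes i_mmap_mutex and unmaps the affected ranges itself rather than
going through unmap_mapping_range():

static void sketch_hugetlb_vmtruncate(struct inode *inode, loff_t offset)
{
	struct address_space *mapping = inode->i_mapping;

	i_size_write(inode, offset);
	mutex_lock(&mapping->i_mmap_mutex);
	/*
	 * Walk mapping->i_mmap for every VMA that maps pages beyond
	 * 'offset' (walk elided) and tear those ranges down with the
	 * hugetlb unmap helpers directly; unmap_mapping_range() is
	 * never called for a hugetlbfs inode.
	 */
	mutex_unlock(&mapping->i_mmap_mutex);
	/* finally drop the truncated pages from the page cache */
}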

Unless Aneesh has another reason for the patch, it should be reverted
to preserve the hugetlb page table sharing locking.

Signed-off-by: Mel Gorman <mgorman@...e.de>
---
 mm/memory.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index 8a989f1..22bc695 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1344,8 +1344,11 @@ static void unmap_single_vma(struct mmu_gather *tlb,
 			 * Since no pte has actually been setup, it is
 			 * safe to do nothing in this case.
 			 */
-			if (vma->vm_file)
+			if (vma->vm_file) {
+				mutex_lock(&vma->vm_file->f_mapping->i_mmap_mutex);
 				__unmap_hugepage_range(tlb, vma, start, end, NULL);
+				mutex_unlock(&vma->vm_file->f_mapping->i_mmap_mutex);
+			}
 		} else
 			unmap_page_range(tlb, vma, start, end, details);
 	}
-- 
1.7.9.2

