Date:	Thu, 8 Mar 2012 19:59:27 +0800
From:	Hillf Danton <dhillf@...il.com>
To:	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>
Cc:	David Gibson <david@...son.dropbear.id.au>,
	akpm@...ux-foundation.org, hughd@...gle.com, paulus@...ba.org,
	linux-kernel@...r.kernel.org, Andrew Barry <abarry@...y.com>,
	Mel Gorman <mgorman@...e.de>,
	Minchan Kim <minchan.kim@...il.com>
Subject: Re: [PATCH 2/2] hugepages: Fix use after free bug in "quota" handling

On Thu, Mar 8, 2012 at 12:17 PM, Aneesh Kumar K.V
<aneesh.kumar@...ux.vnet.ibm.com> wrote:
> On Wed, 7 Mar 2012 20:28:39 +0800, Hillf Danton <dhillf@...il.com> wrote:
>> On Wed, Mar 7, 2012 at 12:48 PM, David Gibson
>> > @@ -533,9 +611,9 @@ static void free_huge_page(struct page *page)
>> >         */
>> >        struct hstate *h = page_hstate(page);
>> >        int nid = page_to_nid(page);
>> > -       struct address_space *mapping;
>> > +       struct hugepage_subpool *spool =
>> > +               (struct hugepage_subpool *)page_private(page);
>> >
>> > -       mapping = (struct address_space *) page_private(page);
>> >        set_page_private(page, 0);
>> >        page->mapping = NULL;
>> >        BUG_ON(page_count(page));
>> > @@ -551,8 +629,7 @@ static void free_huge_page(struct page *page)
>> >                enqueue_huge_page(h, page);
>> >        }
>> >        spin_unlock(&hugetlb_lock);
>> > -       if (mapping)
>> > -               hugetlb_put_quota(mapping, 1);
>> > +       hugepage_subpool_put_pages(spool, 1);
>>
>> Like the current code, quota is handed back *unconditionally*, but ...
>
>
> We will end up doing get_quota for every allocated page. get_quota
> happens either during mmap(), if MAP_NORESERVE is not specified, or
> during alloc_huge_page() if we haven't done a quota reservation during
> mmap for that range. Are you finding any part of the code where we miss that?
>
>
Thank you, Aneesh; I worked it out along the lines of your question.

Unlike David's approach, quota is no longer reclaimed when pages are freed
but when they are truncated; we end up with a smaller .text size, and the
use-after-free bug is fixed as well.
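
To make the pairing explicit, here is a rough, self-contained userspace model
of the accounting (hypothetical names only, not the kernel API): quota is taken
once per page, either at mmap-time reservation or at fault-time allocation, and
handed back only when the file range is truncated, never from the page-free path.

/*
 * Simplified userspace sketch of the quota lifecycle (illustrative names,
 * not the hugetlb code itself).
 */
#include <assert.h>
#include <stdio.h>

struct quota {
	long used;	/* pages currently charged against the quota */
	long limit;	/* maximum pages allowed */
};

static int quota_get(struct quota *q, long npages)
{
	if (q->used + npages > q->limit)
		return -1;		/* over quota */
	q->used += npages;
	return 0;
}

static void quota_put(struct quota *q, long npages)
{
	assert(q->used >= npages);	/* every put must match a prior get */
	q->used -= npages;
}

int main(void)
{
	struct quota q = { .used = 0, .limit = 4 };

	/* mmap() without MAP_NORESERVE: charge the whole range up front. */
	if (quota_get(&q, 2))
		return 1;

	/* Fault on an unreserved range (MAP_NORESERVE): charge per page. */
	if (quota_get(&q, 1))
		return 1;

	/* Freeing a page does NOT release quota in this model ... */

	/* ... only truncating the file range hands it back. */
	quota_put(&q, 3);

	printf("quota used after truncate: %ld\n", q.used);	/* prints 0 */
	return 0;
}

In this sketch every get is balanced by a put issued only from the truncate
path, which mirrors the direction taken by the patch below.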

-hd

--- a/mm/hugetlb.c	Mon Mar  5 20:20:34 2012
+++ b/mm/hugetlb.c	Thu Mar  8 19:47:26 2012
@@ -533,9 +533,7 @@ static void free_huge_page(struct page *
 	 */
 	struct hstate *h = page_hstate(page);
 	int nid = page_to_nid(page);
-	struct address_space *mapping;

-	mapping = (struct address_space *) page_private(page);
 	set_page_private(page, 0);
 	page->mapping = NULL;
 	BUG_ON(page_count(page));
@@ -551,8 +549,6 @@ static void free_huge_page(struct page *
 		enqueue_huge_page(h, page);
 	}
 	spin_unlock(&hugetlb_lock);
-	if (mapping)
-		hugetlb_put_quota(mapping, 1);
 }

 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
@@ -2944,8 +2940,8 @@ void hugetlb_unreserve_pages(struct inod
 	inode->i_blocks -= (blocks_per_huge_page(h) * freed);
 	spin_unlock(&inode->i_lock);

-	hugetlb_put_quota(inode->i_mapping, (chg - freed));
-	hugetlb_acct_memory(h, -(chg - freed));
+	hugetlb_put_quota(inode->i_mapping, chg);
+	hugetlb_acct_memory(h, -chg);
 }

 #ifdef CONFIG_MEMORY_FAILURE
--