Date:	Thu, 22 Dec 2011 17:36:04 +0100
From:	Michal Hocko <mhocko@...e.cz>
To:	Hillf Danton <dhillf@...il.com>
Cc:	linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Subject: Re: [PATCH] mm: hugetlb: undo change to page mapcount in fault
 handler

On Thu 22-12-11 21:36:34, Hillf Danton wrote:
> Page mapcount is changed only when it is folded into page table entry.

The changelog is rather cryptic. What about something like:

Page mapcount should be updated only if we are sure that the page ends
up in the page table; otherwise we would leak the mapcount if we
couldn't COW due to reservations or if idx is out of bounds.

The patch itself looks correct.
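
To make the reasoning concrete outside of the hugetlb code, here is a
minimal user-space sketch of the same pattern (all names here -- struct
obj, try_install, checks_pass -- are hypothetical, not kernel API): the
accounting step is moved after the last check that can still fail, so
the bail-out paths never have to undo it.

	#include <stdio.h>

	struct obj {
		int mapcount;	/* analogous to the page mapcount */
		int installed;	/* analogous to the pte being populated */
	};

	static int checks_pass(const struct obj *o)
	{
		/* stand-in for the reservation / huge_pte_none() checks */
		return !o->installed;
	}

	static int try_install(struct obj *o)
	{
		if (!checks_pass(o))
			return -1;	/* bail out: no accounting done, nothing to undo */

		o->mapcount++;		/* account only once success is guaranteed */
		o->installed = 1;
		return 0;
	}

	int main(void)
	{
		struct obj o = { 0, 0 };

		try_install(&o);	/* succeeds: mapcount becomes 1 */
		try_install(&o);	/* fails: mapcount stays 1, no leak */
		printf("mapcount=%d installed=%d\n", o.mapcount, o.installed);
		return 0;
	}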

> 
> Cc: Michal Hocko <mhocko@...e.cz>
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Signed-off-by: Hillf Danton <dhillf@...il.com>

Reviewed-by: Michal Hocko <mhocko@...e.cz>

Thanks
> ---
> 
> --- a/mm/hugetlb.c	Tue Dec 20 21:26:30 2011
> +++ b/mm/hugetlb.c	Thu Dec 22 21:29:42 2011
> @@ -2509,6 +2509,7 @@ static int hugetlb_no_page(struct mm_str
>  {
>  	struct hstate *h = hstate_vma(vma);
>  	int ret = VM_FAULT_SIGBUS;
> +	int anon_rmap = 0;
>  	pgoff_t idx;
>  	unsigned long size;
>  	struct page *page;
> @@ -2563,14 +2564,13 @@ retry:
>  			spin_lock(&inode->i_lock);
>  			inode->i_blocks += blocks_per_huge_page(h);
>  			spin_unlock(&inode->i_lock);
> -			page_dup_rmap(page);
>  		} else {
>  			lock_page(page);
>  			if (unlikely(anon_vma_prepare(vma))) {
>  				ret = VM_FAULT_OOM;
>  				goto backout_unlocked;
>  			}
> -			hugepage_add_new_anon_rmap(page, vma, address);
> +			anon_rmap = 1;
>  		}
>  	} else {
>  		/*
> @@ -2583,7 +2583,6 @@ retry:
>  			      VM_FAULT_SET_HINDEX(h - hstates);
>  			goto backout_unlocked;
>  		}
> -		page_dup_rmap(page);
>  	}
> 
>  	/*
> @@ -2607,6 +2606,10 @@ retry:
>  	if (!huge_pte_none(huge_ptep_get(ptep)))
>  		goto backout;
> 
> +	if (anon_rmap)
> +		hugepage_add_new_anon_rmap(page, vma, address);
> +	else
> +		page_dup_rmap(page);
>  	new_pte = make_huge_pte(vma, page, ((vma->vm_flags & VM_WRITE)
>  				&& (vma->vm_flags & VM_SHARED)));
>  	set_huge_pte_at(mm, address, ptep, new_pte);

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic
