Message-ID: <20150609190737.GV13008@uranus>
Date:	Tue, 9 Jun 2015 22:07:37 +0300
From:	Cyrill Gorcunov <gorcunov@...il.com>
To:	Minchan Kim <minchan@...nel.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Hugh Dickins <hughd@...gle.com>,
	Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
	Michal Hocko <mhocko@...e.cz>,
	Johannes Weiner <hannes@...xchg.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Pavel Emelyanov <xemul@...allels.com>,
	Yalin Wang <yalin.wang@...ymobile.com>
Subject: Re: [RFC 3/6] mm: mark dirty bit on swapped-in page

On Wed, Jun 03, 2015 at 03:15:42PM +0900, Minchan Kim wrote:
> Basically, MADV_FREE relies on the dirty bit in the page table entry
> to decide whether the VM is allowed to discard the page or not.
> IOW, if the page table entry has the dirty bit set, the VM shouldn't
> discard the page.
> 
> However, if a swap-in happens via a read fault, the page table entry
> pointing to the page doesn't have the dirty bit set, so MADV_FREE
> might discard the page wrongly.
> 
> To fix the problem, this patch marks the page table entry of a
> swapped-in page as dirty so the VM won't suddenly discard the page
> under us.
> 
> From the MADV_FREE point of view, marking the PTE dirty unconditionally
> is not a problem because we dropped the swapped page in the MADV_FREE
> syscall context (ie, look at madvise_free_pte_range), so no swapped-in
> page is a MADV_FREE-hinted page.
> 
> Cc: Hugh Dickins <hughd@...gle.com>
> Cc: Cyrill Gorcunov <gorcunov@...il.com>
> Cc: Pavel Emelyanov <xemul@...allels.com>
> Reported-by: Yalin Wang <yalin.wang@...ymobile.com>
> Signed-off-by: Minchan Kim <minchan@...nel.org>
> ---
>  mm/memory.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 8a2fc9945b46..d1709f763152 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2557,9 +2557,11 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
>  
>  	inc_mm_counter_fast(mm, MM_ANONPAGES);
>  	dec_mm_counter_fast(mm, MM_SWAPENTS);
> -	pte = mk_pte(page, vma->vm_page_prot);
> +
> +	/* Mark dirty bit of page table because MADV_FREE relies on it */
> +	pte = pte_mkdirty(mk_pte(page, vma->vm_page_prot));
>  	if ((flags & FAULT_FLAG_WRITE) && reuse_swap_page(page)) {
> -		pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> +		pte = maybe_mkwrite(pte, vma);
>  		flags &= ~FAULT_FLAG_WRITE;
>  		ret |= VM_FAULT_WRITE;
>  		exclusive = 1;

Hi Minchan! Really sorry for the delay in replying. Look, there is one
point I don't understand: if the page is faulted in by a read, then
before this patch the PTE would not carry the dirty flag, but now we
set it unconditionally. That looks somewhat strange to me, at least
because pte_mkdirty() also sets the soft-dirty bit on pages which were
not modified but only swapped out. Am I missing something obvious?
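
For reference, the coupling comes from pte_mkdirty() itself: on x86 the
helper folds the soft-dirty bit in together with the hardware dirty bit.
Roughly the following (a paraphrased sketch of
arch/x86/include/asm/pgtable.h, not a quote from any particular tree):

	/*
	 * Sketch of the x86 helper: _PAGE_SOFT_DIRTY is a real PTE bit
	 * only when CONFIG_MEM_SOFT_DIRTY is enabled, otherwise the
	 * macro expands to 0 and only the hardware dirty bit is set.
	 */
	static inline pte_t pte_mkdirty(pte_t pte)
	{
		return pte_set_flags(pte, _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
	}

So with the hunk above, a page that was merely swapped out and read back
would show up as soft-dirty in /proc/pid/pagemap even though userspace
never wrote to it.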