Date:   Tue, 7 Feb 2017 17:19:56 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     Zi Yan <zi.yan@...t.com>, Andrea Arcangeli <aarcange@...hat.com>,
        Minchan Kim <minchan@...nel.org>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        kirill.shutemov@...ux.intel.com, akpm@...ux-foundation.org,
        minchan@...nel.org, vbabka@...e.cz, mgorman@...hsingularity.net,
        n-horiguchi@...jp.nec.com, khandual@...ux.vnet.ibm.com,
        zi.yan@...rutgers.edu, Zi Yan <ziy@...dia.com>
Subject: Re: [PATCH v3 03/14] mm: use pmd lock instead of racy checks in
 zap_pmd_range()

On Sun, Feb 05, 2017 at 11:12:41AM -0500, Zi Yan wrote:
> From: Zi Yan <ziy@...dia.com>
> 
> Originally, zap_pmd_range() checks the pmd value without taking the pmd
> lock. This can cause a pmd_protnone entry to never be freed.
> 
> Changing a pmd entry into a pmd_protnone entry takes two steps: first
> the pmd entry is cleared to a pmd_none entry, then the pmd_none entry
> is changed into a pmd_protnone entry.
> The racy check, even with a barrier, might only see the pmd_none entry
> in zap_pmd_range(), so the mapping is neither split nor zapped.

Okay, this can only happen with MADV_DONTNEED, as we hold
down_write(mmap_sem) for the rest of the zap_pmd_range() callers, and
whoever modifies page tables has to hold at least down_read(mmap_sem) or
exclude parallel modification in other ways.

See 1a5a9906d4e8 ("mm: thp: fix pmd_bad() triggering in code paths holding
mmap_sem read mode") for more details.

+Andrea.

> Later, in free_pmd_range(), pmd_none_or_clear_bad() will see the
> pmd_protnone entry as a pmd_bad entry and clear it. Furthermore,
> since the pmd_protnone entry is not properly freed, the corresponding
> deposited pte page table is not freed either.

free_pmd_range() should be fine: we only free page tables after vmas are
gone (under down_write(mmap_sem) in exit_mmap() and unmap_region()) or
after page tables have moved (under down_write(mmap_sem) in
shift_arg_pages()).

> This causes a memory leak or, if VM_BUG_ON() is enabled, a kernel crash.

The problem is that numabalancing calls change_huge_pmd() under
down_read(mmap_sem), not down_write(mmap_sem) as the rest of the users do.
That makes numabalancing the only code path beyond the page fault that can
turn pmd_none() into pmd_trans_huge() under down_read(mmap_sem).

This can lead to a race where MADV_DONTNEED misses a THP. That's not
critical for the pagefault vs. MADV_DONTNEED race, as we will end up with
a clear page in that case. Not so for change_huge_pmd().

Looks like we need pmdp_modify() or something similar to modify protection
bits in place, without clearing the pmd.

Not sure how to get a crash scenario.

BTW, Zi, have you observed the crash? Or is it based on code inspection?
Any backtraces?

Ouch! madvise_free_huge_pmd() is broken too. We shouldn't clear the pmd in
the middle of it, as we only hold down_read(mmap_sem). I guess we need a
helper to clear both the access and dirty bits.
Minchan, could you look into it?

-- 
 Kirill A. Shutemov
