Message-ID: <CAG48ez3yLkMcyaTXFt_+w8_-HtmrjW=XB51DDQSGdjPj43XWmA@mail.gmail.com>
Date:   Fri, 12 Oct 2018 05:21:32 +0200
From:   Jann Horn <jannh@...gle.com>
To:     joel@...lfernandes.org
Cc:     kernel list <linux-kernel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>, kernel-team@...roid.com,
        Minchan Kim <minchan@...gle.com>,
        Hugh Dickins <hughd@...gle.com>, lokeshgidra@...gle.com,
        Andrew Morton <akpm@...ux-foundation.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Kate Stewart <kstewart@...uxfoundation.org>,
        pombredanne@...b.com, Thomas Gleixner <tglx@...utronix.de>,
        Boris Ostrovsky <boris.ostrovsky@...cle.com>,
        Juergen Gross <jgross@...e.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Radim Krčmář <rkrcmar@...hat.com>,
        kvm@...r.kernel.org
Subject: Re: [PATCH] mm: Speed up mremap on large regions

+cc xen maintainers and kvm folks

On Fri, Oct 12, 2018 at 4:40 AM Joel Fernandes (Google)
<joel@...lfernandes.org> wrote:
> Android needs to mremap large regions of memory during memory management
> related operations. The mremap system call can be really slow if THP is
> not enabled. The bottleneck is move_page_tables, which copies one pte at
> a time and can be really slow across a large map. Turning on THP may not
> be a viable option, and it is not for us. This patch speeds up mremap
> performance on non-THP systems by copying at the PMD level when possible.
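
(For scale: with 4K pages and 2M pmds, moving a 1G region means 262144
individual pte copies, but only 512 pmd entries.)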
[...]
> +bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> +                 unsigned long new_addr, unsigned long old_end,
> +                 pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
> +{
[...]
> +       /*
> +        * We don't have to worry about the ordering of src and dst
> +        * ptlocks because exclusive mmap_sem prevents deadlock.
> +        */
> +       old_ptl = pmd_lock(vma->vm_mm, old_pmd);
> +       if (old_ptl) {
> +               pmd_t pmd;
> +
> +               new_ptl = pmd_lockptr(mm, new_pmd);
> +               if (new_ptl != old_ptl)
> +                       spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> +
> +               /* Clear the pmd */
> +               pmd = *old_pmd;
> +               pmd_clear(old_pmd);
> +
> +               VM_BUG_ON(!pmd_none(*new_pmd));
> +
> +               /* Set the new pmd */
> +               set_pmd_at(mm, new_addr, new_pmd, pmd);
> +               if (new_ptl != old_ptl)
> +                       spin_unlock(new_ptl);
> +               spin_unlock(old_ptl);

How does this interact with Xen PV? From a quick look at the Xen PV
integration code in xen_alloc_ptpage(), it looks to me as if, in a
config that doesn't use split ptlocks, this is going to temporarily
drop Xen's type count for the page to zero, causing Xen to de-validate
and then re-validate the L1 pagetable; if you first set the new pmd
before clearing the old one, that wouldn't happen. I don't know how
this interacts with shadow paging implementations.
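
I.e. something like this (untested sketch, reusing the helpers and locals
from the quoted hunk, with only the clear/set order swapped):

		pmd_t pmd;

		pmd = *old_pmd;
		VM_BUG_ON(!pmd_none(*new_pmd));

		/* Install the entry at the new location first... */
		set_pmd_at(mm, new_addr, new_pmd, pmd);
		/*
		 * ...then drop the old reference, so the pagetable page's
		 * type count under Xen PV never transiently hits zero.
		 */
		pmd_clear(old_pmd);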

> +               *need_flush = true;
> +               return true;
> +       }
> +       return false;
> +}
