Message-ID: <CAJuCfpEHJTqG+PkAPJknf5_41ZKFqjk8pY=gTg_VZgsfY-=9Tg@mail.gmail.com>
Date:   Wed, 29 Dec 2021 21:59:55 -0800
From:   Suren Baghdasaryan <surenb@...gle.com>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     Michal Hocko <mhocko@...e.com>, akpm@...ux-foundation.org,
        rientjes@...gle.com, willy@...radead.org, guro@...com,
        riel@...riel.com, minchan@...nel.org, kirill@...temov.name,
        aarcange@...hat.com, christian@...uner.io, hch@...radead.org,
        oleg@...hat.com, david@...hat.com, jannh@...gle.com,
        shakeelb@...gle.com, luto@...nel.org, christian.brauner@...ntu.com,
        fweimer@...hat.com, jengelh@...i.de, timmurray@...gle.com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        kernel-team@...roid.com
Subject: Re: [PATCH 4/3] mm: drop MMF_OOM_SKIP from exit_mmap

On Thu, Dec 16, 2021 at 9:23 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> On Thu, Dec 16, 2021 at 3:49 AM Johannes Weiner <hannes@...xchg.org> wrote:
> >
> > On Wed, Dec 15, 2021 at 06:26:11PM -0800, Suren Baghdasaryan wrote:
> > > On Thu, Dec 9, 2021 at 9:06 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
> > > >
> > > > On Thu, Dec 9, 2021 at 8:47 AM Michal Hocko <mhocko@...e.com> wrote:
> > > > >
> > > > > On Thu 09-12-21 08:24:04, Suren Baghdasaryan wrote:
> > > > > > On Thu, Dec 9, 2021 at 1:12 AM Michal Hocko <mhocko@...e.com> wrote:
> > > > > > >
> > > > > > > Do we want this on top?
> > > > > >
> > > > > > As we discussed in this thread
> > > > > > https://lore.kernel.org/all/YY4snVzZZZYhbigV@dhcp22.suse.cz,
> > > > > > __oom_reap_task_mm in exit_mmap allows oom-reaper/process_mrelease to
> > > > > > unmap pages in parallel with exit_mmap without blocking each other.
> > > > > > Removal of __oom_reap_task_mm from exit_mmap prevents this parallelism
> > > > > > and has a negative impact on performance. So I thought the conclusion
> > > > > > of that thread was to keep that part. My understanding is that we
> > > > > > also wanted to remove MMF_OOM_SKIP as a follow-up patch but
> > > > > > __oom_reap_task_mm would stay.
> > > > >
> > > > > OK, then we were talking past each other, I am afraid. I really wanted
> > > > > to get rid of this oom specific stuff from exit_mmap. It was there out
> > > > > of necessity. With proper locking we can finally get rid of the crud.
> > > > > As I've said previously, oom reaping has never been a hot path.
> > > > >
> > > > > If we really want to optimize this path then I would much rather see a
> > > > > generic solution which would allow moving the write lock down after
> > > > > unmap_vmas. That would require the oom reaper to be able to handle
> > > > > mlocked memory.
> > > >
> > > > Ok, let's work on that and when that's done we can get rid of the oom
> > > > stuff in exit_mmap. I'll look into this over the weekend and will
> > > > likely be back with questions.
> > >
> > > As promised, I have a question:
> > > Any particular reason why munlock_vma_pages_range clears VM_LOCKED
> > > before unlocking pages and not after (see:
> > > https://elixir.bootlin.com/linux/latest/source/mm/mlock.c#L424)? Seems
> > > to me if VM_LOCKED was reset at the end (with proper ordering) then
> > > __oom_reap_task_mm would correctly skip VM_LOCKED vmas.
> > > https://lore.kernel.org/lkml/20180514064824.534798031@linuxfoundation.org/
> > > has this explanation:
> > >
> > > "Since munlock_vma_pages_range() depends on clearing VM_LOCKED from
> > > vm_flags before actually doing the munlock to determine if any other
> > > vmas are locking the same memory, the check for VM_LOCKED in the oom
> > > reaper is racy."
> > >
> > > but "to determine if any other vmas are locking the same memory"
> > > explanation eludes me... Any insights?
> >
> > A page's mlock state is determined by whether any of the vmas that map
> > it are mlocked. The munlock code does:
> >
> > vma->vm_flags &= VM_LOCKED_CLEAR_MASK
> > TestClearPageMlocked()
> > isolate_lru_page()
> > __munlock_isolated_page()
> >   page_mlock()
> >     rmap_walk() # for_each_vma()
> >       page_mlock_one()
> >         (vma->vm_flags & VM_LOCKED) && TestSetPageMlocked()
> >
> > If we didn't clear the VM_LOCKED flag first, racing threads could
> > re-lock pages under us because they see that flag and think our vma
> > wants those pages mlocked when we're in the process of munlocking.
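
To make sure I'm reading this right, here is a simplified sketch of that
check (illustrative only; the helper name below is made up and this is
not the exact body of page_mlock_one()):

    /*
     * Sketch of the rmap-walk check from the call chain above: a page
     * is re-marked mlocked if *any* vma mapping it still carries
     * VM_LOCKED.  Without clearing VM_LOCKED first, this test would
     * match the very vma that is in the middle of munlocking.
     */
    static bool page_mlock_one_sketch(struct page *page,
                                      struct vm_area_struct *vma)
    {
        if (vma->vm_flags & VM_LOCKED) {
            /* re-mark the page mlocked on behalf of this vma */
            TestSetPageMlocked(page);
            return false;   /* page stays mlocked, stop the walk */
        }
        return true;        /* keep checking the remaining vmas */
    }

So clearing VM_LOCKED up front is what keeps this walk from re-mlocking
the pages being released, and it is also why the VM_LOCKED check in the
oom reaper can race with a munlock that is still in flight.
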
>
> Thanks for the explanation, Johannes!
> So far I haven't found an easy way to let __oom_reap_task_mm() run
> concurrently with unlock_range(). Will keep exploring.

After some more digging I think there are two acceptable options:

1. Call unlock_range() under mmap_write_lock and then downgrade it to a
read lock so that exit_mmap() and __oom_reap_task_mm() can unmap vmas in
parallel, like this:

    if (mm->locked_vm) {
        /* munlock still happens under the write lock, as elsewhere... */
        mmap_write_lock(mm);
        unlock_range(mm->mmap, ULONG_MAX);
        /* ...but downgrade before unmapping so the oom reaper can run
         * in parallel with us from here on */
        mmap_write_downgrade(mm);
    } else
        mmap_read_lock(mm);
...
    /* unmap under the read lock, concurrently with __oom_reap_task_mm() */
    unmap_vmas(&tlb, vma, 0, -1);
    mmap_read_unlock(mm);
    /* retake the write lock to free page tables and tear down the vma list */
    mmap_write_lock(mm);
    free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
...
    mm->mmap = NULL;
    mmap_write_unlock(mm);

This way exit_mmap() might still block __oom_reap_task_mm(), but only
for the much shorter duration of the unlock_range() call.

2. Introduce another vm_flags mask similar to VM_LOCKED which gets set
before munlock_vma_pages_range() clears VM_LOCKED, so that
__oom_reap_task_mm() can identify vmas that are being munlocked and skip
them; a rough sketch follows.
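
(Illustrative only: VM_MUNLOCK_IN_PROGRESS is a name I just made up, it
does not exist today, and a real patch would still need to find a free
vm_flags bit and get the memory ordering right.)

    /* in munlock_vma_pages_range(), before VM_LOCKED is dropped: */
    vma->vm_flags |= VM_MUNLOCK_IN_PROGRESS;
    vma->vm_flags &= VM_LOCKED_CLEAR_MASK;
...
    /* once the pages have actually been munlocked */
    vma->vm_flags &= ~VM_MUNLOCK_IN_PROGRESS;

    /* in __oom_reap_task_mm(), skip vmas that are locked or being
     * munlocked: */
    if (vma->vm_flags & (VM_LOCKED | VM_MUNLOCK_IN_PROGRESS))
        continue;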

Option 1 seems cleaner to me because it keeps the locking pattern around
unlock_range() in exit_mmap() consistent with the other places it is
used (mremap() and munmap()), where it is called with mmap_write_lock
taken.
WDYT?

