Open Source and information security mailing list archives
 
Date:   Thu, 30 Dec 2021 09:24:04 +0100
From:   Michal Hocko <mhocko@...e.com>
To:     Suren Baghdasaryan <surenb@...gle.com>
Cc:     Johannes Weiner <hannes@...xchg.org>, akpm@...ux-foundation.org,
        rientjes@...gle.com, willy@...radead.org, guro@...com,
        riel@...riel.com, minchan@...nel.org, kirill@...temov.name,
        aarcange@...hat.com, christian@...uner.io, hch@...radead.org,
        oleg@...hat.com, david@...hat.com, jannh@...gle.com,
        shakeelb@...gle.com, luto@...nel.org, christian.brauner@...ntu.com,
        fweimer@...hat.com, jengelh@...i.de, timmurray@...gle.com,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        kernel-team@...roid.com
Subject: Re: [PATCH 4/3] mm: drop MMF_OOM_SKIP from exit_mmap

On Wed 29-12-21 21:59:55, Suren Baghdasaryan wrote:
[...]
> After some more digging I think there are two acceptable options:
> 
> 1. Call unlock_range() under mmap_write_lock and then downgrade it to
> read lock so that both exit_mmap() and __oom_reap_task_mm() can unmap
> vmas in parallel like this:
> 
>     if (mm->locked_vm) {
>         mmap_write_lock(mm);
>         unlock_range(mm->mmap, ULONG_MAX);
>         mmap_write_downgrade(mm);
>     } else
>         mmap_read_lock(mm);
> ...
>     unmap_vmas(&tlb, vma, 0, -1);
>     mmap_read_unlock(mm);
>     mmap_write_lock(mm);
>     free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
> ...
>     mm->mmap = NULL;
>     mmap_write_unlock(mm);
> 
> This way exit_mmap() might block __oom_reap_task_mm() but for a much
> shorter time during unlock_range() call.

IIRC unlock_range depends on the page lock at some stage, which means
this can block for a long time, or forever, if the holder of that lock
is itself waiting on a memory allocation. This was the primary reason
why the oom reaper skips over mlocked vmas.

> 2. Introduce another vm_flag mask similar to VM_LOCKED which is set
> before munlock_vma_pages_range() clears VM_LOCKED so that
> __oom_reap_task_mm() can identify vmas being unlocked and skip them.
> 
> Option 1 seems cleaner to me because it keeps the locking pattern
> around unlock_range() in exit_mmap() consistent with all other places
> it is used (in mremap() and munmap()) with mmap_write_lock taken.
> WDYT?

It would be really great to make unlock_range oom-reaper-aware, IMHO.

You do not quote your change in full, so it is not really clear whether
you are planning to drop __oom_reap_task_mm from exit_mmap as well. If
so, then 1) could push the oom reaper to time out while unlock_range is
blocked on something, so that would not be an improvement. 2) sounds
like a workaround to me, as it does not really address the underlying
problem.

I have to say that I am not really a great fan of __oom_reap_task_mm in
exit_mmap, but I would rather see it stay in place than make the
surrounding code more complex/tricky.

-- 
Michal Hocko
SUSE Labs
