Message-Id: <20220519122904.5aef389360ee2cba42d7b4ca@linux-foundation.org>
Date: Thu, 19 May 2022 12:29:04 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: mhocko@...e.com, rientjes@...gle.com, willy@...radead.org,
hannes@...xchg.org, guro@...com, minchan@...nel.org,
kirill@...temov.name, aarcange@...hat.com, brauner@...nel.org,
hch@...radead.org, oleg@...hat.com, david@...hat.com,
jannh@...gle.com, shakeelb@...gle.com, peterx@...hat.com,
jhubbard@...dia.com, shuah@...nel.org,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
kernel-team@...roid.com, Liam Howlett <liam.howlett@...cle.com>
Subject: Re: [PATCH v2 1/2] mm: drop oom code from exit_mmap
On Mon, 16 May 2022 00:56:18 -0700 Suren Baghdasaryan <surenb@...gle.com> wrote:
> The primary reason to invoke the oom reaper from the exit_mmap path used
> to be to prevent excessive oom killing if the oom victim's exit races
> with the oom reaper (see [1] for more details). The invocation has
> moved around since then because of the interaction with the munlock
> logic, but the underlying reason has remained the same (see [2]).
>
> The munlock code is no longer a problem since [3], and there shouldn't be
> any blocking operation before the memory is unmapped by exit_mmap, so
> the oom reaper invocation can be dropped. The unmapping part can be done
> with the non-exclusive mmap_sem; the exclusive one is only required
> when page tables are freed.
>
> Remove the oom_reaper from exit_mmap, which will make the code easier to
> read. This is unlikely to make any observable difference, although some
> microbenchmarks could benefit from one fewer branch that needs to be
> evaluated even though it is almost never true.
>
Liam, this mucks "mm: start tracking VMAs with maple tree" somewhat.
> --- a/include/linux/oom.h
> +++ b/include/linux/oom.h
> @@ -106,8 +106,6 @@ static inline vm_fault_t check_stable_address_space(struct mm_struct *mm)
>  	return 0;
>  }
>
> -bool __oom_reap_task_mm(struct mm_struct *mm);
> -
>  long oom_badness(struct task_struct *p,
>  		unsigned long totalpages);
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 313b57d55a63..ded42150e706 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -3105,30 +3105,13 @@ void exit_mmap(struct mm_struct *mm)
>  	/* mm's last user has gone, and its about to be pulled down */
>  	mmu_notifier_release(mm);
>
> -	if (unlikely(mm_is_oom_victim(mm))) {
> -		/*
> -		 * Manually reap the mm to free as much memory as possible.
> -		 * Then, as the oom reaper does, set MMF_OOM_SKIP to disregard
> -		 * this mm from further consideration. Taking mm->mmap_lock for
> -		 * write after setting MMF_OOM_SKIP will guarantee that the oom
> -		 * reaper will not run on this mm again after mmap_lock is
> -		 * dropped.
> -		 *
> -		 * Nothing can be holding mm->mmap_lock here and the above call
> -		 * to mmu_notifier_release(mm) ensures mmu notifier callbacks in
> -		 * __oom_reap_task_mm() will not block.
> -		 */
> -		(void)__oom_reap_task_mm(mm);
> -		set_bit(MMF_OOM_SKIP, &mm->flags);
> -	}
> -
> -	mmap_write_lock(mm);
> +	mmap_read_lock(mm);
>  	arch_exit_mmap(mm);
>
>  	vma = mm->mmap;
>  	if (!vma) {
>  		/* Can happen if dup_mmap() received an OOM */
> -		mmap_write_unlock(mm);
> +		mmap_read_unlock(mm);
>  		return;
>  	}
>
> @@ -3138,6 +3121,16 @@ void exit_mmap(struct mm_struct *mm)
>  	/* update_hiwater_rss(mm) here? but nobody should be looking */
>  	/* Use -1 here to ensure all VMAs in the mm are unmapped */
>  	unmap_vmas(&tlb, vma, 0, -1);
> +	mmap_read_unlock(mm);
> +
> +	/*
> +	 * Set MMF_OOM_SKIP to hide this task from the oom killer/reaper
> +	 * because the memory has been already freed. Do not bother checking
> +	 * mm_is_oom_victim because setting a bit unconditionally is cheaper.
> +	 */
> +	set_bit(MMF_OOM_SKIP, &mm->flags);
> +
> +	mmap_write_lock(mm);
>  	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
>  	tlb_finish_mmu(&tlb);
>
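For context, the reaper side this synchronizes against looks roughly like
the below - a simplified sketch of oom_reap_task_mm() from mm/oom_kill.c,
with the tracepoints and MMF_UNSTABLE handling omitted. The reaper only
ever works under the read lock and bails out once MMF_OOM_SKIP is set,
which is why setting the bit and then cycling the write lock is enough to
fence off a reaper that is still running:

static bool oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
{
	bool ret = true;

	/* exit_mmap() holds the lock for write while freeing page tables */
	if (!mmap_read_trylock(mm))
		return false;

	/* The memory was already freed (or is being freed) by exit_mmap() */
	if (test_bit(MMF_OOM_SKIP, &mm->flags))
		goto out_unlock;

	/* Unmap the victim's private anonymous memory */
	ret = __oom_reap_task_mm(mm);

out_unlock:
	mmap_read_unlock(mm);
	return ret;
}
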
I ended up with the below rework of "mm: start tracking VMAs with maple
tree". Please triple check?
void exit_mmap(struct mm_struct *mm)
{
	struct mmu_gather tlb;
	struct vm_area_struct *vma;
	unsigned long nr_accounted = 0;

	/* mm's last user has gone, and its about to be pulled down */
	mmu_notifier_release(mm);

	mmap_write_lock(mm);
	arch_exit_mmap(mm);

	vma = mm->mmap;
	if (!vma) {
		/* Can happen if dup_mmap() received an OOM */
		mmap_write_unlock(mm);
		return;
	}

	lru_add_drain();
	flush_cache_mm(mm);
	tlb_gather_mmu_fullmm(&tlb, mm);
	/* update_hiwater_rss(mm) here? but nobody should be looking */
	/* Use -1 here to ensure all VMAs in the mm are unmapped */
	unmap_vmas(&tlb, vma, 0, -1);

	/*
	 * Set MMF_OOM_SKIP to hide this task from the oom killer/reaper
	 * because the memory has been already freed. Do not bother checking
	 * mm_is_oom_victim because setting a bit unconditionally is cheaper.
	 */
	set_bit(MMF_OOM_SKIP, &mm->flags);

	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
	tlb_finish_mmu(&tlb);

	/* Walk the list again, actually closing and freeing it. */
	while (vma) {
		if (vma->vm_flags & VM_ACCOUNT)
			nr_accounted += vma_pages(vma);
		vma = remove_vma(vma);
		cond_resched();
	}

	trace_exit_mmap(mm);
	__mt_destroy(&mm->mm_mt);
	mmap_write_unlock(mm);
	vm_unacct_memory(nr_accounted);
}
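
For reference, remove_vma() in the walk above is the mm/mmap.c helper
that closes and frees one VMA and hands back the next one on the linked
list, roughly (simplified):

static struct vm_area_struct *remove_vma(struct vm_area_struct *vma)
{
	struct vm_area_struct *next = vma->vm_next;

	might_sleep();
	if (vma->vm_ops && vma->vm_ops->close)
		vma->vm_ops->close(vma);
	if (vma->vm_file)
		fput(vma->vm_file);
	mpol_put(vma_policy(vma));
	vm_area_free(vma);
	return next;
}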
And "mm: remove the vma linked list" needed further reworking. I ended
up with
void exit_mmap(struct mm_struct *mm)
{
	struct mmu_gather tlb;
	struct vm_area_struct *vma;
	unsigned long nr_accounted = 0;
	MA_STATE(mas, &mm->mm_mt, 0, 0);
	int count = 0;

	/* mm's last user has gone, and its about to be pulled down */
	mmu_notifier_release(mm);

	mmap_write_lock(mm);
	arch_exit_mmap(mm);

	vma = mas_find(&mas, ULONG_MAX);
	if (!vma) {
		/* Can happen if dup_mmap() received an OOM */
		mmap_write_unlock(mm);
		return;
	}

	lru_add_drain();
	flush_cache_mm(mm);
	tlb_gather_mmu_fullmm(&tlb, mm);
	/* update_hiwater_rss(mm) here? but nobody should be looking */
	/* Use ULONG_MAX here to ensure all VMAs in the mm are unmapped */
	unmap_vmas(&tlb, &mm->mm_mt, vma, 0, ULONG_MAX);

	/*
	 * Set MMF_OOM_SKIP to hide this task from the oom killer/reaper
	 * because the memory has been already freed. Do not bother checking
	 * mm_is_oom_victim because setting a bit unconditionally is cheaper.
	 */
	set_bit(MMF_OOM_SKIP, &mm->flags);

	free_pgtables(&tlb, &mm->mm_mt, vma, FIRST_USER_ADDRESS,
		      USER_PGTABLES_CEILING);
	tlb_finish_mmu(&tlb);

	/*
	 * Walk the list again, actually closing and freeing it, with preemption
	 * enabled, without holding any MM locks besides the unreachable
	 * mmap_write_lock.
	 */
	do {
		if (vma->vm_flags & VM_ACCOUNT)
			nr_accounted += vma_pages(vma);
		remove_vma(vma);
		count++;
		cond_resched();
	} while ((vma = mas_find(&mas, ULONG_MAX)) != NULL);

	BUG_ON(count != mm->map_count);

	trace_exit_mmap(mm);
	__mt_destroy(&mm->mm_mt);
	mmap_write_unlock(mm);
	vm_unacct_memory(nr_accounted);
}
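
The MA_STATE()/mas_find() pair above is the standard maple tree iteration
idiom: MA_STATE() declares an iterator over mm->mm_mt starting at index 0,
and each mas_find() call returns the next VMA up to the given maximum, or
NULL when the tree is exhausted. A minimal standalone sketch of the idiom,
using the mas_for_each() convenience wrapper and a hypothetical
count_vma_pages() helper purely for illustration:

#include <linux/maple_tree.h>
#include <linux/mm.h>

/* Hypothetical helper: sum the pages covered by every VMA in @mm. */
static unsigned long count_vma_pages(struct mm_struct *mm)
{
	MA_STATE(mas, &mm->mm_mt, 0, 0);	/* iterate from address 0 */
	struct vm_area_struct *vma;
	unsigned long pages = 0;

	/* mas_for_each() wraps the repeated mas_find() calls */
	mas_for_each(&mas, vma, ULONG_MAX)
		pages += vma_pages(vma);

	return pages;
}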
The mapletree patches remain hidden from mm.git until, I expect, next week.
Thanks.