Message-ID: <20180326081356.GA5652@dhcp22.suse.cz>
Date: Mon, 26 Mar 2018 10:13:56 +0200
From: Michal Hocko <mhocko@...nel.org>
To: jglisse@...hat.com
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
David Rientjes <rientjes@...gle.com>,
Dan Williams <dan.j.williams@...el.com>,
Joerg Roedel <joro@...tes.org>,
Christian König <christian.koenig@....com>,
Paolo Bonzini <pbonzini@...hat.com>,
Leon Romanovsky <leonro@...lanox.com>,
Artemy Kovalyov <artemyko@...lanox.com>,
Evgeny Baskakov <ebaskakov@...dia.com>,
Ralph Campbell <rcampbell@...dia.com>,
Mark Hairgrove <mhairgrove@...dia.com>,
John Hubbard <jhubbard@...dia.com>,
Mike Marciniszyn <mike.marciniszyn@...el.com>,
Dennis Dalessandro <dennis.dalessandro@...el.com>,
Alex Deucher <alexander.deucher@....com>,
Sudeep Dutt <sudeep.dutt@...el.com>,
Ashutosh Dixit <ashutosh.dixit@...el.com>,
Dimitri Sivanich <sivanich@....com>
Subject: Re: [RFC PATCH 0/3] mmu_notifier contextual information

I haven't read through the whole thread; I just wanted to clarify the
OOM aspect.

On Fri 23-03-18 13:17:45, jglisse@...hat.com wrote:
[...]
> OOM is also an interesting case: recently a patchset was added to
> avoid OOM on a mm if a blocking mmu_notifier listener has been
> registered [1].

This is not quite right. We only skip the oom _reaper_ (aka the async
oom victim address space tear down). We still allow such a task to be
selected as an OOM victim and killed. So the worst case is that we
might kill another task if the current victim is not able to make
forward progress on its own.
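
For reference, a minimal sketch of the reaper-side check described
above, assuming the ~4.16 code paths (mm_has_blockable_invalidate_notifiers(),
mm->mmap_sem). It is simplified and not the actual oom_reap_task_mm()
implementation; the point is that only the async tear down backs off,
the victim itself has already been killed:

#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/rwsem.h>

/*
 * Simplified sketch (not the exact kernel code): the victim has already
 * been selected and killed; only the async address space tear down is
 * skipped when a potentially blocking mmu_notifier is registered.
 */
static bool oom_reap_mm_sketch(struct mm_struct *mm)
{
	if (!down_read_trylock(&mm->mmap_sem))
		return false;	/* mmap_sem contended, retry later */

	/*
	 * Notifiers whose invalidate callbacks may sleep make it unsafe
	 * to unmap the address space from the reaper context.
	 */
	if (mm_has_blockable_invalidate_notifiers(mm)) {
		up_read(&mm->mmap_sem);
		return true;	/* give up on reaping, the victim stays killed */
	}

	/* ... unmap the victim's private/anonymous memory here ... */

	up_read(&mm->mmap_sem);
	return true;
}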

> This can be improved by adding a new OOM event type and having
> listeners take a special path for those. All the mmu_notifier users I
> know of can easily have a special path for OOM that does not block
> (besides taking a short-lived, cross-driver spinlock). If mmu_notifier
> usage grows (in the sense of more processes using devices that rely
> on them) then we should also make sure OOM can do its bidding.

If we can distinguish the OOM path and enforce that it takes no locks
and has no indirect dependencies on memory allocation, then the
situation would certainly improve.
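
To illustrate what such a non-blocking OOM path could look like on the
driver side, here is a hypothetical sketch. The event argument and the
MY_MMU_EVENT_OOM value are illustrative only, not the API this patchset
adds; the idea is that an OOM-triggered invalidation could get away with
a short spinlock instead of waiting for device work to drain:

#include <linux/mmu_notifier.h>
#include <linux/spinlock.h>

/* Illustrative event type a contextual callback could receive. */
enum my_mmu_event {
	MY_MMU_EVENT_DEFAULT,
	MY_MMU_EVENT_OOM,
};

struct my_mirror {
	struct mmu_notifier	notifier;
	spinlock_t		lock;
	bool			range_valid;
	/* ... device specific state ... */
};

static void my_invalidate_range_start(struct mmu_notifier *mn,
				      struct mm_struct *mm,
				      unsigned long start,
				      unsigned long end,
				      enum my_mmu_event event)
{
	struct my_mirror *m = container_of(mn, struct my_mirror, notifier);

	if (event == MY_MMU_EVENT_OOM) {
		/* Non-blocking path: just mark the mirrored range stale. */
		spin_lock(&m->lock);
		m->range_valid = false;
		spin_unlock(&m->lock);
		return;
	}

	/*
	 * Normal path: may sleep, e.g. flush device TLBs and wait for
	 * in-flight work touching [start, end) to finish.
	 */
}
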
--
Michal Hocko
SUSE Labs