Message-Id: <201707172250.DFE18753.VOSMOFOFFLQHtJ@I-love.SAKURA.ne.jp>
Date: Mon, 17 Jul 2017 22:50:47 +0900
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To: mhocko@...nel.org
Cc: linux-mm@...ck.org, hannes@...xchg.org, rientjes@...gle.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm/page_alloc: Wait for oom_lock before retrying.

Michal Hocko wrote:
> On Sun 16-07-17 19:59:51, Tetsuo Handa wrote:
> > Since the memory reclaim path was never designed to handle scheduling
> > priority inversions, any location which assumes that some code path will
> > eventually complete without using a synchronization mechanism can get
> > stuck (livelock) due to a scheduling priority inversion, because CPU
> > time is not guaranteed to be yielded to the thread executing that code
> > path.
> >
> > The pair of mutex_trylock() in __alloc_pages_may_oom() (waiting for
> > oom_lock) and schedule_timeout_killable(1) in out_of_memory() (called
> > with oom_lock already held) is one such location, and it was
> > demonstrated with an artificial stress load that the system gets stuck
> > effectively forever, because a SCHED_IDLE priority thread is unable to
> > resume execution at schedule_timeout_killable(1) while many !SCHED_IDLE
> > priority threads are consuming CPU time [1].
>
> I do not understand this. All the contending tasks will go and sleep for
> 1s. How can they preempt the lock holder?
Not 1s. They sleep for only 1 jiffy, which is 1ms if CONFIG_HZ=1000.
And 1ms may not be long enough for the owner of oom_lock to make progress
when there are many threads doing the same thing. I demonstrated that a
SCHED_IDLE oom_lock owner is completely starved by a bunch of !SCHED_IDLE
contending threads. (A simplified sketch of both code paths and a userspace
illustration of the starvation follow below.)
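
For reference, here is a simplified sketch of the two locations in
question, based on the then-current mm/page_alloc.c and mm/oom_kill.c.
This is not verbatim kernel source; the surrounding logic is elided.

/* mm/page_alloc.c (simplified) */
static inline struct page *
__alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
                      const struct alloc_context *ac,
                      unsigned long *did_some_progress)
{
        struct page *page;

        /*
         * If somebody else already holds oom_lock, pretend that progress
         * was made, sleep for one jiffy (1ms at CONFIG_HZ=1000) and let
         * the caller retry the allocation.
         */
        if (!mutex_trylock(&oom_lock)) {
                *did_some_progress = 1;
                schedule_timeout_uninterruptible(1);
                return NULL;
        }
        /* ... call out_of_memory() with oom_lock held, set page ... */
        mutex_unlock(&oom_lock);
        return page;
}

/* mm/oom_kill.c (simplified); called with oom_lock held */
bool out_of_memory(struct oom_control *oc)
{
        /* ... select and kill a victim ... */
        /*
         * Give the victim a chance to exit. This sleep happens with
         * oom_lock still held; if the sleeping thread is rarely
         * scheduled, every allocating thread keeps looping through the
         * trylock failure path above.
         */
        schedule_timeout_killable(1);
        return !!oc->chosen;
}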
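
And here is a hypothetical userspace illustration of that starvation
(this is not the reproducer used in [1]; the thread count and busy-loop
length are arbitrary). One SCHED_IDLE thread repeatedly sleeps 1ms while
holding a mutex, mimicking schedule_timeout_killable(1) with oom_lock
held; many default-priority threads fail trylock, sleep 1ms and then burn
CPU, mimicking retried allocations doing direct reclaim. Build with
gcc -pthread; once all CPUs are saturated, the "holder made progress"
message appears rarely, if ever.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Plays the oom_lock owner: sleeps ~1 jiffy while holding the lock. */
static void *holder(void *unused)
{
        struct sched_param sp = { .sched_priority = 0 };

        pthread_setschedparam(pthread_self(), SCHED_IDLE, &sp);
        for (;;) {
                pthread_mutex_lock(&lock);
                usleep(1000);
                pthread_mutex_unlock(&lock);
                fprintf(stderr, "holder made progress\n");
        }
        return NULL;
}

/* Plays __alloc_pages_may_oom() callers: on trylock failure, sleep
 * ~1 jiffy, then burn CPU before reaching the trylock again. */
static void *contender(void *unused)
{
        for (;;) {
                if (!pthread_mutex_trylock(&lock)) {
                        pthread_mutex_unlock(&lock);
                } else {
                        volatile unsigned long i;

                        usleep(1000);
                        for (i = 0; i < 50000000; i++)
                                ;
                }
        }
        return NULL;
}

int main(void)
{
        long i, n = sysconf(_SC_NPROCESSORS_ONLN) * 8;
        pthread_t t;

        pthread_create(&t, NULL, holder, NULL);
        for (i = 0; i < n; i++)
                pthread_create(&t, NULL, contender, NULL);
        pause();
        return 0;
}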