Message-ID: <20151023111145.GH2410@dhcp22.suse.cz>
Date: Fri, 23 Oct 2015 13:11:45 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Tejun Heo <htejun@...il.com>
Cc: Christoph Lameter <cl@...ux.com>,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
torvalds@...ux-foundation.org,
David Rientjes <rientjes@...gle.com>, oleg@...hat.com,
kwalker@...hat.com, akpm@...ux-foundation.org, hannes@...xchg.org,
vdavydov@...allels.com, skozina@...hat.com, mgorman@...e.de,
riel@...hat.com
Subject: Re: [PATCH] mm,vmscan: Use accurate values for zone_reclaimable() checks

On Fri 23-10-15 19:36:30, Tejun Heo wrote:
> Hello, Michal.
>
> On Fri, Oct 23, 2015 at 10:33:16AM +0200, Michal Hocko wrote:
> > Ohh, OK I can see wq_worker_sleeping now. I've missed your point in
> > other email, sorry about that. But now I am wondering whether this
> > is an intended behavior. The documentation says:
>
> This is.
>
> > WQ_MEM_RECLAIM
> >
> > All wq which might be used in the memory reclaim paths _MUST_
> > have this flag set. The wq is guaranteed to have at least one
> > execution context regardless of memory pressure.
> >
> > Which doesn't seem to be true currently, right? Now I can see your patch
>
> It is true.
>
> > to introduce WQ_IMMEDIATE but I am wondering which WQ_MEM_RECLAIM users
> > could do without WQ_IMMEDIATE? I mean all current workers might be
> > looping in the page allocator and it seems possible that WQ_MEM_RECLAIM
> > work items might be waiting behind them so they cannot help to relieve
> > the memory pressure. This doesn't sound right to me. Or I am completely
> > confused and still fail to understand what is WQ_MEM_RECLAIM supposed to
> > be used for.
>
> It guarantees that there always is enough execution resource to
> execute a work item from that workqueue.
OK, strictly speaking the rescuer is there, but it is kind of pointless
if it never fires up and does any work.
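(Just to make sure we are talking about the same thing, this is the
pattern I have in mind -- a minimal made-up sketch, not taken from any
real driver; the wq name and init function are invented:)

#include <linux/module.h>
#include <linux/workqueue.h>

/* hypothetical wq used somewhere on the reclaim/writeback path */
static struct workqueue_struct *my_reclaim_wq;

static int __init my_example_init(void)
{
	/*
	 * WQ_MEM_RECLAIM makes alloc_workqueue() create a dedicated
	 * rescuer kthread up front, so that at least one execution
	 * context exists even when no new workers can be forked under
	 * memory pressure.
	 */
	my_reclaim_wq = alloc_workqueue("my_reclaim_wq", WQ_MEM_RECLAIM, 0);
	if (!my_reclaim_wq)
		return -ENOMEM;
	return 0;
}
module_init(my_example_init);

My understanding was that the rescuer created here is what keeps such a
wq making progress no matter what, which is why the current behavior
surprised me.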
> The problem here is not lack
> of execution resource but concurrency management misunderstanding the
> situation.
And this sounds like a bug to me.
> This also can be fixed by teaching concurrency management
> to be a bit smarter - e.g. if a work item is burning a lot of CPU
> cycles continuously or pool hasn't finished a work item over a certain
> amount of time, automatically ignore the in-flight work item for the
> purpose of concurrency management; however, this sort of inter-work
> item busy waits are so extremely rare and undesirable that I'm not
> sure the added complexity would be worthwhile.
Don't we have some IO-related paths which would suffer from the same
problem? I haven't checked all the WQ_MEM_RECLAIM users, but from the
name I would expect they _do_ participate in the reclaim and so they
should be able to make progress. Now if your new IMMEDIATE flag
guarantees that, then I would argue that it should be implicit in
WQ_MEM_RECLAIM; otherwise we always risk a similar situation. What
would be a counter-argument for doing that?
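(Again a made-up sketch of the scenario I am worried about -- the
function and wq names here are illustrative only:)

#include <linux/workqueue.h>

/* hypothetical IO completion work which frees pages once it runs */
static void io_done_fn(struct work_struct *work)
{
	/* ... complete the writeback and release the pages ... */
}
static DECLARE_WORK(io_done_work, io_done_fn);

static struct workqueue_struct *io_wq;

static void example_submit(void)
{
	/* many block/fs drivers do something like this on their IO path */
	io_wq = alloc_workqueue("io_complete", WQ_MEM_RECLAIM, 0);
	if (!io_wq)
		return;
	queue_work(io_wq, &io_done_work);
	/*
	 * If another work item on the same per-CPU pool keeps looping in
	 * the page allocator without sleeping, concurrency management
	 * considers the pool busy, the rescuer is not woken, and
	 * io_done_work -- the one which would actually free memory -- is
	 * left waiting behind it.
	 */
}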
--
Michal Hocko
SUSE Labs