Message-ID: <20151106001648.GA18183@mtj.duckdns.org>
Date: Thu, 5 Nov 2015 19:16:48 -0500
From: Tejun Heo <htejun@...il.com>
To: Christoph Lameter <cl@...ux.com>
Cc: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
mhocko@...nel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, torvalds@...ux-foundation.org,
rientjes@...gle.com, oleg@...hat.com, kwalker@...hat.com,
akpm@...ux-foundation.org, hannes@...xchg.org,
vdavydov@...allels.com, skozina@...hat.com, mgorman@...e.de,
riel@...hat.com
Subject: Re: [PATCH] mm,vmscan: Use accurate values for zone_reclaimable() checks
Hello,
On Thu, Nov 05, 2015 at 11:45:42AM -0600, Christoph Lameter wrote:
> Sorry but we need work queue processing for vmstat counters that is
I made this analogy before, but this is similar to looping with
preemption off. If anything on a workqueue stays RUNNING without
making forward progress, it's buggy. I'd venture to say that any code
which busy-loops without making forward progress on a time scale
noticeable to human beings is borderline buggy too. If things need to
be retried on that time scale, putting a short sleep between attempts
is the sensible thing to do. There's no point in occupying the CPU
and burning cycles without making forward progress.
These things actually matter. The freezer used to burn cycles this
way and was really good at burning off the last remaining battery
reserve during emergency hibernation if freezing took some amount of
time.
It is true that, as it currently stands, this is error-prone because
workqueue can't detect these conditions and warn about them. The same
goes for workqueues which sit in the memory reclaim path but forget
WQ_MEM_RECLAIM. I'm going to add lockup detection, similar to the
softlockup detector, but that's a separate issue, so please update
the code.
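For the reclaim-path case, the fix on the workqueue user's side is to
pass WQ_MEM_RECLAIM when the queue is created, which guarantees the
queue a rescuer thread even when forking new workers would itself need
memory. A sketch (the "vmstat" queue name is illustrative, not a
reference to existing code):

```c
/* A workqueue whose work items must make forward progress during
 * memory reclaim has to be allocated with WQ_MEM_RECLAIM. */
static struct workqueue_struct *vmstat_wq;

static int __init vmstat_wq_init(void)
{
	vmstat_wq = alloc_workqueue("vmstat", WQ_MEM_RECLAIM, 0);
	if (!vmstat_wq)
		return -ENOMEM;
	return 0;
}
```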
Thanks.
--
tejun