Message-ID: <20130911154057.GA16765@teo>
Date: Wed, 11 Sep 2013 08:40:57 -0700
From: Anton Vorontsov <anton@...msg.org>
To: Michal Hocko <mhocko@...e.cz>
Cc: Hugh Dickins <hughd@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] vmpressure: fix divide-by-0 in vmpressure_work_fn
On Mon, Sep 09, 2013 at 01:08:47PM +0200, Michal Hocko wrote:
> On Fri 06-09-13 22:59:16, Hugh Dickins wrote:
> > Hit divide-by-0 in vmpressure_work_fn(): checking vmpr->scanned before
> > taking the lock is not enough; we must check scanned afterwards too.
>
> As vmpressure_work_fn seems to be the only place where we set scanned
> to 0 (except for the rare occasion when scanned overflows, which
> would be really surprising), the only possible way to hit this would
> be two instances of vmpressure_work_fn racing over the same work
> item. system_wq is !WQ_NON_REENTRANT, so one work item might be
> processed by multiple workers on different CPUs. This means that the
> vmpr->scanned check at the beginning of vmpressure_work_fn is
> inherently racy.
>
> Hugh's patch obviously fixes the issue, but wouldn't it make more
> sense to move the initial vmpr->scanned check under the lock instead?
>
> Anton, what was the initial motivation for doing the check outside
> the lock? Does it really optimize anything?
Thanks a lot for the explanation.
Answering your question: the idea was to keep the locked section as short
as possible, but the section is quite small anyway, so I doubt the early
check makes any measurable difference (during development I could not
measure any effect of the vmpressure() calls on my system, though the
system itself was quite small).
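
For concreteness, moving the check under the lock in vmpressure_work_fn
would look roughly like this (just a sketch, not a tested patch; the
sr_lock/scanned/reclaimed names are from my reading of mm/vmpressure.c):

	spin_lock(&vmpr->sr_lock);
	scanned = vmpr->scanned;
	if (!scanned) {
		/* Another worker already consumed and reset the counters. */
		spin_unlock(&vmpr->sr_lock);
		return;
	}
	reclaimed = vmpr->reclaimed;
	vmpr->scanned = 0;
	vmpr->reclaimed = 0;
	spin_unlock(&vmpr->sr_lock);
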
So I am happy with either moving the check under the lock as above, or
moving the work onto its own WQ_NON_REENTRANT workqueue, along the lines
of the sketch below.
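
(Again only a sketch; vmpressure_wq and vmpressure_wq_init() are
hypothetical names, and the queue_work() call would replace the current
schedule_work() in vmpressure().)

/*
 * Hypothetical: a dedicated non-reentrant workqueue, allocated once at
 * init time, so that a given vmpr->work cannot be run by two workers
 * concurrently.
 */
static struct workqueue_struct *vmpressure_wq;

static int __init vmpressure_wq_init(void)
{
	vmpressure_wq = alloc_workqueue("vmpressure", WQ_NON_REENTRANT, 0);
	return vmpressure_wq ? 0 : -ENOMEM;
}
core_initcall(vmpressure_wq_init);

/* ... and in vmpressure(), instead of schedule_work(&vmpr->work): */
queue_work(vmpressure_wq, &vmpr->work);
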
Anton