Message-ID: <52E2F6B7.3050304@redhat.com>
Date: Fri, 24 Jan 2014 18:26:47 -0500
From: Rik van Riel <riel@...hat.com>
To: Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>
CC: Tejun Heo <tj@...nel.org>, Mel Gorman <mgorman@...e.de>,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [patch 0/2] mm: reduce reclaim stalls with heavy anon and dirty cache
On 01/24/2014 05:51 PM, Johannes Weiner wrote:
> On Fri, Jan 24, 2014 at 02:30:03PM -0800, Andrew Morton wrote:
>> On Fri, 24 Jan 2014 17:03:02 -0500 Johannes Weiner <hannes@...xchg.org> wrote:
>>
>>> Tejun reported stuttering and latency spikes on a system where random
>>> tasks would enter direct reclaim and get stuck on dirty pages. Around
>>> 50% of memory was occupied by tmpfs backed by an SSD, and another disk
>>> (rotating) was reading and writing at max speed to shrink a partition.
>>
>> Do you think this is serious enough to squeeze these into 3.14?
>
> We have been biasing towards cache reclaim at least as far back as the
> LRU split and we always considered anon dirtyable, so it's not really
> a *new* problem. And there is a chance of regressing write bandwidth
> for certain workloads by effectively shrinking their dirty limit -
> although that is easily fixed by changing dirty_ratio.
>
> On the other hand, the stuttering is pretty nasty (could reproduce it
> locally too) and the workload is not exactly esoteric. Plus, I'm not
> sure if waiting will increase the test exposure.
>
> So 3.14 would work for me, unless Mel and Rik have concerns.
3.14 would be fine, indeed.
On the other hand, if there are enough user reports of the stuttering
problem on older kernels, a -stable backport could be appropriate
too...
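
[For readers unfamiliar with the dirty_ratio knob Johannes mentions: it is
exposed via sysctl/procfs. The snippet below is an illustrative sketch of
inspecting and changing it; the value 40 is an arbitrary example, not a
recommendation from this thread.]

```shell
# Read the current global dirty limit (percent of dirtyable memory).
# If these patches effectively shrink a workload's dirty limit and hurt
# its write bandwidth, raising vm.dirty_ratio is the suggested remedy.
cat /proc/sys/vm/dirty_ratio

# Change it at runtime (requires root); 40 is only an example value:
#   sysctl -w vm.dirty_ratio=40
# Or persist it across reboots via /etc/sysctl.conf:
#   vm.dirty_ratio = 40
```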
--
All rights reversed