Message-ID: <20131126021546.GW3556@cmpxchg.org>
Date: Mon, 25 Nov 2013 21:15:46 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Dave Chinner <david@...morbit.com>, Rik van Riel <riel@...hat.com>,
Jan Kara <jack@...e.cz>, Vlastimil Babka <vbabka@...e.cz>,
Peter Zijlstra <peterz@...radead.org>,
Tejun Heo <tj@...nel.org>, Andi Kleen <andi@...stfloor.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Greg Thelen <gthelen@...gle.com>,
Christoph Hellwig <hch@...radead.org>,
Hugh Dickins <hughd@...gle.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Mel Gorman <mgorman@...e.de>,
Minchan Kim <minchan.kim@...il.com>,
Michel Lespinasse <walken@...gle.com>,
Seth Jennings <sjenning@...ux.vnet.ibm.com>,
Roman Gushchin <klamm@...dex-team.ru>,
Ozgun Erdogan <ozgun@...usdata.com>,
Metin Doslu <metin@...usdata.com>, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 7/9] mm: thrash detection-based file cache sizing
On Mon, Nov 25, 2013 at 03:50:11PM -0800, Andrew Morton wrote:
> On Sun, 24 Nov 2013 18:38:26 -0500 Johannes Weiner <hannes@...xchg.org> wrote:
>
> > ...
> >
> > + * Access frequency and refault distance
> > + *
> > + * A workload is thrashing when its pages are frequently used but they
> > + * are evicted from the inactive list every time before another access
> > + * would have promoted them to the active list.
> > + *
> > + * In cases where the average access distance between thrashing pages
> > + * is bigger than the size of memory there is nothing that can be
> > + * done - the thrashing set could never fit into memory under any
> > + * circumstance.
> > + *
> > + * However, the average access distance could be bigger than the
> > + * inactive list, yet smaller than the size of memory. In this case,
> > + * the set could fit into memory if it weren't for the currently
> > + * active pages - which may be used more, hopefully less frequently:
> > + *
> > + *      +-memory available to cache-+
> > + *      |                           |
> > + *      +-inactive------+-active----+
> > + *  a b | c d e f g h i | J K L M N |
> > + *      +---------------+-----------+
>
> So making the inactive list smaller will worsen this problem?
Only if the inactive list size is a factor in detecting repeatedly
used pages.  This patch series is all about removing that dependency
and using non-residency information to cover the deficit that a small
inactive list would otherwise create.
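
To illustrate how the non-residency information is meant to work,
here is a simplified sketch with made-up names - not the patch code
itself.  The real thing packs the eviction snapshot into the empty
page cache radix tree slot and keeps the counters per zone; here a
single global counter and a plain value stand in for that:

#include <stdbool.h>

/*
 * Hypothetical sketch: one global eviction counter instead of the
 * per-zone bookkeeping, and the eviction snapshot is passed around
 * as a value instead of being stored in the radix tree slot.
 */
static unsigned long nonresident_evictions;

/* Called when a file page is reclaimed from the inactive list. */
static unsigned long remember_eviction(void)
{
	return nonresident_evictions++;
}

/*
 * Called when a previously evicted page faults back in.  The distance
 * the counter travelled while the page was gone is how much additional
 * inactive list space the page would have needed to stay resident.
 * That space can only come out of the active list, so only distances
 * up to the active list size justify activating the refaulted page.
 */
static bool refault_should_activate(unsigned long eviction_snapshot,
				    unsigned long nr_active_file)
{
	unsigned long refault_distance;

	refault_distance = nonresident_evictions - eviction_snapshot;

	return refault_distance <= nr_active_file;
}

The point being that the activation decision depends only on how far
the eviction counter travelled while the page was gone, not on how
big the inactive list happens to be at the time.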
> If so, don't we have a conflict with this objective:
>
> > Right now we have a fixed ratio (50:50) between inactive and active
> > list but we already have complaints about working sets exceeding half
> > of memory being pushed out of the cache by simple streaming in the
> > background. Ultimately, we want to adjust this ratio and allow for a
> > much smaller inactive list.
No, this IS the objective. The patches get us there by being able to
detect repeated references with an arbitrary inactive list size.
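
To put rough numbers on the diagram from the patch: the inactive list
holds 7 pages (c-i), the active list 5 (J-N), 12 pages of cache in
total.  A workload cycling over, say, 9 pages re-uses each page too
far apart for the 7-page inactive list - so its pages keep getting
evicted like a and b - but comfortably within the 12 pages of memory.
With refault detection, such a page comes back with a refault distance
of about 2 (the 9-page set minus the 7 inactive slots), well within
the 5-page active list, so it gets to challenge J-N instead of being
evicted over and over.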
> > + * It is prohibitively expensive to accurately track access frequency
> > + * of pages. But a reasonable approximation can be made to measure
> > + * thrashing on the inactive list, after which refaulting pages can be
> > + * activated optimistically to compete with the existing active pages.
> > + *
> > + * Approximating inactive page access frequency - Observations:
> > + *
> > + * 1. When a page is accesed for the first time, it is added to the
>
> "accessed"
Whoopsa :-) Will fix that up.