lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20121213100704.GV1009@suse.de>
Date:	Thu, 13 Dec 2012 10:07:05 +0000
From:	Mel Gorman <mgorman@...e.de>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	Rik van Riel <riel@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Michal Hocko <mhocko@...e.cz>, Hugh Dickins <hughd@...gle.com>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [patch 1/8] mm: memcg: only evict file pages when we have plenty

On Wed, Dec 12, 2012 at 05:28:44PM -0500, Johannes Weiner wrote:
> On Wed, Dec 12, 2012 at 04:53:36PM -0500, Rik van Riel wrote:
> > On 12/12/2012 04:43 PM, Johannes Weiner wrote:
> > >dc0422c "mm: vmscan: only evict file pages when we have plenty" makes

You are using some internal tree for that commit. Now that it's upstream
it is commit e9868505987a03a26a3979f27b82911ccc003752.

> > >a point of not going for anonymous memory while there is still enough
> > >inactive cache around.
> > >
> > >The check was added only for global reclaim, but it is just as useful
> > >for memory cgroup reclaim.
> > >
> > >Signed-off-by: Johannes Weiner <hannes@...xchg.org>
> > >---
> > >  mm/vmscan.c | 19 ++++++++++---------
> > >  1 file changed, 10 insertions(+), 9 deletions(-)
> > >
> > >diff --git a/mm/vmscan.c b/mm/vmscan.c
> > >index 157bb11..3874dcb 100644
> > >--- a/mm/vmscan.c
> > >+++ b/mm/vmscan.c
> > >@@ -1671,6 +1671,16 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
> > >  		denominator = 1;
> > >  		goto out;
> > >  	}
> > >+	/*
> > >+	 * There is enough inactive page cache, do not reclaim
> > >+	 * anything from the anonymous working set right now.
> > >+	 */
> > >+	if (!inactive_file_is_low(lruvec)) {
> > >+		fraction[0] = 0;
> > >+		fraction[1] = 1;
> > >+		denominator = 1;
> > >+		goto out;
> > >+	}
> > >
> > >  	anon  = get_lru_size(lruvec, LRU_ACTIVE_ANON) +
> > >  		get_lru_size(lruvec, LRU_INACTIVE_ANON);
> > >@@ -1688,15 +1698,6 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
> > >  			fraction[1] = 0;
> > >  			denominator = 1;
> > >  			goto out;
> > >-		} else if (!inactive_file_is_low_global(zone)) {
> > >-			/*
> > >-			 * There is enough inactive page cache, do not
> > >-			 * reclaim anything from the working set right now.
> > >-			 */
> > >-			fraction[0] = 0;
> > >-			fraction[1] = 1;
> > >-			denominator = 1;
> > >-			goto out;
> > >  		}
> > >  	}
> > >
> > >
> > 
> > I believe the if() block should be moved to AFTER
> > the check where we make sure we actually have enough
> > file pages.
> 
> You are absolutely right, this makes more sense.  Although I'd figure
> the impact would be small because if there actually is that little
> file cache, it won't be there for long with force-file scanning... :-)
> 

Does it actually make sense? Lets take the global reclaim case.

low_file         == if (unlikely(file + free <= high_wmark_pages(zone)))
inactive_is_high == if (!inactive_file_is_low_global(zone))

Current
  low_file	inactive_is_high	force reclaim anon
  low_file	!inactive_is_high	force reclaim anon
  !low_file	inactive_is_high	force reclaim file
  !low_file	!inactive_is_high	normal split

Your patch

  low_file	inactive_is_high	force reclaim anon
  low_file	!inactive_is_high	force reclaim anon
  !low_file	inactive_is_high	force reclaim file
  !low_file	!inactive_is_high	normal split

However, if you move the inactive_file_is_low check down you get

Moving the check
  low_file	inactive_is_high	force reclaim file
  low_file	!inactive_is_high	force reclaim anon
  !low_file	inactive_is_high	force reclaim file
  !low_file	!inactive_is_high	normal split

There is a small but important change in results. I easily could have made
a mistake so double check.

I'm not being super thorough because I'm not quite sure this is the right
patch if the motivation is for memcg to use the same logic. Instead of
moving this if, why do you not estimate "free" for the memcg based on the
hard limit and current usage? 

-- 
Mel Gorman
SUSE Labs
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ