Message-ID: <20130604050103.GC14719@blaptop>
Date: Tue, 4 Jun 2013 14:01:03 +0900
From: Minchan Kim <minchan@...nel.org>
To: Dave Hansen <dave@...1.net>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, mgorman@...e.de,
tim.c.chen@...ux.intel.com
Subject: Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations
On Mon, Jun 03, 2013 at 01:02:08PM -0700, Dave Hansen wrote:
>
> From: Dave Hansen <dave.hansen@...ux.intel.com>
> changes for v2:
>  * remove batch_has_same_mapping() helper.  A local variable makes
>    the check cheaper and cleaner
>  * Move batch draining later to where we already know
>    page_mapping().  This probably fixes a truncation race anyway
>  * rename batch_for_mapping_removal -> batch_for_mapping_rm.  It
>    caused a line over 80 chars and needed shortening anyway.
>  * Note: we only set 'batch_mapping' when there are pages in the
>    batch_for_mapping_rm list
>
> --
>
> We batch like this so that several pages can be freed with a
> single mapping->tree_lock acquisition/release pair. This reduces
> the number of atomic operations and ensures that we do not bounce
> cachelines around.
>
> Tim Chen's earlier version of these patches just unconditionally
> created large batches of pages, even if they did not share a
> page_mapping(). This is a bit suboptimal for a few reasons:
> 1. if we cannot consolidate lock acquisitions, it makes little
>    sense to batch
> 2. The page locks are held for long periods of time, so we only
>    want to do this when we are sure that we will gain a
>    substantial throughput improvement, because we pay a latency
>    cost by holding the locks.
>
> This patch makes sure to only batch when all the pages on
> 'batch_for_mapping_rm' continue to share a page_mapping().
> This only happens in practice in cases where pages in the same
> file are close to each other on the LRU. That seems like a
> reasonable assumption.
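
(Aside for readers following along: the shrink_page_list() side of the
batching isn't quoted in the hunk below.  Roughly, the scheme described
above looks like the sketch here -- the names batch_for_mapping_rm and
batch_mapping come from the changelog, the rest is purely illustrative
and not the actual patch:)

	struct address_space *batch_mapping = NULL;
	LIST_HEAD(batch_for_mapping_rm);

	/* ... inside the per-page loop, once page_mapping(page) is known ... */

	/* a page from a different file shows up: drain what we have */
	if (!list_empty(&batch_for_mapping_rm) && mapping != batch_mapping) {
		nr_reclaimed += __remove_mapping_batch(&batch_for_mapping_rm,
						       &ret_pages, &free_pages);
		batch_mapping = NULL;
	}

	/* queue this page for batched removal instead of removing it now */
	if (list_empty(&batch_for_mapping_rm))
		batch_mapping = mapping;
	list_add(&page->lru, &batch_for_mapping_rm);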
>
> In a 128MB virtual machine doing kernel compiles, the average
> batch size when calling __remove_mapping_batch() is around 5,
> so this does seem to do some good in practice.
>
> On a 160-cpu system doing kernel compiles, I still saw an
> average batch length of about 2.8.  One promising sign: as
> memory pressure went up, the average batch size seemed to grow.
>
> It has shown some substantial performance benefits on
> microbenchmarks.
>
> Signed-off-by: Dave Hansen <dave.hansen@...ux.intel.com>
> Acked-by: Mel Gorman <mgorman@...e.de>
Please see the comment below; otherwise, it looks good to me.
Reviewed-by: Minchan Kim <minchan@...nel.org>
> ---
>
> linux.git-davehans/mm/vmscan.c | 95 +++++++++++++++++++++++++++++++++++++----
> 1 file changed, 86 insertions(+), 9 deletions(-)
>
> diff -puN mm/vmscan.c~create-remove_mapping_batch mm/vmscan.c
> --- linux.git/mm/vmscan.c~create-remove_mapping_batch 2013-06-03 12:41:31.408751324 -0700
> +++ linux.git-davehans/mm/vmscan.c 2013-06-03 12:41:31.412751500 -0700
> @@ -550,6 +550,61 @@ int remove_mapping(struct address_space
> return 0;
> }
>
> +/*
> + * pages come in here (via remove_list) locked and leave unlocked
> + * (on either ret_pages or free_pages)
> + *
> + * We do this batching so that we free batches of pages with a
> + * single mapping->tree_lock acquisition/release. This optimization
> + * only makes sense when the pages on remove_list all share a
> + * page_mapping(). If this is violated you will BUG_ON().
> + */
> +static int __remove_mapping_batch(struct list_head *remove_list,
> +				  struct list_head *ret_pages,
> +				  struct list_head *free_pages)
> +{
> +	int nr_reclaimed = 0;
> +	struct address_space *mapping;
> +	struct page *page;
> +	LIST_HEAD(need_free_mapping);
> +
> +	if (list_empty(remove_list))
> +		return 0;
> +
> +	mapping = page_mapping(lru_to_page(remove_list));
> +	spin_lock_irq(&mapping->tree_lock);
> +	while (!list_empty(remove_list)) {
> +		page = lru_to_page(remove_list);
> +		BUG_ON(!PageLocked(page));
> +		BUG_ON(page_mapping(page) != mapping);
> +		list_del(&page->lru);
> +
> +		if (!__remove_mapping(mapping, page)) {
> +			unlock_page(page);
> +			list_add(&page->lru, ret_pages);
> +			continue;
> +		}
> +		list_add(&page->lru, &need_free_mapping);

Why do we need a new lru list instead of using @free_pages?

> +	}
> +	spin_unlock_irq(&mapping->tree_lock);
> +
> +	while (!list_empty(&need_free_mapping)) {
> +		page = lru_to_page(&need_free_mapping);
> +		list_move(&page->lru, free_pages);
> +		mapping_release_page(mapping, page);
> +		/*
> +		 * At this point, we have no other references and there is
> +		 * no way to pick any more up (removed from LRU, removed
> +		 * from pagecache). Can use non-atomic bitops now (and
> +		 * we obviously don't have to worry about waking up a process
> +		 * waiting on the page lock, because there are no references.
> +		 */
> +		__clear_page_locked(page);
> +		nr_reclaimed++;
> +	}
> +	return nr_reclaimed;
> +}
> +
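
One more illustrative note, again a sketch rather than a quoted hunk:
pages can still be sitting on the batch when the scan loop ends, so the
caller also needs a final drain after the loop.  Since
__remove_mapping_batch() just returns 0 for an empty list, an
unconditional call is safe:

	/* after the per-page loop: flush whatever is still batched */
	nr_reclaimed += __remove_mapping_batch(&batch_for_mapping_rm,
					       &ret_pages, &free_pages);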
--
Kind regards,
Minchan Kim