Message-ID: <1464805433.22178.191.camel@linux.intel.com>
Date: Wed, 01 Jun 2016 11:23:53 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Minchan Kim <minchan@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Vladimir Davydov <vdavydov@...tuozzo.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...e.cz>, Hugh Dickins <hughd@...gle.com>,
"Kirill A.Shutemov" <kirill.shutemov@...ux.intel.com>,
Andi Kleen <andi@...stfloor.org>,
Aaron Lu <aaron.lu@...el.com>,
Huang Ying <ying.huang@...el.com>,
linux-mm <linux-mm@...ck.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: Cleanup - Reorganize the shrink_page_list code into
smaller functions
On Wed, 2016-06-01 at 16:12 +0900, Minchan Kim wrote:
>
> Hi Tim,
>
> Frankly speaking, this reorganization is too limited and doesn't work
> for me. It only serves your goal of allocating swap slots in batches,
> I guess. :)
>
> My goal is to make these work with batch page_check_references,
> batch try_to_unmap and batch __remove_mapping, where we can avoid
> taking mapping->lock frequently (e.g., anon_vma or i_mmap_lock, hoping
> such batch locking helps system performance) when a batch of pages
> shares the same inode or is anonymous.

This is also my goal: to group together pages that either share the same
mapping or are anonymous, so we can reduce the number of i_mmap_lock
acquisitions. One piece of logic still missing from your patch is the
grouping of similar pages so that only one i_mmap_lock acquisition is
needed per group. Doing this efficiently is non-trivial.

I punted on the problem somewhat in my patch and elected to defer
processing of the anonymous pages to the end, so they are naturally
grouped without having to traverse the page_list more than once. So I'm
batching the anonymous pages, but the file-mapped pages are not grouped.
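
Roughly what I mean, as a sketch only (the shrink_file_page() and
shrink_anon_page_batch() helpers below are made up for illustration; the
real shrink_page_list() does much more per page):

	LIST_HEAD(anon_pages);
	struct page *page, *next;

	list_for_each_entry_safe(page, next, page_list, lru) {
		if (PageAnon(page)) {
			/* defer anon pages; they end up naturally grouped */
			list_move_tail(&page->lru, &anon_pages);
			continue;
		}
		/* file-backed pages are still handled one at a time */
		shrink_file_page(page);			/* made-up helper */
	}

	/* the whole anon batch can now be processed together */
	shrink_anon_page_batch(&anon_pages);		/* made-up helper */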

In your implementation, you may need to traverse the page_list in two
passes: the first to categorize and group the pages, and the second to do
the actual processing. Then the lock batching can be implemented for the
pages. Otherwise the locking in your patch is still done page by page,
and can only be batched if the next page on the page_list happens to have
the same mapping. Your idea of using spl_batch_pages is pretty neat. It
may need some enhancement so that it is known whether certain locks are
already held, for lock batching purposes.
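
To illustrate the two-pass idea, the second pass could look something
like this (a sketch only, file-mapped pages only for brevity; it assumes
the first pass has already grouped page_list so pages sharing an
address_space are adjacent, and unmap_one_locked() is a made-up stand-in
for a try_to_unmap() variant that knows the lock is already held):

	struct address_space *locked_mapping = NULL;
	struct page *page, *next;

	list_for_each_entry_safe(page, next, page_list, lru) {
		struct address_space *mapping = page_mapping(page);

		/* take i_mmap_rwsem once per group, not once per page */
		if (mapping != locked_mapping) {
			if (locked_mapping)
				i_mmap_unlock_read(locked_mapping);
			if (mapping)
				i_mmap_lock_read(mapping);
			locked_mapping = mapping;
		}
		unmap_one_locked(page);			/* made-up helper */
	}
	if (locked_mapping)
		i_mmap_unlock_read(locked_mapping);

Anonymous pages would presumably be batched on the anon_vma lock in the
same way.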
Thanks.
Tim