Message-ID: <20160607082158.GA23435@bbox>
Date:	Tue, 7 Jun 2016 17:21:58 +0900
From:	Minchan Kim <minchan@...nel.org>
To:	Tim Chen <tim.c.chen@...ux.intel.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Vladimir Davydov <vdavydov@...tuozzo.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Michal Hocko <mhocko@...e.cz>, Hugh Dickins <hughd@...gle.com>,
	"Kirill A.Shutemov" <kirill.shutemov@...ux.intel.com>,
	Andi Kleen <andi@...stfloor.org>,
	Aaron Lu <aaron.lu@...el.com>,
	Huang Ying <ying.huang@...el.com>,
	linux-mm <linux-mm@...ck.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: Cleanup - Reorganize the shrink_page_list code into
 smaller functions

On Wed, Jun 01, 2016 at 11:23:53AM -0700, Tim Chen wrote:
> On Wed, 2016-06-01 at 16:12 +0900, Minchan Kim wrote:
> > 
> > Hi Tim,
> > 
> > To me, frankly speaking, this reorganization is too limited and
> > not good. It works only for your goal, which I guess is to
> > allocate swap slots in batches. :)
> > 
> > My goal is to make them work with batched page_check_references,
> > batched try_to_unmap and batched __remove_mapping, where we can
> > avoid taking mapping->lock frequently (e.g., the anon_vma lock or
> > i_mmap_lock, in the hope that such batched locking helps system
> > performance) if the batched pages share the same inode or are all
> > anonymous.
> 
> This is also my goal: to group pages that are either under the same
> mapping or are anonymous together, so we can reduce i_mmap_lock
> acquisitions.  One piece of logic yet to be implemented in your patch
> is the grouping of similar pages, so that only one i_mmap_lock
> acquisition is needed per group.  Doing this efficiently is
> non-trivial.

Hmm, my assumption is that pages from the same inode are likely to be
ordered in the LRU, so there is no need to group them. If a successive
page in page_list comes from a different inode, we can drop the lock
and take the new inode's lock. Does that sound strange?
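
For concreteness, a minimal sketch of that hand-off, assuming
hypothetical lock_mapping()/unlock_mapping() helpers that stand in for
i_mmap_lock/anon_vma locking (this is not code from either patch):

/*
 * Illustrative sketch only: walk page_list in LRU order and keep the
 * current mapping's lock held across consecutive pages that share
 * that mapping; drop and retake the lock only when the mapping
 * changes.  lock_mapping()/unlock_mapping() are hypothetical.
 */
static void shrink_batch_same_mapping(struct list_head *page_list)
{
	struct address_space *locked = NULL;
	struct page *page, *next;

	list_for_each_entry_safe(page, next, page_list, lru) {
		/* NULL for anon pages; a real version would key on anon_vma */
		struct address_space *mapping = page_mapping(page);

		if (mapping != locked) {
			if (locked)
				unlock_mapping(locked);	/* hypothetical */
			if (mapping)
				lock_mapping(mapping);	/* hypothetical */
			locked = mapping;
		}
		/* page_check_references/try_to_unmap work goes here */
	}
	if (locked)
		unlock_mapping(locked);
}

If same-inode pages really are adjacent in the LRU, this degenerates to
one lock acquisition per run of pages, with no extra pass.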

> 
> I punted on the problem somewhat in my patch and elected to defer
> processing of the anonymous pages to the end, so they are naturally
> grouped without having to traverse the page_list more than once.  So
> I'm batching the anonymous pages, but the file-mapped pages are not
> grouped.
> 
> In your implementation, you may need to traverse the page_list in two
> passes: the first to categorize and group the pages, and the second to
> do the actual processing.  Then lock batching can be implemented for
> the pages.  Otherwise the locking is still done page by page in your
> patch, and can be batched only if the next page on page_list happens
> to have the same mapping.  Your idea of using spl_batch_pages is pretty

Yes. As I said above, I expect pages in the LRU are normally likely to
be ordered per inode. If they're not, yes, we need grouping, but such
overhead would offset the benefit of lock batching as SWAP_CLUSTER_MAX
gets bigger.
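
To make the trade-off concrete, a rough sketch of the two-pass
grouping you describe, again with hypothetical lock helpers and an
illustrative bucket structure (not code from either patch):

/*
 * Illustrative sketch only: pass 1 buckets pages by mapping, pass 2
 * takes each mapping's lock once for the whole bucket.  The linear
 * bucket search is the kind of extra overhead discussed above.
 */
struct mapping_bucket {
	struct address_space *mapping;
	struct list_head pages;
};

static void shrink_batch_two_pass(struct list_head *page_list,
				  struct mapping_bucket *bkt, int nbkt)
{
	struct page *page, *next;
	int i, used = 0;

	/* Pass 1: group pages sharing a mapping, whatever the LRU order. */
	list_for_each_entry_safe(page, next, page_list, lru) {
		struct address_space *mapping = page_mapping(page);
		struct mapping_bucket *b = NULL;

		for (i = 0; i < used; i++)
			if (bkt[i].mapping == mapping) {
				b = &bkt[i];
				break;
			}
		if (!b && used < nbkt) {
			b = &bkt[used++];
			b->mapping = mapping;
			INIT_LIST_HEAD(&b->pages);
		}
		if (b)
			list_move(&page->lru, &b->pages);
	}

	/* Pass 2: one lock acquisition per mapping for its whole group. */
	for (i = 0; i < used; i++) {
		lock_mapping(bkt[i].mapping);	/* hypothetical */
		/* reclaim work for every page in bkt[i].pages goes here */
		unlock_mapping(bkt[i].mapping);	/* hypothetical */
	}
}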

> neat.  It may need some enhancement so that it is known whether some
> locks are already held, for lock-batching purposes.
> 
> 
> Thanks.
> 
> Tim
