Date:	Tue, 03 May 2016 14:00:39 -0700
From:	Tim Chen <tim.c.chen@...ux.intel.com>
To:	Andrew Morton <akpm@...ux-foundation.org>,
	Vladimir Davydov <vdavydov@...tuozzo.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Michal Hocko <mhocko@...e.cz>,
	Minchan Kim <minchan@...nel.org>,
	Hugh Dickins <hughd@...gle.com>
Cc:	"Kirill A.Shutemov" <kirill.shutemov@...ux.intel.com>,
	Andi Kleen <andi@...stfloor.org>,
	Aaron Lu <aaron.lu@...el.com>,
	Huang Ying <ying.huang@...el.com>,
	linux-mm <linux-mm@...ck.org>, linux-kernel@...r.kernel.org
Subject: [PATCH 0/7] mm: Improve swap path scalability with batched
 operations

The page swap out path is not scalable due to the numerous locks
acquired and released along the way, all taken and dropped on a page
by page basis, e.g.:

1. The acquisition of the mapping tree lock in swap cache when adding
a page to swap cache, and then again when deleting a page from swap cache after
it has been swapped out. 
2. The acquisition of the lock on the swap device to allocate a swap slot for
a page to be swapped out.  (A simplified model of this per-page locking
pattern is sketched after this list.)
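
To make the per-page cost concrete, here is a minimal userspace model of
that pattern (illustrative only, not the kernel code: the two pthread
mutexes stand in for the swap cache's mapping tree lock and the swap
device lock, and "swapping out a page" is reduced to the lock traffic
it generates):

/*
 * Per-page locking model: every page takes the "swap device lock" once
 * and the "tree lock" twice, so lock traffic grows linearly with the
 * number of pages reclaimed.  Illustrative only -- not kernel code.
 * Compile with -pthread.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_THREADS       16
#define PAGES_PER_THREAD 102400

static pthread_mutex_t tree_lock     = PTHREAD_MUTEX_INITIALIZER; /* models the mapping tree lock */
static pthread_mutex_t swap_dev_lock = PTHREAD_MUTEX_INITIALIZER; /* models the swap device lock */
static long slots_allocated;
static long cache_entries;

static void *reclaim_thread(void *arg)
{
    (void)arg;
    for (int i = 0; i < PAGES_PER_THREAD; i++) {
        /* 1. allocate a swap slot for this one page */
        pthread_mutex_lock(&swap_dev_lock);
        slots_allocated++;
        pthread_mutex_unlock(&swap_dev_lock);

        /* 2. add the page to the swap cache */
        pthread_mutex_lock(&tree_lock);
        cache_entries++;
        pthread_mutex_unlock(&tree_lock);

        /* ... the page is unmapped and written out here ... */

        /* 3. delete the page from the swap cache after writeback */
        pthread_mutex_lock(&tree_lock);
        cache_entries--;
        pthread_mutex_unlock(&tree_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NR_THREADS];

    for (int i = 0; i < NR_THREADS; i++)
        pthread_create(&tid[i], NULL, reclaim_thread, NULL);
    for (int i = 0; i < NR_THREADS; i++)
        pthread_join(tid[i], NULL);

    printf("slot allocations: %ld, swap cache entries left: %ld\n",
           slots_allocated, cache_entries);
    return 0;
}

Every page pays for one acquisition of the device lock and two of the
tree lock, and all threads contend on the same two locks, which is what
limits scalability as the thread count grows.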

With the advent of high speed block devices that are several orders
of magnitude faster than the old spinning disks, these bottlenecks
become fairly significant, especially on server class machines
with many threads running.  To reduce these locking costs, this patch
series attempts to batch the pages for the following operations needed
for swap:
1. Allocate swap slots in large batches, so locks on the swap device
don't need to be acquired as often. 
2. Add anonymous pages to the swap cache for the same swap device in
batches, so the mapping tree lock can be acquired less often.
3. Delete pages from swap cache also in batches.  (A toy model of this
batched locking pattern is sketched below.)
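
Continuing the same toy model, batching amortizes each lock acquisition
over a whole group of pages.  The batch size of 64 below is a
placeholder, not a value taken from the patches; the real batch sizes
and interfaces are whatever the series defines:

/*
 * Batched locking model: locks are taken once per batch of pages rather
 * than once per page.  Illustrative only -- the batch size and the
 * batching interfaces are defined by the patch series, not here.
 * Compile with -pthread.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_THREADS       16
#define PAGES_PER_THREAD 102400
#define SWAP_BATCH       64        /* hypothetical batch size */

static pthread_mutex_t tree_lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t swap_dev_lock = PTHREAD_MUTEX_INITIALIZER;
static long slots_allocated;
static long cache_entries;

static void *reclaim_thread(void *arg)
{
    (void)arg;
    for (int i = 0; i < PAGES_PER_THREAD; i += SWAP_BATCH) {
        /* 1. allocate a whole batch of swap slots under one acquisition
         *    of the swap device lock */
        pthread_mutex_lock(&swap_dev_lock);
        slots_allocated += SWAP_BATCH;
        pthread_mutex_unlock(&swap_dev_lock);

        /* 2. add the whole batch to the swap cache under one acquisition
         *    of the mapping tree lock */
        pthread_mutex_lock(&tree_lock);
        cache_entries += SWAP_BATCH;
        pthread_mutex_unlock(&tree_lock);

        /* ... the batch is unmapped and written out here ... */

        /* 3. delete the whole batch from the swap cache at once */
        pthread_mutex_lock(&tree_lock);
        cache_entries -= SWAP_BATCH;
        pthread_mutex_unlock(&tree_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NR_THREADS];

    for (int i = 0; i < NR_THREADS; i++)
        pthread_create(&tid[i], NULL, reclaim_thread, NULL);
    for (int i = 0; i < NR_THREADS; i++)
        pthread_join(tid[i], NULL);

    printf("slot allocations: %ld, swap cache entries left: %ld\n",
           slots_allocated, cache_entries);
    return 0;
}

With a batch of 64, each lock is acquired roughly 64 times less often
for the same number of pages, which is where the reduction in locking
cost comes from.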

We experimented with the effect of these patches.  We set up N threads to
access memory in excess of memory capacity, causing swap.  In experiments
using a single pmem based fast block device on a 2 socket machine, we saw
that for 1 thread there is a ~25% increase in swap throughput, and for
16 threads the swap throughput increases by ~85%, when compared with the
vanilla kernel.  Batching helps even for 1 thread because of contention
with kswapd when doing direct memory reclaim.
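
For reference, the kind of workload used here can be approximated with a
sketch along the following lines (the thread count and per-thread size
are placeholders, not the actual test configuration; they only need to
be chosen so that the combined working set exceeds the machine's RAM and
forces the kernel to swap):

/*
 * Minimal swap-thrash workload sketch: each thread repeatedly dirties a
 * large anonymous mapping so the total working set exceeds RAM and the
 * swap-out path is exercised.  Sizes are placeholders -- adjust so that
 * NR_THREADS * BYTES_PER_THREAD is larger than available memory.
 * Compile with -pthread.
 */
#define _DEFAULT_SOURCE   /* for MAP_ANONYMOUS */
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>

#define NR_THREADS       16
#define BYTES_PER_THREAD (4UL << 30)   /* 4 GiB per thread (placeholder) */
#define PAGE_STRIDE      4096

static void *memory_hog(void *arg)
{
    (void)arg;
    char *buf = mmap(NULL, BYTES_PER_THREAD, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return NULL;
    }

    /* Repeatedly touch every page so cold pages keep getting swapped out. */
    for (int pass = 0; pass < 4; pass++)
        for (size_t off = 0; off < BYTES_PER_THREAD; off += PAGE_STRIDE)
            buf[off] = (char)pass;

    munmap(buf, BYTES_PER_THREAD);
    return NULL;
}

int main(void)
{
    pthread_t tid[NR_THREADS];

    for (int i = 0; i < NR_THREADS; i++)
        pthread_create(&tid[i], NULL, memory_hog, NULL);
    for (int i = 0; i < NR_THREADS; i++)
        pthread_join(tid[i], NULL);

    return 0;
}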

Feedback and reviews of this patch series are much appreciated.

Thanks.

Tim


Tim Chen (7):
  mm: Cleanup - Reorganize the shrink_page_list code into smaller
    functions
  mm: Group the processing of anonymous pages to be swapped in
    shrink_page_list
  mm: Add new functions to allocate swap slots in batches
  mm: Shrink page list batch allocates swap slots for page swapping
  mm: Batch addition of pages to swap cache
  mm: Cleanup - Reorganize code to group handling of page
  mm: Batch unmapping of pages that are in swap cache

 include/linux/swap.h |  29 ++-
 mm/swap_state.c      | 253 +++++++++++++-----
 mm/swapfile.c        | 215 +++++++++++++--
 mm/vmscan.c          | 725 ++++++++++++++++++++++++++++++++++++++-------------
 4 files changed, 945 insertions(+), 277 deletions(-)

-- 
2.5.5
