Date:	Sun, 17 Jul 2016 12:35:20 -0700
From:	kpusukur <kishore.kumar.pusukuri@...cle.com>
To:	linux-kernel@...r.kernel.org
Cc:	sparclinux@...r.kernel.org
Subject: RFC: using worker threadpool to speed up clear_huge_page() by up to 5x

A prototype implementation of a multi-threaded clear_huge_page() function 
based on the kernel workqueue mechanism speeds up the function by up to 
5x. The existing code requires 320ms to clear a 2GB huge page on a SPARC 
M7 processor, while the multi-threaded version achieves this in 65ms 
using 16 threads. 8MB huge pages see a 3.7x improvement, from 1400us to 
380us. Even though the M7 has a vast number of CPUs at its disposal, 
this idea could also be applied on small multicore systems with just a 
few CPUs to achieve a significant performance gain. For instance, on an 
x86_64 system (i.e., with an Intel E5-2630 v2), it speeds up the 
function by 3.8x using 4 threads when clearing a 1GB page, and by 3.7x 
using 4 threads when clearing a 2MB page. The principal application we 
have in mind that would benefit from this is an in-memory database which 
uses hundreds of huge pages; it starts up 2.5x faster with this 
implementation, in other words cutting database downtime by 2.5x.
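
To make the approach concrete, here is a minimal sketch (not the 
attached patch itself) of how the clearing could be dispatched: the 
huge page is split into equal slices of base pages, each slice is 
cleared by a work item queued on the unbound system workqueue, and the 
caller waits for all slices to finish. The names clear_huge_page_mt, 
clear_chunk, and NR_CLEAR_WORKERS are hypothetical, and remainder 
handling (when the page does not divide evenly) is omitted for brevity.

#include <linux/workqueue.h>
#include <linux/highmem.h>
#include <linux/mm.h>

#define NR_CLEAR_WORKERS 16     /* tunable; see the discussion below */

struct clear_work {
        struct work_struct work;
        struct page *page;      /* first base page of this slice */
        unsigned long addr;     /* user address of this slice */
        unsigned int nr_pages;  /* number of base pages in this slice */
};

static void clear_chunk(struct work_struct *work)
{
        struct clear_work *cw = container_of(work, struct clear_work, work);
        unsigned int i;

        /* clear this slice one base page at a time */
        for (i = 0; i < cw->nr_pages; i++)
                clear_user_highpage(cw->page + i, cw->addr + i * PAGE_SIZE);
}

static void clear_huge_page_mt(struct page *page, unsigned long addr,
                               unsigned int pages_per_huge_page)
{
        struct clear_work cw[NR_CLEAR_WORKERS];
        unsigned int chunk = pages_per_huge_page / NR_CLEAR_WORKERS;
        int i;

        for (i = 0; i < NR_CLEAR_WORKERS; i++) {
                cw[i].page = page + i * chunk;
                cw[i].addr = addr + (unsigned long)i * chunk * PAGE_SIZE;
                cw[i].nr_pages = chunk;
                INIT_WORK_ONSTACK(&cw[i].work, clear_chunk);
                queue_work(system_unbound_wq, &cw[i].work);
        }

        /* the caller expects a fully cleared page, so wait for every slice */
        for (i = 0; i < NR_CLEAR_WORKERS; i++) {
                flush_work(&cw[i].work);
                destroy_work_on_stack(&cw[i].work);
        }
}

Queueing on system_unbound_wq lets the scheduler spread the slices 
across CPUs rather than pinning them to the submitting CPU.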

Here is a table which shows speedups in clearing 2GB huge pages on SPARC 
M7 (by default it takes 320 milliseconds). Time is in milliseconds.

#workers    Time
       2     166
       4      87
       8      70
      16      65
      32      66
      64      66

Please see the attached patch for an implementation, which serves to 
illustrate the idea. There are many ways to improve it and tune it for 
different-sized systems; some of the issues we are thinking about are:
  1) How many tasks (workers) to use? There is only so much memory 
bandwidth, so scaling is not stellar, and it might be satisfactory to 
shoot for modest performance without tying up too many processors (see 
the sketch after this list).
  2) The system load needs to be taken into account somehow.
  3) NUMA placement might/should influence which CPUs are chosen for 
the work.
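
As one illustration of issue (1), a simple heuristic could cap the 
worker count by the CPUs currently online and scale it with the page 
size. The function name and the per-128-base-pages ratio below are 
hypothetical, reusing the NR_CLEAR_WORKERS cap from the sketch above.

static unsigned int clear_huge_page_workers(unsigned int pages_per_huge_page)
{
        /* never use more workers than CPUs currently online */
        unsigned int max = min_t(unsigned int, NR_CLEAR_WORKERS,
                                 num_online_cpus());

        /*
         * Hypothetical scaling rule: roughly one worker per 128 base
         * pages, so a 2MB page (512 x 4KB pages) gets up to 4 workers
         * while larger pages saturate at the cap.
         */
        return clamp_t(unsigned int, pages_per_huge_page / 128, 1, max);
}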

We would welcome feedback and discussion of potential problems.

We would also like to hear ideas for other areas in the kernel where a 
similar technique could be employed. For example, we have also applied 
this idea to copy-on-write operations for huge pages, where it achieves 
around a 20x speedup.
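
The copy-on-write variant follows the same dispatch pattern as the 
clearing sketch above; only the worker body changes, copying a slice 
instead of clearing it. Again a hypothetical sketch:

struct copy_work {
        struct work_struct work;
        struct page *dst, *src;         /* first base pages of the slice */
        unsigned long addr;             /* user address of the slice */
        struct vm_area_struct *vma;
        unsigned int nr_pages;
};

static void copy_chunk(struct work_struct *work)
{
        struct copy_work *cw = container_of(work, struct copy_work, work);
        unsigned int i;

        /* copy this slice one base page at a time */
        for (i = 0; i < cw->nr_pages; i++)
                copy_user_highpage(cw->dst + i, cw->src + i,
                                   cw->addr + i * PAGE_SIZE, cw->vma);
}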

Thank you.

Best
Kishore Pusukuri

[Attachment: uek4_clear_huge_page_with_workqueues.patch (text/x-patch, 4202 bytes)]
