Date:   Tue, 6 Nov 2018 11:00:29 -0800
From:   Daniel Jordan <daniel.m.jordan@...cle.com>
To:     Zi Yan <zi.yan@...rutgers.edu>
Cc:     Daniel Jordan <daniel.m.jordan@...cle.com>, linux-mm@...ck.org,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        aarcange@...hat.com, aaron.lu@...el.com, akpm@...ux-foundation.org,
        alex.williamson@...hat.com, bsd@...hat.com,
        darrick.wong@...cle.com, dave.hansen@...ux.intel.com,
        jgg@...lanox.com, jwadams@...gle.com, jiangshanlai@...il.com,
        mhocko@...nel.org, mike.kravetz@...cle.com,
        Pavel.Tatashin@...rosoft.com, prasad.singamsetty@...cle.com,
        rdunlap@...radead.org, steven.sistare@...cle.com,
        tim.c.chen@...el.com, tj@...nel.org, vbabka@...e.cz
Subject: Re: [RFC PATCH v4 00/13] ktask: multithread CPU-intensive kernel work

On Mon, Nov 05, 2018 at 09:48:56PM -0500, Zi Yan wrote:
> On 5 Nov 2018, at 21:20, Daniel Jordan wrote:
> 
> > Hi Zi,
> >
> > On Mon, Nov 05, 2018 at 01:49:14PM -0500, Zi Yan wrote:
> >> On 5 Nov 2018, at 11:55, Daniel Jordan wrote:
> >>
> >> Do you think it makes sense to use ktask for huge page migration (the data
> >> copy part)?
> >
> > It certainly could.
> >
> >> I did some experiments back in 2016[1], which showed that migrating one 2MB page
> >> with 8 threads could achieve 2.8x the throughput of the existing single-threaded method.
> >> The problem with my parallel page migration patchset at that time was that it
> >> had no CPU-utilization awareness, which is solved by your patches now.
> >
> > Did you run with fewer than 8 threads?  I'd want a bigger speedup than 2.8x for
> > 8, and a smaller thread count might improve thread utilization.
> 
> Yes. When migrating one 2MB THP across sockets with the migrate_pages() system call
> on a two-socket server with two E5-2650 v3 CPUs (10 cores per socket), here are the
> page migration throughput numbers:
> 
>              throughput       factor
> 1 thread      2.15 GB/s         1x
> 2 threads     3.05 GB/s         1.42x
> 4 threads     4.50 GB/s         2.09x
> 8 threads     5.98 GB/s         2.78x

Thanks.  Looks like in your patches you start a worker for every piece of the
huge page copy and have the main thread wait.  I'm curious what the workqueue
overhead is like on your machine.  On a newer Xeon it's ~50usec from queueing a
work to starting to execute it and another ~20usec to flush a work
(barrier_func), which could happen after the work is already done.  That's a
pretty significant piece of the copy time for part of a THP.

            bash 60728 [087] 155865.157116:                   probe:ktask_run: (ffffffffb7ee7a80)
            bash 60728 [087] 155865.157119:    workqueue:workqueue_queue_work: work struct=0xffff95fb73276000
            bash 60728 [087] 155865.157119: workqueue:workqueue_activate_work: work struct 0xffff95fb73276000
 kworker/u194:3- 86730 [095] 155865.157168: workqueue:workqueue_execute_start: work struct 0xffff95fb73276000: function ktask_thread
 kworker/u194:3- 86730 [095] 155865.157170:   workqueue:workqueue_execute_end: work struct 0xffff95fb73276000
 kworker/u194:3- 86730 [095] 155865.157171: workqueue:workqueue_execute_start: work struct 0xffffa676995bfb90: function wq_barrier_func
 kworker/u194:3- 86730 [095] 155865.157190:   workqueue:workqueue_execute_end: work struct 0xffffa676995bfb90
            bash 60728 [087] 155865.157207:       probe:ktask_run_ret__return: (ffffffffb7ee7a80 <- ffffffffb7ee7b7b)
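
For reference, a minimal sketch of that pattern, one work item per piece of the copy
with the main thread waiting on each, follows; it illustrates the pattern only, not
the actual code from the parallel migration series.  Each queue_work() pays the
queue-to-execute latency above, and each flush_work() queues the barrier work
(wq_barrier_func) seen in the trace.

#include <linux/slab.h>
#include <linux/string.h>
#include <linux/workqueue.h>

struct copy_piece {
	struct work_struct work;
	void *dst;
	void *src;
	size_t len;
};

static void copy_piece_fn(struct work_struct *work)
{
	struct copy_piece *cp = container_of(work, struct copy_piece, work);

	memcpy(cp->dst, cp->src, cp->len);
}

static void copy_in_pieces(void *dst, void *src, size_t size, int nr_pieces)
{
	size_t chunk = size / nr_pieces;
	struct copy_piece *pieces;
	int i;

	pieces = kcalloc(nr_pieces, sizeof(*pieces), GFP_KERNEL);
	if (!pieces)
		return;		/* real code would fall back to a single-threaded copy */

	for (i = 0; i < nr_pieces; i++) {
		pieces[i].dst = dst + i * chunk;
		pieces[i].src = src + i * chunk;
		pieces[i].len = chunk;
		INIT_WORK(&pieces[i].work, copy_piece_fn);
		queue_work(system_unbound_wq, &pieces[i].work);
	}

	/* main thread waits; each flush inserts a wq_barrier behind the work */
	for (i = 0; i < nr_pieces; i++)
		flush_work(&pieces[i].work);

	kfree(pieces);
}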

> >
> > It would be nice to multithread at a higher granularity than 2M, too: a range
> > of THPs might also perform better than a single page.
> 
> Sure. But the kernel currently does not copy multiple pages together even when a range
> of THPs is migrated. The page copy is interleaved with page table operations
> for every single page.
> 
> I also studied this and modified the kernel to improve it, an approach I called
> concurrent page migration in https://lwn.net/Articles/714991/. It further
> improves page migration throughput.

Ok, over 4x with 8 threads for 16 THPs.  Is 16 a typical number for migration,
or does it get larger?  What workloads do you have in mind with this change?
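
As a rough illustration of the ktask-based copy discussed at the top of the thread,
a sketch of handing the 2MB copy to ktask is below.  It assumes the
DEFINE_KTASK_CTL()/ktask_run() interface from this RFC series, so the names and
signatures are approximate, and the start pointer is abused as a byte offset into
the huge page.

#include <linux/ktask.h>	/* from this RFC series */
#include <linux/mm.h>
#include <linux/sizes.h>
#include <linux/string.h>

struct thp_copy_args {
	struct page *dst;
	struct page *src;
};

/* Copy one chunk of the huge page; "start" and "end" are byte offsets. */
static int thp_copy_chunk(void *start, void *end, void *arg)
{
	struct thp_copy_args *a = arg;
	unsigned long off = (unsigned long)start;
	size_t len = (unsigned long)end - off;

	memcpy(page_address(a->dst) + off, page_address(a->src) + off, len);
	return KTASK_RETURN_SUCCESS;
}

static void copy_huge_page_parallel(struct page *dst, struct page *src)
{
	struct thp_copy_args args = { .dst = dst, .src = src };
	/* 64K minimum chunk so ktask doesn't over-split on small machines */
	DEFINE_KTASK_CTL(ctl, thp_copy_chunk, &args, SZ_64K);

	ktask_run((void *)0, HPAGE_PMD_SIZE, &ctl);
}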
