Message-ID: <20170412124905.25443-6-aryabinin@virtuozzo.com>
Date:   Wed, 12 Apr 2017 15:49:05 +0300
From:   Andrey Ryabinin <aryabinin@...tuozzo.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
CC:     <linux-kernel@...r.kernel.org>,
        Andrey Ryabinin <aryabinin@...tuozzo.com>,
        <penguin-kernel@...ove.SAKURA.ne.jp>, <mhocko@...nel.org>,
        <linux-mm@...ck.org>, <hpa@...or.com>, <chris@...is-wilson.co.uk>,
        <hch@....de>, <mingo@...e.hu>, <jszhang@...vell.com>,
        <joelaf@...gle.com>, <joaodias@...gle.com>, <willy@...radead.org>,
        <tglx@...utronix.de>, <thellstrom@...are.com>
Subject: [PATCH v2 5/5] mm/vmalloc: Don't spawn workers if somebody is already purging

Don't schedule purge_vmap_work if mutex_is_locked(&vmap_purge_lock),
as that means purging is already running in another thread. There is
no point in scheduling an extra purge_vmap_work if somebody is already
purging for us, because that extra work will not do anything useful.
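
For context, here is a rough sketch of the consumer side as it
presumably looks after the earlier patches in this series (not part of
the diff below; the work-handler signature and the arguments to
__purge_vmap_area_lazy() are assumptions):

static DEFINE_MUTEX(vmap_purge_lock);

/* Work handler: drains vmap_purge_list under vmap_purge_lock. */
static void try_purge_vmap_area_lazy(struct work_struct *work)
{
	mutex_lock(&vmap_purge_lock);
	__purge_vmap_area_lazy(ULONG_MAX, 0);
	mutex_unlock(&vmap_purge_lock);
}
static DECLARE_WORK(purge_vmap_work, try_purge_vmap_area_lazy);

If vmap_purge_lock is already held, a purge pass is in flight, so a
newly queued kworker would only take the mutex and find no useful
work left.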

To evaluate the performance impact of this change, a test that calls
fork() 100 000 times was run on a kernel with CONFIG_VMAP_STACK=y and
NR_CACHED_STACK changed to 0 (so that each fork()/exit() executes a
vmalloc()/vfree() call).
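
The ./fork binary is not included in this message; a minimal
equivalent (an assumption about what the test does, not necessarily
the exact program used) could look like:

#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	/* fork()/exit() 100 000 times; with CONFIG_VMAP_STACK=y and
	 * NR_CACHED_STACK=0, every iteration allocates and frees the
	 * child's kernel stack via vmalloc()/vfree().
	 */
	for (int i = 0; i < 100000; i++) {
		pid_t pid = fork();

		if (pid < 0)
			return 1;	/* fork failed */
		if (pid == 0)
			_exit(0);	/* child exits immediately */
		waitpid(pid, NULL, 0);
	}
	return 0;
}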

Commands:
~ # grep try_purge /proc/kallsyms
ffffffff811d0dd0 t try_purge_vmap_area_lazy

~ # perf stat --repeat 10 -ae workqueue:workqueue_queue_work \
              --filter 'function == 0xffffffff811d0dd0' ./fork

gave me the following results:

before:
   30      workqueue:workqueue_queue_work                ( +-  1.31% )
   1.613231060 seconds time elapsed                      ( +-  0.38% )

after:
   15      workqueue:workqueue_queue_work                ( +-  0.88% )
   1.615368474 seconds time elapsed                      ( +-  0.41% )

So there is no measurable difference in the performance of the test
itself, but without the optimization we queue twice as many jobs. This
should save kworkers from doing some useless work.

Signed-off-by: Andrey Ryabinin <aryabinin@...tuozzo.com>
Suggested-by: Thomas Hellstrom <thellstrom@...are.com>
Reviewed-by: Thomas Hellstrom <thellstrom@...are.com>
---
 mm/vmalloc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ee62c0a..1079555 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -737,7 +737,8 @@ static void free_vmap_area_noflush(struct vmap_area *va)
 	/* After this point, we may free va at any time */
 	llist_add(&va->purge_list, &vmap_purge_list);
 
-	if (unlikely(nr_lazy > lazy_max_pages()))
+	if (unlikely(nr_lazy > lazy_max_pages()) &&
+	    !mutex_is_locked(&vmap_purge_lock))
 		schedule_work(&purge_vmap_work);
 }
 
-- 
2.10.2
