Message-ID: <ZeBeiy5QkSo7AJA7@pc636>
Date: Thu, 29 Feb 2024 11:38:03 +0100
From: Uladzislau Rezki <urezki@...il.com>
To: Baoquan He <bhe@...hat.com>
Cc: Uladzislau Rezki <urezki@...il.com>,
	Pedro Falcato <pedro.falcato@...il.com>,
	Matthew Wilcox <willy@...radead.org>, Mel Gorman <mgorman@...e.de>,
	kirill.shutemov@...ux.intel.com,
	Vishal Moola <vishal.moola@...il.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Lorenzo Stoakes <lstoakes@...il.com>,
	Christoph Hellwig <hch@...radead.org>,
	"Liam R . Howlett" <Liam.Howlett@...cle.com>,
	Dave Chinner <david@...morbit.com>,
	"Paul E . McKenney" <paulmck@...nel.org>,
	Joel Fernandes <joel@...lfernandes.org>,
	Oleksiy Avramchenko <oleksiy.avramchenko@...y.com>,
	linux-mm@...ck.org
Subject: Re: [PATCH v3 00/11] Mitigate a vmap lock contention v3

> 
> I finally finished the testing without and with your improvement
> patch above. Testing was done on a system with 128 CPUs; the system
> with 288 CPUs is not available because of a console connection issue.
> The log is attached below. In some test runs after rebooting, a run
> could take more than 30 minutes, and I am not sure whether that was
> caused by my messy code changes. I finally cleaned them all up, took
> a clean linux-next to test, and then applied your draft code above.
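
For anyone reproducing this: the wrapper script is a thin front end
over the test_vmalloc module. A rough sketch of its core, assuming the
usual lib/test_vmalloc.c module parameters (run_test_mask is a bitmask
over the subtests, so 127 selects the first seven):

  # sketch, not the script verbatim; parameters are passed straight
  # through to the module, and the tests run from kthreads at load time
  modprobe test_vmalloc nr_threads=64 run_test_mask=127
  dmesg | tail -50    # the per-test summary lands in the ring buffer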

> [root@...l-per6515-03 linux]# nproc 
> 128
> [root@...l-per6515-03 linux]# free -h
>                total        used        free      shared  buff/cache   available
> Mem:           124Gi       2.6Gi       122Gi        21Mi       402Mi       122Gi
> Swap:          4.0Gi          0B       4.0Gi
> 
> 1) linux-next kernel without the improvement code from Uladzislau
> ------------------------------------------------------------------
> [root@...l-per6515-03 linux]# time tools/testing/selftests/mm/test_vmalloc.sh run_test_mask=127 nr_threads=64
> Run the test with following parameters: run_test_mask=127 nr_threads=64
> Done.
> Check the kernel ring buffer to see the summary.
> 
> real	4m28.018s
> user	0m0.015s
> sys	0m4.712s
> [root@...l-per6515-03 ~]# sort -h /proc/allocinfo | tail -10
>     21405696     5226 mm/memory.c:1122 func:folio_prealloc 
>     26199936     7980 kernel/fork.c:309 func:alloc_thread_stack_node 
>     29822976     7281 mm/readahead.c:247 func:page_cache_ra_unbounded 
>     99090432    96768 drivers/iommu/iova.c:604 func:iova_magazine_alloc 
>    107638784     6320 mm/readahead.c:468 func:ra_alloc_folio 
>    120560528    29439 mm/mm_init.c:2521 func:alloc_large_system_hash 
>    134742016    32896 mm/percpu-vm.c:95 func:pcpu_alloc_pages 
>    263192576    64256 mm/page_ext.c:270 func:alloc_page_ext 
>    266797056    65136 include/linux/mm.h:2848 func:pagetable_alloc 
>    507617280    32796 mm/slub.c:2305 func:alloc_slab_page 
> [root@...l-per6515-03 ~]# 
> [root@...l-per6515-03 ~]# 
> [root@...l-per6515-03 linux]# time tools/testing/selftests/mm/test_vmalloc.sh run_test_mask=127 nr_threads=128
> Run the test with following parameters: run_test_mask=127 nr_threads=128
> Done.
> Check the kernel ring buffer to see the summary.
> 
> real	6m19.328s
> user	0m0.005s
> sys	0m9.476s
> [root@...l-per6515-03 ~]# sort -h /proc/allocinfo | tail -10
>     21405696     5226 mm/memory.c:1122 func:folio_prealloc 
>     26889408     8190 kernel/fork.c:309 func:alloc_thread_stack_node 
>     29822976     7281 mm/readahead.c:247 func:page_cache_ra_unbounded 
>     99090432    96768 drivers/iommu/iova.c:604 func:iova_magazine_alloc 
>    107638784     6320 mm/readahead.c:468 func:ra_alloc_folio 
>    120560528    29439 mm/mm_init.c:2521 func:alloc_large_system_hash 
>    134742016    32896 mm/percpu-vm.c:95 func:pcpu_alloc_pages 
>    263192576    64256 mm/page_ext.c:270 func:alloc_page_ext 
>    550068224    34086 mm/slub.c:2305 func:alloc_slab_page 
>    664535040   162240 include/linux/mm.h:2848 func:pagetable_alloc 
> [root@...l-per6515-03 ~]# 
> [root@...l-per6515-03 ~]# 
> [root@...l-per6515-03 linux]# time tools/testing/selftests/mm/test_vmalloc.sh run_test_mask=127 nr_threads=256
> Run the test with following parameters: run_test_mask=127 nr_threads=256
> Done.
> Check the kernel ring buffer to see the summary.
> 
> real	19m10.657s
> user	0m0.015s
> sys	0m20.959s
> [root@...l-per6515-03 ~]# sort -h /proc/allocinfo | tail -10
>     22441984     5479 mm/shmem.c:1634 func:shmem_alloc_folio 
>     26758080     8150 kernel/fork.c:309 func:alloc_thread_stack_node 
>     35880960     8760 mm/readahead.c:247 func:page_cache_ra_unbounded 
>     99090432    96768 drivers/iommu/iova.c:604 func:iova_magazine_alloc 
>    120560528    29439 mm/mm_init.c:2521 func:alloc_large_system_hash 
>    122355712     7852 mm/readahead.c:468 func:ra_alloc_folio 
>    134742016    32896 mm/percpu-vm.c:95 func:pcpu_alloc_pages 
>    263192576    64256 mm/page_ext.c:270 func:alloc_page_ext 
>    708231168    50309 mm/slub.c:2305 func:alloc_slab_page 
>   1107296256   270336 include/linux/mm.h:2848 func:pagetable_alloc 
> [root@...l-per6515-03 ~]# 
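
A note on reading these tables: each /proc/allocinfo line (available
when the kernel is built with CONFIG_MEM_ALLOC_PROFILING) is
<total bytes> <live allocations> <file:line> func:<name>, so sorting
with -h and taking the tail lists the top consumers by bytes. A
variant that renders the sizes human-readable, assuming numfmt from
coreutils is available:

  sort -h /proc/allocinfo | tail -10 | numfmt --field=1 --to=iec
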
> 
> 2) linux-next kernel with the improvement code from Uladzislau
> ---------------------------------------------------------------
> [root@...l-per6515-03 linux]# time tools/testing/selftests/mm/test_vmalloc.sh run_test_mask=127 nr_threads=64
> Run the test with following parameters: run_test_mask=127 nr_threads=64
> Done.
> Check the kernel ring buffer to see the summary.
> 
> real	4m27.226s
> user	0m0.006s
> sys	0m4.709s
> [root@...l-per6515-03 linux]# sort -h /proc/allocinfo | tail -10
>     38023168     9283 mm/readahead.c:247 func:page_cache_ra_unbounded 
>     72228864    17634 fs/xfs/xfs_buf.c:390 [xfs] func:xfs_buf_alloc_pages 
>     99090432    96768 drivers/iommu/iova.c:604 func:iova_magazine_alloc 
>     99863552    97523 fs/xfs/xfs_icache.c:81 [xfs] func:xfs_inode_alloc 
>    120560528    29439 mm/mm_init.c:2521 func:alloc_large_system_hash 
>    136314880    33280 mm/percpu-vm.c:95 func:pcpu_alloc_pages 
>    184176640    10684 mm/readahead.c:468 func:ra_alloc_folio 
>    263192576    64256 mm/page_ext.c:270 func:alloc_page_ext 
>    284700672    69507 include/linux/mm.h:2848 func:pagetable_alloc 
>    601427968    36377 mm/slub.c:2305 func:alloc_slab_page 
> [root@...l-per6515-03 linux]# time tools/testing/selftests/mm/test_vmalloc.sh run_test_mask=127 nr_threads=128
> Run the test with following parameters: run_test_mask=127 nr_threads=128
> Done.
> Check the kernel ring buffer to see the summary.
> 
> real	6m16.960s
> user	0m0.007s
> sys	0m9.465s
> [root@...l-per6515-03 linux]# sort -h /proc/allocinfo | tail -10
>     38158336     9316 mm/readahead.c:247 func:page_cache_ra_unbounded 
>     72220672    17632 fs/xfs/xfs_buf.c:390 [xfs] func:xfs_buf_alloc_pages 
>     99090432    96768 drivers/iommu/iova.c:604 func:iova_magazine_alloc 
>     99863552    97523 fs/xfs/xfs_icache.c:81 [xfs] func:xfs_inode_alloc 
>    120560528    29439 mm/mm_init.c:2521 func:alloc_large_system_hash 
>    136314880    33280 mm/percpu-vm.c:95 func:pcpu_alloc_pages 
>    184504320    10710 mm/readahead.c:468 func:ra_alloc_folio 
>    263192576    64256 mm/page_ext.c:270 func:alloc_page_ext 
>    427884544   104464 include/linux/mm.h:2848 func:pagetable_alloc 
>    697311232    45159 mm/slub.c:2305 func:alloc_slab_page
> [root@...l-per6515-03 linux]# time tools/testing/selftests/mm/test_vmalloc.sh run_test_mask=127 nr_threads=256
> Run the test with following parameters: run_test_mask=127 nr_threads=256
> Done.
> Check the kernel ring buffer to see the summary.
> 
> real	21m15.673s
> user	0m0.008s
> sys	0m20.259s
> [root@...l-per6515-03 linux]# sort -h /proc/allocinfo | tail -10
>     38158336     9316 mm/readahead.c:247 func:page_cache_ra_unbounded 
>     72224768    17633 fs/xfs/xfs_buf.c:390 [xfs] func:xfs_buf_alloc_pages 
>     99090432    96768 drivers/iommu/iova.c:604 func:iova_magazine_alloc 
>     99863552    97523 fs/xfs/xfs_icache.c:81 [xfs] func:xfs_inode_alloc 
>    120560528    29439 mm/mm_init.c:2521 func:alloc_large_system_hash 
>    136314880    33280 mm/percpu-vm.c:95 func:pcpu_alloc_pages 
>    184504320    10710 mm/readahead.c:468 func:ra_alloc_folio 
>    263192576    64256 mm/page_ext.c:270 func:alloc_page_ext 
>    506974208   123773 include/linux/mm.h:2848 func:pagetable_alloc 
>    809504768    53621 mm/slub.c:2305 func:alloc_slab_page
> [root@...l-per6515-03 linux]# time tools/testing/selftests/mm/test_vmalloc.sh run_test_mask=127 nr_threads=256
> Run the test with following parameters: run_test_mask=127 nr_threads=256
> Done.
> Check the kernel ring buffer to see the summary.
> 
> real	21m36.580s
> user	0m0.012s
> sys	0m19.912s
> [root@...l-per6515-03 linux]# sort -h /proc/allocinfo | tail -10
>     38977536     9516 mm/readahead.c:247 func:page_cache_ra_unbounded 
>     72273920    17645 fs/xfs/xfs_buf.c:390 [xfs] func:xfs_buf_alloc_pages 
>     99090432    96768 drivers/iommu/iova.c:604 func:iova_magazine_alloc 
>     99895296    97554 fs/xfs/xfs_icache.c:81 [xfs] func:xfs_inode_alloc 
>    120560528    29439 mm/mm_init.c:2521 func:alloc_large_system_hash 
>    141033472    34432 mm/percpu-vm.c:95 func:pcpu_alloc_pages 
>    186064896    10841 mm/readahead.c:468 func:ra_alloc_folio 
>    263192576    64256 mm/page_ext.c:270 func:alloc_page_ext 
>    541237248   132138 include/linux/mm.h:2848 func:pagetable_alloc 
>    694718464    41216 mm/slub.c:2305 func:alloc_slab_page
> 
> 
Thank you for testing this. So it is ~132MB with the patch. I think it
looks good, but I might change the draft version and send out a new
version.
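
The pagetable_alloc delta can also be pulled out of saved snapshots
directly; a quick sketch, where before.txt and after.txt are
hypothetical copies of /proc/allocinfo taken without and with the
patch:

  # prints bytes as MiB plus the live-allocation count per snapshot
  awk '/func:pagetable_alloc/ { printf "%s: %.0f MiB, %d allocations\n",
       FILENAME, $1 / 1048576, $2 }' before.txt after.txt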

Thank you again!

--
Uladzislau Rezki
