Message-ID: <ZGMocw0rvtErnatJ@feng-clx>
Date:   Tue, 16 May 2023 14:53:39 +0800
From:   Feng Tang <feng.tang@...el.com>
To:     Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
CC:     Peter Zijlstra <peterz@...radead.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "Liu, Yujie" <yujie.liu@...el.com>,
        "Lu, Aaron" <aaron.lu@...el.com>,
        Olivier Dion <odion@...icios.com>,
        "michael.christie@...cle.com" <michael.christie@...cle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        John Hubbard <jhubbard@...dia.com>,
        "Jason Gunthorpe" <jgg@...dia.com>, Peter Xu <peterx@...hat.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Waiman Long <llong@...hat.com>
Subject: Re: [PATCH] mm: Move mm_count into its own cache line

Hi Mathieu,

On Mon, May 15, 2023 at 10:35:36PM +0800, Mathieu Desnoyers wrote:
> The mm_struct mm_count field is frequently updated by mmgrab/mmdrop
> performed by context switch. This causes false-sharing for surrounding
> mm_struct fields which are read-mostly.
> 
> This has been observed on a 2sockets/112core/224cpu Intel Sapphire
> Rapids server running hackbench, and by the kernel test robot
> will-it-scale testcase.
> 
> Move the mm_count field into its own cache line to prevent false-sharing
> with other mm_struct fields.
> 
> Move mm_count to the first field of mm_struct to minimize the amount of
> padding required: rather than adding padding before and after the
> mm_count field, padding is only added after mm_count.
> 
> Note that I noticed this odd comment in mm_struct:
> 
> commit 2e3025434a6b ("mm: relocate 'write_protect_seq' in struct mm_struct")
> 
>                 /*
>                  * With some kernel config, the current mmap_lock's offset
>                  * inside 'mm_struct' is at 0x120, which is very optimal, as
>                  * its two hot fields 'count' and 'owner' sit in 2 different
>                  * cachelines,  and when mmap_lock is highly contended, both
>                  * of the 2 fields will be accessed frequently, current layout
>                  * will help to reduce cache bouncing.
>                  *
>                  * So please be careful with adding new fields before
>                  * mmap_lock, which can easily push the 2 fields into one
>                  * cacheline.
>                  */
>                 struct rw_semaphore mmap_lock;
> 
> This comment is rather odd for a few reasons:
> 
> - It requires addition/removal of mm_struct fields to carefully consider
>   field alignment of _other_ fields,
> - It expresses the wish to keep an "optimal" alignment for a specific
>   kernel config.
> 
> I suspect that the author of this comment may want to revisit this topic
> and perhaps introduce a split-struct approach for struct rw_semaphore,
> if the need is to place various fields of this structure in different
> cache lines.

Thanks for bringing this up.


The full context of commit 2e3025434a6b is here:
https://lore.kernel.org/lkml/20210525031636.GB7744@xsang-OptiPlex-9020/

Adding Linus and Waiman, who have analyzed this case.

In that case, a commit changed the cacheline layout of mmap_lock inside
'mm_struct', which caused a will-it-scale regression. As false-sharing
handling is tricky, we chose to be defensive and just _restore_ the
previous cacheline layout (even if that is kind of weird, being tied to
a specific kernel config :)).

As for rw_semaphore, it is a fundamental thing, while that regression
was just a single micro-benchmark workload. IMHO, any change to its
layout should consider more workloads, and deserves a wide range of
benchmark tests.

I just checked the latest kernel, and it seems the cache layout is
already different from what 2e3025434a6b tried to restore, but the
'count' and 'owner' fields still sit in 2 different cachelines. So this
patch won't 'hurt' in this regard.
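
For anyone who wants to repeat that kind of layout check: pahole on a
vmlinux with debug info prints the field offsets and cacheline
boundaries directly. Outside a kernel tree, the same idea can be
sketched in userspace with offsetof() on a stand-in struct. The snippet
below is only a toy stand-in (not the real mm_struct) and assumes
64-byte cache lines:

/*
 * Toy stand-in: wrapping the hot counter in a cacheline-aligned
 * anonymous struct pushes the read-mostly neighbours onto the next
 * cache line, checked at compile time.
 * Build check: gcc -std=c11 -c layout_sketch.c
 */
#include <assert.h>
#include <stddef.h>

#define CACHELINE 64	/* assumed line size */

struct mm_like {
	struct {
		long mm_count;		/* hot: bumped by mmgrab()/mmdrop() */
	} __attribute__((aligned(CACHELINE)));
	void *mm_mt;			/* read-mostly neighbour, stand-in */
	unsigned long flags;		/* another read-mostly neighbour */
};

static_assert(offsetof(struct mm_like, mm_count) == 0,
	      "hot counter stays first, so padding only lands after it");
static_assert(offsetof(struct mm_like, mm_mt) >= CACHELINE,
	      "read-mostly fields start on the next cache line");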

Thanks,
Feng

> 
> Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced by mm_cid")
> Fixes: af7f588d8f73 ("sched: Introduce per-memory-map concurrency ID")
> Link: https://lore.kernel.org/lkml/7a0c1db1-103d-d518-ed96-1584a28fbf32@efficios.com
> Reported-by: kernel test robot <yujie.liu@...el.com>
> Link: https://lore.kernel.org/oe-lkp/202305151017.27581d75-yujie.liu@intel.com
> Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Aaron Lu <aaron.lu@...el.com>
> Cc: Olivier Dion <odion@...icios.com>
> Cc: michael.christie@...cle.com
> Cc: Andrew Morton <akpm@...ux-foundation.org>
> Cc: Feng Tang <feng.tang@...el.com>
> Cc: John Hubbard <jhubbard@...dia.com>
> Cc: Jason Gunthorpe <jgg@...dia.com>
> Cc: Peter Xu <peterx@...hat.com>
> Cc: linux-mm@...ck.org
> ---
>  include/linux/mm_types.h | 23 +++++++++++++++--------
>  1 file changed, 15 insertions(+), 8 deletions(-)
> 
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 306a3d1a0fa6..de10fc797c8e 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -583,6 +583,21 @@ struct mm_cid {
>  struct kioctx_table;
>  struct mm_struct {
>  	struct {
> +		/*
> +		 * Fields which are often written to are placed in a separate
> +		 * cache line.
> +		 */
> +		struct {
> +			/**
> +			 * @mm_count: The number of references to &struct
> +			 * mm_struct (@mm_users count as 1).
> +			 *
> +			 * Use mmgrab()/mmdrop() to modify. When this drops to
> +			 * 0, the &struct mm_struct is freed.
> +			 */
> +			atomic_t mm_count;
> +		} ____cacheline_aligned_in_smp;
> +
>  		struct maple_tree mm_mt;
>  #ifdef CONFIG_MMU
>  		unsigned long (*get_unmapped_area) (struct file *filp,
> @@ -620,14 +635,6 @@ struct mm_struct {
>  		 */
>  		atomic_t mm_users;
>  
> -		/**
> -		 * @mm_count: The number of references to &struct mm_struct
> -		 * (@mm_users count as 1).
> -		 *
> -		 * Use mmgrab()/mmdrop() to modify. When this drops to 0, the
> -		 * &struct mm_struct is freed.
> -		 */
> -		atomic_t mm_count;
>  #ifdef CONFIG_SCHED_MM_CID
>  		/**
>  		 * @pcpu_cid: Per-cpu current cid.
> -- 
> 2.25.1
> 
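
To make the false-sharing effect concrete without a kernel build, here
is a minimal userspace toy (pthreads, 64-byte lines assumed): two
threads bump two independent counters, once packed into the same cache
line and once padded apart. It only illustrates the mechanism the patch
addresses; it is not the hackbench or will-it-scale workload:

/*
 * Toy illustration of false sharing (not the kernel workload): two
 * threads increment two independent counters. When the counters share
 * a cache line, the line bounces between CPUs; padding them apart
 * avoids that. Build: gcc -O2 -pthread false_sharing.c
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000UL
#define CACHELINE 64			/* assumed line size */

struct shared_line {
	volatile unsigned long a;
	volatile unsigned long b;	/* same cache line as 'a' */
};

struct split_line {
	volatile unsigned long a;
	char pad[CACHELINE];		/* push 'b' onto the next line */
	volatile unsigned long b;
};

static struct shared_line shared __attribute__((aligned(CACHELINE)));
static struct split_line split __attribute__((aligned(CACHELINE)));

static void *bump(void *arg)
{
	volatile unsigned long *c = arg;
	unsigned long i;

	for (i = 0; i < ITERS; i++)
		(*c)++;
	return NULL;
}

static double time_pair(volatile unsigned long *x, volatile unsigned long *y)
{
	struct timespec s, e;
	pthread_t t1, t2;

	clock_gettime(CLOCK_MONOTONIC, &s);
	pthread_create(&t1, NULL, bump, (void *)x);
	pthread_create(&t2, NULL, bump, (void *)y);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	clock_gettime(CLOCK_MONOTONIC, &e);
	return (e.tv_sec - s.tv_sec) + (e.tv_nsec - s.tv_nsec) / 1e9;
}

int main(void)
{
	printf("same cache line:      %.2fs\n", time_pair(&shared.a, &shared.b));
	printf("separate cache lines: %.2fs\n", time_pair(&split.a, &split.b));
	return 0;
}

On a typical multi-core machine the packed case is noticeably slower;
that is the same line bouncing that frequent mmgrab()/mmdrop() updates
of mm_count inflict on the read-mostly mm_struct fields next to it.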
