Message-ID: <8351fcd6-d4a2-9656-eae8-96e92e3e5257@intel.com>
Date:   Tue, 5 Sep 2023 13:41:36 +0800
From:   Yin Fengwei <fengwei.yin@...el.com>
To:     Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        kernel test robot <oliver.sang@...el.com>
CC:     <oe-lkp@...ts.linux.dev>, <lkp@...el.com>,
        <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        kernel test robot <yujie.liu@...el.com>,
        Aaron Lu <aaron.lu@...el.com>,
        John Hubbard <jhubbard@...dia.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Olivier Dion <odion@...icios.com>,
        Feng Tang <feng.tang@...el.com>,
        Jason Gunthorpe <jgg@...dia.com>, Peter Xu <peterx@...hat.com>,
        <ying.huang@...el.com>
Subject: Re: [linus:master] [mm] c1753fd02a: stress-ng.madvise.ops_per_sec
 -6.5% regression



On 9/4/23 18:04, Mathieu Desnoyers wrote:
> On 9/4/23 01:32, Yin Fengwei wrote:
>>
>>
>> On 7/19/23 14:34, kernel test robot wrote:
>>>
>>> hi, Mathieu Desnoyers,
>>>
>>> we noticed that this commit addressed the issue
>>>    "[linus:master] [sched] af7f588d8f: will-it-scale.per_thread_ops -13.9% regression"
>>> that we reported earlier at
>>>    https://lore.kernel.org/oe-lkp/202305151017.27581d75-yujie.liu@intel.com/
>>>
>>> we did see a 92.2% will-it-scale.per_thread_ops improvement from this commit
>>> (details are below).
>>> however, we also noticed a stress-ng regression.
>>>
>>> the detailed report is below, FYI.
>>>
>>>
>>> Hello,
>>>
>>> kernel test robot noticed a -6.5% regression of stress-ng.madvise.ops_per_sec on:
>>>
>>>
>>> commit: c1753fd02a0058ea43cbb31ab26d25be2f6cfe08 ("mm: move mm_count into its own cache line")
>>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>> I noticed that struct mm_struct has the following layout change after this patch.
>> Without the patch:
>>                  spinlock_t         page_table_lock;      /*   124     4 */
>>                  /* --- cacheline 2 boundary (128 bytes) --- */
>>                  struct rw_semaphore mmap_lock;           /*   128    40 */   ----> in one cache line
>>                  struct list_head   mmlist;               /*   168    16 */
>>                  int                mm_lock_seq;          /*   184     4 */
>>
>> With the patch:
>>                  spinlock_t         page_table_lock;      /*   180     4 */
>>                  struct rw_semaphore mmap_lock;           /*   184    40 */   ----> cross to two cache lines
>>                  /* --- cacheline 3 boundary (192 bytes) was 32 bytes ago --- */
>>                  struct list_head   mmlist;               /*   224    16 */
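
To spell out the cache-line arithmetic behind the two annotations above, here
is a toy userspace sketch. It assumes 64-byte cache lines and simply reuses the
pahole offsets quoted above; it is not kernel code:

/* Map the pahole-reported offsets of the 40-byte mmap_lock to 64-byte
 * cache lines. Toy arithmetic only, not taken from the kernel tree. */
#include <stdio.h>

#define CACHELINE	64
#define RWSEM_SIZE	40	/* size of struct rw_semaphore in the pahole output */

static void report(const char *tag, unsigned int off)
{
	printf("%s: mmap_lock spans bytes %u-%u -> cache lines %u..%u\n",
	       tag, off, off + RWSEM_SIZE - 1,
	       off / CACHELINE, (off + RWSEM_SIZE - 1) / CACHELINE);
}

int main(void)
{
	report("without the patch", 128);	/* lines 2..2: fully contained */
	report("with the patch", 184);		/* lines 2..3: straddles a boundary */
	return 0;
}
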
> 
> If your intent is just to make sure that mmap_lock is entirely contained
> within a cache line by forcing it to begin on a cache line boundary, you
> can do:
> 
> struct mm_struct {
> [...]
>     struct rw_semaphore mmap_lock ____cacheline_aligned_in_smp;
>     struct list_head mmlist;
> [...]
> };
> 
> The code above keeps mmlist on the same cache line as mmap_lock if
> there happens to be enough room in the cache line after mmap_lock.
> 
> Otherwise, if your intent is to also eliminate false sharing by making
> sure that mmap_lock sits alone in its cache line, you can do the following:
> 
> struct mm_struct {
> [...]
>     struct {
>         struct rw_semaphore mmap_lock;
>     } ____cacheline_aligned_in_smp;
>     struct list_head mmlist;
> [...]
> };
> 
> The code above keeps mmlist in a separate cache line from mmap_lock.
> 
> Depending on the usage, one or the other may be better. Comparative
> benchmarks of both approaches would help choose the best way forward
> here.
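
For what it's worth, here is a small userspace sketch of the two placements,
using toy structs only, with a plain GCC aligned attribute standing in for
____cacheline_aligned_in_smp and 64-byte cache lines assumed; this is not the
real mm_struct:

/* Toy userspace sketch contrasting the two placements above. */
#include <stdio.h>
#include <stddef.h>

#define CACHELINE	64
#define cl_aligned	__attribute__((__aligned__(CACHELINE)))

struct fake_rwsem { long w[5]; };		/* 40 bytes, like rw_semaphore above */
struct fake_list  { void *next, *prev; };

/* Variant A: align the field itself; mmlist can share its cache line. */
struct mm_a {
	char before[124];			/* everything up to mmap_lock */
	struct fake_rwsem mmap_lock cl_aligned;
	struct fake_list mmlist;
};

/* Variant B: wrap the field in an aligned anonymous struct; the wrapper is
 * padded out to a full cache line, so mmlist starts on the next one. */
struct mm_b {
	char before[124];
	struct { struct fake_rwsem mmap_lock; } cl_aligned;
	struct fake_list mmlist;
};

int main(void)
{
	printf("A: mmap_lock @%zu, mmlist @%zu -> same cache line\n",
	       offsetof(struct mm_a, mmap_lock), offsetof(struct mm_a, mmlist));
	printf("B: mmap_lock @%zu, mmlist @%zu -> next cache line\n",
	       offsetof(struct mm_b, mmap_lock), offsetof(struct mm_b, mmlist));
	return 0;
}
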
Tested will_it_scale.mmap1 on Intel Ice Lake (48C/96T + 192G RAM) and
confirmed that my patch brings around a 12% regression, which matches the
finding in
commit 2e3025434a6b ("mm: relocate 'write_protect_seq' in struct mm_struct"):

putting the state and owner of the rwsem on different cache lines can
benefit will_it_scale.mmap1.
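
Roughly, the arithmetic behind that (assuming the usual rw_semaphore layout
with the count/state word first and owner immediately after it, plus 64-byte
cache lines) is:

    without the patch: count @128, owner @136  -> both on cache line 2
    with the patch:    count @184 -> cache line 2, owner @192 -> cache line 3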

So we may just keep mm_struct as it is now.


Regards
Yin, Fengwei

> 
> Thanks,
> 
> Mathieu
> 
