Message-ID: <a4bcddad-e56f-cedc-891a-916e86d9a02c@intel.com>
Date:   Fri, 16 Dec 2022 19:48:33 +0800
From:   "Yin, Fengwei" <fengwei.yin@...el.com>
To:     Rik van Riel <riel@...riel.com>,
        Thorsten Leemhuis <regressions@...mhuis.info>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "Yang Shi" <shy828301@...il.com>,
        "Shutemov, Kirill" <kirill.shutemov@...el.com>
CC:     "Huang, Ying" <ying.huang@...el.com>,
        kernel test robot <yujie.liu@...el.com>, <lkp@...ts.01.org>,
        <lkp@...el.com>, Matthew Wilcox <willy@...radead.org>,
        <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
        <feng.tang@...el.com>, <zhengjun.xing@...ux.intel.com>,
        Nathan Chancellor <nathan@...nel.org>
Subject: Re: [mm] f35b5d7d67: will-it-scale.per_process_ops -95.5% regression



On 12/2/2022 4:29 AM, Rik van Riel wrote:
> On Thu, 2022-12-01 at 19:33 +0100, Thorsten Leemhuis wrote:
>> Hi, this is your Linux kernel regression tracker.
>>
>> On 28.11.22 07:40, Nathan Chancellor wrote:
>>> Hi Rik,
>>
>> I wonder what we should do about the performance regression below.
>> Is reverting the culprit now and reapplying it later together with a
>> fix a viable option? Or has anything been done / is anybody already
>> doing something to address the problem and I just missed it?
> 
> The changeset in question speeds up kernel compiles with
> GCC, as well as the runtime speed of other programs, due
> to being able to use THPs more. However, it slows down kernel
> compiles with clang, due to ... something clang does.
> 
> I have not figured out what that something is yet.
> 
> I don't know if I have the wrong version of clang here,
> but I have not seen any smoking gun at all when tracing
> clang system calls. I see predominantly small mmap and
> unmap calls, and nothing that even triggers 2MB alignment.
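
For reference, my reading of what f35b5d7d676e changed (a simplified
sketch of the placement logic, not the actual kernel code, which lives
in thp_get_unmapped_area()): only anonymous mappings at least PMD_SIZE
long get rounded to a THP boundary, so the predominantly small
mmap/munmap calls traced from clang itself would indeed never trigger
the alignment:

#define PMD_SIZE 0x200000UL     /* 2 MiB on x86_64 */

/* Simplified sketch of the logic added by f35b5d7d676e. */
static unsigned long sketch_get_unmapped_area(unsigned long hint,
                                              unsigned long len)
{
        if (len < PMD_SIZE)
                return hint;    /* too small for a THP, keep default */
        /* Round the chosen address up to the next PMD boundary. */
        return (hint + PMD_SIZE - 1) & ~(PMD_SIZE - 1);
}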

There are some findings on my side. I am not sure this is the root
cause, but I'd like to share it here to get more eyes on it.
1. The regression is not related to clang itself; it is related to
   ld.lld.
   In my environment, the command
      make LD=ld.lld -skj96 allmodconfig all
   reproduces the regression, while the command
      make CC=clang -skj96 allmodconfig all
   does not.

2. It looks like ld.lld calls madvise(MADV_DONTNEED) many times to
   free memory, but the "len" parameter is not THP-friendly.

Tracing do_madvise with trace-bpfcc gave:
531607  531732  ld.lld          do_madvise.part.0 start: 0x7feca9000000, len: 0x7fb000, behavior: 0x4
531607  531793  ld.lld          do_madvise.part.0 start: 0x7fec86a00000, len: 0x7fb000, behavior: 0x4
531607  531767  ld.lld          do_madvise.part.0 start: 0x7fec95000000, len: 0x7fb000, behavior: 0x4
531607  531703  ld.lld          do_madvise.part.0 start: 0x7fecb3000000, len: 0x7fb000, behavior: 0x4
531607  531757  ld.lld          do_madvise.part.0 start: 0x7fec9b400000, len: 0x7fb000, behavior: 0x4
531607  531701  ld.lld          do_madvise.part.0 start: 0x7fecb4400000, len: 0x7fb000, behavior: 0x4
531607  531795  ld.lld          do_madvise.part.0 start: 0x7fec85600000, len: 0x7fb000, behavior: 0x4
531607  531792  ld.lld          do_madvise.part.0 start: 0x7fec87400000, len: 0x7fb000, behavior: 0x4
531607  531695  ld.lld          do_madvise.part.0 start: 0x7fecbca00000, len: 0x7fb000, behavior: 0x4
531607  531760  ld.lld          do_madvise.part.0 start: 0x7fec99600000, len: 0x7fb000, behavior: 0x4
531607  531706  ld.lld          do_madvise.part.0 start: 0x7fecb1200000, len: 0x7fb000, behavior: 0x4
531607  531769  ld.lld          do_madvise.part.0 start: 0x7fec94600000, len: 0x7fb000, behavior: 0x4
531607  531699  ld.lld          do_madvise.part.0 start: 0x7fecb5800000, len: 0x7fb000, behavior: 0x4
531607  531738  ld.lld          do_madvise.part.0 start: 0x7feca5e00000, len: 0x7fb000, behavior: 0x4
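
To make the pattern concrete, below is a minimal userspace sketch (my
own illustration, not code from ld.lld) of the call shape in the trace
above: a THP-backed anonymous mapping freed with an
madvise(MADV_DONTNEED) whose length is not a multiple of PMD_SIZE, so
the tail of the range only partially covers a huge page:

#include <stddef.h>
#include <sys/mman.h>

int main(void)
{
        size_t map_len = 0x800000;       /* 8 MiB, eligible for THPs */
        size_t free_len = 0x7fb000;      /* the len seen in the trace */

        char *p = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
                return 1;

        /* Touch every page so the fault path can back the region
         * with 2 MiB huge pages. */
        for (size_t i = 0; i < map_len; i += 4096)
                p[i] = 1;

        /* free_len is not PMD-aligned: the range ends inside a huge
         * page, which the kernel must split, queueing it on
         * deferred_split_queue under its spinlock. */
        madvise(p, free_len, MADV_DONTNEED);

        munmap(p, map_len);
        return 0;
}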

3. I captured perf data for 50s while a kernel build with ld.lld
   was ongoing and got:

    24.20%     0.27%  cc1              [kernel.kallsyms]           [k] asm_exc_page_fault
           9.52%
                asm_exc_page_fault
                exc_page_fault
                do_user_addr_fault
                handle_mm_fault
                __handle_mm_fault
                do_huge_pmd_anonymous_page
                vma_alloc_folio
                __folio_alloc
                __alloc_pages
                get_page_from_freelist
                clear_page_erms

    14.85%     0.00%  ld.lld           [kernel.kallsyms]           [k] entry_SYSCALL_64_after_hwframe
           11.52%
                entry_SYSCALL_64_after_hwframe
                do_syscall_64
                __x64_sys_madvise
                do_madvise.part.0
                zap_page_range
                unmap_single_vma
                unmap_page_range
                page_remove_rmap
                deferred_split_huge_page
                __lock_text_start
                native_queued_spin_lock_slowpath

  My understanding is that the deferred_split_huge_page() here matches
  the len: 0x7fb000 seen in the do_madvise() calls: the end of the
  range is not PMD-size aligned, which triggers lock contention on
  deferred_split_queue.
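
  As a quick sanity check on that reading (my own arithmetic, not
  taken from the trace):

#include <stdio.h>

int main(void)
{
        unsigned long pmd_size = 0x200000UL;  /* 2 MiB on x86_64 */
        unsigned long len = 0x7fb000UL;       /* from the trace above */

        /* Prints "full PMDs: 3, tail: 0x1fb000": with PMD-aligned
         * start addresses, each call zaps three whole huge pages plus
         * 0x1fb000 bytes of a fourth, and that fourth huge page must
         * be split on every call. */
        printf("full PMDs: %lu, tail: 0x%lx\n",
               len / pmd_size, len % pmd_size);
        return 0;
}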

With GNU ld, no madvise(MADV_DONTNEED) is called at all.


Regards
Yin, Fengwei

> 
>>
>> Yang Shi, Andrew, what's your opinion on this? I ask you directly,
>> because it looks like Rik hasn't posted anything to lists archived on
>> lore during the last few weeks. :-/
>>
>> Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker'
>> hat)
>>
>> P.S.: As the Linux kernel's regression tracker I deal with a lot of
>> reports and sometimes miss something important when writing mails
>> like this. If that's the case here, don't hesitate to tell me in a
>> public reply, it's in everyone's interest to set the public record
>> straight.
>>
>>> On Thu, Oct 20, 2022 at 10:16:20AM -0700, Nathan Chancellor wrote:
>>>> On Thu, Oct 20, 2022 at 11:28:16AM -0400, Rik van Riel wrote:
>>>>> On Thu, 2022-10-20 at 13:07 +0800, Huang, Ying wrote:
>>>>>> Nathan Chancellor <nathan@...nel.org> writes:
>>>>>>>
>>>>>>> For what it's worth, I just bisected a massive and visible
>>>>>>> performance regression on my Threadripper 3990X workstation to
>>>>>>> commit f35b5d7d676e ("mm: align larger anonymous mappings on
>>>>>>> THP boundaries"), which seems directly related to this
>>>>>>> report/analysis. I initially noticed this because my full set
>>>>>>> of kernel builds against mainline went from 2 hours and 20
>>>>>>> minutes or so to over 3 hours. Zeroing in on x86_64
>>>>>>> allmodconfig, which I used for the bisect:
>>>>>>>
>>>>>>> @ 7b5a0b664ebe ("mm/page_ext: remove unused variable in
>>>>>>> offline_page_ext"):
>>>>>>>
>>>>>>> Benchmark 1: make -skj128 LLVM=1 allmodconfig all
>>>>>>>   Time (mean ± σ):     318.172 s ±  0.730 s    [User: 31750.902 s, System: 4564.246 s]
>>>>>>>   Range (min … max):   317.332 s … 318.662 s    3 runs
>>>>>>>
>>>>>>> @ f35b5d7d676e ("mm: align larger anonymous mappings on THP
>>>>>>> boundaries"):
>>>>>>>
>>>>>>> Benchmark 1: make -skj128 LLVM=1 allmodconfig all
>>>>>>>   Time (mean ± σ):     406.688 s ±  0.676 s    [User: 31819.526 s, System: 16327.022 s]
>>>>>>>   Range (min … max):   405.954 s … 407.284 s    3 runs
>>>>>>
>>>>>> Have you tried to build with gcc? I want to check whether this
>>>>>> is a clang-specific issue or not.
>>>>>
>>>>> This may indeed be something LLVM specific. In previous tests,
>>>>> GCC has generally seen a benefit from increased THP usage.
>>>>> Many other applications also benefit from getting more THPs.
>>>>
>>>> Indeed, GCC builds actually appear to be slightly faster on my
>>>> system now, apologies for not trying that before reporting :/
>>>>
>>>> 7b5a0b664ebe:
>>>>
>>>> Benchmark 1: make -skj128 allmodconfig all
>>>>   Time (mean ± σ):     355.294 s ±  0.931 s    [User: 33620.469 s, System: 6390.064 s]
>>>>   Range (min … max):   354.571 s … 356.344 s    3 runs
>>>>
>>>> f35b5d7d676e:
>>>>
>>>> Benchmark 1: make -skj128 allmodconfig all
>>>>   Time (mean ± σ):     347.400 s ±  2.029 s    [User: 34389.724 s, System: 4603.175 s]
>>>>   Range (min … max):   345.815 s … 349.686 s    3 runs
>>>>
>>>>> LLVM showing 10% system time before this change, and a whopping
>>>>> 30% system time after that change, suggests that LLVM is behaving
>>>>> quite differently from GCC in some ways.
>>>>
>>>> The above tests were done with GCC 12.2.0 from Arch Linux. The
>>>> previous LLVM tests were done with a self-compiled version of LLVM
>>>> from the main branch (16.0.0), optimized with BOLT [1]. To
>>>> eliminate that as a source of issues, I used my distribution's
>>>> version of clang (14.0.6) and saw similar results as before:
>>>>
>>>> 7b5a0b664ebe:
>>>>
>>>> Benchmark 1: make -skj128 LLVM=/usr/bin/ allmodconfig all
>>>>   Time (mean ± σ):     462.517 s ±  1.214 s    [User: 48544.240 s, System: 5586.212 s]
>>>>   Range (min … max):   461.115 s … 463.245 s    3 runs
>>>>
>>>> f35b5d7d676e:
>>>>
>>>> Benchmark 1: make -skj128 LLVM=/usr/bin/ allmodconfig all
>>>>   Time (mean ± σ):     547.927 s ±  0.862 s    [User: 47913.709 s, System: 17682.514 s]
>>>>   Range (min … max):   547.429 s … 548.922 s    3 runs
>>>>
>>>>> If we can figure out what these differences are, maybe we can
>>>>> just fine tune the code to avoid this issue.
>>>>>
>>>>> I'll try to play around with LLVM compilation a little bit next
>>>>> week, to see if I can figure out what might be going on. I wonder
>>>>> if LLVM is doing lots of mremap calls or something...
>>>>
>>>> If there is any further information I can provide or patches I
>>>> can test, I am more than happy to do so.
>>>>
>>>> [1]:
>>>> https://github.com/llvm/llvm-project/tree/96552e73900176d65ee6650facae8d669d6f9498/bolt
>>>
>>> Was there ever a follow up to this report that I missed? I just
>>> noticed that I am still reverting f35b5d7d676e in my mainline
>>> kernel.
>>>
>>> Cheers,
>>> Nathan
>>>
>>
>> #regzbot ignore-activity
>>
> 
