Date:   Fri, 16 Oct 2020 22:51:25 +0800
From:   "Chen, Rong A" <rong.a.chen@...el.com>
To:     Jan Kara <jack@...e.cz>, NeilBrown <neilb@...e.de>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Christoph Hellwig <hch@....de>,
        Trond Myklebust <trond.myklebust@...merspace.com>,
        Chuck Lever <chuck.lever@...cle.com>,
        LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
        lkp@...el.com, ying.huang@...el.com, feng.tang@...el.com,
        zhengjun.xing@...el.com
Subject: Re: [mm/writeback] 8d92890bd6: will-it-scale.per_process_ops -15.3%
 regression



On 10/15/2020 5:12 PM, Jan Kara wrote:
> On Thu 15-10-20 11:08:43, Jan Kara wrote:
>> On Thu 15-10-20 08:46:01, NeilBrown wrote:
>>> On Wed, Oct 14 2020, Jan Kara wrote:
>>>
>>>> On Wed 14-10-20 16:47:06, kernel test robot wrote:
>>>>> Greetings,
>>>>>
>>>>> FYI, we noticed a -15.3% regression of will-it-scale.per_process_ops due
>>>>> to commit:
>>>>>
>>>>> commit: 8d92890bd6b8502d6aee4b37430ae6444ade7a8c ("mm/writeback: discard
>>>>> NR_UNSTABLE_NFS, use NR_WRITEBACK instead")
>>>>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>>>>
>>>> Thanks for the report, but it doesn't quite make sense to me. If we omit
>>>> the reporting & NFS changes in that commit (which is code not exercised by
>>>> this benchmark), what remains are changes like:
>>>>
>>>>          nr_pages += node_page_state(pgdat, NR_FILE_DIRTY);
>>>> -       nr_pages += node_page_state(pgdat, NR_UNSTABLE_NFS);
>>>>          nr_pages += node_page_state(pgdat, NR_WRITEBACK);
>>>> ...
>>>> -               nr_reclaimable = global_node_page_state(NR_FILE_DIRTY) +
>>>> -                                       global_node_page_state(NR_UNSTABLE_NFS);
>>>> +               nr_reclaimable = global_node_page_state(NR_FILE_DIRTY);
>>>> ...
>>>> -       gdtc->dirty = global_node_page_state(NR_FILE_DIRTY) +
>>>> -                     global_node_page_state(NR_UNSTABLE_NFS);
>>>> +       gdtc->dirty = global_node_page_state(NR_FILE_DIRTY);
>>>>
>>>> So if there's any negative performance impact from these changes, it's
>>>> likely due to code alignment changes or something like that... So I don't
>>>> think there's much to do here, since optimal code alignment is highly
>>>> specific to a particular CPU etc.
>>>
>>> I agree, it seems odd.
>>>
>>> Removing NR_UNSTABLE_NFS from enum node_stat_item would renumber all the
>>> following values and would, I think, change NR_DIRTIED from 32 to 31.
>>> Might that move something to a different cache line and change some
>>> contention?
>>
>> Interesting theory; it could be possible.
>>
>>> That would be easy enough to test: just re-add NR_UNSTABLE_NFS.
>>
>> Yeah, easy enough to test. A patch for this is attached. 0-day people, can
>> you check whether applying this patch changes anything in your perf
>> numbers?
> 
> Forgot the patch. Attached now.
> 
> 								Honza
> 
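
As a minimal illustration of the renumbering theory quoted above (a simplified
sketch, not the real kernel definitions and not Jan's attached patch; the enum
positions and the 8-byte-counter / 64-byte-cache-line assumptions are for
demonstration only), removing one node_stat_item entry shifts every later
counter down by one slot in the per-node stat array, which can move a hot
counter such as NR_DIRTIED across a cache-line boundary:

#include <stdio.h>

/* Cut-down stand-in for enum node_stat_item; the real values live in
 * include/linux/mmzone.h and the positions below are illustrative only. */
enum node_stat_item_old {
	OLD_NR_FILE_DIRTY = 30,
	OLD_NR_UNSTABLE_NFS,	/* 31 */
	OLD_NR_DIRTIED,		/* 32 */
	OLD_NR_WRITTEN,		/* 33 */
};

enum node_stat_item_new {
	NEW_NR_FILE_DIRTY = 30,
	/* NR_UNSTABLE_NFS removed: everything below shifts down by one */
	NEW_NR_DIRTIED,		/* 31 */
	NEW_NR_WRITTEN,		/* 32 */
};

int main(void)
{
	/* Assume each per-node counter is 8 bytes and cache lines are 64
	 * bytes, so 8 counters share a line; a shift of one index can move
	 * a counter onto a different line, next to different neighbours. */
	printf("NR_DIRTIED: index %d -> cache line %d before the commit\n",
	       (int)OLD_NR_DIRTIED, (int)OLD_NR_DIRTIED * 8 / 64);
	printf("NR_DIRTIED: index %d -> cache line %d after the commit\n",
	       (int)NEW_NR_DIRTIED, (int)NEW_NR_DIRTIED * 8 / 64);
	return 0;
}

Re-adding the entry, even as an otherwise unused placeholder (presumably what
the attached patch does), would put the later counters back at their original
indices and restore the old layout.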

Hi,

We tested the patch and the regression became worse, but as you said, the
problem seems odd, so we also tested v5.9, where the regression has already
disappeared.

a37b0715ddf30077 8d92890bd6b8502d6aee4b37430                        v5.9
---------------- --------------------------- ---------------------------
          %stddev     %change         %stddev     %change         %stddev
              \          |                \          |                \
     341015 ±  9%     -18.4%     278292           +32.4%     451473        will-it-scale.per_process_ops
   65475001 ±  9%     -18.4%   53432256           +32.4%   86682938        will-it-scale.workload

Best Regards,
Rong Chen
