Message-ID: <CAHk-=wjMgTdpsXeuBfRBz23mTSD67V_BB_aV2bCzDHF2-k0WBg@mail.gmail.com>
Date:   Thu, 5 Nov 2020 10:37:08 -0800
From:   Linus Torvalds <torvalds@...ux-foundation.org>
To:     Xing Zhengjun <zhengjun.xing@...ux.intel.com>
Cc:     kernel test robot <rong.a.chen@...el.com>,
        Jann Horn <jannh@...gle.com>, Peter Xu <peterx@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>, lkp@...ts.01.org,
        kernel test robot <lkp@...el.com>, zhengjun.xing@...el.com
Subject: Re: [LKP] Re: [mm/gup] a308c71bf1: stress-ng.vm-splice.ops_per_sec
 -95.6% regression

On Thu, Nov 5, 2020 at 12:29 AM Xing Zhengjun
<zhengjun.xing@...ux.intel.com> wrote:
>
> > Rong - mind testing this? I don't think the zero-page _should_ be
> > something that real loads care about, but hey, maybe people do want to
> > do things like splice zeroes very efficiently..
>
> I test the patch, the regression still existed.

Thanks.

So Jann's suspicion seems interesting but apparently not the reason
for this particular case.

For being such a _huge_ difference (20x improvement followed by a 20x
regression), it's surprising how little the numbers give a clue. The
big changes are things like
"interrupts.CPU19.CAL:Function_call_interrupts", but while those
change by hundreds of percent, most of the changes seem to just be
about them moving to different CPUs. IOW, we have things like

      5652 ± 59%    +387.9%      27579 ± 96%  interrupts.CPU13.CAL:Function_call_interrupts
     28249 ± 32%     -69.3%       8675 ± 50%  interrupts.CPU28.CAL:Function_call_interrupts

which isn't really much of a change at all despite the changes looking
very big - it's just the stats jumping from one CPU to another.

Maybe there's some actual change in there, but it's very well hidden if so.

Yes, some of the numbers get worse:

    868396 ±  3%     +20.9%    1050234 ± 14%  interrupts.RES:Rescheduling_interrupts

so that's a 20% increase in rescheduling interrupts. But it's a 20%
increase, not a 500% one. So the fact that performance changes by 20x
is still very unclear to me.

We do have a lot of those numa-meminfo changes, but they could just
come from allocation patterns.

That said - another difference between the fast-gup code and the
regular gup code is that the fast-gup code does

                if (pte_protnone(pte))
                        goto pte_unmap;

and the regular slow case does

        if ((flags & FOLL_NUMA) && pte_protnone(pte))
                goto no_page;

Now, FOLL_NUMA is always set in the slow case if we don't have
FOLL_FORCE set, so this difference isn't "real", but it's one of those
cases where the zero-page might be marked for NUMA faulting, and doing
the forced COW might then cause it to be accessible.

Just out of curiosity, do the numbers change enormously if you just remove that

                if (pte_protnone(pte))
                        goto pte_unmap;

test from the fast-gup case (top of the loop in gup_pte_range()) -
effectively making fast-gup act like FOLL_FORCE wrt NUMA
placement...

I'm not convinced that's a valid change in general, so this is just a
"to debug the odd performance numbers" issue.
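Concretely, that debug experiment would amount to a patch along these lines (context approximate; the exact placement within gup_pte_range() in mm/gup.c varies by kernel version):

```diff
--- a/mm/gup.c
+++ b/mm/gup.c
@@ gup_pte_range() pte loop @@
-		if (pte_protnone(pte))
-			goto pte_unmap;
-
```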

Also out of curiosity: is the performance profile limited to just the
load, or is it a system profile (i.e. do you have "-a" on the perf
record line or not)?
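For reference, the two cases differ only in the "-a" (system-wide) flag; these are illustrative command lines, not the actual 0-day/LKP harness invocations:

```shell
# System-wide profile: samples all CPUs, including unrelated activity.
perf record -a -- sleep 10

# Workload-only profile: samples just the benchmark's process tree.
perf record -- stress-ng --vm-splice 4 --timeout 10s
```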

               Linus
