Message-ID: <YidR4iKM/GjWto4Y@hirez.programming.kicks-ass.net>
Date:   Tue, 8 Mar 2022 13:53:54 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     Wen Yang <wenyang@...ux.alibaba.com>
Cc:     Stephane Eranian <eranian@...gle.com>,
        Wen Yang <simon.wy@...baba-inc.com>,
        Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Mark Rutland <mark.rutland@....com>,
        Jiri Olsa <jolsa@...hat.com>,
        Namhyung Kim <namhyung@...nel.org>,
        Borislav Petkov <bp@...en8.de>, x86@...nel.org,
        "H. Peter Anvin" <hpa@...or.com>, linux-perf-users@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [RESEND PATCH 2/2] perf/x86: improve the event scheduling to
 avoid unnecessary pmu_stop/start

On Tue, Mar 08, 2022 at 02:42:10PM +0800, Wen Yang wrote:

> Perhaps the following code could ensure that the pmu counter value is
> stable:
> 
> 
>     /*
>      * Careful: an NMI might modify the previous event value.
>      *
>      * Our tactic to handle this is to first atomically read and
>      * exchange a new raw count - then add that new-prev delta
>      * count to the generic event atomically:
>      */
> again:
>     prev_raw_count = local64_read(&hwc->prev_count);
>     rdpmcl(hwc->event_base_rdpmc, new_raw_count);
> 
>     if (local64_cmpxchg(&hwc->prev_count, prev_raw_count,
>                     new_raw_count) != prev_raw_count)
>         goto again;
> 
> 
> It might be better if we could reduce how often the goto again path is
> taken.

This is completely unrelated. And that goto is rather unlikely, unless
you're doing some truly weird things.

That case happens when the PMI for a counter lands in the middle of a
read() for that counter. In that case both will try to fold the
hardware delta into the software counter. This conflict is fundamentally
unavoidable and needs to be dealt with. The above guarantees correctness
in this case.
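
For illustration only, here is a userspace sketch of that retry
pattern; C11 atomics stand in for the kernel's local64_t API and a
plain atomic variable stands in for the rdpmcl() read, so all the
names below are made up rather than the kernel's:

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    static _Atomic uint64_t prev_count;   /* stands in for hwc->prev_count */
    static _Atomic uint64_t event_count;  /* stands in for event->count */
    static _Atomic uint64_t fake_pmc;     /* stands in for the hardware counter */

    /* Called from both the read() path and the (simulated) PMI. */
    static void fold_delta(void)
    {
            uint64_t prev_raw_count, new_raw_count;

            do {
                    prev_raw_count = atomic_load(&prev_count);
                    new_raw_count = atomic_load(&fake_pmc); /* rdpmcl() stand-in */
                    /*
                     * If a concurrent fold moved prev_count under us,
                     * the exchange fails and we go around again; the
                     * equivalent of the goto above.
                     */
            } while (!atomic_compare_exchange_weak(&prev_count,
                                                   &prev_raw_count,
                                                   new_raw_count));

            /* Only the winner of the exchange folds this delta. */
            atomic_fetch_add(&event_count, new_raw_count - prev_raw_count);
    }

    int main(void)
    {
            atomic_store(&fake_pmc, 1000);
            fold_delta();                   /* read() path */
            atomic_store(&fake_pmc, 1500);
            fold_delta();                   /* PMI path */
            printf("count = %llu\n",
                   (unsigned long long)atomic_load(&event_count));
            return 0;
    }

Whichever caller loses the exchange retries with the updated
prev_count, so the delta is folded exactly once regardless of how the
read() and the PMI interleave.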

It is, however, extremely unlikely and has *NOTHING* whatsoever to do
with counter scheduling.
