Date:   Fri, 12 Jul 2019 12:54:41 -0700
From:   Y Song <ys114321@...il.com>
To:     Andrii Nakryiko <andrii.nakryiko@...il.com>
Cc:     Ilya Leoshkevich <iii@...ux.ibm.com>, bpf <bpf@...r.kernel.org>,
        Networking <netdev@...r.kernel.org>, gor@...ux.ibm.com,
        heiko.carstens@...ibm.com
Subject: Re: [PATCH bpf] selftests/bpf: fix test_send_signal_nmi on s390

On Fri, Jul 12, 2019 at 11:24 AM Andrii Nakryiko
<andrii.nakryiko@...il.com> wrote:
>
> On Fri, Jul 12, 2019 at 10:46 AM Ilya Leoshkevich <iii@...ux.ibm.com> wrote:
> >
> > Many s390 setups (most notably, KVM guests) do not have access to
> > hardware performance events.
> >
> > Therefore, use the software event instead.
> >
> > Signed-off-by: Ilya Leoshkevich <iii@...ux.ibm.com>
> > Acked-by: Vasily Gorbik <gor@...ux.ibm.com>
> > ---
> >  tools/testing/selftests/bpf/prog_tests/send_signal.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > index 67cea1686305..4a45ea0b8448 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > @@ -176,10 +176,19 @@ static int test_send_signal_tracepoint(void)
> >  static int test_send_signal_nmi(void)
> >  {
> >         struct perf_event_attr attr = {
> > +#if defined(__s390__)
> > +               /* Many s390 setups (most notably, KVM guests) do not have
> > +                * access to hardware performance events.
> > +                */
> > +               .sample_period = 1,
> > +               .type = PERF_TYPE_SOFTWARE,
> > +               .config = PERF_COUNT_SW_CPU_CLOCK,
> > +#else
>
> Is there any harm in switching all archs to the software event? I'd
> rather avoid all those special arch cases, which will be really hard
> to test for people without direct access to those machines.

I'd still like to use the hardware cpu_cycles event in order to test NMI.
On a physical box:
$ perf list
List of pre-defined events (to be used in -e):

  branch-instructions OR branches                    [Hardware event]
  branch-misses                                      [Hardware event]
  bus-cycles                                         [Hardware event]
  cache-misses                                       [Hardware event]
  cache-references                                   [Hardware event]
  cpu-cycles OR cycles                               [Hardware event]
  instructions                                       [Hardware event]
  ref-cycles                                         [Hardware event]

  alignment-faults                                   [Software event]
  bpf-output                                         [Software event]
  context-switches OR cs                             [Software event]
  cpu-clock                                          [Software event]
  cpu-migrations OR migrations                       [Software event]
  dummy                                              [Software event]
  emulation-faults                                   [Software event]
  major-faults                                       [Software event]
  minor-faults                                       [Software event]
  page-faults OR faults                              [Software event]
  task-clock                                         [Software event]

  L1-dcache-load-misses                              [Hardware cache event]
...

In a VM:
$ perf list
List of pre-defined events (to be used in -e):

  alignment-faults                                   [Software event]
  bpf-output                                         [Software event]
  context-switches OR cs                             [Software event]
  cpu-clock                                          [Software event]
  cpu-migrations OR migrations                       [Software event]
  dummy                                              [Software event]
  emulation-faults                                   [Software event]
  major-faults                                       [Software event]
  minor-faults                                       [Software event]
  page-faults OR faults                              [Software event]
  task-clock                                         [Software event]

  msr/smi/                                           [Kernel PMU event]
  msr/tsc/                                           [Kernel PMU event]
.....

Is it possible to detect at runtime whether hardware cpu_cycles is
available? If it is, let's use the hardware event; otherwise, skip the
test or fall back to the software one. The software event does not
really trigger an NMI, so it takes the same code path in the kernel as
a tracepoint.
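
For example, a rough (untested) sketch of such a probe could look like
the following; pick_sampling_event() is a hypothetical helper, not part
of the current selftest:

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <linux/perf_event.h>

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
			   int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd,
		       flags);
}

/* Fill *attr with hardware cpu-cycles if the PMU is usable, otherwise
 * fall back to the software cpu-clock event. Returns 0 for hardware,
 * 1 for the software fallback.
 */
static int pick_sampling_event(struct perf_event_attr *attr)
{
	int fd;

	memset(attr, 0, sizeof(*attr));
	attr->size = sizeof(*attr);
	attr->freq = 1;
	attr->sample_freq = 50;
	attr->type = PERF_TYPE_HARDWARE;
	attr->config = PERF_COUNT_HW_CPU_CYCLES;

	/* Probe: try to open the event on the current task, any CPU. */
	fd = perf_event_open(attr, 0, -1, -1, 0);
	if (fd >= 0) {
		close(fd);
		return 0;
	}

	/* No usable hardware PMU (e.g. many KVM guests); use software. */
	attr->freq = 0;
	attr->sample_period = 1;	/* overlays sample_freq (union) */
	attr->type = PERF_TYPE_SOFTWARE;
	attr->config = PERF_COUNT_SW_CPU_CLOCK;
	return 1;
}

The probe relies on perf_event_open() failing (typically with ENOENT)
when the requested hardware event type is not supported, which I
believe is what happens on those KVM guests.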

>
> >                 .sample_freq = 50,
> >                 .freq = 1,
> >                 .type = PERF_TYPE_HARDWARE,
> >                 .config = PERF_COUNT_HW_CPU_CYCLES,
> > +#endif
> >         };
> >
> >         return test_send_signal_common(&attr, BPF_PROG_TYPE_PERF_EVENT, "perf_event");
> > --
> > 2.21.0
> >
