Date:   Mon,  8 Jan 2018 07:15:12 -0800
From:   kan.liang@...el.com
To:     peterz@...radead.org, mingo@...hat.com, acme@...nel.org,
        linux-kernel@...r.kernel.org
Cc:     tglx@...utronix.de, jolsa@...hat.com, eranian@...gle.com,
        ak@...ux.intel.com, Kan Liang <kan.liang@...ux.intel.com>
Subject: [RESEND PATCH V2 0/4] bug fix mmap read and rdpmc read in large PEBS

From: Kan Liang <kan.liang@...ux.intel.com>

------
I just want to ping and check the status of this patch series.
Unfortunately something went wrong with my mailbox and all of the emails for
the V2 patches were lost, so I have to resend the V2 series.
Sorry for the noise.

Changes since V1:
 - Check PERF_X86_EVENT_AUTO_RELOAD before calling
   intel_pmu_save_and_restart().
 - Introduce a special purpose intel_pmu_save_and_restart()
   just for AUTO_RELOAD.
 - New patch to disable userspace RDPMC usage if large PEBS is enabled.

------

There is a bug in the mmap read of event->count when large PEBS is enabled.
Here is an example:
 #./read_count
 0x71f0
 0x122c0
 0x1000000001c54
 0x100000001257d
 0x200000000bdc5

The bug is caused by two issues (see the sketch after this list):
- In x86_perf_event_update(), the calculation of event->count does not
  take the auto-reload values into account.
- In x86_pmu_read(), the values still sitting in the undrained large PEBS
  buffer are not counted.
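
To illustrate what goes missing, here is a rough, standalone sketch of the
auto-reload arithmetic. The helper name autoreload_delta() and its
parameters are invented for this sketch; it is not the code in the patches.
With auto-reload the counter is reprogrammed to -period at every PEBS
record, so a plain new - prev delta misses one full period per drained
record:

/* Illustrative only; names and parameters are invented for this sketch. */
static long long autoreload_delta(unsigned long long prev_raw,
                                  unsigned long long new_raw,
                                  unsigned long long period,
                                  unsigned int nr_records, /* drained PEBS records */
                                  int cntval_bits)         /* physical counter width */
{
        int shift = 64 - cntval_bits;

        /* sign-extend both raw values to the full 64 bits */
        long long new = (long long)(new_raw << shift) >> shift;
        long long old = (long long)(prev_raw << shift) >> shift;

        /* offset within the current period, plus one full period per reload */
        return (new - old) + (long long)nr_records * period;
}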

This issue was introduced with the auto-reload mechanism, enabled by
commit 851559e35fd5 ("perf/x86/intel: Use the PEBS auto reload
mechanism when possible").

Also, userspace RDPMC usage is broken when large PEBS is enabled.

This issue was introduced with large PEBS, enabled by
commit b8241d20699e ("perf/x86/intel: Implement batched PEBS interrupt
handling (large PEBS interrupt threshold)").
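
For context, userspace normally reads a counter with the seqlock-style
sequence documented for struct perf_event_mmap_page; a rough sketch is
below (read_pmc() and mmap_read_self() are illustrative names, not code
from this series):

#include <linux/perf_event.h>

static unsigned long long read_pmc(unsigned int counter)
{
        unsigned int low, high;

        asm volatile("rdpmc" : "=a" (low), "=d" (high) : "c" (counter));
        return low | ((unsigned long long)high) << 32;
}

static unsigned long long mmap_read_self(struct perf_event_mmap_page *pc)
{
        unsigned long long count;
        unsigned int seq, idx;

        do {
                seq = pc->lock;
                __sync_synchronize();

                idx = pc->index;
                count = pc->offset;     /* kernel-side accumulated count */
                if (pc->cap_user_rdpmc && idx) {
                        long long pmc = read_pmc(idx - 1);

                        /* sign-extend the raw counter value */
                        pmc <<= 64 - pc->pmc_width;
                        pmc >>= 64 - pc->pmc_width;
                        count += pmc;
                }

                __sync_synchronize();
        } while (pc->lock != seq);

        return count;
}

With large PEBS, the records still sitting in the PEBS buffer are not yet
folded into pc->offset, so a value combined this way misses them; hence the
series disables userspace RDPMC for large PEBS (patch 4).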

The source code of read_count is as follows.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

typedef uint64_t u64;

static long pagesize;

struct cpu {
        int fd;
        struct perf_event_mmap_page *buf;
};

int perf_open(struct cpu *ctx, int cpu)
{
        struct perf_event_attr attr = {
                .type = PERF_TYPE_HARDWARE,
                .size = sizeof(struct perf_event_attr),
                .sample_period = 100000,
                .config = 0,
                .sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_TID |
                                PERF_SAMPLE_TIME | PERF_SAMPLE_CPU,
                .precise_ip = 3,
                .mmap = 1,
                .comm = 1,
                .task = 1,
                .mmap2 = 1,
                .sample_id_all = 1,
                .comm_exec = 1,
        };
        ctx->buf = NULL;
        ctx->fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
        if (ctx->fd < 0) {
                perror("perf_event_open");
                return -1;
        }
        return 0;
}

void perf_close(struct cpu *ctx)
{
        close(ctx->fd);
        if (ctx->buf)
                munmap(ctx->buf, pagesize);
}

int main(int ac, char **av)
{
        struct cpu ctx;
        u64 count;

        pagesize = sysconf(_SC_PAGESIZE);

        if (perf_open(&ctx, 0) < 0)
                return 1;

        while (1) {
                sleep(5);

                /* read the accumulated event count from the kernel */
                if (read(ctx.fd, &count, 8) != 8) {
                        perror("counter read");
                        break;
                }
                printf("0x%llx\n", (unsigned long long)count);
        }
        perf_close(&ctx);
        return 0;
}
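
The reproducer can be built with something like "gcc -o read_count
read_count.c" (exact build line assumed, not part of the original report)
and needs enough privilege for perf_event_open(), e.g. running as root as
in the "#./read_count" output above.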

Kan Liang (4):
  perf/x86/intel: fix event update for auto-reload
  perf/x86: introduce read function for x86_pmu
  perf/x86/intel: drain PEBS buffer in event read
  perf/x86: fix: disable userspace RDPMC usage for large PEBS

 arch/x86/events/core.c       |  5 ++-
 arch/x86/events/intel/core.c |  9 +++++
 arch/x86/events/intel/ds.c   | 79 ++++++++++++++++++++++++++++++++++++++++++--
 arch/x86/events/perf_event.h |  3 ++
 4 files changed, 93 insertions(+), 3 deletions(-)

-- 
2.7.4
