Date:   Thu, 13 May 2021 22:50:36 -0500
From:   Rob Herring <robh@...nel.org>
To:     "Liang, Kan" <kan.liang@...ux.intel.com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Andi Kleen <ak@...ux.intel.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        Andy Lutomirski <luto@...capital.net>,
        Stephane Eranian <eranian@...gle.com>,
        Namhyung Kim <namhyung@...nel.org>
Subject: Re: [PATCH V7 2/2] perf/x86: Reset the dirty counter to prevent the
 leak for an RDPMC task

On Thu, May 13, 2021 at 5:14 PM Liang, Kan <kan.liang@...ux.intel.com> wrote:
>
>
>
> On 5/13/2021 11:02 AM, Peter Zijlstra wrote:
> > On Thu, May 13, 2021 at 07:23:02AM -0700, kan.liang@...ux.intel.com wrote:
> >
> >> +    if (x86_pmu.sched_task && event->hw.target) {
> >> +            atomic_inc(&event->pmu->sched_cb_usage);
> >> +            local_irq_save(flags);
> >> +            x86_pmu_clear_dirty_counters();
> >> +            local_irq_restore(flags);
> >> +    }
> >
> > So what happens if our mmap() happens after we've already created two
> > (or more) threads in the process, all of whom already have a counter (or
> > more) on?
> >
> > Shouldn't this be something like?
>
> That's not enough.
>
> I implemented a test case as below:
> - The main thread A creates a new thread B.
> - Bind thread A to CPU 0. Then thread A opens an event, mmaps it,
> enables the event, and sleeps.
> - Bind thread B to CPU 1. Wait until the event in thread A is
> enabled. Then RDPMC can read the counters on CPU 1.
>
> In x86_pmu_event_mapped(), we do on_each_cpu_mask(mm_cpumask(mm),
> cr4_update_pce, NULL, 1), so RDPMC from thread B on CPU 1 is not
> forbidden.

You want RDPMC disabled since the counters are not cleared? If you had
a CPU-bound event for CPU 1, then you'd want RDPMC enabled, right?
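
Just to make sure I'm reading your scenario right, something roughly
like the below? (Untested sketch on my side; error handling is elided,
and the read_pmc() helper and the counter index are guesses, not part
of your test case.)

/* Rough, untested user-space sketch of the scenario described above. */
#define _GNU_SOURCE
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <pthread.h>
#include <sched.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

static volatile int event_enabled;

static void bind_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	sched_setaffinity(0, sizeof(set), &set);	/* pid 0 == calling thread */
}

static uint64_t read_pmc(uint32_t idx)		/* rdpmc wrapper; index is a guess */
{
	uint32_t lo, hi;

	asm volatile("rdpmc" : "=a" (lo), "=d" (hi) : "c" (idx));
	return lo | ((uint64_t)hi << 32);
}

static void *thread_b(void *arg)
{
	bind_to_cpu(1);				/* thread B runs on CPU 1 */
	while (!event_enabled)			/* wait for thread A's event */
		;
	printf("counter 0 from CPU 1: %llu\n",
	       (unsigned long long)read_pmc(0));
	return NULL;
}

int main(void)
{
	struct perf_event_attr attr = {
		.type     = PERF_TYPE_HARDWARE,
		.size     = sizeof(attr),
		.config   = PERF_COUNT_HW_CPU_CYCLES,
		.disabled = 1,
	};
	pthread_t tb;
	void *page;
	int fd;

	pthread_create(&tb, NULL, thread_b, NULL);	/* thread A creates thread B */

	bind_to_cpu(0);					/* thread A runs on CPU 0 */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	page = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ, MAP_SHARED, fd, 0);
	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	event_enabled = 1;
	sleep(10);					/* thread A sleeps */

	pthread_join(tb, NULL);
	munmap(page, sysconf(_SC_PAGESIZE));
	close(fd);
	return 0;
}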

> Since no counter is created in thread B, sched_task() never gets a
> chance to be invoked. The dirty counter is not cleared.
>
> To fix it, I think we have to move cr4_update_pce() to the context
> switch and update it only when the RDPMC task is scheduled in. But
> that probably brings some overhead.
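
If I'm reading that right, you mean something roughly along these lines
(completely untested, just a sketch against the current x86 code; note
it still only runs for tasks that have a perf context, which is exactly
the thread B problem above):

/*
 * Rough, untested illustration of the idea: re-evaluate CR4.PCE from
 * the perf context-switch path rather than only via the
 * x86_pmu_event_mapped() IPI, so it is refreshed whenever an RDPMC
 * task is scheduled in. Reuses the existing cr4_update_pce(); the
 * dirty-counter clearing from the patch is assumed to stay as is.
 */
static void x86_pmu_sched_task(struct perf_event_context *ctx, bool sched_in)
{
	static_call_cond(x86_pmu_sched_task)(ctx, sched_in);

	if (sched_in)
		cr4_update_pce(NULL);	/* re-check rdpmc_allowed for this mm */
}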

I'm trying a similar approach (if I understand what you mean) using
sched_task() without a switch_mm hook or IPIs. The current branch is
here[1]. I have things working for task-bound events, but I don't
think CPU-bound events are handled, for similar reasons to the above.
I'm not sure that enabling user access for CPU-bound events is really
all that useful. Maybe for Arm we should just keep user access for
CPU-bound events disabled.

Note that for now I'm not doing lazy clearing of counters, for simplicity.

Rob

[1] git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git
user-perf-event-v8
