Date:   Mon, 9 Nov 2020 14:52:02 -0500
From:   "Liang, Kan" <kan.liang@...ux.intel.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     mingo@...nel.org, linux-kernel@...r.kernel.org,
        namhyung@...nel.org, eranian@...gle.com, irogers@...gle.com,
        gmx@...gle.com, acme@...nel.org, jolsa@...hat.com,
        ak@...ux.intel.com
Subject: Re: [PATCH 1/3] perf/core: Flush PMU internal buffers for per-CPU
 events



On 11/9/2020 12:33 PM, Peter Zijlstra wrote:
> On Mon, Nov 09, 2020 at 09:49:31AM -0500, Liang, Kan wrote:
>>> Maybe we can frob x86_pmu_enable()...
>>
>> Could you please elaborate?
> 
> Something horrible like this. It will detect the first time we enable
> the PMU on a new task (IOW we did a context switch) and wipe the
> counters when user RDPMC is on...
>

Oh, you mean the RDPMC patch. It should be doable, but I think 
sched_task() may be a better place, especially with the new optimization 
(patch 3). We can set the PERF_SCHED_CB_SW_IN bit for this case, so we 
only do the check for per-task events at sched-in.
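
Something along these lines is what I have in mind (a rough, untested 
sketch; it assumes the sched_task() callback gets restricted to sched-in 
via the PERF_SCHED_CB_SW_IN bit from patch 3, and it reuses your 
wipe_dirty_counters() below purely as a placeholder):

static void x86_pmu_sched_task(struct perf_event_context *ctx, bool sched_in)
{
	if (x86_pmu.sched_task)
		x86_pmu.sched_task(ctx, sched_in);

	/*
	 * Illustrative sketch only: with PERF_SCHED_CB_SW_IN (patch 3)
	 * the callback would already be limited to sched-in, so the
	 * sched_in check below becomes redundant.
	 */
	if (sched_in && current->mm &&
	    atomic_read(&current->mm->context.perf_rdpmc_allowed))
		wipe_dirty_counters();
}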

It looks like the patch below has to do the check unconditionally (even 
for the non-RDPMC cases), which should be unnecessary.

Anyway, I think the RDPMC patch should depend on the sched_task() 
implementation. We can discuss it further once the design of 
sched_task() is finalized.


Thanks,
Kan

> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 77b963e5e70a..d862927baaef 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -1289,6 +1289,15 @@ static void x86_pmu_enable(struct pmu *pmu)
>   		perf_events_lapic_init();
>   	}
>   
> +	if (cpuc->current != current) {
> +		struct mm_struct *mm = current->mm;
> +
> +		cpuc->current = current;
> +
> +		if (mm && atomic_read(&mm->context.perf_rdpmc_allowed))
> +			wipe_dirty_counters();
> +	}
> +
>   	cpuc->enabled = 1;
>   	barrier();
>   
> diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
> index 7895cf4c59a7..d16118cb3bd0 100644
> --- a/arch/x86/events/perf_event.h
> +++ b/arch/x86/events/perf_event.h
> @@ -248,6 +248,8 @@ struct cpu_hw_events {
>   	unsigned int		txn_flags;
>   	int			is_fake;
>   
> +	void			*current;
> +
>   	/*
>   	 * Intel DebugStore bits
>   	 */
> 
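
By the way, wipe_dirty_counters() isn't defined in the hunk above; I 
assume it would be something like the untested sketch below, i.e. zero 
every counter that is not currently assigned to an active event:

static void wipe_dirty_counters(void)
{
	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
	int i;

	/* Illustrative sketch only, not part of the patch above. */

	/* Clear the general-purpose counters that have no active event. */
	for (i = 0; i < x86_pmu.num_counters; i++) {
		if (!test_bit(i, cpuc->active_mask))
			wrmsrl(x86_pmu_event_addr(i), 0);
	}

	/* Likewise for the fixed counters. */
	for (i = 0; i < x86_pmu.num_counters_fixed; i++) {
		if (!test_bit(INTEL_PMC_IDX_FIXED + i, cpuc->active_mask))
			wrmsrl(MSR_ARCH_PERFMON_FIXED_CTR0 + i, 0);
	}
}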
