Message-ID: <20190208200731.GN32511@hirez.programming.kicks-ass.net>
Date:   Fri, 8 Feb 2019 21:07:31 +0100
From:   Peter Zijlstra <peterz@...radead.org>
To:     kan.liang@...ux.intel.com
Cc:     acme@...nel.org, tglx@...utronix.de, mingo@...hat.com,
        linux-kernel@...r.kernel.org, eranian@...gle.com, jolsa@...hat.com,
        namhyung@...nel.org, ak@...ux.intel.com, luto@...capital.net,
        vbabka@...e.cz, will.deacon@....com, kirill@...temov.name
Subject: Re: [PATCH V5 02/14] perf/x86: Add perf_get_page_size support

On Fri, Feb 08, 2019 at 09:54:57AM -0800, kan.liang@...ux.intel.com wrote:
> From: Kan Liang <kan.liang@...ux.intel.com>
> 
> Implement an x86-specific version of perf_get_page_size(), which does a
> full page-table walk of a given virtual address to retrieve the page
> size. For x86, disabling IRQs over the walk is sufficient to prevent
> any tear-down of the page tables.
> 
> The new sample type requires collecting the virtual address; the
> virtual address is not output unless PERF_SAMPLE_ADDR is also set.
> 
> Large PEBS is disabled with this sample type, because flushing the PEBS
> buffer for large PEBS would require tracking munmap, and perf doesn't
> support munmap tracking yet. Large PEBS can be enabled separately later,
> once munmap tracking is supported.
> 
> Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
> ---
> 
> Changes since V4
> - Split patch 1 of V4 into two patches.
>   This patch adds the x86 implementation
> 
>  arch/x86/events/core.c     | 31 +++++++++++++++++++++++++++++++
>  arch/x86/events/intel/ds.c |  3 ++-
>  2 files changed, 33 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 374a197..229a73b 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -2578,3 +2578,34 @@ void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
>  	cap->events_mask_len	= x86_pmu.events_mask_len;
>  }
>  EXPORT_SYMBOL_GPL(perf_get_x86_pmu_capability);
> +
> +u64 perf_get_page_size(u64 virt)
> +{
> +	unsigned long flags;
> +	unsigned int level;
> +	pte_t *pte;
> +
> +	if (!virt)
> +		return 0;
> +
> +	/*
> +	 * Interrupts are disabled, so it prevents any tear down
> +	 * of the page tables.
> +	 * See the comment near struct mmu_table_batch.
> +	 */
> +	local_irq_save(flags);
> +	if (virt >= TASK_SIZE)
> +		pte = lookup_address(virt, &level);
> +	else {
> +		if (current->mm) {
> +			pte = lookup_address_in_pgd(pgd_offset(current->mm, virt),
> +						    virt, &level);
> +		} else
> +			level = PG_LEVEL_NUM;
> +	}
> +	local_irq_restore(flags);
> +	if (level >= PG_LEVEL_NUM)
> +		return 0;
> +
> +	return (u64)page_level_size(level);
> +}
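
(Illustration, not part of the original mail: the commit message above
notes that the page size is only reported when the virtual address is
sampled as well. A minimal user-space sketch of requesting both could
look like the snippet below; the flag name PERF_SAMPLE_DATA_PAGE_SIZE
and the placeholder event selection are assumptions, not taken from
this patch.)

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/*
 * Open a sampling event that reports the data virtual address and,
 * assuming the series exposes it as PERF_SAMPLE_DATA_PAGE_SIZE, the
 * size of the page backing that address.
 */
static int open_mem_sampling_event(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_RAW;
	attr.config = 0;		/* fill in a precise memory-access event for the CPU */
	attr.sample_period = 10007;
	attr.precise_ip = 2;		/* PEBS-quality samples on Intel */
	attr.sample_type = PERF_SAMPLE_IP |
			   PERF_SAMPLE_ADDR |		/* required: virtual address */
			   PERF_SAMPLE_DATA_PAGE_SIZE;	/* page size of that address */
	attr.exclude_kernel = 1;

	/* pid 0 = this task, cpu -1 = any CPU, no group leader, no flags */
	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}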

Full NAK on a pure x86 implementation.
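
(Illustration, not part of the original mail: an arch-independent lookup,
along the lines the NAK is asking for, could walk the generic
pgd/p4d/pud/pmd/pte levels instead of relying on x86's lookup_address().
The sketch below is an assumption for illustration only; the leaf helpers
pud_leaf()/pmd_leaf() and the function name page_size_of() are not from
this thread, and the caller is assumed to have IRQs disabled so the
tables cannot be freed during the walk, as argued in the quoted patch.)

#include <linux/mm.h>
#include <linux/pgtable.h>

/* Return the size of the page backing @addr in @mm, or 0 if unmapped. */
static u64 page_size_of(struct mm_struct *mm, unsigned long addr)
{
	pgd_t *pgdp, pgd;
	p4d_t *p4dp, p4d;
	pud_t *pudp, pud;
	pmd_t *pmdp, pmd;
	pte_t *ptep, pte;
	u64 size = 0;

	pgdp = pgd_offset(mm, addr);
	pgd = READ_ONCE(*pgdp);
	if (pgd_none(pgd))
		return 0;

	p4dp = p4d_offset(pgdp, addr);
	p4d = READ_ONCE(*p4dp);
	if (p4d_none(p4d) || p4d_bad(p4d))
		return 0;

	pudp = pud_offset(p4dp, addr);
	pud = READ_ONCE(*pudp);
	if (pud_none(pud))
		return 0;
	if (pud_leaf(pud))			/* e.g. a 1GiB page on x86-64 */
		return 1ULL << PUD_SHIFT;

	pmdp = pmd_offset(pudp, addr);
	pmd = READ_ONCE(*pmdp);
	if (pmd_none(pmd))
		return 0;
	if (pmd_leaf(pmd))			/* e.g. a 2MiB page on x86-64 */
		return 1ULL << PMD_SHIFT;

	ptep = pte_offset_map(pmdp, addr);
	if (!ptep)
		return 0;
	pte = READ_ONCE(*ptep);
	if (pte_present(pte))
		size = PAGE_SIZE;
	pte_unmap(ptep);

	return size;
}

Snapshotting each level with READ_ONCE() follows the lockless-GUP style,
so a concurrently changing entry is read at most once rather than
re-dereferenced between the none/leaf checks.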
