Date:   Mon, 29 May 2017 09:46:36 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Alexey Budankov <alexey.budankov@...ux.intel.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Andi Kleen <ak@...ux.intel.com>,
        Kan Liang <kan.liang@...el.com>,
        Dmitri Prokhorov <Dmitry.Prohorov@...el.com>,
        Valery Cherepennikov <valery.cherepennikov@...el.com>,
        David Carrillo-Cisneros <davidcc@...gle.com>,
        Stephane Eranian <eranian@...gle.com>,
        Mark Rutland <mark.rutland@....com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2]: perf/core: addressing 4x slowdown during
 per-process profiling of STREAM benchmark on Intel Xeon Phi

On Sat, May 27, 2017 at 02:19:51PM +0300, Alexey Budankov wrote:
> @@ -571,6 +587,27 @@ struct perf_event {
>  	 * either suffices for read.
>  	 */
>  	struct list_head		group_entry;
> +	/*
> +	 * Node in the pinned or flexible tree of the event context;
> +	 * the node stays unused when the event is not attached to the
> +	 * tree directly but to the group_list of an event that is;
> +	 */
> +	struct rb_node			group_node;
> +	/*
> +	 * List of groups allocated for the same cpu;
> +	 * the list stays empty when the event is not attached to the
> +	 * tree directly but to the group_list of an event that is;
> +	 */
> +	struct list_head		group_list;
> +	/*
> +	 * Entry in the group_list above; when the event is attached
> +	 * to the pinned or flexible tree directly, the entry links
> +	 * into the event's own group_list;
> +	 */
> +	struct list_head		group_list_entry;
>  	struct list_head		sibling_list;
> 
>  	/*
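
For my own reading, here is a minimal sketch of how I understand the intended
scheme: one tree node per cpu, with all same-cpu groups chained on that node's
group_list. The helper name and exact layout are my assumption, not code from
the patch:

static void group_tree_insert(struct rb_root *tree, struct perf_event *event)
{
	struct rb_node **pos = &tree->rb_node, *parent = NULL;

	while (*pos) {
		struct perf_event *entry;

		parent = *pos;
		entry = rb_entry(parent, struct perf_event, group_node);
		if (event->cpu == entry->cpu) {
			/* cpu already has a tree node: chain behind its owner */
			list_add_tail(&event->group_list_entry,
				      &entry->group_list);
			return;
		}
		if (event->cpu < entry->cpu)
			pos = &parent->rb_left;
		else
			pos = &parent->rb_right;
	}
	/* first group for this cpu: own the tree node, head its own list */
	INIT_LIST_HEAD(&event->group_list);
	list_add_tail(&event->group_list_entry, &event->group_list);
	rb_link_node(&event->group_node, parent, pos);
	rb_insert_color(&event->group_node, tree);
}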

> @@ -742,7 +772,17 @@ struct perf_event_context {
> 
>  	struct list_head		active_ctx_list;
>  	struct list_head		pinned_groups;
> +	/*
> +	 * CPU tree for pinned groups; holds the group_node of each
> +	 * directly attached pinned group;
> +	 */
> +	struct rb_root			pinned_tree;
>  	struct list_head		flexible_groups;
> +	/*
> +	 * CPU tree for flexible groups; holds the group_node of each
> +	 * directly attached flexible group;
> +	 */
> +	struct rb_root			flexible_tree;
>  	struct list_head		event_list;
>  	int				nr_events;
>  	int				nr_active;
> @@ -758,6 +798,7 @@ struct perf_event_context {
>  	 */
>  	u64				time;
>  	u64				timestamp;
> +	struct perf_event_tstamp	tstamp_data;
> 
>  	/*
>  	 * These fields let us detect when two contexts have both
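
And on the context side, a rough sketch of the per-cpu lookup I picture on
these trees (again, the helper name is mine, not taken from the patch):

static struct list_head *group_tree_find(struct rb_root *tree, int cpu)
{
	struct rb_node *node = tree->rb_node;

	while (node) {
		struct perf_event *entry;

		entry = rb_entry(node, struct perf_event, group_node);
		if (cpu == entry->cpu)
			return &entry->group_list;	/* all same-cpu groups */
		node = cpu < entry->cpu ? node->rb_left : node->rb_right;
	}
	return NULL;
}

so sched-in could presumably walk just group_tree_find(&ctx->flexible_tree, cpu)
instead of the whole flexible_groups list.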


So why do we now have a list _and_ a tree for the same entries?
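
Purely to illustrate the concern, with group_tree_insert() being the
hypothetical helper sketched above: every group addition now appears to have
to update both structures,

	list_add_tail(&event->group_entry, &ctx->pinned_groups);	/* existing list */
	group_tree_insert(&ctx->pinned_tree, event);			/* new tree */

and likewise on removal.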
