Message-ID: <cef8113c-0da8-d99e-ac37-7d21af78c0e0@linux.intel.com>
Date:   Tue, 20 Jun 2017 18:22:56 +0300
From:   Alexey Budankov <alexey.budankov@...ux.intel.com>
To:     Mark Rutland <mark.rutland@....com>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Andi Kleen <ak@...ux.intel.com>,
        Kan Liang <kan.liang@...el.com>,
        Dmitri Prokhorov <Dmitry.Prohorov@...el.com>,
        Valery Cherepennikov <valery.cherepennikov@...el.com>,
        David Carrillo-Cisneros <davidcc@...gle.com>,
        Stephane Eranian <eranian@...gle.com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/n] perf/core: addressing 4x slowdown during
 per-process profiling of STREAM benchmark on Intel Xeon Phi

On 20.06.2017 16:36, Mark Rutland wrote:
> On Mon, Jun 19, 2017 at 11:31:59PM +0300, Alexey Budankov wrote:
>> On 15.06.2017 22:56, Mark Rutland wrote:
>>> On Thu, Jun 15, 2017 at 08:41:42PM +0300, Alexey Budankov wrote:
>>>> +static int
>>>> +perf_cpu_tree_iterate(struct rb_root *tree,
>>>> +		perf_cpu_tree_callback_t callback, void *data)
>>>> +{
>>>> +	int ret = 0;
>>>> +	struct rb_node *node;
>>>> +	struct perf_event *event;
>>>> +
>>>> +	WARN_ON_ONCE(!tree);
>>>> +
>>>> +	for (node = rb_first(tree); node; node = rb_next(node)) {
>>>> +		struct perf_event *node_event = container_of(node,
>>>> +				struct perf_event, group_node);
>>>> +
>>>> +		list_for_each_entry(event, &node_event->group_list,
>>>> +				group_list_entry) {
>>>> +			ret = callback(event, data);
>>>> +			if (ret)
>>>> +				return ret;
>>>> +		}
>>>> +	}
>>>> +
>>>> +	return 0;
>>>>   }
>>>
>>> If you need to iterate over every event, you can use the list that
>>> threads the whole tree.
>>
>> Could you please explain that in more detail?
> 
> In Peter's original suggestion, we'd use a threaded tree rather than a
> tree of lists.
> 
> i.e. you'd have something like:
> 
> struct threaded_rb_node {
> 	struct rb_node   node;
> 	struct list_head head;
> };

Is this one node per group leader? Which objects does the head link together?

> 
> ... with the tree and list covering all nodes, in the same order:
> 
> Tree:
> 
>       3
>      / \
>     /   \
>    1     5
>   / \   / \
> 0   2 4   6
> 
> List:
> 
> 0 - 1 - 2 - 3 - 4 - 5 - 6
> 
> ... that way you can search using the tree, and iterate using the list,
> even when you wan to iterate over sub-lists.
> 
> Thanks,
> Mark.
> 
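For concreteness, here is a minimal sketch of that idea (names other than
threaded_rb_node, e.g. tree_insert()/tree_iterate() and the cpu key, are
illustrative, not from the patch): every node sits both in an rb-tree and
on one list kept in the tree's in-order order, so lookups go through the
tree and a full iteration just walks the list.

struct threaded_rb_node {
	struct rb_node   node;	/* keyed lookup via the rb-tree        */
	struct list_head head;	/* in-order linkage through all nodes  */
	int              cpu;	/* example key                         */
};

/* Insert @new into @root and splice it into @all at the matching spot. */
static void tree_insert(struct rb_root *root, struct list_head *all,
			struct threaded_rb_node *new)
{
	struct rb_node **p = &root->rb_node, *parent = NULL;
	struct threaded_rb_node *entry;

	while (*p) {
		parent = *p;
		entry = rb_entry(parent, struct threaded_rb_node, node);
		if (new->cpu < entry->cpu)
			p = &parent->rb_left;
		else
			p = &parent->rb_right;
	}
	rb_link_node(&new->node, parent, p);
	rb_insert_color(&new->node, root);

	/*
	 * Keep the list in tree order: link after the in-order
	 * predecessor, or at the list head if there is none.
	 */
	if (rb_prev(&new->node))
		list_add(&new->head,
			 &rb_entry(rb_prev(&new->node),
				   struct threaded_rb_node, node)->head);
	else
		list_add(&new->head, all);
}

/* Full iteration then needs neither rb_first()/rb_next() nor sub-lists. */
static int tree_iterate(struct list_head *all,
			int (*cb)(struct threaded_rb_node *, void *),
			void *data)
{
	struct threaded_rb_node *pos;
	int ret;

	list_for_each_entry(pos, all, head) {
		ret = cb(pos, data);
		if (ret)
			return ret;
	}
	return 0;
}

Sub-list iteration (e.g. all events for one CPU) would start from the node
found via the tree and keep following the list while the key still matches.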
