Message-ID: <20170529074506.brvayqdp6xcxbhs7@hirez.programming.kicks-ass.net>
Date:   Mon, 29 May 2017 09:45:06 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Alexey Budankov <alexey.budankov@...ux.intel.com>
Cc:     Ingo Molnar <mingo@...hat.com>,
        Arnaldo Carvalho de Melo <acme@...nel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        Andi Kleen <ak@...ux.intel.com>,
        Kan Liang <kan.liang@...el.com>,
        Dmitri Prokhorov <Dmitry.Prohorov@...el.com>,
        Valery Cherepennikov <valery.cherepennikov@...el.com>,
        David Carrillo-Cisneros <davidcc@...gle.com>,
        Stephane Eranian <eranian@...gle.com>,
        Mark Rutland <mark.rutland@....com>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2]: perf/core: addressing 4x slowdown during
 per-process profiling of STREAM benchmark on Intel Xeon Phi

On Sat, May 27, 2017 at 02:19:51PM +0300, Alexey Budankov wrote:
> Solution:
> 
> CPU-indexed trees are introduced for the
> perf_event_context::pinned_groups and
> perf_event_context::flexible_groups lists. Every tree node keeps a
> list of groups allocated for the same cpu, and a tree references only
> groups located on the corresponding group list. The tree thus makes
> it possible to iterate over the groups allocated for a specific cpu
> only, which is exactly what the multiplexing timer interrupt handler
> needs.
>
> The handler runs per-cpu and enables/disables groups via
> group_sched_in()/group_sched_out(), which call event_filter_match()
> to filter out groups allocated for cpus other than the one executing
> the handler. Additionally, for every filtered-out group,
> group_sched_out() updates the tstamps values to the current interrupt
> time. This updating work is now done only once, by
> update_context_time() called from ctx_sched_out() before the cpu
> groups iteration.
>
> For this trick to work, the tstamps of filtered-out events must point
> to the perf_event_context::tstamp_data object instead of their own
> perf_event::tstamp_data, which is what they are initialized to at
> event allocation. The tstamps references are switched by
> group_sched_in()/group_sched_out() every time a group is checked for
> suitability on the currently running cpu.
>
> When a thread enters a cpu on a context switch, one long run through
> the pinned and flexible groups is performed by
> perf_event_sched_in(..., mux=0), with the new parameter mux set to 0,
> and the tstamps of filtered-out groups are switched to the
> perf_event_context::tstamp_data object. Then a series of multiplexing
> interrupts happens, and the handler rotates the flexible groups via
> ctx_sched_out(..., mux=1)/perf_event_sched_in(..., mux=1), iterating
> over the per-cpu tree lists only and avoiding long runs through the
> complete group lists. This is where the speedup comes from.
> Eventually, when the thread leaves the cpu, ctx_sched_out(..., mux=0)
> is called, restoring the tstamps pointers to the events'
> perf_event::tstamp_data objects.
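
For illustration, a minimal sketch of the tstamp switch described above.
The helper name is hypothetical; per the description, the actual switch
is done from group_sched_in()/group_sched_out(), and the
tstamp/tstamp_data fields follow the API summary further down.

/*
 * Illustrative only, not the actual patch: a filtered-out group's
 * timestamps are redirected to the context-wide
 * perf_event_context::tstamp_data, so a single update_context_time()
 * call advances them all; a matching group keeps (or regains) its
 * private perf_event::tstamp_data.
 */
static void perf_event_switch_tstamp(struct perf_event *event,
				     struct perf_event_context *ctx,
				     bool filtered_out)
{
	event->tstamp = filtered_out ? &ctx->tstamp_data
				     : &event->tstamp_data;
}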

This is unreadable.. Please use whitespace.


> Added API:
> 
> Objects:
> 1. struct perf_event_tstamp:
>    - enabled
>    - running
>    - stopped
> 2. struct perf_event:
>    - group_node
>    - group_list
>    - group_list_entry
>    - tstamp
>    - tstamp_data
> 3. struct perf_event_context:
>    - pinned_tree
>    - flexible_tree
>    - tstamp_data
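
For reference, a rough sketch of how the additions listed above might
look as declarations (kernel context; exact types, placement and the
field comments are assumptions drawn from the list and the description,
not the patch itself):

#include <linux/types.h>
#include <linux/rbtree.h>
#include <linux/list.h>

/* A detachable set of event timestamps. */
struct perf_event_tstamp {
	u64 enabled;	/* time the event was enabled */
	u64 running;	/* time the event was scheduled on a PMU */
	u64 stopped;	/* time of the last sched-out/filter-out update */
};

struct perf_event {
	/* ... existing fields ... */
	struct rb_node			group_node;	/* node in the per-cpu tree */
	struct list_head		group_list;	/* same-cpu groups, if tree node */
	struct list_head		group_list_entry;/* entry in a node's group_list */
	struct perf_event_tstamp	*tstamp;	/* currently active timestamps */
	struct perf_event_tstamp	tstamp_data;	/* the event's own timestamps */
};

struct perf_event_context {
	/* ... existing fields ... */
	struct rb_root			pinned_tree;	/* cpu-indexed pinned groups */
	struct rb_root			flexible_tree;	/* cpu-indexed flexible groups */
	struct perf_event_tstamp	tstamp_data;	/* shared timestamps for filtered-out groups */
};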
> 
> Functions:
> 1. insert a group into a tree using event->cpu as a key:
> 	int perf_cpu_tree_insert(struct rb_root *tree,
> 		struct perf_event *event);
> 2. delete a group from a tree; if the group is directly attached
>    to the tree, this also detaches all groups on that group's
>    group_list list:
> 	int perf_cpu_tree_delete(struct rb_root *tree,
> 		struct perf_event *event);
> 3. find the group_list for a given cpu key:
>         struct list_head * perf_cpu_tree_find(struct rb_root *tree,
> 		int cpu);
> 4. enable a flexible group on a cpu:
> 	void ctx_sched_in_flexible_group(struct perf_event_context *ctx,
> 		struct perf_cpu_context *cpuctx,
> 		struct perf_event *group, int *can_add_hw);
> 5. enable a pinned group on a cpu; calls ctx_sched_in_flexible_group
>    and updates group->state to ERROR in case of failure:
> 	void ctx_sched_in_pinned_group(struct perf_event_context *ctx,
> 		struct perf_cpu_context *cpuctx,
> 		struct perf_event *group);
> 6. enable per-cpu pinned tree's groups on a cpu:
> 	void ctx_pinned_sched_in_groups(struct perf_event_context *ctx,
> 		struct perf_cpu_context *cpuctx,
> 		struct list_head *groups);
> 7. enable per-cpu flexible tree's groups on a cpu:
> 	void ctx_flexible_sched_in_groups(
> 		struct perf_event_context *ctx,
> 		struct perf_cpu_context *cpuctx,
> 		struct list_head *groups);
> 8. disable per-cpu tree's groups on a cpu:
> 	void ctx_sched_out_groups(struct perf_event_context *ctx,
> 		struct perf_cpu_context *cpuctx,
> 		struct list_head *groups);
> 9. get a group tree based on event->attr.pinned attribute value:
> 	struct rb_root * ctx_cpu_tree(struct perf_event *event,
> 		struct perf_event_context *ctx);
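
To make the tree shape concrete, here is a rough sketch of what
perf_cpu_tree_insert()/perf_cpu_tree_find() could look like with the
fields listed above (standard kernel rbtree/list API; whether a node
event also links itself onto its own group_list, and the return-value
semantics, are assumptions rather than the actual patch):

#include <linux/rbtree.h>
#include <linux/list.h>

/*
 * Sketch: insert @event into a cpu-indexed tree. The first event seen
 * for a given cpu becomes the tree node; later events for the same cpu
 * are chained onto that node's group_list via their group_list_entry.
 */
static int perf_cpu_tree_insert(struct rb_root *tree,
				struct perf_event *event)
{
	struct rb_node **node = &tree->rb_node;
	struct rb_node *parent = NULL;

	while (*node) {
		struct perf_event *it;

		parent = *node;
		it = rb_entry(parent, struct perf_event, group_node);

		if (event->cpu == it->cpu) {
			/* same cpu: append to the existing node's list */
			list_add_tail(&event->group_list_entry,
				      &it->group_list);
			return 0;
		}
		if (event->cpu < it->cpu)
			node = &parent->rb_left;
		else
			node = &parent->rb_right;
	}

	/* first event for this cpu: become the tree node itself */
	INIT_LIST_HEAD(&event->group_list);
	list_add_tail(&event->group_list_entry, &event->group_list);
	rb_link_node(&event->group_node, parent, node);
	rb_insert_color(&event->group_node, tree);
	return 0;
}

/* Sketch: return the group_list for @cpu, or NULL if no groups exist. */
static struct list_head *perf_cpu_tree_find(struct rb_root *tree, int cpu)
{
	struct rb_node *node = tree->rb_node;

	while (node) {
		struct perf_event *it;

		it = rb_entry(node, struct perf_event, group_node);
		if (cpu == it->cpu)
			return &it->group_list;
		node = cpu < it->cpu ? node->rb_left : node->rb_right;
	}

	return NULL;
}

Keyed this way, the multiplexing handler only has to walk the list
returned by perf_cpu_tree_find(tree, smp_processor_id()) (plus,
presumably, any cpu == -1 groups) instead of the full pinned/flexible
group lists.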
> 
> Modified API:
> 
> Objects:
> 1. struct perf_event
> 2. struct perf_event_context
> 
> Functions:
> 1. perf_event_alloc()
> 2. add_event_to_ctx()
> 3. perf_event_enable_on_exec()
> 4. __perf_install_in_context()
> 5. __perf_event_init_context()
> 6. __perf_event_mark_enabled()
> 7. __perf_event_enable()
> 8. __perf_event_task_sched_out()
> 9. ctx_group_list()
> 10. list_add_event()
> 11. list_del_event()
> 12. perf_event_context_sched_in()
> 13. cpu_ctx_sched_in()
> 14. cpu_ctx_sched_out()
> 15. ctx_sched_in()
> 16. ctx_sched_out()
> 17. ctx_resched()
> 18. ctx_pinned_sched_in()
> 19. ctx_flexible_sched_in()
> 20. group_sched_in()
> 21. event_sched_in()
> 22. event_sched_out()
> 23. rotate_ctx()
> 24. perf_rotate_context()
> 25. update_context_time()
> 26. update_event_times()
> 27. calc_timer_values()
> 28. perf_cgroup_switch()
> 29. perf_cgroup_mark_enabled()

Yeah, this doesn't go into a changelog. Have you _ever_ seen a changelog
with such crud in?
