Message-ID: <20150109095603.GB22386@worktop.programming.kicks-ass.net>
Date:	Fri, 9 Jan 2015 10:56:03 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Vaibhav Shinde <v.bhav.shinde@...il.com>
Cc:	linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>, tj@...nel.org,
	Davidlohr Bueso <dave@...olabs.net>,
	Steven Rostedt <rostedt@...dmis.org>
Subject: Re: Preemption/irqsoff latency issue

On Fri, Jan 09, 2015 at 02:31:35PM +0530, Vaibhav Shinde wrote:
> Hi All,
> 
> I have been checking preemption/irqs-off latency with ftrace to find
> out the reason for the scheduling latency seen by my application:
> 
> echo 0 > options/function-trace
> echo preemptirqsoff > current_tracer
> echo 1 > tracing_on
> echo 0 > tracing_max_latency
> ./a.out                          /* Running my application */
> echo 0 > tracing_on
> 
> #
> # preemptirqsoff latency trace v1.1.5 on 3.10.33-g36cc57c-dirty
> # --------------------------------------------------------------------
> # latency: 5184 us, #4/4, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:1)
> #    -----------------
> #    | task: kworker/0:3-67 (uid:0 nice:0 policy:0 rt_prio:0)
> #    -----------------
> #  => started at: __mutex_lock_slowpath
> #  => ended at:   schedule_preempt_disabled
> #
> #
> #                  _------=> CPU#
> #                 / _-----=> irqs-off
> #                | / _----=> need-resched
> #                || / _---=> hardirq/softirq
> #                ||| / _--=> preempt-depth
> #                |||| /     delay
> #  cmd     pid   ||||| time  |   caller
> #     \   /      |||||  \    |   /
> kworker/-67      0...1    1us!: __mutex_lock_slowpath
> kworker/-67      0.N.1 5183us+: schedule_preempt_disabled
> kworker/-67      0.N.1 5186us+: trace_preempt_on <-schedule_preempt_disabled
> kworker/-67      0.N.1 5223us : <stack trace>
> => sub_preempt_count
> => schedule_preempt_disabled
> => __mutex_lock_slowpath
> => mutex_lock
> 
> 
> - As per the trace, I am getting preemption disabled in mutex_lock for
> 5184 us, which is a really big value.
> - The mutex_lock is called from my code.
> - What could be the issue that caused such a big latency?

mutex_lock() will spin-wait for the owner of the lock IFF the owner is
running on (another) CPU and the current CPU doesn't need to resched.
And from the trace we can see that N (need_resched) only gets set at
the ~5 ms mark, after which we call schedule_preempt_disabled() almost
immediately.
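
If you want to see what else happens during that window (e.g. the
wakeup that ends up setting need_resched), one option is to repeat the
run with the sched tracepoints enabled so wakeups and context switches
show up in the same trace. Something along these lines -- an untested
sketch, assuming you are in /sys/kernel/debug/tracing as your commands
above imply:

  echo 1 > events/sched/sched_wakeup/enable   # record task wakeups
  echo 1 > events/sched/sched_switch/enable   # record context switches
  echo preemptirqsoff > current_tracer
  echo 0 > tracing_max_latency                # reset the recorded maximum
  echo 1 > tracing_on
  ./a.out                                     # your workload
  echo 0 > tracing_on
  cat trace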

If you're interested in scheduling latency, you should be looking at
the delay between a task waking up and that task getting on the CPU.
The few lines above simply do not provide enough information.
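
The wakeup tracer measures pretty much exactly that: the maximum time
from a task being woken until it actually gets the CPU. A rough,
untested sketch, again assuming you are in /sys/kernel/debug/tracing:

  echo wakeup > current_tracer    # or wakeup_rt if a.out runs with an RT policy
  echo 0 > tracing_max_latency    # reset the recorded maximum
  echo 1 > tracing_on
  ./a.out                         # your workload
  echo 0 > tracing_on
  cat trace                       # worst wakeup-to-run latency seen

That snapshot then tells you which task had the worst wakeup latency
and how long it actually waited before being scheduled in.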
