Date:	Thu, 02 Apr 2015 13:51:27 -0500
From:	Chris J Arges <chris.j.arges@...onical.com>
To:	Ingo Molnar <mingo@...nel.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
CC:	Rafael David Tinoco <inaddy@...ntu.com>,
	Peter Anvin <hpa@...or.com>,
	Jiang Liu <jiang.liu@...ux.intel.com>,
	Peter Zijlstra <peterz@...radead.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Jens Axboe <axboe@...nel.dk>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Gema Gomez <gema.gomez-solano@...onical.com>,
	the arch/x86 maintainers <x86@...nel.org>
Subject: Re: smp_call_function_single lockups

On 04/02/2015 01:26 PM, Ingo Molnar wrote:
> 
> * Linus Torvalds <torvalds@...ux-foundation.org> wrote:
> 
>> So unless we find a real clear signature of the bug (I was hoping 
>> that the ISR bit would be that sign), I don't think trying to bisect 
>> it based on how quickly you can reproduce things is worthwhile.
> 
> So I'm wondering (and I might have missed some earlier report that 
> outlines just that): now that the possible location of the bug is 
> again sadly up to 15+ million lines of code, I have no better idea 
> than to debug by symptoms again. What kind of effort was made to 
> examine the locked-up state itself?
>

Ingo,

Rafael did some analysis earlier, while I was out; see:
https://lkml.org/lkml/2015/2/23/234

My reproducer setup is as follows:
L0 (bare-metal host) - 8-way CPU, 48 GB memory
L1 (guest)           - 2-way vCPU, 4 GB memory
L2 (nested guest)    - 1-way vCPU, 1 GB memory

Stress is run only in the L2 VM, and running top on L0/L1 doesn't show
excessive load.
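
For reference (illustrative only, not code from the reproducer): the
path being exercised is the generic synchronous cross-CPU call.  A
caller that passes wait=1 ends up spinning in csd_lock_wait() until the
target CPU has run the callback; the names below (remote_noop,
example_cross_cpu_call) are made up for illustration.

#include <linux/smp.h>

/* Illustrative sketch only -- not from the reproducer. */
static void remote_noop(void *info)
{
	/* runs on the target CPU, in IPI context */
}

static void example_cross_cpu_call(int target_cpu)
{
	/* wait=1: spin in csd_lock_wait() until the callback has run on
	 * target_cpu and CSD_FLAG_LOCK has been cleared */
	smp_call_function_single(target_cpu, remote_noop, NULL, 1);
}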

> Softlockups always have some direct cause: which task exactly causes 
> scheduling to stop altogether, and why does it lock up - or is it not 
> a clear lockup, just a very slow system?
> 
> Thanks,
> 
> 	Ingo
> 

Whenever we look through the crashdump we see csd_lock_wait() waiting
for the CSD_FLAG_LOCK bit to be cleared.  Usually the signature leading
up to that looks like the following (in both the OpenStack Tempest on
OpenStack case and the nested-VM stress case):

(qemu-system-x86 task)
kvm_sched_in
 -> kvm_arch_vcpu_load
  -> vmx_vcpu_load
   -> loaded_vmcs_clear
    -> smp_call_function_single

(ksmd task)
pmdp_clear_flush
 -> flush_tlb_mm_range
  -> native_flush_tlb_others
    -> smp_call_function_many
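
The waiting side in those dumps is just the CSD_FLAG_LOCK spin in
kernel/smp.c, roughly like the sketch below (an approximation from
memory, not an exact copy of the source in this kernel):

static void csd_lock_wait(struct call_single_data *csd)
{
	/* No timeout: if the target CPU never runs the queued callback
	 * (e.g. the IPI is lost), nothing ever clears CSD_FLAG_LOCK and
	 * this loop spins until the softlockup watchdog fires. */
	while (csd->flags & CSD_FLAG_LOCK)
		cpu_relax();
}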

--chris
