Message-ID: <20120215171459.GA8337@phenom.dumpdata.com>
Date:	Wed, 15 Feb 2012 12:14:59 -0500
From:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To:	Steven Noonan <steven@...inklabs.net>,
	Jeremy Fitzhardinge <jeremy@...p.org>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Ben Guthro <ben@...hro.net>, linux-kernel@...r.kernel.org,
	Paul Mackerras <paulus@...ba.org>, Ingo Molnar <mingo@...e.hu>,
	Arnaldo Carvalho de Melo <acme@...stprotocols.net>,
	Jeremy Fitzhardinge <jeremy@...p.org>
Subject: Re: bisected: 'perf top' causing soft lockups under Xen

On Wed, Feb 15, 2012 at 01:32:04AM -0800, Steven Noonan wrote:
> On Wed, Feb 15, 2012 at 10:25:44AM +0100, Peter Zijlstra wrote:
> > On Wed, 2012-02-15 at 00:57 -0800, Steven Noonan wrote:
> > > It seems to me that there are two options for fixing this, but I'm
> > > probably lacking the necessary context (or experience with Xen). Either:
> > > 
> > > - The patch provided by Ben needs to have additional work to specially
> > >   handle IRQ_WORK_VECTOR, since it seems to be a special case where
> > >   there's no event channel attached for it. Perhaps adding an event
> > >   channel for this is the fix? Seems high-overhead, but I lack a good
> > >   understanding of how interrupts are handled in Xen.
> > 
> > So that's a self-IPI, is Xen failing to implement this?
> 
> Yes.
> 
> Ben's patch implements it, but it explodes (NULL pointer dereference)
> when it can't find an event channel for IRQ_WORK_VECTOR.
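
(As a hedged aside on the option quoted above: binding a dedicated per-VCPU
event channel for irq_work would presumably mirror how the existing callfunc
IPIs are wired up via bind_ipi_to_irqhandler(). XEN_IRQ_WORK_VECTOR and
xen_irq_work_interrupt below are hypothetical names, not something in the
tree; this is only a sketch of the shape such a patch might take.)

/* Hypothetical: a per-VCPU event channel for irq_work,
 * modelled on the existing callfunc/callfuncsingle IPIs. */
static irqreturn_t xen_irq_work_interrupt(int irq, void *dev_id)
{
        irq_enter();
        irq_work_run();         /* drain the pending irq_work list */
        irq_exit();

        return IRQ_HANDLED;
}

/* ...and, per VCPU, somewhere like xen_smp_intr_init(): */
rc = bind_ipi_to_irqhandler(XEN_IRQ_WORK_VECTOR,  /* hypothetical vector */
                            cpu,
                            xen_irq_work_interrupt,
                            IRQF_PERCPU | IRQF_NOBALANCING,
                            "irqwork",
                            NULL);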

Actually, there is an existing self-IPI framework: any of the smp_call_*
helpers end up IPI-ing other CPUs, and that seems to work OK:

 274:        700          0          0          0          0          0  xen-percpu-ipi       callfunc0
 276:      15184          0          0          0          0          0  xen-percpu-ipi       callfuncsingle0
 279:          0        275          0          0          0          0  xen-percpu-ipi       callfunc1
 281:          0       8686          0          0          0          0  xen-percpu-ipi       callfuncsingle1
 284:          0          0        754          0          0          0  xen-percpu-ipi       callfunc2
 286:          0          0       4968          0          0          0  xen-percpu-ipi       callfuncsingle2
 289:          0          0          0        751          0          0  xen-percpu-ipi       callfunc3
 291:          0          0          0      19224          0          0  xen-percpu-ipi       callfuncsingle3
 294:          0          0          0          0        761          0  xen-percpu-ipi       callfunc4
 296:          0          0          0          0      21893          0  xen-percpu-ipi       callfuncsingle4
 299:          0          0          0          0          0        750  xen-percpu-ipi       callfunc5
 301:          0          0          0          0          0      10362  xen-percpu-ipi       callfuncsingle5
 CAL:      15886       8981       5724      19977      22657      11114   Function call interrupts
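
(Those xen-percpu-ipi lines come from the per-VCPU IPIs bound at CPU
bring-up; roughly, going from memory of that era's arch/x86/xen/smp.c
rather than this exact tree, the callfuncsingle ones look like:)

        callfunc_name = kasprintf(GFP_KERNEL, "callfuncsingle%d", cpu);
        rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_SINGLE_VECTOR,
                                    cpu,
                                    xen_call_function_single_interrupt,
                                    IRQF_DISABLED|IRQF_PERCPU|IRQF_NOBALANCING,
                                    callfunc_name,
                                    NULL);
        if (rc < 0)
                goto fail;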

So even without Ben's patch it should have worked; I am not actually sure
why it decided to just sit there. Looking at it, the IPI on both native and
Xen ends up calling generic_smp_call_function_single_interrupt() from the
respective single-IPI interrupt handler.
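
Roughly (again from memory, not checked against this exact tree), the two
handlers in question are:

        /* arch/x86/kernel/smp.c -- native */
        void smp_call_function_single_interrupt(struct pt_regs *regs)
        {
                ack_APIC_irq();
                irq_enter();
                generic_smp_call_function_single_interrupt();
                inc_irq_stat(irq_call_count);
                irq_exit();
        }

        /* arch/x86/xen/smp.c -- Xen, reached via the callfuncsingle
         * event channel shown in the interrupt counts above */
        static irqreturn_t xen_call_function_single_interrupt(int irq, void *dev_id)
        {
                irq_enter();
                generic_smp_call_function_single_interrupt();
                inc_irq_stat(irq_call_count);
                irq_exit();

                return IRQ_HANDLED;
        }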

Jeremy, were there any difficulties when IPI-ing oneself?
