Message-ID: <1312889155.26263.106.camel@zakaz.uk.xensource.com>
Date: Tue, 9 Aug 2011 12:25:55 +0100
From: Ian Campbell <Ian.Campbell@...citrix.com>
To: Olaf Hering <olaf@...fle.de>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Jeremy Fitzhardinge" <jeremy@...p.org>,
Konrad <konrad.wilk@...cle.com>,
"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>
Subject: Re: [Xen-devel] [PATCH 2/3] xen/pv-on-hvm kexec: rebind virqs to
existing eventchannel ports
On Tue, 2011-08-09 at 10:29 +0100, Olaf Hering wrote:
> On Tue, Aug 09, Ian Campbell wrote:
>
> > On Thu, 2011-08-04 at 17:20 +0100, Olaf Hering wrote:
> > > During a kexec boot some virqs such as timer and debugirq were already
> > > registered by the old kernel. The hypervisor will return -EEXIST from
> > > the new EVTCHNOP_bind_virq request and the BUG in bind_virq_to_irq()
> > > triggers. Catch the -EEXIST error and loop through all possible ports
> > > to find which port belongs to the virq/cpu combo.
> >
> > Would it be better to proactively just query the status of all event
> > channels early on, like you do in find_virq, and set up the irq info
> > structures as appropriate? Rather than waiting for an -EEXIST, I mean.
>
> Doing one hypercall in the common case is cheaper than doing a dozen in
> the kexec case.
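For reference, the scan in question is per-VIRQ: one EVTCHNOP_status
hypercall for every port in the port space. Roughly this (an untested
sketch modelled on the patch's find_virq(); headers and error handling
trimmed):

        /* Return the port already bound to this virq/cpu pair, or
         * -ENOENT.  Costs up to NR_EVENT_CHANNELS status hypercalls. */
        static int find_virq(unsigned int virq, unsigned int cpu)
        {
                struct evtchn_status status;
                int port, rc = -ENOENT;

                for (port = 0; port < NR_EVENT_CHANNELS; port++) {
                        memset(&status, 0, sizeof(status));
                        status.dom = DOMID_SELF;
                        status.port = port;
                        if (HYPERVISOR_event_channel_op(EVTCHNOP_status,
                                                        &status) != 0)
                                continue;
                        if (status.status == EVTCHNSTAT_virq &&
                            status.u.virq == virq && status.vcpu == cpu) {
                                rc = port;
                                break;
                        }
                }
                return rc;
        }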
It's actually up to NR_EVENT_CHANNELS*NR_VIRQS hypercalls, since you
redo the whole port scan each time you find a VIRQ which is already
bound: with NR_EVENT_CHANNELS at 1024 on 32-bit and 4096 on 64-bit,
and NR_VIRQS at 24, that is 24,576 hypercalls on 32-bit and 98,304 on
64-bit. If you just do it in the common case too, i.e. scan the port
space once and pick up every bound VIRQ in a single pass, then it is
"only" 1024 hypercalls in the 32-bit case and 4096 in the 64-bit case.
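The up-front variant would be a single pass that records every existing
VIRQ binding as it goes, something like the following (sketch only; the
virq_to_port[] table is hypothetical, not from the patch, and per-vcpu
bookkeeping is omitted for brevity):

        /* One boot-time pass: one EVTCHNOP_status hypercall per port,
         * recording VIRQ bindings left behind by the previous kernel. */
        static evtchn_port_t virq_to_port[NR_VIRQS];

        static void scan_existing_virqs(void)
        {
                struct evtchn_status status;
                int port;

                for (port = 0; port < NR_EVENT_CHANNELS; port++) {
                        memset(&status, 0, sizeof(status));
                        status.dom = DOMID_SELF;
                        status.port = port;
                        if (HYPERVISOR_event_channel_op(EVTCHNOP_status,
                                                        &status) != 0)
                                continue;
                        if (status.status == EVTCHNSTAT_virq)
                                virq_to_port[status.u.virq] = port;
                }
        }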
I guess there's probably no way to detect if/when we need to do this? I
suppose we could scan all VIRQs on the first -EEXIST?
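Something along these lines in bind_virq_to_irq(), say (untested
sketch; the locking and irq_info setup of the real function are
omitted, and find_virq() is the helper sketched above):

        /* Try the normal bind first; only pay for the port scan in
         * the kexec case, i.e. when the hypervisor says -EEXIST. */
        struct evtchn_bind_virq bind_virq = { .virq = virq, .vcpu = cpu };
        int ret = HYPERVISOR_event_channel_op(EVTCHNOP_bind_virq,
                                              &bind_virq);
        int evtchn;

        if (ret == 0)
                evtchn = bind_virq.port;
        else if (ret == -EEXIST)
                evtchn = find_virq(virq, cpu);
        else
                BUG();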
> If you prefer I will rearrange this part and query first.
>
> Olaf