Message-ID: <20150522171335.GG957@mail-itl>
Date: Fri, 22 May 2015 19:13:35 +0200
From: Marek Marczykowski-Górecki
<marmarek@...isiblethingslab.com>
To: David Vrabel <david.vrabel@...rix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>,
netdev@...r.kernel.org, xen-devel <xen-devel@...ts.xen.org>
Subject: Re: [Xen-devel] xen-netfront crash when detaching network while some
network activity
On Fri, May 22, 2015 at 05:58:41PM +0100, David Vrabel wrote:
> On 22/05/15 17:42, Marek Marczykowski-Górecki wrote:
> > On Fri, May 22, 2015 at 05:25:44PM +0100, David Vrabel wrote:
> >> On 22/05/15 12:49, Marek Marczykowski-Górecki wrote:
> >>> Hi all,
> >>>
> >>> I'm experiencing a xen-netfront crash when doing xl network-detach
> >>> while some network activity is going on at the same time. It happens
> >>> only when the domU has more than one vcpu. Not sure if this matters,
> >>> but the backend is in another domU (not dom0). I'm using Xen 4.2.2.
> >>> It happens on kernels 3.9.4 and 4.1-rc1 as well.
> >>>
> >>> Steps to reproduce:
> >>> 1. Start the domU with some network interface
> >>> 2. Call there 'ping -f some-IP'
> >>> 3. Call 'xl network-detach NAME 0'
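(For clarity, the same steps as shell commands - NAME and some-IP are
placeholders as above; everything except the ping runs in dom0:

    xl create NAME.cfg            # 1. start the domU with a vif
    # 2. inside the domU:
    ping -f some-IP &
    # 3. back in dom0:
    xl network-detach NAME 0
)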
> >>
> >> I tried this about 10 times without a crash. How reproducible is it?
> >>
> >> I used a 4.1-rc4 frontend and a 4.0 backend.
> >
> > It happens every time for me... Do you have at least two vcpus in that
> > domU? With one vcpu it doesn't crash. The IP I've used for ping is one
> > in the backend domU, but it shouldn't matter.
> >
> > Backend is 3.19.6 here. I don't see any changes there between rc1 and
> > rc4, so I stayed with rc1. With a 4.1-rc1 backend it also crashes for me.
>
> Doesn't repro for me with 4 VCPU PV or PVHVM guests.
I've tried with exactly 2 vcpus in the frontend domU (PV), but I guess it
shouldn't matter. Backend is also PV.
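(In the domU config that's just the standard xl.cfg option:

    vcpus = 2

nothing else vcpu-related.)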
> Is your guest
> kernel vanilla or does it have some Qubes-specific patches on top?
This one was vanilla - both frontend and backend (just the Qubes kernel
config).
Maybe it's something in the device configuration? Here is the xenstore dump:
frontend:
0 = ""
 backend = "/local/domain/66/backend/vif/69/0"
 backend-id = "66"
 state = "4"
 handle = "0"
 mac = "00:16:3e:5e:6c:07"
 multi-queue-num-queues = "2"
 queue-0 = ""
  tx-ring-ref = "1280"
  rx-ring-ref = "1281"
  event-channel-tx = "19"
  event-channel-rx = "20"
 queue-1 = ""
  tx-ring-ref = "1282"
  rx-ring-ref = "1283"
  event-channel-tx = "21"
  event-channel-rx = "22"
 request-rx-copy = "1"
 feature-rx-notify = "1"
 feature-sg = "1"
 feature-gso-tcpv4 = "1"
 feature-gso-tcpv6 = "1"
 feature-ipv6-csum-offload = "1"
backend:
69 = ""
 0 = ""
  frontend = "/local/domain/69/device/vif/0"
  frontend-id = "69"
  online = "1"
  state = "4"
  script = "/etc/xen/scripts/vif-route-qubes"
  mac = "00:16:3e:5e:6c:07"
  ip = "10.137.3.9"
  handle = "0"
  type = "vif"
  feature-sg = "1"
  feature-gso-tcpv4 = "1"
  feature-gso-tcpv6 = "1"
  feature-ipv6-csum-offload = "1"
  feature-rx-copy = "1"
  feature-rx-flip = "0"
  feature-split-event-channels = "1"
  multi-queue-max-queues = "2"
  hotplug-status = "connected"
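(FWIW, dumps like these can be taken from dom0 with xenstore-ls, using the
frontend/backend paths from the entries above, e.g.:

    xenstore-ls /local/domain/69/device/vif/0
    xenstore-ls /local/domain/66/backend/vif/69/0
)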
--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?