Message-ID: <41022efa-141a-01d9-3084-8460b5017592@redhat.com>
Date: Mon, 30 Jul 2018 15:43:45 +0800
From: Xiao Liang <xiliang@...hat.com>
To: Boris Ostrovsky <boris.ostrovsky@...cle.com>,
netdev@...r.kernel.org, xen-devel@...ts.xenproject.org,
davem@...emloft.net, jgross@...e.com
Cc: linux-kernel@...r.kernel.org
Subject: Re: [Xen-devel] [PATCH] xen-netfront: wait xenbus state change when
load module manually
Thanks, Boris
Please see my reply inline.
On 07/28/2018 02:40 AM, Boris Ostrovsky wrote:
> On 07/27/2018 05:56 AM, Xiao Liang wrote:
>> When loading the module manually, after xenbus_switch_state is called to
>> initialise the state of the netfront device, the driver state may not
>> change quickly enough, which can result in no device being created on
>> recent kernels. This patch adds a wait to make sure xenbus knows the
>> driver is no longer in the closed/unknown state.
>>
>> Current state:
>> [vm]# ethtool eth0
>> Settings for eth0:
>> Link detected: yes
>> [vm]# modprobe -r xen_netfront
>> [vm]# modprobe xen_netfront
>> [vm]# ethtool eth0
>> Settings for eth0:
>> Cannot get device settings: No such device
>> Cannot get wake-on-lan settings: No such device
>> Cannot get message level: No such device
>> Cannot get link status: No such device
>> No data available
>>
>> With the patch applied:
>> [vm]# ethtool eth0
>> Settings for eth0:
>> Link detected: yes
>> [vm]# modprobe -r xen_netfront
>> [vm]# modprobe xen_netfront
>> [vm]# ethtool eth0
>> Settings for eth0:
>> Link detected: yes
>>
>> Signed-off-by: Xiao Liang <xiliang@...hat.com>
>> ---
>> drivers/net/xen-netfront.c | 6 ++++++
>> 1 file changed, 6 insertions(+)
>>
>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>> index a57daecf1d57..2d8812dd1534 100644
>> --- a/drivers/net/xen-netfront.c
>> +++ b/drivers/net/xen-netfront.c
>> @@ -87,6 +87,7 @@ struct netfront_cb {
>> /* IRQ name is queue name with "-tx" or "-rx" appended */
>> #define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
>>
>> +static DECLARE_WAIT_QUEUE_HEAD(module_load_q);
>> static DECLARE_WAIT_QUEUE_HEAD(module_unload_q);
>>
>> struct netfront_stats {
>> @@ -1330,6 +1331,11 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
>> netif_carrier_off(netdev);
>>
>> xenbus_switch_state(dev, XenbusStateInitialising);
>> + wait_event(module_load_q,
>> + xenbus_read_driver_state(dev->otherend) !=
>> + XenbusStateClosed &&
>> + xenbus_read_driver_state(dev->otherend) !=
>> + XenbusStateUnknown);
>> return netdev;
>>
>> exit:
>
> Should we have a wake_up somewhere?
In my understanding, netback_changed handles it: when the dev state is
XenbusStateInitialising and the otherend is in XenbusStateInitWait, it
creates the connection to the backend.
But in most cases it breaks out early because dev->state is not
XenbusStateInitialising, so I added a wait here.
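For illustration only, a rough sketch of where such a wake_up could live in
netback_changed() in drivers/net/xen-netfront.c, assuming the module_load_q
wait queue introduced by this patch; this is untested and only shows the
idea, not a proposed change:

	/* Inside netback_changed(): when the backend reports InitWait it has
	 * left Closed/Unknown, so any waiter sleeping in xennet_create_dev()
	 * on module_load_q could be released here (sketch only). */
	case XenbusStateInitWait:
		wake_up_all(&module_load_q);
		if (dev->state != XenbusStateInitialising)
			break;
		if (xennet_connect(netdev) != 0)
			break;
		xenbus_switch_state(dev, XenbusStateConnected);
		break;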
> And what about other states --- is,
> for example, XenbusStateClosing a valid reason to continue?
I think XenbusStateClosing should not be a valid reason to continue.
My intention is to wait for the otherend state to become XenbusStateInitWait
(after the new dev is created). To avoid unnecessary impact, in this patch I
only check that it has left the closed and unknown states.
In my testing, both variants work when hotplugging vifs for the guest from
the host, or loading/unloading the module in the guest, over 100 times:
waiting for XenbusStateInitWait, or doing what this patch does.
The vifs are created successfully each time.
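To make the stricter variant concrete, a minimal sketch (untested, reusing
dev and module_load_q exactly as in the patch) of waiting for the backend to
actually reach InitWait instead of merely leaving closed/unknown:

	xenbus_switch_state(dev, XenbusStateInitialising);
	/* Stricter alternative (sketch): block until the backend reports
	 * InitWait, i.e. a new device can actually be connected. */
	wait_event(module_load_q,
		   xenbus_read_driver_state(dev->otherend) ==
		   XenbusStateInitWait);

The weaker check in the patch keeps the behaviour change minimal.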
Thanks,
Xiao Liang
>
>
> -boris
>
>