Message-ID: <4E1EAEE9.80605@oracle.com>
Date:	Thu, 14 Jul 2011 16:55:05 +0800
From:	Joe Jin <joe.jin@...cle.com>
To:	Ian Campbell <Ian.Campbell@...citrix.com>
CC:	Daniel Stodden <Daniel.Stodden@...rix.com>,
	Jens Axboe <jaxboe@...ionio.com>,
	"annie.li@...cle.com" <annie.li@...cle.com>,
	Jeremy Fitzhardinge <Jeremy.Fitzhardinge@...rix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
	Kurt C Hackel <KURT.HACKEL@...cle.com>,
	Greg Marsden <greg.marsden@...cle.com>,
	"xen-devel@...ts.xensource.com" <xen-devel@...ts.xensource.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"stable@...nel.org" <stable@...nel.org>
Subject: Re: [PATCH resubmit] xen-blkfront: Don't send closing notification
 to backend in blkfront_closing()

On 07/14/11 16:13, Ian Campbell wrote:
> On Wed, 2011-07-13 at 01:47 +0100, Joe Jin wrote:
>> When we run a block device attach/detach test with the steps below, umount hangs
>> and the guest is unable to shut down:
>>
>> 1. start the guest with the latest kernel.
>> 2. attach a new block device with xm block-attach in Dom0.
>> 3. mount the new disk in the guest.
>> 4. run xm block-detach in Dom0 to detach the block device; the detach times out.
>> 5. try to unmount the disk in the guest; umount hangs. From this point, any I/O
>>    to the device will hang.
>>
>> Looking at the code, we found that when 'xm block-detach' sets the backend
>> device's state to 'XenbusStateClosing', the frontend receives the notification
>> and blkfront_closing() is called. At that moment the disk is still in use by the
>> guest, so the frontend refuses to close. However, blkfront_closing() still
>> notifies the backend that the frontend's state has switched to 'Closing'; when
>> the backend sees that event it disconnects from the real device, and from then
>> on any I/O request gets stuck, even the attempt to release the disk via umount.
>>
>> In our testing, the patch below fixes this issue.
> 
> It's worth mentioning here that the change to xbdev->state is picked up
> in blkif_release() when the device is closed and the disconnect happens
> at that point instead.

That's right, thanks for the suggestion.
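
For reference, the path Ian describes would look roughly like the sketch below.
It is reconstructed from this discussion rather than copied from the tree, so
details such as locking and error handling are left out:

static int blkif_release(struct gendisk *disk, fmode_t mode)
{
	struct blkfront_info *info = disk->private_data;
	struct block_device *bdev = bdget_disk(disk, 0);
	struct xenbus_device *xbdev = info->xbdev;

	bdput(bdev);
	if (bdev->bd_openers)
		return 0;	/* disk still in use, nothing to do yet */

	/*
	 * Sketch only: the backend asked us to close while the disk was
	 * open, and blkfront_closing() recorded that by leaving
	 * xbdev->state at XenbusStateClosing.  Now that the last opener
	 * is gone, finish the close so the backend can disconnect from
	 * the real device.
	 */
	if (xbdev && xbdev->state == XenbusStateClosing) {
		xlvbd_release_gendisk(info);
		xenbus_frontend_closed(xbdev);
	}

	return 0;
}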

> 
> I'm wondering if we might not be better off deferring the disconnect on
> the backend side until the frontend enters XenbusStateClosed instead of
> doing it in closing.

Yes, fixing this on the backend side works too, and it looks more reasonable
than fixing it in the frontend.
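
If we went that way, I'd expect something along these lines in blkback's
frontend_changed() handler. This is purely illustrative: the handler name,
xen_blkif_disconnect(), and the be->blkif/dev variables are assumptions about
the backend code, and the real state machine has more cases and error handling:

	case XenbusStateClosing:
		/*
		 * Illustrative: the frontend may still have the disk open,
		 * so only acknowledge the close request here and defer the
		 * disconnect.
		 */
		xenbus_switch_state(dev, XenbusStateClosing);
		break;

	case XenbusStateClosed:
		/*
		 * Illustrative: the frontend has really finished with the
		 * device, so disconnecting from the real device is safe now.
		 */
		xen_blkif_disconnect(be->blkif);
		xenbus_switch_state(dev, XenbusStateClosed);
		break;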

Konrad, any advice?

Thanks,
Joe

> 
> Ian
> 
>>
>> Signed-off-by: Joe Jin <joe.jin@...cle.com>
>> Signed-off-by: Annie Li <annie.li@...cle.com>
>> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
>> Cc: Jens Axboe <jaxboe@...ionio.com>
>> Cc: stable@...nel.org
>>
>> ---
>>  xen-blkfront.c |    2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
>> index b536a9c..f6d8ac2 100644
>> --- a/drivers/block/xen-blkfront.c
>> +++ b/drivers/block/xen-blkfront.c
>> @@ -1088,7 +1088,7 @@ blkfront_closing(struct blkfront_info *info)
>>  	if (bdev->bd_openers) {
>>  		xenbus_dev_error(xbdev, -EBUSY,
>>  				 "Device in use; refusing to close");
>> -		xenbus_switch_state(xbdev, XenbusStateClosing);
>> +		xbdev->state = XenbusStateClosing;
>>  	} else {
>>  		xlvbd_release_gendisk(info);
>>  		xenbus_frontend_closed(xbdev);
> 
> 


-- 
Oracle <http://www.oracle.com>
Joe Jin | Team Leader, Software Development | +8610.6106.5624
ORACLE | Linux and Virtualization
No. 24 Zhongguancun Software Park, Haidian District | 100193 Beijing 
