Message-ID: <54C22D41.9040301@citrix.com>
Date:	Fri, 23 Jan 2015 12:15:13 +0100
From:	Roger Pau Monné <roger.pau@...rix.com>
To:	"Ouyang Zhaowei (Charles)" <ouyangzhaowei@...wei.com>
CC:	<linux-kernel@...r.kernel.org>, <suoben@...wei.com>,
	<liuyingdong@...wei.com>, <weiping.ding@...wei.com>,
	xen-devel <xen-devel@...ts.xenproject.org>,
	David Vrabel <david.vrabel@...rix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
	Boris Ostrovsky <boris.ostrovsky@...cle.com>
Subject: Re: xen-blkfront: weird behavior of "iostat" after VM live-migration
 when the xen-blkfront module has indirect descriptors

Hello,

On 23/01/15 at 8.59, Ouyang Zhaowei (Charles) wrote:
> Hi Roger,
> 
> We have been testing the indirect descriptor feature of the xen-blkfront module recently.
> We found that, after live-migrating a VM a couple of times, the "%util" reported by iostat stays at 100%, and several requests remain stuck in "avgqu-sz".
> We have checked several later versions of Linux; it happens on Ubuntu 14.04, Ubuntu 14.10 and RHEL 7.0.
> 
> The iostat output looks like this:
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0.00    0.00    0.00    0.00    0.00  100.00
> 
> Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> xvda              0.00     0.00    0.00    0.00     0.00     0.00     0.00     4.00    0.00    0.00    0.00   0.00 100.00
> dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
> dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
> 
> Could you tell us why this is happening? Is this a bug?

It is indeed a bug, thanks for reporting it. The problem seems to be
that blk_put_request (which is used to discard the old requests before
requeuing them) doesn't update the queue statistics, so the in-flight
count never drops back to zero. The following patch solves the problem
for me; could you try it and report back?

---
commit bb4317c051ca81a2906edb7ccc505cbd6d1d80c7
Author: Roger Pau Monne <roger.pau@...rix.com>
Date:   Fri Jan 23 12:10:51 2015 +0100

    xen-blkfront: fix accounting of reqs when migrating
    
    Current migration code uses blk_put_request in order to finish a request
    before requeuing it. This function doesn't update the statistics of the
    queue, which completely breaks the accounting. Use blk_end_request_all
    instead, which properly updates the statistics of the queue.
    
    Signed-off-by: Roger Pau Monné <roger.pau@...rix.com>

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 5ac312f..aac41c1 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1493,7 +1493,7 @@ static int blkif_recover(struct blkfront_info *info)
 		merge_bio.tail = copy[i].request->biotail;
 		bio_list_merge(&bio_list, &merge_bio);
 		copy[i].request->bio = NULL;
-		blk_put_request(copy[i].request);
+		blk_end_request_all(copy[i].request, 0);
 	}
 
 	kfree(copy);
@@ -1516,7 +1516,7 @@ static int blkif_recover(struct blkfront_info *info)
 		req->bio = NULL;
 		if (req->cmd_flags & (REQ_FLUSH | REQ_FUA))
 			pr_alert("diskcache flush request found!\n");
-		__blk_put_request(info->rq, req);
+		__blk_end_request_all(req, 0);
 	}
 	spin_unlock_irq(&info->io_lock);
 
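For anyone curious why the two calls behave differently, here is a minimal
user-space model of the accounting problem. The struct and function names
below are illustrative only, not the kernel's actual data structures: the
point is that the queue keeps an in-flight counter from which iostat's
%util and avgqu-sz are derived, and a teardown path that frees a request
without completing it leaves that counter permanently non-zero.

#include <stdio.h>
#include <stdlib.h>

struct toy_queue {
	int in_flight;		/* requests submitted but not yet completed */
};

struct toy_request {
	struct toy_queue *q;
};

static struct toy_request *submit(struct toy_queue *q)
{
	struct toy_request *req = malloc(sizeof(*req));

	req->q = q;
	q->in_flight++;		/* accounting starts at submission */
	return req;
}

/* Models blk_end_request_all(): complete the accounting, then free. */
static void toy_end_request_all(struct toy_request *req)
{
	req->q->in_flight--;	/* the statistics see the completion */
	free(req);
}

/* Models blk_put_request(): free the request, skip the accounting. */
static void toy_put_request(struct toy_request *req)
{
	free(req);		/* in_flight is never decremented */
}

int main(void)
{
	struct toy_queue q = { 0 };
	struct toy_request *a = submit(&q);
	struct toy_request *b = submit(&q);

	toy_put_request(a);	/* buggy path: leaks one in-flight slot */
	toy_end_request_all(b);	/* fixed path: accounting balances */

	/* Prints "in_flight = 1": one phantom request, hence 100% %util. */
	printf("in_flight = %d\n", q.in_flight);
	return 0;
}

In the real driver the counter lives in the per-partition disk statistics
and, roughly speaking, blk_end_request_all drops it through the normal
completion path while blk_put_request only releases the request; the model
just captures why swapping the call unsticks %util and avgqu-sz.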
