Message-ID: <20100420202519.GB9220@phenom.dumpdata.com>
Date:	Tue, 20 Apr 2010 16:25:19 -0400
From:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To:	Tracy Reed <treed@...raviolet.org>,
	Pasi Kärkkäinen <pasik@....fi>,
	xen-devel@...ts.xensource.com,
	Aoetools-discuss@...ts.sourceforge.net,
	linux-kernel@...r.kernel.org
Subject: Re: [Xen-devel] domU is causing misaligned disk writes

On Tue, Apr 20, 2010 at 01:00:04PM -0700, Tracy Reed wrote:
> On Tue, Apr 20, 2010 at 11:49:55AM +0300, Pasi Kärkkäinen spake thusly:
> > Are you using filesystems on normal partitions, or LVM in the domU? 
> > I'm pretty sure this is a domU partitioning problem.
> 
> Also: What changes in the view of the partitioning between domU and
> dom0? Wouldn't a partitioning error manifest itself in tests in the
> dom0 as well as in the domU?
> 
> BTW: The dd from the last time in my last email finally finished:
> 
> # dd if=/dev/zero of=/dev/xvdg1 bs=4096 count=3000000
> 3000000+0 records in
> 3000000+0 records out
> 12288000000 bytes (12 GB) copied, 734.714 seconds, 16.7 MB/s
> 
> If I run that very same dd as above (the last test in my previous

The DomU disk, from the Dom0 perspective, is using 'phy', which means
there is no caching in Dom0 for that disk (but there is in DomU).
Caching should be done in DomU in that case - which raises the
question: how much memory do you have in your DomU? What happens if
you give both Dom0 and DomU the same amount of memory?
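
A quick way to compare, for reference (assuming the classic xm
toolstack; domain names are whatever you called yours). In Dom0, to
see the memory currently assigned to each domain:

# xm list

and inside the DomU, to see how much is actually available to the
page cache:

# free -m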

> email) with the same partition setup again but this time from the
> dom0:
> 
> # dd if=/dev/zero of=/dev/etherd/e6.1 bs=4096 count=3000000
> 3000000+0 records in
> 3000000+0 records out
> 12288000000 bytes (12 GB) copied, 107.352 seconds, 114 MB/s

OK. That is possibly because you are caching the data. Watch your
buffer cache (and drop the caches before the run) and see how much it
grows.
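
Something along these lines in Dom0 (a sketch; dropping the caches
needs root). Flush dirty pages and drop the clean caches, note the
Buffers/Cached values, rerun the dd, then look again:

# sync
# echo 3 > /proc/sys/vm/drop_caches
# grep -E 'Buffers|Cached' /proc/meminfo
# dd if=/dev/zero of=/dev/etherd/e6.1 bs=4096 count=3000000
# grep -E 'Buffers|Cached' /proc/meminfo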

> 
> # /sbin/sfdisk -d /dev/etherd/e6.1 
> # partition table of /dev/etherd/e6.1
> unit: sectors
> 
> /dev/etherd/e6.1p1 : start=       64, size=566226926, Id=83

How do you know this is a misaligned-sector issue? Is this what your
AoE vendor is telling you?
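
For what it's worth, the arithmetic can be done straight from the
sfdisk dump above (the 64 KiB stripe here is only an example -
substitute whatever your array actually uses):

  start = 64 sectors * 512 bytes/sector = 32768 bytes = 32 KiB

32 KiB is a multiple of 4 KiB, so plain 4 KiB writes into p1 stay
4 KiB-aligned on the device; the start is only misaligned relative to
something bigger, e.g. a 64 KiB stripe (32 KiB is not a multiple of
64 KiB).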

I was thinking of first eliminating caching from the picture and
seeing what speeds you get when you do direct I/O to the spindles. You
can do this with a tool such as 'fio', or with 'dd' using
oflag=direct. Try that from both Dom0 and DomU and compare the speeds.
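
For example (a sketch reusing the devices and block size from your
earlier tests; shrink the count so the run does not take all day):

# dd if=/dev/zero of=/dev/xvdg1 bs=4096 count=300000 oflag=direct

or the fio equivalent:

# fio --name=directwrite --filename=/dev/xvdg1 --rw=write --bs=4k --direct=1 --size=1g

Then the same against /dev/etherd/e6.1 from Dom0 and compare.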

