Message-ID: <4BA101C5.9040406@redhat.com>
Date:	Wed, 17 Mar 2010 18:22:29 +0200
From:	Avi Kivity <avi@...hat.com>
To:	Chris Webb <chris@...chsys.com>
CC:	balbir@...ux.vnet.ibm.com,
	KVM development list <kvm@...r.kernel.org>,
	Rik van Riel <riel@...riel.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Christoph Hellwig <hch@....de>, Kevin Wolf <kwolf@...hat.com>
Subject: Re: [PATCH][RF C/T/D] Unmapped page cache control - via boot parameter

On 03/17/2010 05:24 PM, Chris Webb wrote:
> Avi Kivity <avi@...hat.com> writes:
>
>    
>> On 03/15/2010 10:23 PM, Chris Webb wrote:
>>
>>      
>>> Wasteful duplication of page cache between guest and host notwithstanding,
>>> turning on cache=writeback is a spectacular performance win for our guests.
>>>        
>> Is this with qcow2, raw file, or direct volume access?
>>      
> This is with direct access to logical volumes. No file systems or qcow2 in
> the stack. Our typical host has a couple of SATA disks, combined in md
> RAID1, chopped up into volumes with LVM2 (really just dm linear targets).
> The performance measured outside qemu is excellent; inside qemu-kvm it is
> fine too, until multiple guests are trying to access their drives at once,
> but then everything starts to grind badly.
>
>    

OK.
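
For reference, the host-side difference between the cache modes under
discussion largely comes down to the flags the volume is opened with and
when it gets flushed.  A rough sketch of the idea follows (not qemu's
actual block-layer code; the helper names are illustrative only):

/* Rough sketch of cache=none vs cache=writeback semantics as discussed
 * in this thread; not taken from qemu. */
#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <unistd.h>

int open_volume(const char *path, int writeback)
{
    int flags = O_RDWR;

    if (!writeback)
        flags |= O_DIRECT;      /* cache=none: bypass the host page cache */
                                /* cache=writeback: plain, cached writes  */
    return open(path, flags);
}

/* Either way, a guest-initiated flush has to reach stable storage
 * before it can be completed back to the guest. */
int handle_guest_flush(int fd)
{
    return fdatasync(fd);
}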

>> I can understand it for qcow2, but for direct volume access this
>> shouldn't happen.  The guest schedules as many writes as it can,
>> followed by a sync.  The host (and disk) can then reschedule them
>> whether they are in the writeback cache or in the block layer, and
>> must sync in the same way once completed.
>>      
> I don't really understand what's going on here, but I wonder if the
> underlying problem might be that all the O_DIRECT/O_SYNC writes from the
> guests go down into the same block device at the bottom of the device mapper
> stack, and thus can't be reordered with respect to one another.

They should be reorderable.  Otherwise host filesystems on several 
volumes would suffer the same problems.

Whether the filesystem is in the host or guest shouldn't matter.
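
One way to picture that claim (a sketch only; the device names are
hypothetical): two independent O_DIRECT writers, one per logical volume,
both ultimately hitting the same md/SATA disk.  Each fd's fdatasync()
orders only that fd's own writes, so there is no cross-volume ordering
for the block layer to preserve, and any of the interleavings shown
below is acceptable.

/* Sketch only: two independent O_DIRECT writers on logical volumes that
 * share one underlying disk (hypothetical device names). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

static void *writer(void *arg)
{
    int fd = open((const char *)arg, O_RDWR | O_DIRECT);
    void *buf;

    if (fd < 0 || posix_memalign(&buf, 4096, 4096))
        return NULL;
    pwrite(fd, buf, 4096, 0);        /* "write A1" / "write B1"          */
    pwrite(fd, buf, 4096, 1 << 20);  /* "write A2", etc.                 */
    fdatasync(fd);                   /* orders this volume's writes only */
    free(buf);
    close(fd);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, writer, "/dev/vg0/guest-a");  /* hypothetical */
    pthread_create(&b, NULL, writer, "/dev/vg0/guest-b");  /* hypothetical */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}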

> For our
> purposes,
>
>    Guest A   Guest B        Guest A   Guest B        Guest A   Guest B
>    write A1                 write A1                           write B1
>              write B1       write A2                 write A1
>    write A2                           write B1       write A2
>
> are all equivalent, but the system isn't allowed to reorder in this way
> because there isn't a separate request queue for each logical volume, just
> the one at the bottom. (I don't know whether nested request queues would
> behave remotely reasonably either, though!)
>
> Also, if my guest kernel issues (say) three small writes, one at the start
> of the disk, one in the middle, one at the end, and then does a flush, can
> virtio really express this as one non-contiguous O_DIRECT write (the three
> components of which can be reordered by the elevator with respect to one
> another) rather than three distinct O_DIRECT writes which can't be permuted?
> Can qemu issue a write like that? cache=writeback + flush allows this to be
> optimised by the block layer in the normal way.
>    

Guest-side virtio will send this as three requests followed by a flush,
and qemu will issue them as three distinct requests and then flush.  As
Christoph says, the requests are marked in a way that limits their
reorderability; perhaps if we fix these two problems, performance will
improve.
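
Concretely, the host-side pattern being described looks something like
the following (a sketch only; the fd, offsets and buffers are
hypothetical, and this is not qemu's actual submission path).  A single
pwrite() covers one contiguous byte range, so a non-contiguous guest
update stays three separate requests; the elevator may still reorder or
merge them before the flush arrives.

#include <unistd.h>

int submit_guest_io(int fd, const void *b1, const void *b2,
                    const void *b3, off_t mid, off_t end)
{
    if (pwrite(fd, b1, 4096, 0) < 0)           /* write near the start */
        return -1;
    if (pwrite(fd, b2, 4096, mid) < 0)         /* write in the middle  */
        return -1;
    if (pwrite(fd, b3, 4096, end - 4096) < 0)  /* write near the end   */
        return -1;
    return fdatasync(fd);                      /* the guest's flush    */
}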

Something that comes to mind is merging of flush requests.  If N guests 
issue one write and one flush each, we should issue N writes and just 
one flush - a flush for the disk applies to all volumes on that disk.
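
A hedged sketch of what such flush merging could look like (all names
hypothetical; nothing here is taken from qemu or the kernel): volumes
sharing a backing disk track a flush "generation", and a caller that
requests a flush while one is already running simply waits for the next
completed generation instead of issuing its own fdatasync().

#include <pthread.h>
#include <unistd.h>

struct backing_disk {
    int             fd;         /* the shared underlying device    */
    pthread_mutex_t lock;
    pthread_cond_t  done;
    unsigned long   requested;  /* flush generations asked for     */
    unsigned long   completed;  /* flush generations finished      */
    int             in_flight;  /* is a flush currently running?   */
};

int flush_volume(struct backing_disk *d)
{
    int ret = 0;

    pthread_mutex_lock(&d->lock);
    unsigned long want = ++d->requested;

    while (d->completed < want) {
        if (!d->in_flight) {
            /* Issue one flush on behalf of everyone queued so far. */
            unsigned long target = d->requested;

            d->in_flight = 1;
            pthread_mutex_unlock(&d->lock);
            ret = fdatasync(d->fd);         /* one flush for N volumes */
            pthread_mutex_lock(&d->lock);
            d->in_flight = 0;
            d->completed = target;
            pthread_cond_broadcast(&d->done);
            if (ret < 0)
                break;
        } else {
            pthread_cond_wait(&d->done, &d->lock);
        }
    }
    pthread_mutex_unlock(&d->lock);
    return ret;
}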

-- 
error compiling committee.c: too many arguments to function

