Message-ID: <20141217164712.GG6414@laptop.dumpdata.com>
Date:	Wed, 17 Dec 2014 11:47:12 -0500
From:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To:	David Vrabel <david.vrabel@...rix.com>
Cc:	Bob Liu <bob.liu@...cle.com>, xen-devel@...ts.xen.org,
	linux-kernel@...r.kernel.org,
	Roger Pau Monné <roger.pau@...rix.com>
Subject: Re: [Xen-devel] [PATCH] xen/blkfront: increase the default value of
 xen_blkif_max_segments

On Wed, Dec 17, 2014 at 04:34:41PM +0000, David Vrabel wrote:
> On 17/12/14 16:13, Konrad Rzeszutek Wilk wrote:
> > On Wed, Dec 17, 2014 at 04:18:58PM +0800, Bob Liu wrote:
> >>
> >> On 12/16/2014 06:32 PM, Roger Pau Monné wrote:
> >>> On 16/12/14 at 11:11, Bob Liu wrote:
> >>>> The default maximum number of segments in indirect requests was 32; IO
> >>>> operations with a bigger block size (>32*4k) would be split, and performance
> >>>> started to drop.
> >>>>
> >>>> Nowadays backend devices usually support a 512k max_sectors_kb on desktops,
> >>>> and possibly larger on server machines with high-end storage systems.
> >>>> The default size of 128k was not very appropriate, so this patch increases
> >>>> the default maximum value to 128 (128*4k = 512k).
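
(For context: a minimal sketch of the kind of change being discussed, modeled
on the module-parameter plumbing in drivers/block/xen-blkfront.c; the variable
name, parameter name, and permission flags here are assumptions, not the
literal patch:)

	/* Bump the default number of indirect segments from 32 to 128. */
	static unsigned int xen_blkif_max_segments = 128; /* was 32: 32 * 4k = 128k */
	module_param_named(max, xen_blkif_max_segments, uint, 0444);
	MODULE_PARM_DESC(max,
		"Maximum number of segments in indirect requests (128 * 4k = 512k)");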
> >>>
> >>> This looks fine; do you have any data/graphs to back up your reasoning?
> >>>
> >>
> >> I only have some results from a 1M block size FIO test, but I think that's
> >> enough.
> >>
> >> xen_blkfront.max    Rate (MB/s)    Percent of dom-0
> >>               32           11.1               31.0%
> >>               48           15.3               42.7%
> >>               64           19.8               55.3%
> >>               80           19.9               55.6%
> >>               96           23.0               64.2%
> >>              112           23.7               66.2%
> >>              128           31.6               88.3%
> >>
> >> The rates above are compared against the dom-0 rate of 35.8 MB/s.
> >>
> >>> I would also add to the commit message that this change implies we can
> >>> now have 32*128+32 = 4128 in-flight grants, which greatly surpasses the
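
(One way to unpack that arithmetic, assuming the classic single-page ring with
32 request slots: 32 in-flight requests x 128 data segments each = 4096 grants,
plus one grant per request for the indirect page holding the segment entries
= 32 more, giving 32*128+32 = 4128.)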
> >>
> >> The number could be even larger when using more pages for the
> >> xen-blkfront/backend ring, based on Wei Liu's patch "xenbus_client:
> >> extend interface to support multi-page ring"; it helped improve IO
> >> performance a lot on our system connected to high-end storage.
> >> I'm preparing to resend the related patches.
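
(A rough sketch of why extra ring pages raise the in-flight limit, using the
generic Xen ring-size macro from xen/interface/io/ring.h; the order-based
helper below is an illustration, not Wei Liu's actual interface:)

	#include <xen/interface/io/ring.h>
	#include <xen/interface/io/blkif.h>

	/*
	 * Ring slots scale roughly linearly with the shared area, rounded
	 * down to a power of two. A single 4k page gives the classic 32
	 * blkif request slots; an order-2 (4-page) ring gives about 128.
	 */
	#define BLK_RING_SIZE(order) \
		__CONST_RING_SIZE(blkif, PAGE_SIZE << (order))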
> > 
> > Or potentially making the request and response be separate rings - and the
> > response ring entries not tied to the requests. As in, right now if we
> > have requests at, say, slots 1, 5, and 7, we expect the responses to be at
> > slots 1, 5, and 7 as well.
> 
> No. Responses are placed in the first available slot.  The response is
> associated with the original request by the ID field.
> 
> See make_response().
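
(For readers without the source handy, a condensed sketch of the pattern David
describes; the placement and ID fields match what the thread states, but the
surrounding structure and variable names are simplified assumptions:)

	static void make_response(struct xen_blkif *blkif, u64 id,
				  unsigned short op, int status)
	{
		struct blkif_back_ring *ring = &blkif->blk_rings.native;
		struct blkif_response *resp;
		unsigned long flags;

		spin_lock_irqsave(&blkif->blk_ring_lock, flags);
		/* First free slot, independent of where the request sat. */
		resp = RING_GET_RESPONSE(ring, ring->rsp_prod_pvt);
		resp->id        = id;     /* ties response back to its request */
		resp->operation = op;
		resp->status    = status;
		ring->rsp_prod_pvt++;
		spin_unlock_irqrestore(&blkif->blk_ring_lock, flags);
	}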

You are right! Thank you for the update.
> 
> David