Date:	Thu, 20 Sep 2012 15:13:42 +0100
From:	Oliver Chick <oliver.chick@...rix.com>
To:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
CC:	Jan Beulich <JBeulich@...e.com>,
	David Vrabel <david.vrabel@...rix.com>,
	"xen-devel@...ts.xen.org" <xen-devel@...ts.xen.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [Xen-devel] [PATCH] Persistent grant maps for xen blk drivers

On Thu, 2012-09-20 at 14:49 +0100, Konrad Rzeszutek Wilk wrote:
> On Thu, Sep 20, 2012 at 12:48:41PM +0100, Jan Beulich wrote:
> > >>> On 20.09.12 at 13:30, Oliver Chick <oliver.chick@...rix.com> wrote:
> > > The memory overhead, and fallback mode points are related:
> > > -Firstly, it turns out that the overhead is actually 2.75MB, not 11MB
> > > per device. I made a mistake (pointed out by Jan) as the maximum number
> > > of requests that can fit into a single-page ring is 64, not 256.
> > > -Clearly, this still scales linearly. So the problem of memory footprint
> > > will occur with more VMs, or block devices.
> > > -Whilst 2.75MB per device is probably acceptable (?), if we start using
> > > multipage rings, then we might not want to have
> > > BLKIF_MAX_PERS_REQUESTS_PER_DEVICE==__RING_SIZE, as this will cause the
> > > memory overhead to increase. This is why I have implemented the
> > > 'fallback' mode. With a multipage ring, it seems reasonable to want the
> > > first $x$ grefs seen by blkback to be treated as persistent, and any
> > > later ones to be non-persistent. Does that seem sensible?
> > 
> > From a resource usage pov, perhaps. But this will get the guest
> > entirely unpredictable performance. Plus I don't think 11Mb of
> 
> Wouldn't it fall back to the older performance?

I guess it would be a bit more complex than that. It would be worse than
the new performance because the grefs that get processed by the
'fallback' mode will cause TLB shootdowns. But any early grefs will
still be processed by the persistent mode, so won't have shootdowns.
Therefore, depending on the ratio of {persistent grants}:{non-persistent
grants} allocated by blkfront, the performance will be somewhere in
between the two extremes.
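To illustrate the blend, here is a toy model (the cost numbers are made
up for illustration and are not measurements, and none of this is the
actual driver code): requests served by persistently-mapped grefs avoid
the per-request map/unmap and TLB shootdown that 'fallback' grefs pay.

```python
# Hypothetical per-request cost units; a fallback gref pays the
# map/unmap + TLB shootdown overhead on every request, a persistent
# gref pays it only once at setup (ignored here).
PERSISTENT_COST = 1.0
FALLBACK_COST = 5.0

def mean_cost(persistent, non_persistent):
    """Average per-request cost for a given mix of gref types."""
    total = persistent + non_persistent
    return (persistent * PERSISTENT_COST
            + non_persistent * FALLBACK_COST) / total
```

With an all-persistent mix you get the new performance, all-fallback
gives the old performance, and any mix lands strictly in between.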

I guess that the choice is between
1) Compiling blk{front,back} with a pre-determined number of persistent
grants, and failing if this limit is exceeded. This seems rather
inflexible, as blk{front,back} must then both use the same version, or
you will get failures.
2) (The current setup.) Have a recommended maximum number of
persistently-mapped pages, and go into a 'fallback' mode if blkfront
exceeds this limit.
3) Having blkback inform blkfront on startup as to how many grefs it is
willing to persistently map. We then hit the same question again,
though: what should we do if blkfront ignores this limit?
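The decision logic for option 2 is roughly the following (a sketch, not
the actual blkback code; the class and method names are invented, and
the limit here just mirrors the 64-requests-per-single-page-ring times
11-segments-per-request figure discussed above, i.e. 64 * 11 * 4KB =
2.75MB of mapped pages per device):

```python
SEGS_PER_REQUEST = 11                 # BLKIF_MAX_SEGMENTS_PER_REQUEST
RING_REQUESTS = 64                    # max requests in a single-page ring
MAX_PERSISTENT_GREFS = RING_REQUESTS * SEGS_PER_REQUEST

class Backend:
    """Toy stand-in for blkback's per-device persistent-grant state."""

    def __init__(self, limit=MAX_PERSISTENT_GREFS):
        self.limit = limit
        self.persistent = {}          # gref -> mapped page (placeholder)

    def handle_gref(self, gref):
        """Classify a grant reference as 'persistent' or 'fallback'."""
        if gref in self.persistent:
            return 'persistent'       # already mapped once; reuse, no shootdown
        if len(self.persistent) < self.limit:
            self.persistent[gref] = object()  # map it and keep it mapped
            return 'persistent'
        return 'fallback'             # map/unmap per request, TLB shootdown
```

The early grefs blkfront allocates stay persistent for the lifetime of
the device; everything past the limit takes the fallback path, which is
where the mixed performance in the earlier paragraph comes from.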

> > _virtual_ space is unacceptable overhead in a 64-bit kernel. If
> > you really want/need this in a 32-bit one, then perhaps some
> > other alternatives would be needed (and persistent grants may
> > not be the right approach there in the first place).
> > 
> > Jan
> > 


