Message-ID: <20141119194350.GA18117@laptop.dumpdata.com>
Date:	Wed, 19 Nov 2014 14:43:50 -0500
From:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To:	Juergen Gross <jgross@...e.com>
Cc:	linux-kernel@...r.kernel.org, xen-devel@...ts.xensource.com,
	david.vrabel@...rix.com, boris.ostrovsky@...cle.com,
	x86@...nel.org, tglx@...utronix.de, mingo@...hat.com, hpa@...or.com
Subject: Re: [PATCH V3 2/8] xen: Delay remapping memory of pv-domain

On Fri, Nov 14, 2014 at 06:14:06PM +0100, Juergen Gross wrote:
> On 11/14/2014 05:47 PM, Konrad Rzeszutek Wilk wrote:
> >On Fri, Nov 14, 2014 at 05:53:19AM +0100, Juergen Gross wrote:
> >>On 11/13/2014 08:56 PM, Konrad Rzeszutek Wilk wrote:
> >>>>>>+	mfn_save = virt_to_mfn(buf);
> >>>>>>+
> >>>>>>+	while (xen_remap_mfn != INVALID_P2M_ENTRY) {
> >>>>>
> >>>>>So the 'list' is constructed by going forward - that is, from low-numbered
> >>>>>PFNs to higher-numbered ones. But 'xen_remap_mfn' goes the
> >>>>>other way - from the highest PFN to the lowest PFN.
> >>>>>
> >>>>>Won't that mean we will restore the chunks of memory in the wrong
> >>>>>order? That is, we will still restore them in chunk-sized pieces, but the
> >>>>>chunks will be in descending order instead of ascending?
> >>>>
> >>>>No, the information where to put each chunk is contained in the chunk
> >>>>data. I can add a comment explaining this.
> >>>
> >>>Right, the MFNs in a "chunk" are going to be restored in the right order.
> >>>
> >>>I was thinking that the "chunks" (so, sets of MFNs) will be restored in
> >>>the opposite order from the one they were written in.
> >>>
> >>>And oddly enough the "chunks" are done 512 - 3 = 509 MFNs at a time?
> >>
> >>More don't fit on a single page due to the other info needed. So: yes.
> >
> >But you could use two pages - one for the structure and the other
> >for the list of MFNs. That would fix the problem of having only
> >509 MFNs being contiguous per chunk when restoring.
> 
> That's no problem (see below).
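
As an aside, the one-page chunk layout being discussed would look roughly
like the sketch below. The identifier names are illustrative rather than
necessarily the patch's own; the point is simply that three unsigned longs
of bookkeeping leave 512 - 3 = 509 slots for MFNs on a 4 KiB page with
64-bit longs.

#define P2M_PER_PAGE	(PAGE_SIZE / sizeof(unsigned long))	/* 512 here */
#define REMAP_CHUNK_LEN	(P2M_PER_PAGE - 3)			/* 509 MFNs */

struct remap_chunk {				/* fills exactly one page */
	unsigned long next_chunk_mfn;		/* MFN of next chunk, or INVALID_P2M_ENTRY */
	unsigned long target_pfn;		/* first PFN this chunk's MFNs remap to */
	unsigned long size;			/* number of valid entries in mfns[] */
	unsigned long mfns[REMAP_CHUNK_LEN];	/* the MFNs themselves */
};
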
> 
> >Anyhow, the point I am worried about is that we do not restore the
> >MFNs in the same order. We do it in chunk-sized pieces, which is OK (so the
> >509 MFNs at once) - but the order in which we traverse the restoration
> >process is the opposite of the save process. Say we have 4MB of contiguous
> >MFNs, so two (err, three) chunks. The first one we iterate is 0->509, the
> >second is 510->1018, the last is 1019->1023. When we restore (remap) we start
> >with the last 'chunk', so we end up restoring them in 1019->1023, 510->1018,
> >0->509 order.
> 
> No. When building up the chunks, we record in each chunk where to put it
> on remap. So in your example 0-509 should be mapped at <dest>+0,
> 510-1018 at <dest>+510, and 1019-1023 at <dest>+1019.
> 
> When remapping, we map 1019-1023 to <dest>+1019, 510-1018 to <dest>+510,
> and finally 0-509 to <dest>+0. So we do the mapping in reverse order, but
> to the correct pfns.

Excellent! Could a condensed version of that explanation be put in the code?

> 
> Juergen
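
For readers following along, a condensed version of the mechanism, written
out only from the explanation above, might look roughly like the sketch
below. The helper functions and field names are hypothetical stand-ins
rather than the patch's actual identifiers; the chunk layout is the one
sketched above.

static struct remap_chunk buf;		/* scratch copy of one chunk page */

/* Hypothetical helpers standing in for the patch's real primitives. */
static void copy_chunk_from_mfn(struct remap_chunk *dst, unsigned long mfn);
static void remap_one_page(unsigned long pfn, unsigned long mfn);

static void remap_all_chunks(void)
{
	unsigned long mfn = xen_remap_mfn;	/* head of the saved chunk list */

	while (mfn != INVALID_P2M_ENTRY) {
		unsigned long i;

		copy_chunk_from_mfn(&buf, mfn);

		/* Chunks arrive in reverse save order, but target_pfn is
		 * absolute, so every MFN still lands at its correct PFN. */
		for (i = 0; i < buf.size; i++)
			remap_one_page(buf.target_pfn + i, buf.mfns[i]);

		mfn = buf.next_chunk_mfn;	/* next (lower-numbered) chunk */
	}
}
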
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
