Message-ID: <20141113195605.GA13039@laptop.dumpdata.com>
Date:	Thu, 13 Nov 2014 14:56:06 -0500
From:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To:	Juergen Gross <jgross@...e.com>
Cc:	linux-kernel@...r.kernel.org, xen-devel@...ts.xensource.com,
	david.vrabel@...rix.com, boris.ostrovsky@...cle.com,
	x86@...nel.org, tglx@...utronix.de, mingo@...hat.com, hpa@...or.com
Subject: Re: [PATCH V3 2/8] xen: Delay remapping memory of pv-domain

> >>+	mfn_save = virt_to_mfn(buf);
> >>+
> >>+	while (xen_remap_mfn != INVALID_P2M_ENTRY) {
> >
> >So the 'list' is constructed by going forward - that is from low-numbered
> >PFNs to higher numbered ones. But the 'xen_remap_mfn' is going the
> >other way - from the highest PFN to the lowest PFN.
> >
> >Won't that mean we will restore the chunks of memory in the wrong
> >order? That is we will still restore them in chunks size, but the
> >chunks will be in descending order instead of ascending?
> 
> No, the information where to put each chunk is contained in the chunk
> data. I can add a comment explaining this.

Right, the MFNs within a "chunk" are going to be restored in the right order.

I was thinking that the "chunks" (that is, the sets of MFNs) will be restored in
the opposite order from the one in which they were written.

And oddly enough the "chunks" are done 512 - 3 = 509 MFNs at a time?
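
(For my own reference, this is how I picture the chunk layout; target_pfn, size
and mfns[] appear in the patch, the rest of the names are my guesses. It would
also explain where the 512 - 3 = 509 comes from: one 4K page of unsigned longs,
minus the three header fields.)

	/*
	 * Sketch of my assumed layout of a remap chunk - not taken verbatim
	 * from the patch. One 4K page holds 512 unsigned longs on x86-64,
	 * three of which are header fields, leaving 509 MFN slots.
	 */
	#define REMAP_MFNS	(PAGE_SIZE / sizeof(unsigned long) - 3)	/* 509 */

	struct xen_remap_chunk {
		unsigned long next_mfn;		/* MFN of next chunk, INVALID_P2M_ENTRY ends the list */
		unsigned long target_pfn;	/* first PFN this chunk's MFNs are remapped to */
		unsigned long size;		/* number of valid entries in mfns[] */
		unsigned long mfns[REMAP_MFNS];	/* MFNs to remap, consecutive from target_pfn */
	};

So even if the chunks come back in descending order, each one carries its own
target_pfn and lands in the right place.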

> 
> >
> >>+		/* Map the remap information */
> >>+		set_pte_mfn(buf, xen_remap_mfn, PAGE_KERNEL);
> >>+
> >>+		BUG_ON(xen_remap_mfn != xen_remap_buf.mfns[0]);
> >>+
> >>+		free = 0;
> >>+		pfn = xen_remap_buf.target_pfn;
> >>+		for (i = 0; i < xen_remap_buf.size; i++) {
> >>+			mfn = xen_remap_buf.mfns[i];
> >>+			if (!released && xen_update_mem_tables(pfn, mfn)) {
> >>+				remapped++;
> >
> >If we fail 'xen_update_mem_tables' we will on the next chunk (so i+1) keep on
> >freeing pages instead of trying to remap. Is that intentional? Could we
> >try to remap?
> 
> Hmm, I'm not sure this is worth the effort. What could lead to failure
> here? I suspect we could even just BUG() on failure. What do you think?

I was hoping that this question would lead to the loop becoming a bit
simpler, since you would have to split some of the code in the loop
out into functions.

And keep 'remapped' and 'released' reset every loop.
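
Something like this is what I had in mind (just a sketch; the helper name and
exact shape are mine, not code from your patch):

	/*
	 * Sketch only: pull the per-chunk remap work out of the big loop, so
	 * 'remapped' is obviously per-chunk and the caller decides what to do
	 * when xen_update_mem_tables() fails (retry, release, or just BUG()).
	 */
	static unsigned long __init xen_remap_chunk_mfns(unsigned long target_pfn,
							 const unsigned long *mfns,
							 unsigned long size)
	{
		unsigned long i, pfn = target_pfn;
		unsigned long remapped = 0;

		for (i = 0; i < size; i++, pfn++) {
			if (xen_update_mem_tables(pfn, mfns[i]))
				remapped++;
		}
		return remapped;
	}

That way the 'remapped'/'released' state cannot leak from one chunk into the next.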

However, if it makes the code more complex, then please just
forget my question.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
