Message-ID: <86802c440809150134w3cd3f5baq2e99d2bb61b31a16@mail.gmail.com>
Date:	Mon, 15 Sep 2008 01:34:49 -0700
From:	"Yinghai Lu" <yhlu.kernel@...il.com>
To:	"Jan Beulich" <jbeulich@...ell.com>
Cc:	mingo@...e.hu, tglx@...utronix.de, linux-kernel@...r.kernel.org,
	hpa@...or.com
Subject: Re: [PATCH] x86-64: fix combining of regions in init_memory_mapping()

On Mon, Sep 15, 2008 at 1:17 AM, Jan Beulich <jbeulich@...ell.com> wrote:
>>>> "Yinghai Lu" <yhlu.kernel@...il.com> 14.09.08 20:20 >>>
>>On Fri, Sep 12, 2008 at 7:43 AM, Jan Beulich <jbeulich@...ell.com> wrote:
>>> When nr_range gets decremented, the same slot must be considered for
>>> coalescing with its new successor again.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@...ell.com>
>>>
>>> ---
>>>  arch/x86/mm/init_64.c |    2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> --- linux-2.6.27-rc6/arch/x86/mm/init_64.c      2008-08-29 10:53:00.000000000 +0200
>>> +++ 2.6.27-rc6-x86_64-mr-coalesce/arch/x86/mm/init_64.c 2008-09-12 11:58:45.000000000 +0200
>>> @@ -636,7 +636,7 @@ unsigned long __init_refok init_memory_m
>>>                old_start = mr[i].start;
>>>                memmove(&mr[i], &mr[i+1],
>>>                         (nr_range - 1 - i) * sizeof (struct map_range));
>>> -               mr[i].start = old_start;
>>> +               mr[i--].start = old_start;
>>>                nr_range--;
>>>        }
>>>
>>
>>this patch does not seem right.
>>Ingo, please don't apply it.
>>
>>original code:
>>        /* try to merge same page size and continuous */
>>        for (i = 0; nr_range > 1 && i < nr_range - 1; i++) {
>>                unsigned long old_start;
>>                if (mr[i].end != mr[i+1].start ||
>>                    mr[i].page_size_mask != mr[i+1].page_size_mask)
>>                        continue;
>>                /* move it */
>>                old_start = mr[i].start;
>>                memmove(&mr[i], &mr[i+1],
>>                         (nr_range - 1 - i) * sizeof (struct map_range));
>>                mr[i].start = old_start;
>>                nr_range--;
>>        }
>>
>>so it saves old_start first, moves the entries forward (so the old one
>>is overwritten), and puts old_start back ...
>
> Old and new code are not different in any way in this respect - both
> overwrite the old entry at index i with the entry at index i+1 and then
> set the start of the i-th entry back to what it was before the overwrite,
> effectively combining them. The patch just makes sure that the index
> isn't being updated at the same time as nr_range (because if you update
> both you effectively skip one).
>
> The issue is apparently pretty benign to native code, but surfaces as a
> boot time crash in our forward ported Xen tree (where the page table
> setup overall works differently than in native). Since the underlying
> issue was present in native (and since I assume if there is an attempt
> to merge subsequent regions, then it should work right), I nevertheless
> submitted the patch for native inclusion.

Yes, your patch fixes the skip.

Acked-by: Yinghai Lu <yhlu.kernel@...il.com>

YH
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
