Message-ID: <20081009070634.GA12715@sli10-desk.sh.intel.com>
Date:	Thu, 9 Oct 2008 15:06:34 +0800
From:	Shaohua Li <shaohua.li@...el.com>
To:	Yinghai Lu <yinghai@...nel.org>
Cc:	lkml <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...hat.com>
Subject: Re: [patch]x86: arch_add_memory round up address

On Thu, Oct 09, 2008 at 02:40:34PM +0800, Yinghai Lu wrote:
> On Wed, Oct 8, 2008 at 11:28 PM, Shaohua Li <shaohua.li@...el.com> wrote:
> > On Thu, Oct 09, 2008 at 02:22:50PM +0800, Yinghai Lu wrote:
> >> On Wed, Oct 8, 2008 at 11:08 PM, Shaohua Li <shaohua.li@...el.com> wrote:
> >> > On Thu, 2008-10-09 at 14:04 +0800, Yinghai Lu wrote:
> >> >> On Wed, Oct 8, 2008 at 10:31 PM, Shaohua Li <shaohua.li@...el.com> wrote:
> >> >> > Round up the end address to a page boundary, otherwise the last page isn't mapped.
> >> >> >
> >> >> > Signed-off-by: Shaohua Li <shaohua.li@...el.com>
> >> >> > ---
> >> >> >  arch/x86/mm/init_64.c |    3 ++-
> >> >> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >> >> >
> >> >> > Index: linux/arch/x86/mm/init_64.c
> >> >> > ===================================================================
> >> >> > --- linux.orig/arch/x86/mm/init_64.c    2008-10-09 11:42:33.000000000 +0800
> >> >> > +++ linux/arch/x86/mm/init_64.c 2008-10-09 11:43:22.000000000 +0800
> >> >> > @@ -721,7 +721,8 @@ int arch_add_memory(int nid, u64 start,
> >> >> >        unsigned long nr_pages = size >> PAGE_SHIFT;
> >> >> >        int ret;
> >> >> >
> >> >> > -       last_mapped_pfn = init_memory_mapping(start, start + size-1);
> >> >> > +       last_mapped_pfn = init_memory_mapping(start,
> >> >> > +               round_up(start + size-1, PAGE_SIZE));
> >> >> >        if (last_mapped_pfn > max_pfn_mapped)
> >> >> >                max_pfn_mapped = last_mapped_pfn;
> >> >>
> >> >> should use
> >> >>
> >> >> last_mapped_pfn = init_memory_mapping(start, start + size);
> >> > No, this still can't guarantee the end is page aligned, though it works
> >> > in my test.
> >>
> >> Who will call arch_add_memory? The start and size passed in should
> >> already be page aligned.
> > It's memory hotplug. Doing a round up is always ok and safe even if the range is already aligned.
> >
> 
> It seems rounding up in that case is wrong...
> 
> If the caller calls that function with an extra half page, you don't need
> to map that half page, because you cannot use it.
Shouldn't we mark such a page as reserved so it will not be used? That is how we handle holes.
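
For reference, a minimal stand-alone sketch (PAGE_SIZE and round_up() are
redefined locally as stand-ins for the kernel macros, and the addresses are
made up) of how the two end-address expressions differ when the size does not
end on a page boundary:

#include <stdio.h>

#define PAGE_SIZE 4096UL
/* Same arithmetic as the kernel's round_up() for a power-of-two alignment. */
#define round_up(x, y) ((((x) - 1) | ((y) - 1)) + 1)

int main(void)
{
	/* Made-up hot-add range: page-aligned start, size ending mid-page. */
	unsigned long start = 0x40000000UL;   /* 1 GiB, page aligned   */
	unsigned long size  = 0x10000800UL;   /* 256 MiB + half a page */

	unsigned long plain   = start + size;                          /* 0x50000800 */
	unsigned long rounded = round_up(start + size - 1, PAGE_SIZE); /* 0x50001000 */

	printf("start + size                          = %#lx\n", plain);
	printf("round_up(start + size - 1, PAGE_SIZE) = %#lx\n", rounded);
	/* With a page-aligned size both expressions yield the same end address,
	 * so the round up only changes behavior in the unaligned case. */
	return 0;
}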
