Message-ID: <1295547087.9039.694.camel@nimitz>
Date:	Thu, 20 Jan 2011 10:11:27 -0800
From:	Dave Hansen <dave@...ux.vnet.ibm.com>
To:	Russell King - ARM Linux <linux@....linux.org.uk>
Cc:	KyongHo Cho <pullip.cho@...sung.com>,
	Kukjin Kim <kgene.kim@...sung.com>,
	KeyYoung Park <keyyoung.park@...sung.com>,
	linux-kernel@...r.kernel.org, Ilho Lee <ilho215.lee@...sung.com>,
	linux-mm@...ck.org, linux-samsung-soc@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH] ARM: mm: Regarding section when dealing with meminfo

On Thu, 2011-01-20 at 18:01 +0000, Russell King - ARM Linux wrote:
> > The x86 version of show_mem() actually manages to do this without any
> > #ifdefs, and works for a ton of configuration options.  It uses
> > pfn_valid() to tell whether it can touch a given pfn.
> 
> x86 memory layout tends to be very simple as it expects memory to
> start at the beginning of every region described by a pgdat and extend
> in one contiguous block.  I wish ARM was that simple.

x86 memory layouts can be pretty funky and have been that way for a long
time.  That's why we *have* to handle holes in x86's show_mem().  My
laptop even has a ~1GB hole in its ZONE_DMA32:

[    0.000000] Zone PFN ranges:
[    0.000000]   DMA      0x00000010 -> 0x00001000
[    0.000000]   DMA32    0x00001000 -> 0x00100000
[    0.000000]   Normal   0x00100000 -> 0x0013c000

But:

Node 0, zone    DMA32
  pages free     82877
        min      12783
        low      15978
        high     19174
        scanned  0
        spanned  1044480
        present  765672

See how 'present' is about 280k pages (~1GB) less than 'spanned'?  That's
because of an I/O hole from ~3-4GB:

        # cat /proc/iomem  | grep RAM
        00010000-0009d7ff : System RAM
        00100000-bf6affff : System RAM
        100000000-13bffffff : System RAM
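
A quick back-of-the-envelope check of those numbers (a sketch assuming 4 KiB
pages; the PFN ranges and page counts are the ones quoted above):

```python
# Cross-check the zoneinfo numbers above, assuming 4 KiB pages.
PAGE_SIZE = 4096

# DMA32 covers PFNs 0x00001000 -> 0x00100000 per the boot log.
spanned = 0x00100000 - 0x00001000
print(spanned)                        # 1044480, matching 'spanned' in zoneinfo

present = 765672                      # 'present' from zoneinfo
hole_bytes = (spanned - present) * PAGE_SIZE
print(hole_bytes / 2**30)             # ~1.06 GiB of the zone's span is missing

# The gap between the second and third System RAM ranges in /proc/iomem
# accounts for most of it:
gap = 0x100000000 - (0xbf6affff + 1)
print(gap / 2**30)                    # ~1.01 GiB I/O hole just below 4GB
```

The small remainder comes from pages that are physically absent or reserved
for other reasons within the zone's span.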

I guess if we were being really smart we'd just shrink the DMA32 zone
down and not even cover that area.  But, we don't.

Memory hotplug was the original reason that we put sparsemem in place,
but we also have plenty of configurations where there are holes in the
middle of zones and pgdats.  
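
To illustrate the pfn_valid() idea mentioned at the top: a toy sketch (Python
rather than the real C, and 'RAM_RANGES'/'pfn_valid' are made-up stand-ins for
the kernel's actual interfaces) of walking a zone's whole PFN span while only
touching PFNs that are backed by RAM, so a hole in the middle of the zone
needs no #ifdefs:

```python
# Toy model of a show_mem()-style walk over a zone with a hole in the
# middle.  RAM_RANGES and pfn_valid are illustrative stand-ins, not the
# real kernel interfaces.
PAGE_SHIFT = 12

# System RAM ranges from /proc/iomem above, converted to PFN ranges
# (partial trailing pages are dropped; this is only a sketch).
RAM_RANGES = [
    (0x00010000 >> PAGE_SHIFT, (0x0009d7ff + 1) >> PAGE_SHIFT),
    (0x00100000 >> PAGE_SHIFT, (0xbf6affff + 1) >> PAGE_SHIFT),
    (0x100000000 >> PAGE_SHIFT, (0x13bffffff + 1) >> PAGE_SHIFT),
]

def pfn_valid(pfn):
    """True if this PFN is backed by RAM (stand-in for the kernel's check)."""
    return any(start <= pfn < end for start, end in RAM_RANGES)

def count_present(zone_start_pfn, zone_end_pfn):
    """Walk the zone's full span, skipping PFNs that fall into holes."""
    return sum(1 for pfn in range(zone_start_pfn, zone_end_pfn)
               if pfn_valid(pfn))

# DMA32 spans PFNs 0x1000 -> 0x100000; the count skips the I/O hole, so it
# lands well below the 1044480 spanned pages, in the same ballpark as the
# 765672 'present' pages zoneinfo reports.
present = count_present(0x00001000, 0x00100000)
print(present)
```

The point is just that the walk is driven by the valid-PFN test, not by
assumptions about the zone being one contiguous block.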

-- Dave

