Message-ID: <3e8340490909112037p3b4d4f32p2dc6dda01cfcb8ea@mail.gmail.com>
Date:	Fri, 11 Sep 2009 23:37:35 -0400
From:	Bryan Donlan <bdonlan@...il.com>
To:	sidc7 <siddhartha.chhabra@...il.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: Controlling memory allocation

On Fri, Sep 11, 2009 at 10:23 PM, sidc7 <siddhartha.chhabra@...il.com> wrote:
>
> I had a question regarding memory allocation. On a contemporary system, the
> kernel will allocate physical frames in DRAM based on availability. Is
> it possible for the kernel to somehow restrict frame allocation for a
> particular process to a particular address range? E.g., let's assume
> DRAM ranges from 00-FF; on a contemporary system, the entire range is
> available for the kernel to allocate to processes. Is it possible for
> the kernel to say PID 1: frames will be allocated only in 00-A0, PID 2:
> frames will be allocated from A1-D0, and PID 3: will get frames from D1
> - FF?

The kernel has a concept of 'zones' that can be used to restrict
allocations as you describe - however, user processes are the _least_
restricted. These zones are used to deal with old devices that can't
do DMA above 16MB (so memory for their DMA buffers is allocated from
the 'DMA' zone), as well as to keep the kernel's own data structures
in directly mapped low memory - roughly the first 896MB of RAM on
32-bit x86 (zone 'normal'). Memory used directly by userspace
processes, including page cache, can come from any zone, including
zone 'highmem'.

There's also some work on NUMA memory allocation policies - I'm not
too familiar with the details, but they do let you set a preference
that pages for certain user processes be allocated from the memory
banks near the CPU(s) executing the process.

Note that in both cases the assignments are somewhat static - zones
are set at compile time and exist solely as a workaround for hardware
limitations; exactly which zones are used depends on your architecture
(although the _types_ of zones available are fixed) - see
include/linux/mmzone.h. The NUMA layout is determined by your hardware
configuration. And processes are never assigned a contiguous,
exclusive range of pages to use, as there's no benefit in doing so -
what happens if something else takes some of those frames first and
the process runs out, after all?

For more information about zones, see
http://lxr.linux.no/linux+v2.6.31/include/linux/mmzone.h#L190 and the
x86 arch init code starting at
http://lxr.linux.no/linux+v2.6.31/arch/x86/mm/init_32.c#L737 etc.
For more info on NUMA, see the userspace API at
http://linux.die.net/man/3/numa and the paper at
http://www.kernel.org/pub/linux/kernel/people/christoph/pmig/numamemory.pdf
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
