Message-Id: <1534314887-9202-1-git-send-email-rppt@linux.vnet.ibm.com>
Date: Wed, 15 Aug 2018 09:34:47 +0300
From: Mike Rapoport <rppt@...ux.vnet.ibm.com>
To: linux-mm@...ck.org
Cc: Jonathan Corbet <corbet@....net>,
Matthew Wilcox <willy@...radead.org>,
Michal Hocko <mhocko@...e.com>,
Vlastimil Babka <vbabka@...e.cz>, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org,
Mike Rapoport <rppt@...ux.vnet.ibm.com>
Subject: [RFC PATCH] docs/core-api: add memory allocation guide
As Vlastimil mentioned at [1], it would be nice to have some guide about
memory allocation. I've drafted an initial version that tries to summarize
"best practices" for allocation functions and GFP usage.
[1] https://www.spinics.net/lists/netfilter-devel/msg55542.html
From 8027c0d4b750b8dbd687234feda63305d0d5a057 Mon Sep 17 00:00:00 2001
From: Mike Rapoport <rppt@...ux.vnet.ibm.com>
Date: Wed, 15 Aug 2018 09:10:06 +0300
Subject: [RFC PATCH] docs/core-api: add memory allocation guide
Signed-off-by: Mike Rapoport <rppt@...ux.vnet.ibm.com>
---
Documentation/core-api/gfp_mask-from-fs-io.rst | 2 +
Documentation/core-api/index.rst | 1 +
Documentation/core-api/memory-allocation.rst | 117 +++++++++++++++++++++++++
Documentation/core-api/mm-api.rst | 2 +
4 files changed, 122 insertions(+)
create mode 100644 Documentation/core-api/memory-allocation.rst
diff --git a/Documentation/core-api/gfp_mask-from-fs-io.rst b/Documentation/core-api/gfp_mask-from-fs-io.rst
index e0df8f4..e7c32a8 100644
--- a/Documentation/core-api/gfp_mask-from-fs-io.rst
+++ b/Documentation/core-api/gfp_mask-from-fs-io.rst
@@ -1,3 +1,5 @@
+.. _gfp_mask_from_fs_io:
+
=================================
GFP masks used from FS/IO context
=================================
diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index cdc2020..8afc0da 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -27,6 +27,7 @@ Core utilities
errseq
printk-formats
circular-buffers
+ memory-allocation
mm-api
gfp_mask-from-fs-io
timekeeping
diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
new file mode 100644
index 0000000..b1f2ad5
--- /dev/null
+++ b/Documentation/core-api/memory-allocation.rst
@@ -0,0 +1,117 @@
+=======================
+Memory Allocation Guide
+=======================
+
+Linux provides a variety of APIs for memory allocation. You can
+allocate small chunks using the `kmalloc` or `kmem_cache_alloc`
+families, large virtually contiguous areas using `vmalloc` and its
+derivatives, or you can directly request pages from the page
+allocator with `__get_free_pages`. It is also possible to use more
+specialized allocators, for instance `cma_alloc` or `zs_malloc`.
+
+Most of the memory allocation APIs use GFP flags to express how that
+memory should be allocated. The GFP acronym stands for "get free
+pages", the underlying memory allocation function.
+
+The diversity of the allocation APIs combined with the numerous GFP
+flags makes the question "How should I allocate memory?" not that easy
+to answer, although very likely you should use
+
+::
+
+ kzalloc(<size>, GFP_KERNEL);
+
+Of course there are cases when other allocation APIs and different GFP
+flags must be used.
+
+Get Free Page flags
+===================
+
+The GFP flags control the allocator's behavior. They tell which memory
+zones can be used, how hard the allocator should try to find free
+memory, whether the memory can be accessed by userspace, and so on.
+:ref:`Documentation/core-api/mm-api.rst <mm-api-gfp-flags>` provides
+reference documentation for the GFP flags and their combinations; here
+we briefly outline their recommended usage (a short sketch with
+examples follows the list):
+
+ * Most of the time ``GFP_KERNEL`` is what you need. Memory for
+   kernel data structures, DMAable memory, inode cache, all these and
+   many other allocation types can use ``GFP_KERNEL``. Note that
+   using ``GFP_KERNEL`` implies ``__GFP_RECLAIM``, which means that
+   direct reclaim may be triggered under memory pressure; the calling
+   context must be allowed to sleep.
+ * If the allocation is performed from an atomic context, e.g. an
+   interrupt handler, use ``GFP_ATOMIC``.
+ * Untrusted allocations triggered from userspace should be subject
+   to kmem accounting and must have the ``__GFP_ACCOUNT`` bit set.
+   There is the handy ``GFP_KERNEL_ACCOUNT`` shortcut for
+   ``GFP_KERNEL`` allocations that should be accounted.
+ * Userspace allocations should use one of ``GFP_USER``,
+   ``GFP_HIGHUSER`` or ``GFP_HIGHUSER_MOVABLE``. The longer the flag
+   name, the less restrictive it is.
+
+   ``GFP_HIGHUSER_MOVABLE`` does not require that the allocated
+   memory will be directly accessible by the kernel or the hardware
+   and implies that the data may move.
+
+   ``GFP_HIGHUSER`` means that the allocated memory is not movable,
+   but it is not required to be directly accessible by the kernel or
+   the hardware. An example may be a hardware allocation that maps
+   data directly into userspace but has no addressing limitations.
+
+   ``GFP_USER`` means that the allocated memory is not movable and it
+   must be directly accessible by the kernel or the hardware. It is
+   typically used for hardware buffers that are mapped to userspace
+   (e.g. graphics) that the hardware still must DMA to.
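+
+As an illustration, here is a minimal sketch of picking the flags
+depending on the calling context. The structure and variable names
+are made up for the example::
+
+        /* Process context: sleeping is allowed, GFP_KERNEL is fine. */
+        struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);
+
+        /* Interrupt handler: we must not sleep, use GFP_ATOMIC. */
+        struct foo_event *ev = kmalloc(sizeof(*ev), GFP_ATOMIC);
+
+        /* Untrusted allocation triggered from userspace: account it. */
+        struct foo_req *req = kzalloc(sizeof(*req), GFP_KERNEL_ACCOUNT);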
+
+You may notice that quite a few allocations in the existing code
+specify ``GFP_NOIO`` or ``GFP_NOFS``. Historically, they were used to
+prevent recursion deadlocks caused by direct memory reclaim calling
+back into the FS or IO paths and blocking on already held
+resources. Since v4.12 the preferred way to address this issue is to
+use the new scope APIs described in
+:ref:`Documentation/core-api/gfp_mask-from-fs-io.rst <gfp_mask_from_fs_io>`.
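+
+For instance, instead of passing ``GFP_NOFS`` to every allocation in a
+critical section of filesystem code, the scope can be marked once. A
+minimal sketch::
+
+        unsigned int flags = memalloc_nofs_save();
+
+        /*
+         * Every allocation here implicitly behaves as if GFP_NOFS was
+         * specified, including GFP_KERNEL allocations in callees.
+         */
+        ptr = kmalloc(size, GFP_KERNEL);
+
+        memalloc_nofs_restore(flags);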
+
+Other legacy GFP flags are ``GFP_DMA`` and ``GFP_DMA32``. They are
+used to ensure that the allocated memory is accessible by hardware
+with limited addressing capabilities. So unless you are writing a
+driver for a device with such restrictions, avoid using these flags.
+
+Selecting memory allocator
+==========================
+
+The most straightforward way to allocate memory is to use a function
+from the `kmalloc` family. And, to be on the safe side, it's best to
+use routines that set the memory to zero, like `kzalloc`. If you need
+to allocate memory for an array, there are the `kmalloc_array` and
+`kcalloc` helpers.
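+
+For example, allocating a zeroed structure and arrays of structures
+might look like this (``struct foo`` is a placeholder)::
+
+        struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);
+
+        /* n elements, with a check against multiplication overflow */
+        struct foo *arr = kmalloc_array(n, sizeof(*arr), GFP_KERNEL);
+
+        /* the same, but the memory is zeroed */
+        struct foo *zarr = kcalloc(n, sizeof(*zarr), GFP_KERNEL);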
+
+The maximum size of a chunk that can be allocated with `kmalloc` is
+limited. The actual limit depends on the hardware and the kernel
+configuration, but it is a good practice to use `kmalloc` for objects
+smaller than a page.
+
+For large allocations you can use `vmalloc` and `vzalloc`, or directly
+request pages from the page allocator. The memory allocated by
+`vmalloc` and related functions is not physically contiguous.
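+
+For example, a large lookup table that does not have to be physically
+contiguous could be allocated with `vzalloc` and freed with `vfree`
+(``TABLE_SIZE`` is a placeholder)::
+
+        struct entry *table = vzalloc(TABLE_SIZE);
+
+        if (!table)
+                return -ENOMEM;
+        /* ... use the table ... */
+        vfree(table);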
+
+If you are not sure whether the allocation size is too large for
+`kmalloc`, it is possible to use `kvmalloc` and its derivatives. It
+will try to allocate memory with `kmalloc` and, if the allocation
+fails, it will be retried with `vmalloc`. There are restrictions on
+which GFP flags can be used with `kvmalloc`; please see the
+:c:func:`kvmalloc_node` reference documentation. Note that `kvmalloc`
+may return memory that is not physically contiguous.
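+
+A sketch of `kvmalloc` usage; note that the result must be freed with
+`kvfree`, because the caller does not know which of the two allocators
+satisfied the request::
+
+        buf = kvmalloc(size, GFP_KERNEL);
+        if (!buf)
+                return -ENOMEM;
+        /* ... use the buffer ... */
+        kvfree(buf);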
+
+If you need to allocate many identical objects you can use the slab
+cache allocator. The cache should be set up with `kmem_cache_create`
+before it can be used. Afterwards `kmem_cache_alloc` and its
+convenience wrappers can allocate memory from that cache.
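+
+A minimal sketch of a slab cache lifetime (``struct foo`` and the
+cache name are placeholders)::
+
+        struct kmem_cache *foo_cache;
+
+        foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0,
+                                      SLAB_HWCACHE_ALIGN, NULL);
+
+        struct foo *f = kmem_cache_alloc(foo_cache, GFP_KERNEL);
+        /* ... use the object ... */
+        kmem_cache_free(foo_cache, f);
+
+        kmem_cache_destroy(foo_cache);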
+
+When the allocated memory is no longer needed it must be freed. You
+can use `kvfree` for the memory allocated with `kmalloc`, `vmalloc`
+and `kvmalloc`. Objects allocated from a slab cache should be freed
+with `kmem_cache_free`. And don't forget to destroy the cache with
+`kmem_cache_destroy`.
diff --git a/Documentation/core-api/mm-api.rst b/Documentation/core-api/mm-api.rst
index 46ae353..5ce1ec1 100644
--- a/Documentation/core-api/mm-api.rst
+++ b/Documentation/core-api/mm-api.rst
@@ -14,6 +14,8 @@ User Space Memory Access
.. kernel-doc:: mm/util.c
:functions: get_user_pages_fast
+.. _mm-api-gfp-flags:
+
Memory Allocation Controls
==========================
--
2.7.4