Date:   Sat,  9 Apr 2022 20:49:07 +0200
From:   "Fabio M. De Francesco" <fmdefrancesco@...il.com>
To:     Jonathan Corbet <corbet@....net>, linux-doc@...r.kernel.org,
        linux-kernel@...r.kernel.org, Ira Weiny <ira.weiny@...el.com>,
        Thomas Gleixner <tglx@...utronix.de>
Cc:     "Fabio M. De Francesco" <fmdefrancesco@...il.com>
Subject: [PATCH] Documentation/vm: Extend "Temporary Virtual Mappings" in highmem.rst

Extend and rework the "Temporary Virtual Mappings" section of the highmem.rst
documentation. Do a partial rework of the paragraph related to kmap() and
add a new paragraph in order to document the set of kmap_local_*() functions.
Although local kmaps were introduced by Thomas Gleixner in October 2020, the
documentation was still missing information about them. These additions rely
largely on Gleixner's patches, Jonathan Corbet's LWN articles, and in-code
comments from ./include/linux/highmem.h.

Cc: Jonathan Corbet <corbet@....net>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Ira Weiny <ira.weiny@...el.com>
Signed-off-by: Fabio M. De Francesco <fmdefrancesco@...il.com>
---
 Documentation/vm/highmem.rst | 68 ++++++++++++++++++++++++++++--------
 1 file changed, 54 insertions(+), 14 deletions(-)

diff --git a/Documentation/vm/highmem.rst b/Documentation/vm/highmem.rst
index 0f69a9fec34d..d9ec26d921c8 100644
--- a/Documentation/vm/highmem.rst
+++ b/Documentation/vm/highmem.rst
@@ -52,25 +52,65 @@ Temporary Virtual Mappings
 
 The kernel contains several ways of creating temporary mappings:
 
-* vmap().  This can be used to make a long duration mapping of multiple
-  physical pages into a contiguous virtual space.  It needs global
-  synchronization to unmap.
+* vmap().  This can be used to make a long duration mapping of multiple physical
+  pages into a contiguous virtual space. It needs global synchronization to unmap.
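+
+  A minimal sketch of typical usage, assuming the caller already owns an array
+  of page pointers (pages and nr_pages are hypothetical names here)::
+
+      void *vaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
+      if (!vaddr)
+              return -ENOMEM;  /* mapping failed */
+      /* ... access the pages through the contiguous virtual range ... */
+      vunmap(vaddr);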
 
-* kmap().  This permits a short duration mapping of a single page.  It needs
-  global synchronization, but is amortized somewhat.  It is also prone to
-  deadlocks when using in a nested fashion, and so it is not recommended for
-  new code.
+* kmap().  This can be used to make a long duration mapping of a single page with
+  no restrictions on preemption or migration. It comes with an overhead as the
+  mapping space is restricted and protected by a global lock for synchronization.
+  When the mapping is no longer needed, the page must be released with kunmap().
 
-* kmap_atomic().  This permits a very short duration mapping of a single
-  page.  Since the mapping is restricted to the CPU that issued it, it
-  performs well, but the issuing task is therefore required to stay on that
-  CPU until it has finished, lest some other task displace its mappings.
+  Mapping changes must be propagated across all the CPUs. kmap() also requires
+  global TLB invalidation when the kmap's pool wraps, and it might block until a
+  slot becomes available when the mapping space is fully utilized. Therefore,
+  kmap() is only callable from preemptible context.
 
-  kmap_atomic() may also be used by interrupt contexts, since it is does not
-  sleep and the caller may not sleep until after kunmap_atomic() is called.
+  All the above work is necessary if a mapping must last for a relatively long
+  time, but the bulk of high-memory mappings in the kernel are short-lived and
+  only used in one place.
+
+  This means that the cost of kmap() is mostly wasted in such cases; therefore,
+  newer code is discouraged from using kmap().
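+
+  Where kmap() is nonetheless the right tool, a short sketch of the map/unmap
+  pairing (page is assumed to be a valid, possibly highmem, struct page)::
+
+      void *vaddr = kmap(page);    /* may sleep; preemptible context only */
+      memset(vaddr, 0, PAGE_SIZE); /* access the page through vaddr */
+      kunmap(page);                /* kunmap() takes the struct page */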
 
-  It may be assumed that k[un]map_atomic() won't fail.
+* kmap_atomic().  This permits a very short duration mapping of a single page.
+  Since the mapping is restricted to the CPU that issued it, it performs well,
+  but the issuing task is therefore required to stay on that CPU until it has
+  finished, lest some other task displace its mappings.
 
+  kmap_atomic() may also be used by interrupt contexts, since it does not sleep
+  and the caller may not sleep until after kunmap_atomic() is called. Each call
+  of kmap_atomic() in the kernel creates a non-preemptible section and disables
+  pagefaults.
+
+  This can be a source of unwanted latency, so kmap_atomic() should only be used
+  when it is absolutely required; otherwise, the corresponding kmap_local_*()
+  variant should be used where feasible (see below).
+
+  On 64-bit systems, calls to kmap() and kmap_atomic() have no real work to do
+  because a 64-bit address space is more than sufficient to address all physical
+  memory, which therefore appears entirely in the direct mapping.
+
+  It is assumed that k[un]map_atomic() won't fail.
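+
+  A short sketch of typical usage, with the constraints noted as comments
+  (page is assumed to be a valid struct page)::
+
+      void *vaddr = kmap_atomic(page); /* disables pagefaults and preemption */
+      memset(vaddr, 0, PAGE_SIZE);     /* no sleeping allowed in this section */
+      kunmap_atomic(vaddr);            /* takes the virtual address, not the page */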
+
+* kmap_local_*().  These provide a set of functions similar to kmap_atomic() and
+  are used to request short term mappings. They can be invoked from any context
+  (including interrupts).
+
+  The mapping can only be used in the context which acquired it. It is per thread
+  and CPU local (i.e., migration from one CPU to another is disabled, which is
+  why these mappings are called "local"), but it does not disable preemption.
+  It's valid to take pagefaults in a local kmap region, unless the context in
+  which the local mapping is acquired does not allow it for other reasons.
+
+  If a task holding local kmaps is preempted, the maps are removed on context
+  switch and restored when the task comes back on the CPU. As the maps are strictly
+  CPU local, it is guaranteed that the task stays on the CPU and that the CPU
+  cannot be unplugged until the local kmaps are released.
+
+  Nesting kmap_local.*() and kmap_atomic.*() mappings is allowed to a certain
+  extent (up to KMAP_TYPE_NR). Nested kmap_local.*() and kunmap_local.*()
+  invocations have to be strictly ordered because the map implementation is stack
+  based.
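+
+  A short sketch using kmap_local_page(), one of these functions (page is
+  assumed to be a valid struct page)::
+
+      void *vaddr = kmap_local_page(page); /* CPU-local mapping, may nest */
+      memset(vaddr, 0, PAGE_SIZE);         /* pagefaults and preemption allowed */
+      kunmap_local(vaddr);                 /* unmap strictly in reverse order */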
 
 Using kmap_atomic
 =================
-- 
2.34.1
