Message-ID: <Ycdo1PHC9KDD8eGD@pc638.lan>
Date:   Sat, 25 Dec 2021 19:54:12 +0100
From:   Uladzislau Rezki <urezki@...il.com>
To:     Manfred Spraul <manfred@...orfullife.com>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vasily Averin <vvs@...tuozzo.com>, cgel.zte@...il.com,
        shakeelb@...gle.com, rdunlap@...radead.org, dbueso@...e.de,
        unixbhaskar@...il.com, chi.minghao@....com.cn, arnd@...db.de,
        Zeal Robot <zealci@....com.cn>, linux-mm@...ck.org,
        1vier1@....de, stable@...r.kernel.org
Subject: Re: [PATCH] mm/util.c: Make kvfree() safe for calling while holding
 spinlocks

> One codepath in find_alloc_undo() calls kvfree() while holding a spinlock.
> Since vfree() can sleep, this is a bug.
> 
> Previously, the code path used kfree(), and kfree() is safe to be called
> while holding a spinlock.
> 
> Minghao proposed to fix this by updating find_alloc_undo().
> 
> Alternate proposal to fix this: instead of changing find_alloc_undo(),
> change kvfree() so that the same rules as for kfree() apply; having
> different rules for kfree() and kvfree() just asks for bugs.
> 
> Disadvantage: Releasing vmalloc'ed memory will be delayed a bit.
> 
I guess the issue is with the "vmap_purge_lock" mutex? I think it is better
to make the vfree() call non-blocking, i.e. the current design suffers from
one drawback: it purges the outstanding lazy areas from the caller context.
The drain process can be time consuming, and if it is done from a high-prio
or RT context it can hog a CPU. Another issue is the one you reported, i.e.
calling schedule() while holding a spinlock. The proposal is to perform the
drain in a separate work item:

<snip>
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d2a00ad4e1dd..7c5d9b148fa4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1717,18 +1717,6 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
 	return true;
 }
 
-/*
- * Kick off a purge of the outstanding lazy areas. Don't bother if somebody
- * is already purging.
- */
-static void try_purge_vmap_area_lazy(void)
-{
-	if (mutex_trylock(&vmap_purge_lock)) {
-		__purge_vmap_area_lazy(ULONG_MAX, 0);
-		mutex_unlock(&vmap_purge_lock);
-	}
-}
-
 /*
  * Kick off a purge of the outstanding lazy areas.
  */
@@ -1740,6 +1728,16 @@ static void purge_vmap_area_lazy(void)
 	mutex_unlock(&vmap_purge_lock);
 }
 
+static void drain_vmap_area(struct work_struct *work)
+{
+	if (mutex_trylock(&vmap_purge_lock)) {
+		__purge_vmap_area_lazy(ULONG_MAX, 0);
+		mutex_unlock(&vmap_purge_lock);
+	}
+}
+
+static DECLARE_WORK(drain_vmap_area_work, drain_vmap_area);
+
 /*
  * Free a vmap area, caller ensuring that the area has been unmapped
  * and flush_cache_vunmap had been called for the correct range
@@ -1766,7 +1764,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
 
 	/* After this point, we may free va at any time */
 	if (unlikely(nr_lazy > lazy_max_pages()))
-		try_purge_vmap_area_lazy();
+		schedule_work(&drain_vmap_area_work);
 }
 
 /*
<snip>
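
For illustration only, here is a rough sketch of the caller pattern this
direction is meant to allow. It is hypothetical code, not the actual
ipc/sem.c find_alloc_undo() path, and it assumes the kvfree()/vfree() path
ends up fully non-blocking as discussed above:

<snip>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical example structure, not real kernel code. */
struct foo {
	spinlock_t lock;
	void *buf;		/* allocated with kvmalloc(), may be vmalloc'ed */
};

static void foo_reset(struct foo *f)
{
	void *old;

	spin_lock(&f->lock);
	old = f->buf;
	f->buf = NULL;
	/*
	 * With a non-blocking free path, this would follow the same rules
	 * as kfree() and no longer risk sleeping while the lock is held.
	 */
	kvfree(old);
	spin_unlock(&f->lock);
}
<snip>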


--
Vlad Rezki
