Message-ID: <20150722000357.GA1834@dhcp-17-102.nay.redhat.com>
Date:	Wed, 22 Jul 2015 08:03:57 +0800
From:	Baoquan He <bhe@...hat.com>
To:	Tejun Heo <tj@...nel.org>
Cc:	cl@...ux-foundation.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] percpu: check pcpu_first_chunk and
 pcpu_reserved_chunk to avoid handling them twice

Hi Tejun,

On 07/21/15 at 11:28am, Tejun Heo wrote:
> On Mon, Jul 20, 2015 at 10:55:29PM +0800, Baoquan He wrote:
> > In pcpu_setup_first_chunk() pcpu_reserved_chunk is assigned to point to
> > static chunk. While pcpu_first_chunk is got from below code:
> > 
> > 	pcpu_first_chunk = dchunk ?: schunk;
> > 
> > Then it could point to static chunk too if dynamic chunk doesn't exist. So
> > in this patch adding a check in percpu_init_late() to see if pcpu_first_chunk
> > is equal to pcpu_reserved_chunk. Only if they are not equal we add
> > pcpu_reserved_chunk to the target array.
> 
> So, I don't think this is actually possible.  dyn_size can't be zero
> so if reserved chunk is created, dyn chunk is also always created and
> thus first chunk can't equal reserved chunk.  It might be useful to
> add some comments explaining this or maybe WARN_ON() but I don't think
> this path is necessary.

Thanks for your review.

Yes, dyn_size can't be zero. But in pcpu_setup_first_chunk(), the local
variable dyn_size can become zero because of the code below:

	if (ai->reserved_size) {
		schunk->free_size = ai->reserved_size;
		pcpu_reserved_chunk = schunk;
		pcpu_reserved_chunk_limit = ai->static_size +
					    ai->reserved_size;
	} else {
		schunk->free_size = dyn_size;
		dyn_size = 0;			/* dynamic area covered */
	}

So when there is no reserved_size, the local dyn_size is set to zero, and
it is then checked to decide whether dchunk needs to be created in the
code below:

	/* init dynamic chunk if necessary */
	if (dyn_size) {
		...
	}

I think the v1 patch is a little ugly, so I made a v2 like this:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>From 0f22f04878f0779e4d9e66ae24f9bfc5321782c2 Mon Sep 17 00:00:00 2001
From: Baoquan He <bhe@...hat.com>
Date: Sun, 12 Jul 2015 19:33:26 +0800
Subject: [PATCH] percpu: check pcpu_first_chunk and pcpu_reserved_chunk to
 avoid handling them twice

In pcpu_setup_first_chunk(), pcpu_reserved_chunk is assigned to point to
the static chunk, while pcpu_first_chunk is obtained from the code below:

	pcpu_first_chunk = dchunk ?: schunk;

It could therefore point to the static chunk as well if the dynamic chunk
doesn't exist.

This patch adds a new helper function, percpu_install_map(), which
replaces a chunk's old map with a newly allocated one. Then
percpu_init_late() calls percpu_install_map() separately for
pcpu_first_chunk and, when it exists as a distinct chunk, for
pcpu_reserved_chunk.

Signed-off-by: Baoquan He <bhe@...hat.com>
---
 mm/percpu.c | 39 ++++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 19 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index a63b4d8..9d0f9f6 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -2251,6 +2251,23 @@ void __init setup_per_cpu_areas(void)
 
 #endif	/* CONFIG_SMP */
 
+static void __init percpu_install_map(struct pcpu_chunk *chunk)
+{
+	int *map;
+	unsigned long flags;
+	const size_t size = PERCPU_DYNAMIC_EARLY_SLOTS * sizeof(map[0]);
+
+	BUILD_BUG_ON(size > PAGE_SIZE);
+
+	map = pcpu_mem_zalloc(size);
+	BUG_ON(!map);
+
+	spin_lock_irqsave(&pcpu_lock, flags);
+	memcpy(map, chunk->map, size);
+	chunk->map = map;
+	spin_unlock_irqrestore(&pcpu_lock, flags);
+}
+
 /*
  * First and reserved chunks are initialized with temporary allocation
  * map in initdata so that they can be used before slab is online.
@@ -2259,26 +2276,10 @@ void __init setup_per_cpu_areas(void)
  */
 void __init percpu_init_late(void)
 {
-	struct pcpu_chunk *target_chunks[] =
-		{ pcpu_first_chunk, pcpu_reserved_chunk, NULL };
-	struct pcpu_chunk *chunk;
-	unsigned long flags;
-	int i;
-
-	for (i = 0; (chunk = target_chunks[i]); i++) {
-		int *map;
-		const size_t size = PERCPU_DYNAMIC_EARLY_SLOTS * sizeof(map[0]);
+	percpu_install_map(pcpu_first_chunk);
 
-		BUILD_BUG_ON(size > PAGE_SIZE);
-
-		map = pcpu_mem_zalloc(size);
-		BUG_ON(!map);
-
-		spin_lock_irqsave(&pcpu_lock, flags);
-		memcpy(map, chunk->map, size);
-		chunk->map = map;
-		spin_unlock_irqrestore(&pcpu_lock, flags);
-	}
+	if (pcpu_reserved_chunk && pcpu_first_chunk != pcpu_reserved_chunk)
+		percpu_install_map(pcpu_reserved_chunk);
 }
 
 /*
-- 
1.9.3



--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
