Date:   Mon, 24 Jul 2017 19:02:00 -0400
From:   Dennis Zhou <dennisz@...com>
To:     Tejun Heo <tj@...nel.org>, Christoph Lameter <cl@...ux.com>,
        Josef Bacik <josef@...icpanda.com>
CC:     <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
        <kernel-team@...com>, Dennis Zhou <dennisszhou@...il.com>
Subject: [PATCH v2 03/23] percpu: remove has_reserved from pcpu_chunk

From: "Dennis Zhou (Facebook)" <dennisszhou@...il.com>

Previously, this variable was used to manage statistics when the first chunk
had a reserved region. The previous patch introduced start_offset to keep
track of the offset by value rather than by a boolean flag. Therefore,
has_reserved can be removed.

Signed-off-by: Dennis Zhou <dennisszhou@...il.com>
---
 mm/percpu-internal.h | 5 -----
 mm/percpu-stats.c    | 2 +-
 mm/percpu.c          | 3 ---
 3 files changed, 1 insertion(+), 9 deletions(-)
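
Reviewer note (not part of the commit): a minimal sketch of the equivalence
this change relies on, assuming the pcpu_chunk layout above. The helper name
below is hypothetical; the stats code just tests start_offset directly.

	/*
	 * Hypothetical helper, for illustration only: a first chunk's
	 * reserved/static overlap is recorded in start_offset, so a
	 * non-zero value implies what has_reserved used to signal.
	 */
	static inline bool pcpu_chunk_has_reserved(struct pcpu_chunk *chunk)
	{
		return chunk->start_offset != 0;
	}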

diff --git a/mm/percpu-internal.h b/mm/percpu-internal.h
index 92fc012..c876b5b 100644
--- a/mm/percpu-internal.h
+++ b/mm/percpu-internal.h
@@ -23,11 +23,6 @@ struct pcpu_chunk {
 	void			*data;		/* chunk data */
 	int			first_free;	/* no free below this */
 	bool			immutable;	/* no [de]population allowed */
-	bool			has_reserved;	/* Indicates if chunk has reserved space
-						   at the beginning. Reserved chunk will
-						   contain reservation for static chunk.
-						   Dynamic chunk will contain reservation
-						   for static and reserved chunks. */
 	int			start_offset;	/* the overlap with the previous
 						   region to have a page aligned
 						   base_addr */
diff --git a/mm/percpu-stats.c b/mm/percpu-stats.c
index 44e561d..32f3550 100644
--- a/mm/percpu-stats.c
+++ b/mm/percpu-stats.c
@@ -58,7 +58,7 @@ static void chunk_map_stats(struct seq_file *m, struct pcpu_chunk *chunk,
 	int cur_min_alloc = 0, cur_med_alloc = 0, cur_max_alloc = 0;
 
 	alloc_sizes = buffer;
-	s_index = chunk->has_reserved ? 1 : 0;
+	s_index = (chunk->start_offset) ? 1 : 0;
 
 	/* find last allocation */
 	last_alloc = -1;
diff --git a/mm/percpu.c b/mm/percpu.c
index e94f0d1..470e1a0 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -727,7 +727,6 @@ static struct pcpu_chunk *pcpu_alloc_chunk(void)
 	chunk->map[0] = 0;
 	chunk->map[1] = pcpu_unit_size | 1;
 	chunk->map_used = 1;
-	chunk->has_reserved = false;
 
 	INIT_LIST_HEAD(&chunk->list);
 	INIT_LIST_HEAD(&chunk->map_extend_list);
@@ -1704,7 +1703,6 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
 	schunk->map[1] = schunk->start_offset;
 	schunk->map[2] = (ai->static_size + schunk->free_size) | 1;
 	schunk->map_used = 2;
-	schunk->has_reserved = true;
 
 	/* init dynamic chunk if necessary */
 	if (dyn_size) {
@@ -1724,7 +1722,6 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
 		dchunk->map[1] = dchunk->start_offset;
 		dchunk->map[2] = (dchunk->start_offset + dchunk->free_size) | 1;
 		dchunk->map_used = 2;
-		dchunk->has_reserved = true;
 	}
 
 	/* link the first chunk in */
-- 
2.9.3
