Message-ID: <878tjeh96m.fsf@yhuang-dev.intel.com>
Date: Mon, 24 Jul 2017 10:15:29 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Ying Huang <ying.huang@...el.com>,
Wenwei Tao <wenwei.tww@...baba-inc.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, Minchan Kim <minchan@...nel.org>,
"Rik van Riel" <riel@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
"Johannes Weiner" <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Hillf Danton <hillf.zj@...baba-inc.com>
Subject: Re: [PATCH 2/2] mm/swap: Remove lock_initialized flag from swap_slots_cache
Hi, Tim,
Tim Chen <tim.c.chen@...ux.intel.com> writes:
> We will only reach the lock initialization code
> in alloc_swap_slot_cache when the cpu's swap_slots_cache's slots
> have not been allocated and swap_slots_cache has not been initialized
> previously. So the lock_initialized check is redundant and unnecessary.
> Remove lock_initialized flag from swap_slots_cache to save memory.
Is there a race condition between this path and CPU offline/online when
preemption is enabled?
CPU A                                CPU B
-----                                -----
                                     get_swap_page()
                                       get cache[B], cache[B]->slots != NULL
                                       preempted and moved to CPU A
                                     be offlined
                                     be onlined
                                     alloc_swap_slot_cache()
mutex_lock(cache[B]->alloc_lock)
                                       mutex_init(cache[B]->alloc_lock) !!!
That is, cache[B]->alloc_lock may be reinitialized while it is still held
by the task that was migrated to CPU A.
Best Regards,
Huang, Ying
> Reported-by: Wenwei Tao <wenwei.tww@...baba-inc.com>
> Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
> ---
> include/linux/swap_slots.h | 1 -
> mm/swap_slots.c | 9 ++++-----
> 2 files changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/swap_slots.h b/include/linux/swap_slots.h
> index 6ef92d1..a75c30b 100644
> --- a/include/linux/swap_slots.h
> +++ b/include/linux/swap_slots.h
> @@ -10,7 +10,6 @@
> #define THRESHOLD_DEACTIVATE_SWAP_SLOTS_CACHE (2*SWAP_SLOTS_CACHE_SIZE)
>
> struct swap_slots_cache {
> - bool lock_initialized;
> struct mutex alloc_lock; /* protects slots, nr, cur */
> swp_entry_t *slots;
> int nr;
> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
> index 4c5457c..c039e6c 100644
> --- a/mm/swap_slots.c
> +++ b/mm/swap_slots.c
> @@ -140,11 +140,10 @@ static int alloc_swap_slot_cache(unsigned int cpu)
> if (cache->slots || cache->slots_ret)
> /* cache already allocated */
> goto out;
> - if (!cache->lock_initialized) {
> - mutex_init(&cache->alloc_lock);
> - spin_lock_init(&cache->free_lock);
> - cache->lock_initialized = true;
> - }
> +
> + mutex_init(&cache->alloc_lock);
> + spin_lock_init(&cache->free_lock);
> +
> cache->nr = 0;
> cache->cur = 0;
> cache->n_ret = 0;