Message-ID: <87sf2ceoks.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Thu, 01 Feb 2024 13:33:07 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Chris Li <chrisl@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org,
	linux-mm@...ck.org,
	Wei Xu <weixugc@...gle.com>,
	Yu Zhao <yuzhao@...gle.com>,
	Greg Thelen <gthelen@...gle.com>,
	Chun-Tse Shao <ctshao@...gle.com>,
	Suren Baghdasaryan <surenb@...gle.com>,
	Yosry Ahmed <yosryahmed@...gle.com>,
	Brian Geffon <bgeffon@...gle.com>,
	Minchan Kim <minchan@...nel.org>,
	Michal Hocko <mhocko@...e.com>,
	Mel Gorman <mgorman@...hsingularity.net>,
	Nhat Pham <nphamcs@...il.com>,
	Johannes Weiner <hannes@...xchg.org>,
	Kairui Song <kasong@...cent.com>,
	Zhongkun He <hezhongkun.hzk@...edance.com>,
	Kemeng Shi <shikemeng@...weicloud.com>,
	Barry Song <v-songbaohua@...o.com>
Subject: Re: [PATCH v2] mm: swap: async free swap slot cache entries

Chris Li <chrisl@...nel.org> writes:

> We discovered that the slowest 1% of swap page faults take 100us+,
> while 50% of swap faults complete in under 20us.
>
> Further investigation shows that, in the long-tail case, a large
> portion of the time is spent in the free_swap_slots() function.
>
> The percpu cache of swap slots is freed in a batch of 64 entries
> inside free_swap_slots(). These cache entries are accumulated
> from previous page faults, which may not be related to the current
> process.
>
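> For context, the per-CPU state involved (a simplified excerpt of
> struct swap_slots_cache; field names match mm/swap_slots.c and the
> diff below) looks roughly like this:
>
>	struct swap_slots_cache {
>		...
>		spinlock_t	free_lock;  /* protects slots_ret, n_ret */
>		swp_entry_t	*slots_ret; /* entries waiting to be freed */
>		int		n_ret;      /* batch freed once it reaches
>					     * SWAP_SLOTS_CACHE_SIZE (64) */
>	};
>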
> Doing the batch free in the page fault handler causes longer
> tail latencies and penalizes the current process.
>
> Move free_swap_slots() outside of the swapin page fault handler into an
> async work queue to avoid such long tail latencies.
>
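> The mechanism (distilled from the diff below) is the standard
> workqueue deferral idiom: the fault path only queues work, and the
> batch free runs later in a kworker:
>
>	/* set up once per cache in alloc_swap_slot_cache() */
>	INIT_WORK(&cache->async_free, swapcache_async_free_entries);
>
>	/* fault path: defer instead of freeing 64 entries inline */
>	if (cache->n_ret >= SWAP_SLOTS_CACHE_SIZE)
>		schedule_work(&cache->async_free);
>
>	/* worker: recover the cache, then batch free under free_lock */
>	cache = container_of(data, struct swap_slots_cache, async_free);
>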
> The batch free of swap slots typically takes on the order of
> 100us. Such a

Running a ~100us operation in an asynchronous task appears to be
overkill to me too.

Can you try to move some operations out of swapcache_free_entries() to
check whether that can resolve your issue?

--
Best Regards,
Huang, Ying

> short time will not have a significant impact on CPU accounting.
> Note that the previous swap slot batching behavior already performs
> a delayed batch free: it waits until 64 entries have accumulated.
> Adding the async scheduling time therefore does not significantly
> change the original free timing.
>
> Testing:
>
> Chun-Tse ran some benchmarks on a Chromebook, showing that
> zram_wait_metrics improves by about 15% at the 80% and 95%
> confidence levels.
>
> I recently ran some experiments on about 1000 Google production
> machines. They show that swapin latency in the long-tail
> 100us - 500us bucket drops dramatically.
>
> platform	(100-500us)	 	(0-100us)
> A		1.12% -> 0.36%		98.47% -> 99.22%
> B		0.65% -> 0.15%		98.96% -> 99.46%
> C		0.61% -> 0.23%		98.96% -> 99.38%
>
> Signed-off-by: Chris Li <chrisl@...nel.org>
> To: Andrew Morton <akpm@...ux-foundation.org>
> Cc: linux-kernel@...r.kernel.org
> Cc: linux-mm@...ck.org
> Cc: Wei Xu <weixugc@...gle.com>
> Cc: Yu Zhao <yuzhao@...gle.com>
> Cc: Greg Thelen <gthelen@...gle.com>
> Cc: Chun-Tse Shao <ctshao@...gle.com>
> Cc: Suren Baghdasaryan <surenb@...gle.com>
> Cc: Yosry Ahmed <yosryahmed@...gle.com>
> Cc: Brian Geffon <bgeffon@...gle.com>
> Cc: Minchan Kim <minchan@...nel.org>
> Cc: Michal Hocko <mhocko@...e.com>
> Cc: Mel Gorman <mgorman@...hsingularity.net>
> Cc: Huang Ying <ying.huang@...el.com>
> Cc: Nhat Pham <nphamcs@...il.com>
> Cc: Johannes Weiner <hannes@...xchg.org>
> Cc: Kairui Song <kasong@...cent.com>
> Cc: Zhongkun He <hezhongkun.hzk@...edance.com>
> Cc: Kemeng Shi <shikemeng@...weicloud.com>
> Cc: Barry Song <v-songbaohua@...o.com>
>
> ---
> Changes in v2:
> - Add description of the impact of the timing change, suggested by Ying.
> - Remove create_workqueue() and use schedule_work()
> - Link to v1: https://lore.kernel.org/r/20231221-async-free-v1-1-94b277992cb0@kernel.org
> ---
>  include/linux/swap_slots.h |  1 +
>  mm/swap_slots.c            | 29 +++++++++++++++++++++--------
>  2 files changed, 22 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/swap_slots.h b/include/linux/swap_slots.h
> index 15adfb8c813a..67bc8fa30d63 100644
> --- a/include/linux/swap_slots.h
> +++ b/include/linux/swap_slots.h
> @@ -19,6 +19,7 @@ struct swap_slots_cache {
>  	spinlock_t	free_lock;  /* protects slots_ret, n_ret */
>  	swp_entry_t	*slots_ret;
>  	int		n_ret;
> +	struct work_struct async_free;
>  };
>  
>  void disable_swap_slots_cache_lock(void);
> diff --git a/mm/swap_slots.c b/mm/swap_slots.c
> index 0bec1f705f8e..71d344564e55 100644
> --- a/mm/swap_slots.c
> +++ b/mm/swap_slots.c
> @@ -44,6 +44,7 @@ static DEFINE_MUTEX(swap_slots_cache_mutex);
>  static DEFINE_MUTEX(swap_slots_cache_enable_mutex);
>  
>  static void __drain_swap_slots_cache(unsigned int type);
> +static void swapcache_async_free_entries(struct work_struct *data);
>  
>  #define use_swap_slot_cache (swap_slot_cache_active && swap_slot_cache_enabled)
>  #define SLOTS_CACHE 0x1
> @@ -149,6 +150,7 @@ static int alloc_swap_slot_cache(unsigned int cpu)
>  		spin_lock_init(&cache->free_lock);
>  		cache->lock_initialized = true;
>  	}
> +	INIT_WORK(&cache->async_free, swapcache_async_free_entries);
>  	cache->nr = 0;
>  	cache->cur = 0;
>  	cache->n_ret = 0;
> @@ -269,6 +271,20 @@ static int refill_swap_slots_cache(struct swap_slots_cache *cache)
>  	return cache->nr;
>  }
>  
> +static void swapcache_async_free_entries(struct work_struct *data)
> +{
> +	struct swap_slots_cache *cache;
> +
> +	cache = container_of(data, struct swap_slots_cache, async_free);
> +	spin_lock_irq(&cache->free_lock);
> +	/* Swap slots cache may be deactivated before acquiring lock */
> +	if (cache->slots_ret) {
> +		swapcache_free_entries(cache->slots_ret, cache->n_ret);
> +		cache->n_ret = 0;
> +	}
> +	spin_unlock_irq(&cache->free_lock);
> +}
> +
>  void free_swap_slot(swp_entry_t entry)
>  {
>  	struct swap_slots_cache *cache;
> @@ -282,17 +298,14 @@ void free_swap_slot(swp_entry_t entry)
>  			goto direct_free;
>  		}
>  		if (cache->n_ret >= SWAP_SLOTS_CACHE_SIZE) {
> -			/*
> -			 * Return slots to global pool.
> -			 * The current swap_map value is SWAP_HAS_CACHE.
> -			 * Set it to 0 to indicate it is available for
> -			 * allocation in global pool
> -			 */
> -			swapcache_free_entries(cache->slots_ret, cache->n_ret);
> -			cache->n_ret = 0;
> +			spin_unlock_irq(&cache->free_lock);
> +			schedule_work(&cache->async_free);
> +			goto direct_free;
>  		}
>  		cache->slots_ret[cache->n_ret++] = entry;
>  		spin_unlock_irq(&cache->free_lock);
> +		if (cache->n_ret >= SWAP_SLOTS_CACHE_SIZE)
> +			schedule_work(&cache->async_free);
>  	} else {
>  direct_free:
>  		swapcache_free_entries(&entry, 1);
>
> ---
> base-commit: eacce8189e28717da6f44ee492b7404c636ae0de
> change-id: 20231216-async-free-bef392015432
>
> Best regards,
