Message-ID: <20161024103133.7c1a8f83@lwn.net>
Date: Mon, 24 Oct 2016 10:31:33 -0600
From: Jonathan Corbet <corbet@....net>
To: Tim Chen <tim.c.chen@...ux.intel.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"Huang, Ying" <ying.huang@...el.com>, dave.hansen@...el.com,
ak@...ux.intel.com, aaron.lu@...el.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Hugh Dickins <hughd@...gle.com>,
Shaohua Li <shli@...nel.org>, Minchan Kim <minchan@...nel.org>,
Rik van Riel <riel@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Hillf Danton <hillf.zj@...baba-inc.com>
Subject: Re: [PATCH v2 2/8] mm/swap: Add cluster lock
On Thu, 20 Oct 2016 16:31:41 -0700
Tim Chen <tim.c.chen@...ux.intel.com> wrote:
> From: "Huang, Ying" <ying.huang@...el.com>
>
> This patch reduces the lock contention on swap_info_struct->lock by
> using a finer-grained lock in swap_cluster_info for some swap
> operations. swap_info_struct->lock is heavily contended when multiple
> processes reclaim pages simultaneously: there is only one lock per swap
> device, a typical configuration has only one or a few swap devices, and
> that lock protects almost all swap-related operations.
So I'm looking at this a bit. Overall it seems like a good thing to do
(from my limited understanding of this area) but I have a probably silly
question...
> struct swap_cluster_info {
> - unsigned int data:24;
> - unsigned int flags:8;
> + unsigned long data;
> };
> -#define CLUSTER_FLAG_FREE 1 /* This cluster is free */
> -#define CLUSTER_FLAG_NEXT_NULL 2 /* This cluster has no next cluster */
> +#define CLUSTER_COUNT_SHIFT 8
> +#define CLUSTER_FLAG_MASK ((1UL << CLUSTER_COUNT_SHIFT) - 1)
> +#define CLUSTER_COUNT_MASK (~CLUSTER_FLAG_MASK)
> +#define CLUSTER_FLAG_FREE 1 /* This cluster is free */
> +#define CLUSTER_FLAG_NEXT_NULL 2 /* This cluster has no next cluster */
> +/* cluster lock, protect cluster_info contents and sis->swap_map */
> +#define CLUSTER_FLAG_LOCK_BIT 2
> +#define CLUSTER_FLAG_LOCK (1 << CLUSTER_FLAG_LOCK_BIT)
Why the roll-your-own locking and data structures here? To my naive
understanding, it seems like you could do something like:
	struct swap_cluster_info {
		spinlock_t lock;
		atomic_t count;
		unsigned int flags;
	};
Then you could use proper spinlock operations which, among other things,
would make the realtime folks happier. That might well help with the
cache-line sharing issues as well. Some of the count manipulations could
perhaps be done without the lock entirely; similarly, atomic bitops might
save you the locking for some of the flag tweaks - though I'd have to look
more closely to be really sure of that.
The cost, of course, is the growth of this structure, but you've already
noted that the overhead isn't all that high; seems like it could be worth
it.
I assume that I'm missing something obvious here?
Thanks,
jon