Date:   Wed, 28 Dec 2016 11:34:01 +0800
From:   huang ying <huang.ying.caritas@...il.com>
To:     "Huang, Ying" <ying.huang@...el.com>
Cc:     Jonathan Corbet <corbet@....net>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        dave.hansen@...el.com, Andi Kleen <ak@...ux.intel.com>,
        Aaron Lu <aaron.lu@...el.com>, linux-mm@...ck.org,
        LKML <linux-kernel@...r.kernel.org>,
        Hugh Dickins <hughd@...gle.com>, Shaohua Li <shli@...nel.org>,
        Minchan Kim <minchan@...nel.org>,
        Rik van Riel <riel@...hat.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>,
        Hillf Danton <hillf.zj@...baba-inc.com>
Subject: Re: [PATCH v2 2/8] mm/swap: Add cluster lock

Hi, Jonathan,

On Tue, Oct 25, 2016 at 10:05 AM, Huang, Ying <ying.huang@...el.com> wrote:
> Hi, Jonathan,
>
> Thanks for review.
>
> Jonathan Corbet <corbet@....net> writes:
>
>> On Thu, 20 Oct 2016 16:31:41 -0700
>> Tim Chen <tim.c.chen@...ux.intel.com> wrote:
>>
>>> From: "Huang, Ying" <ying.huang@...el.com>
>>>
>>> This patch reduces the lock contention on swap_info_struct->lock by
>>> using a more fine-grained lock in swap_cluster_info for some swap
>>> operations.  swap_info_struct->lock is heavily contended when multiple
>>> processes reclaim pages simultaneously, because there is only one such
>>> lock per swap device, a typical configuration has only one or a few
>>> swap devices, and the lock protects almost all swap-related operations.
>>
>> So I'm looking at this a bit.  Overall it seems like a good thing to do
>> (from my limited understanding of this area) but I have a probably silly
>> question...
>>
>>>  struct swap_cluster_info {
>>> -    unsigned int data:24;
>>> -    unsigned int flags:8;
>>> +    unsigned long data;
>>>  };
>>> -#define CLUSTER_FLAG_FREE 1 /* This cluster is free */
>>> -#define CLUSTER_FLAG_NEXT_NULL 2 /* This cluster has no next cluster */
>>> +#define CLUSTER_COUNT_SHIFT         8
>>> +#define CLUSTER_FLAG_MASK           ((1UL << CLUSTER_COUNT_SHIFT) - 1)
>>> +#define CLUSTER_COUNT_MASK          (~CLUSTER_FLAG_MASK)
>>> +#define CLUSTER_FLAG_FREE           1 /* This cluster is free */
>>> +#define CLUSTER_FLAG_NEXT_NULL              2 /* This cluster has no next cluster */
>>> +/* cluster lock, protect cluster_info contents and sis->swap_map */
>>> +#define CLUSTER_FLAG_LOCK_BIT               2
>>> +#define CLUSTER_FLAG_LOCK           (1 << CLUSTER_FLAG_LOCK_BIT)
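
For illustration, a minimal sketch of how a lock packed into bit
CLUSTER_FLAG_LOCK_BIT of the data word could be taken with the kernel's
existing bit-spinlock helpers.  The helper names below are invented for
the example and are not taken from the patch:

    #include <linux/bit_spinlock.h>

    /* illustrative helpers, not from the patch */
    static inline void cluster_lock(struct swap_cluster_info *ci)
    {
            /* spin on the lock bit packed into the low flag bits of 'data' */
            bit_spin_lock(CLUSTER_FLAG_LOCK_BIT, &ci->data);
    }

    static inline void cluster_unlock(struct swap_cluster_info *ci)
    {
            bit_spin_unlock(CLUSTER_FLAG_LOCK_BIT, &ci->data);
    }

    static inline unsigned long cluster_count(struct swap_cluster_info *ci)
    {
            /* the usage count occupies the bits above the 8 flag bits */
            return (ci->data & CLUSTER_COUNT_MASK) >> CLUSTER_COUNT_SHIFT;
    }
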
>>
>> Why the roll-your-own locking and data structures here?  To my naive
>> understanding, it seems like you could do something like:
>>
>>   struct swap_cluster_info {
>>       spinlock_t lock;
>>       atomic_t count;
>>       unsigned int flags;
>>   };
>>
>> Then you could use proper spinlock operations which, among other things,
>> would make the realtime folks happier.  That might well help with the
>> cache-line sharing issues as well.  Some of the count manipulations could
>> perhaps be done without the lock entirely; similarly, atomic bitops might
>> save you the locking for some of the flag tweaks - though I'd have to look
>> more closely to be really sure of that.
>>
>> The cost, of course, is the growth of this structure, but you've already
>> noted that the overhead isn't all that high; seems like it could be worth
>> it.
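
As a rough sketch of what the standard primitives could look like on the
proposed layout (these helpers are likewise invented for the example):

    #include <linux/spinlock.h>
    #include <linux/atomic.h>

    /* flag updates go under the per-cluster spinlock ... */
    static inline void cluster_set_flag(struct swap_cluster_info *ci,
                                        unsigned int flag)
    {
            spin_lock(&ci->lock);
            ci->flags |= flag;
            spin_unlock(&ci->lock);
    }

    /* ... while count manipulation could stay lockless, as noted above */
    static inline void cluster_inc_count(struct swap_cluster_info *ci)
    {
            atomic_inc(&ci->count);
    }
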
>
> Yes.  The data structure you proposed is much easier to use than the
> current one.  The main concern is the RAM usage.  The size of the data
> structure you proposed is about 80 bytes, while that of the current one
> is about 8 bytes.  There will be one struct swap_cluster_info for every
> 1MB of swap space, so for 1TB of swap space, the total size will be 80MB
> compared with 8MB for the current implementation.

Sorry, I had lockdep turned on when measuring the size change, so the
previous size data is wrong.  The size of the data structure you
proposed is 12 bytes, while that of the current one is 8 bytes on 64-bit
platforms and 4 bytes on 32-bit platforms.
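
With the corrected numbers, the earlier 1TB example works out to roughly
12MB for the layout you proposed versus 8MB for the current one on 64-bit
(about one million swap_cluster_info entries at one per 1MB of swap), so
the gap is much smaller than the 80MB vs. 8MB quoted above.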

Best Regards,
Huang, Ying
