Message-ID: <c7cc5df1-5a2d-15d2-4fa7-0d289fcda2fa@oracle.com>
Date:   Tue, 6 Feb 2018 12:47:54 -0500
From:   Daniel Jordan <daniel.m.jordan@...cle.com>
To:     Laurent Dufour <ldufour@...ux.vnet.ibm.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Cc:     aaron.lu@...el.com, ak@...ux.intel.com, akpm@...ux-foundation.org,
        Dave.Dice@...cle.com, dave@...olabs.net,
        khandual@...ux.vnet.ibm.com, mgorman@...e.de, mhocko@...nel.org,
        pasha.tatashin@...cle.com, steven.sistare@...cle.com,
        yossi.lev@...cle.com
Subject: Re: [RFC PATCH v1 12/13] mm: split up release_pages into non-sentinel
 and sentinel passes

On 02/02/2018 12:00 PM, Laurent Dufour wrote:
> On 02/02/2018 15:40, Laurent Dufour wrote:
>>
>>
>> On 01/02/2018 00:04, daniel.m.jordan@...cle.com wrote:
>>> A common case in release_pages is for the 'pages' list to be in roughly
>>> the same order as they are in their LRU.  With LRU batch locking, when a
>>> sentinel page is removed, an adjacent non-sentinel page must be promoted
>>> to a sentinel page to follow the locking scheme.  So we can get behavior
>>> where nearly every page in the 'pages' array is treated as a sentinel
>>> page, hurting the scalability of this approach.
>>>
>>> To address this, split up release_pages into non-sentinel and sentinel
>>> passes so that the non-sentinel pages can be locked with an LRU batch
>>> lock before the sentinel pages are removed.
>>>
>>> For the prototype, just use a bitmap and a temporary outer loop to
>>> implement this.
>>>
>>> Performance numbers from a single microbenchmark at this point in the
>>> series are included in the next patch.
>>>
>>> Signed-off-by: Daniel Jordan <daniel.m.jordan@...cle.com>
>>> ---
>>>   mm/swap.c | 20 +++++++++++++++++++-
>>>   1 file changed, 19 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/swap.c b/mm/swap.c
>>> index fae766e035a4..a302224293ad 100644
>>> --- a/mm/swap.c
>>> +++ b/mm/swap.c
>>> @@ -731,6 +731,7 @@ void lru_add_drain_all(void)
>>>   	put_online_cpus();
>>>   }
>>>
>>> +#define LRU_BITMAP_SIZE	512
>>>   /**
>>>    * release_pages - batched put_page()
>>>    * @pages: array of pages to release
>>> @@ -742,16 +743,32 @@ void lru_add_drain_all(void)
>>>    */
>>>   void release_pages(struct page **pages, int nr)
>>>   {
>>> -	int i;
>>> +	int h, i;
>>>   	LIST_HEAD(pages_to_free);
>>>   	struct pglist_data *locked_pgdat = NULL;
>>>   	spinlock_t *locked_lru_batch = NULL;
>>>   	struct lruvec *lruvec;
>>>   	unsigned long uninitialized_var(flags);
>>> +	DECLARE_BITMAP(lru_bitmap, LRU_BITMAP_SIZE);
>>> +
>>> +	VM_BUG_ON(nr > LRU_BITMAP_SIZE);
>>
>> While running your series rebased on v4.15-mmotm-2018-01-31-16-51, I'm
>> sometimes hitting this VM_BUG_ON on a ppc64 system where the page size
>> is set to 64K.
> 
> I can't see any link between nr and LRU_BITMAP_SIZE; the caller may pass a
> larger list of pages, whose length has no relation to the LRU list.

You're correct.  I used the hard-coded size to prototype quickly, just to
see how this approach performs.  It's unfortunate that it bit you.

> To move forward and see the benefit of this series combined with the SPF
> one, I declared the bitmap based on nr. This is still not a valid option,
> but it at least allows all the passed pages to be processed.

Yes, the bitmap's not for the final version.
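
For illustration, a minimal sketch of what sizing the bitmap from nr could
look like -- hypothetical code, not the interim change Laurent actually
posted.  release_pages() can be called from atomic context, hence GFP_ATOMIC,
and the possibility of the allocation failing is one more reason the bitmap
is prototype-only:

#include <linux/bitops.h>
#include <linux/slab.h>

/*
 * Hypothetical sketch, not the posted patch: size the LRU bitmap from
 * nr so that callers passing more than LRU_BITMAP_SIZE pages no longer
 * trip the VM_BUG_ON.  BITS_TO_LONGS() rounds nr up to whole longs.
 */
static unsigned long *alloc_lru_bitmap(int nr)
{
	return kcalloc(BITS_TO_LONGS(nr), sizeof(unsigned long),
		       GFP_ATOMIC);
}

In release_pages(), the DECLARE_BITMAP()/VM_BUG_ON() pair would then be
replaced by a call to alloc_lru_bitmap(nr), with a matching kfree() once
both passes are done.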
