Message-Id: <1615974423.0rc8elykcq.astroid@bobo.none>
Date:   Wed, 17 Mar 2021 20:02:18 +1000
From:   Nicholas Piggin <npiggin@...il.com>
To:     Ingo Molnar <mingo@...nel.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Anton Blanchard <anton@...abs.org>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v2] Increase page and bit waitqueue hash size

Excerpts from Ingo Molnar's message of March 17, 2021 6:38 pm:
> 
> * Nicholas Piggin <npiggin@...il.com> wrote:
> 
>> The page waitqueue hash is a bit small (256 entries) on very big systems. A
>> 16 socket 1536 thread POWER9 system was found to encounter hash collisions
>> and excessive time in waitqueue locking at times. This was intermittent and
>> hard to reproduce with the setup we had (very little real IO
>> capacity). The theory is that sometimes (depending on allocation luck)
>> important pages would happen to collide a lot in the hash, slowing down page
>> locking, causing the problem to snowball.
>> 
>> A small test case was made where threads would write and fsync different
>> pages, generating just a small amount of contention across many pages.
>> 
>> Increasing page waitqueue hash size to 262144 entries increased throughput
>> by 182% while also reducing standard deviation 3x. perf before the increase:
>> 
>>   36.23%  [k] _raw_spin_lock_irqsave                -      -
>>               |
>>               |--34.60%--wake_up_page_bit
>>               |          0
>>               |          iomap_write_end.isra.38
>>               |          iomap_write_actor
>>               |          iomap_apply
>>               |          iomap_file_buffered_write
>>               |          xfs_file_buffered_aio_write
>>               |          new_sync_write
>> 
>>   17.93%  [k] native_queued_spin_lock_slowpath      -      -
>>               |
>>               |--16.74%--_raw_spin_lock_irqsave
>>               |          |
>>               |           --16.44%--wake_up_page_bit
>>               |                     iomap_write_end.isra.38
>>               |                     iomap_write_actor
>>               |                     iomap_apply
>>               |                     iomap_file_buffered_write
>>               |                     xfs_file_buffered_aio_write
>> 
>> This patch uses alloc_large_system_hash to allocate a bigger system hash
>> that scales somewhat with memory size. The bit/var wait-queue is also
>> changed to keep the code matching, albeit with a smaller scale factor.
>> 
>> A very small CONFIG_BASE_SMALL option is also added because these are two
>> of the biggest static objects in the image on very small systems.
>> 
>> This hash could be made per-node, which may help reduce remote accesses
>> on well localised workloads, but that adds some complexity with indexing
>> and hotplug, so until we get a less artificial workload to test with,
>> keep it simple.
>> 
>> Signed-off-by: Nicholas Piggin <npiggin@...il.com>
>> ---
>>  kernel/sched/wait_bit.c | 30 +++++++++++++++++++++++-------
>>  mm/filemap.c            | 24 +++++++++++++++++++++---
>>  2 files changed, 44 insertions(+), 10 deletions(-)
>> 
>> diff --git a/kernel/sched/wait_bit.c b/kernel/sched/wait_bit.c
>> index 02ce292b9bc0..dba73dec17c4 100644
>> --- a/kernel/sched/wait_bit.c
>> +++ b/kernel/sched/wait_bit.c
>> @@ -2,19 +2,24 @@
>>  /*
>>   * The implementation of the wait_bit*() and related waiting APIs:
>>   */
>> +#include <linux/memblock.h>
>>  #include "sched.h"
>>  
>> -#define WAIT_TABLE_BITS 8
>> -#define WAIT_TABLE_SIZE (1 << WAIT_TABLE_BITS)
> 
> Ugh, 256 entries is almost embarrassingly small indeed.
> 
> I've put your patch into sched/core, unless Andrew is objecting.

Thanks. Andrew and Linus might have some opinions on it, but if it's 
just in a testing branch for now, that's okay.


> 
>> -	for (i = 0; i < WAIT_TABLE_SIZE; i++)
>> +	if (!CONFIG_BASE_SMALL) {
>> +		bit_wait_table = alloc_large_system_hash("bit waitqueue hash",
>> +							sizeof(wait_queue_head_t),
>> +							0,
>> +							22,
>> +							0,
>> +							&bit_wait_table_bits,
>> +							NULL,
>> +							0,
>> +							0);
>> +	}
>> +	for (i = 0; i < BIT_WAIT_TABLE_SIZE; i++)
>>  		init_waitqueue_head(bit_wait_table + i);
> 
> 
> Meta suggestion: maybe the CONFIG_BASE_SMALL ugliness could be folded 
> into alloc_large_system_hash() itself?

I don't like the ugliness, and that's a good suggestion in some ways, but 
having a constant size and static table is nice for the small systems. I 
don't know, maybe we need to revise the alloc_large_system_hash API slightly.

Perhaps with some kind of DEFINE_LARGE_ARRAY you could have both the
static and dynamic cases? I'll think about it; a rough sketch of the idea
is below.
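
To make that concrete, with heavy caveats: DEFINE_LARGE_HASH and
large_hash_init below are made-up names, not existing kernel APIs. The
small-system case pins a fixed static table at build time while the large
case defers to alloc_large_system_hash() at boot:

/*
 * Hypothetical sketch only. CONFIG_BASE_SMALL is 0 or 1, so it works in
 * both preprocessor and C conditionals.
 */
#include <linux/cache.h>
#include <linux/memblock.h>	/* alloc_large_system_hash() */
#include <linux/wait.h>

#if CONFIG_BASE_SMALL
/* Small systems: fixed static table, no boot-time allocation at all. */
#define DEFINE_LARGE_HASH(type, name, small_bits)			\
	static type name[1 << (small_bits)];				\
	static const unsigned int name##_bits = (small_bits)
#define large_hash_init(name, desc, scale)	do { } while (0)
#else
/* Large systems: table scales with memory size, sized at boot. */
#define DEFINE_LARGE_HASH(type, name, small_bits)			\
	static type *name __read_mostly;				\
	static unsigned int name##_bits __read_mostly
#define large_hash_init(name, desc, scale)				\
	(name = alloc_large_system_hash(desc, sizeof(*name), 0,		\
					(scale), 0, &name##_bits,	\
					NULL, 0, 0))
#endif

/* Usage would then be identical on both configs: */
DEFINE_LARGE_HASH(wait_queue_head_t, bit_wait_table, 4);

The init path would call large_hash_init(bit_wait_table, "bit waitqueue
hash", 22) and then loop initialising the heads either way.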

> 
>> --- a/mm/filemap.c
>> +++ b/mm/filemap.c
> 
>>  static wait_queue_head_t *page_waitqueue(struct page *page)
>>  {
>> -	return &page_wait_table[hash_ptr(page, PAGE_WAIT_TABLE_BITS)];
>> +	return &page_wait_table[hash_ptr(page, page_wait_table_bits)];
>>  }
> 
> I'm wondering whether you've tried to make this NUMA aware through 
> page->node?
> 
> Seems like another useful step when having a global hash ...

Yes, I have patches for that on the back burner. I just wanted to take one
step at a time, but I think we should be able to justify it (a well 
NUMA-ified workload will tend to touch mostly its local page waitqueues, 
keeping cacheline contention within the node). We need to get access to 
a big system again and try to get some more IO on it at some point, so 
stay tuned for that.
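
For reference, the shape I have in mind is roughly the following, where
struct node_wait_table and node_wait_tables[] are hypothetical; the real
complexity is sizing the per-node tables and handling node hotplug, all
elided here:

#include <linux/cache.h>
#include <linux/hash.h>
#include <linux/mm.h>		/* page_to_nid() */
#include <linux/numa.h>
#include <linux/wait.h>

/* Hypothetical per-node table, allocated when a node comes online. */
struct node_wait_table {
	wait_queue_head_t	*heads;
	unsigned int		bits;
};

static struct node_wait_table *node_wait_tables[MAX_NUMNODES] __read_mostly;

static wait_queue_head_t *page_waitqueue(struct page *page)
{
	/* Index by the page's home node, then hash within that node. */
	struct node_wait_table *t = node_wait_tables[page_to_nid(page)];

	return &t->heads[hash_ptr(page, t->bits)];
}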

We actually used to have something similar to this, but Linus removed it
in 9dcb8b685fc30.

The difference now is that the page waitqueue has been split out from
the bit waitqueue. Doing the page waitqueue per-node is much easier because
we don't have the vmalloc problem to deal with, but it still adds some
complexity.
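
For contrast, bit_waitqueue() hashes an arbitrary kernel virtual address,
which may be in vmalloc space with no cheap mapping to a node; it currently
looks roughly like this (pre-patch constants):

wait_queue_head_t *bit_waitqueue(void *word, int bit)
{
	/* Fold the word address and bit number into one hash input. */
	const int shift = BITS_PER_LONG == 32 ? 5 : 6;
	unsigned long val = (unsigned long)word << shift | bit;

	return bit_wait_table + hash_long(val, WAIT_TABLE_BITS);
}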

We also have the page contention bit (PG_waiters) that Linus refers to, 
which takes pressure off the waitqueues (and is probably why 256 entries 
has held up surprisingly well), but as we can see, we do need a larger
table at the high end.
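
The contention bit means unlock_page() only touches the waitqueue hash
when a waiter has flagged itself, roughly like this (simplified from
mm/filemap.c of this era):

void unlock_page(struct page *page)
{
	BUILD_BUG_ON(PG_waiters != 7);
	page = compound_head(page);
	VM_BUG_ON_PAGE(!PageLocked(page), page);
	/*
	 * Clear PG_locked and test PG_waiters (the sign bit of the same
	 * byte) in one atomic op; the hash is only consulted when a
	 * waiter actually queued itself.
	 */
	if (clear_bit_unlock_is_negative_byte(PG_locked, &page->flags))
		wake_up_page_bit(page, PG_locked);
}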

Thanks,
Nick
