lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CA+55aFymeC-s6rkGk4==3RjZu6nyyj2R9c5TBzpwTwJd4yjf2A@mail.gmail.com>
Date:   Tue, 15 Aug 2017 12:41:01 -0700
From:   Linus Torvalds <torvalds@...ux-foundation.org>
To:     Tim Chen <tim.c.chen@...ux.intel.com>
Cc:     Andi Kleen <ak@...ux.intel.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...e.hu>, Kan Liang <kan.liang@...el.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
        linux-mm <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] sched/wait: Break up long wake list walk

On Tue, Aug 15, 2017 at 12:05 PM, Tim Chen <tim.c.chen@...ux.intel.com> wrote:
>
> We have a test case but it is a customer workload.  We'll try to get
> a bit more info.

Ok. Being a customer workload is lovely in the sense that it is
actually a real load, not just a microbecnhmark.

But yeah, it makes it harder to describe and show what's going on.

But you do have access to that workload internally at Intel, and can
at least test things out that way, I assume?

> I agree that dynamic sizing makes a lot of sense.  We'll check to
> see if additional size to the hash table helps, assuming that the
> waiters are distributed among different pages for our test case.

One more thing: it turns out that there are two very different kinds
of users of the page waitqueue.

There's the "wait_on_page_bit*()" users - people waiting for a page to
unlock or stop being under writeback etc.

Those *should* generally be limited to just one wait-queue per waiting
thread, I think.

Then there is the "cachefiles" use, which ends up adding a lot of
waitqueues to a lot of paghes to monitor their state.

Honestly, I think that second use a horrible hack. It basically adds a
waitqueue to each page in order to get a callback when it is ready,
and then copies it.

And it does this for things like cachefiles_read_backing_file(), so
you might have a huge list of pages for copying a large file, and it
adds a callback for every single one of those all at once.

The fix for the cachefiles behavior might be very different from the
fix to the "normal" operations. But making the wait queue hash tables
bigger _should_ help both cases.

We might also want to hash based on the actual bit we're waiting for.
Right now we just do a

        wait_queue_head_t *q = page_waitqueue(page);

but I think the actual bit is always explicit (well, the cachefiles
interface doesn't have that, but looking at the callback for that, it
really only cares about PG_locked, so it *should* make the bit it is
waiting for explicit).

So if we have unnecessarily collisions because we have waiters looking
at different bits of the same page, we could just hash in the bit
number that we're waiting for too.

               Linus

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ