Message-ID: <CA+55aFzPt401xpRzd6Qu-WuDNGneR_m7z25O=0YspNi+cLRb8w@mail.gmail.com>
Date:   Tue, 22 Aug 2017 12:30:19 -0700
From:   Linus Torvalds <torvalds@...ux-foundation.org>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     "Liang, Kan" <kan.liang@...el.com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Mel Gorman <mgorman@...e.de>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        Ingo Molnar <mingo@...e.hu>, Andi Kleen <ak@...ux.intel.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Johannes Weiner <hannes@...xchg.org>, Jan Kara <jack@...e.cz>,
        linux-mm <linux-mm@...ck.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] sched/wait: Break up long wake list walk

On Tue, Aug 22, 2017 at 12:08 PM, Peter Zijlstra <peterz@...radead.org> wrote:
>
> So that migration stuff has a filter on, we need two consecutive numa
> faults from the same page_cpupid 'hash', see
> should_numa_migrate_memory().

Hmm. That is only called for MPOL_F_MORON.
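
For concreteness, a minimal userspace model of that two-stage filter
(my sketch, not the kernel's code: should_numa_migrate_memory() in
kernel/sched/fair.c has more cases around unset cpupids and numa
groups, and it is only reached from the MPOL_F_MORON branch of
mpol_misplaced()):

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for the page's last_cpupid field; -1 = never faulted. */
struct page_model { int last_nid; };

static bool should_migrate(struct page_model *page, int this_nid)
{
        int last_nid = page->last_nid;

        page->last_nid = this_nid;  /* record, like page_cpupid_xchg_last() */
        /* Reject when the previous hinting fault came from a different
         * node: two consecutive faults from the same place are needed
         * before we migrate the page toward it. */
        if (last_nid != -1 && last_nid != this_nid)
                return false;
        return true;
}

int main(void)
{
        struct page_model page = { .last_nid = 0 };

        printf("%d\n", should_migrate(&page, 1)); /* 0: only records node 1 */
        printf("%d\n", should_migrate(&page, 1)); /* 1: 2nd fault from node 1 */
        return 0;
}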

We don't actually know what policy the problem space uses, since this
is some specialized load.

I could easily see somebody having set MPOL_PREFERRED with
MPOL_F_LOCAL and then touching it from every single node. Isn't that
even the default?
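
(From userspace you don't set MPOL_F_LOCAL directly - as far as I
recall it's a kernel-internal flag - you ask for MPOL_PREFERRED with
an empty nodemask, and the kernel records that as "preferred local".
A hedged sketch of how a process would end up there, via
set_mempolicy(2); build with -lnuma:)

#include <numaif.h>     /* set_mempolicy(), MPOL_PREFERRED */
#include <stdio.h>

int main(void)
{
        /* maxnode == 0 means "empty nodemask": prefer whichever node
         * the allocating CPU is on, i.e. local allocation - which is
         * also what the default task policy effectively does. */
        if (set_mempolicy(MPOL_PREFERRED, NULL, 0) == -1)
                perror("set_mempolicy");

        /* Anonymous memory faulted in after this lands on the local
         * node; nothing stops threads on every other node from then
         * touching the very same pages. */
        return 0;
}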

> And since this appears to be anonymous memory (no THP) this is all a
> single address space. However, we don't appear to invalidate TLBs when
> we upgrade the PTE protection bits (not strictly required of course), so
> we can have multiple CPUs trip over the same 'old' NUMA PTE.
>
> Still, generating such a migration storm would be fairly tricky I think.

Well, Mel seems to have been unable to generate a load that reproduces
the long page waitqueues. And I don't think we've had any other
reports of this either.

So "quite tricky" may well be exactly what it needs.

Likely also with a user load that does something that the people
involved in the automatic numa migration would have considered
completely insane and never tested or even thought about.
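
Something like this toy would qualify (entirely my construction - the
actual workload is unknown): one shared anonymous buffer, one thread
pinned to each node with libnuma, all of them touching the same pages
forever, so NUMA balancing keeps taking hinting faults on the same
pages from every node. Build with -lnuma -lpthread:

#include <numa.h>       /* numa_available(), numa_run_on_node() */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define BUF_SZ  (64UL << 20)    /* 64 MiB of plain anonymous memory */

static char *buf;

static void *hammer(void *arg)
{
        int node = (int)(long)arg;
        size_t i;

        numa_run_on_node(node);         /* pin this thread to one node */
        for (;;)                        /* touch every page, forever */
                for (i = 0; i < BUF_SZ; i += 4096)
                        buf[i]++;
        return NULL;
}

int main(void)
{
        int nodes, i;

        if (numa_available() < 0)
                return 1;
        nodes = numa_num_configured_nodes();
        buf = calloc(1, BUF_SZ);
        if (!buf)
                return 1;

        for (i = 0; i < nodes; i++) {
                pthread_t t;

                pthread_create(&t, NULL, hammer, (void *)(long)i);
        }
        pthread_exit(NULL);             /* keep the hammer threads alive */
}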

Users sometimes do completely insane things. It may have started as a
workaround for some particular case where they did something wrong "on
purpose", and then they entirely forgot about it, and five years later
it's running their whole infrastructure and doing insane things
because the "particular case" it was tested with was on some broken
preproduction machine with totally broken firmware tables for memory
node layout.

             Linus
