Date:   Mon, 30 Jan 2017 23:25:22 -0800
From:   John Hubbard <jhubbard@...dia.com>
To:     Dave Hansen <dave.hansen@...el.com>,
        Anshuman Khandual <khandual@...ux.vnet.ibm.com>,
        <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
CC:     <mhocko@...e.com>, <vbabka@...e.cz>, <mgorman@...e.de>,
        <minchan@...nel.org>, <aneesh.kumar@...ux.vnet.ibm.com>,
        <bsingharora@...il.com>, <srikar@...ux.vnet.ibm.com>,
        <haren@...ux.vnet.ibm.com>, <jglisse@...hat.com>,
        <dan.j.williams@...el.com>
Subject: Re: [RFC V2 03/12] mm: Change generic FALLBACK zonelist creation
 process

On 01/30/2017 05:57 PM, Dave Hansen wrote:
> On 01/30/2017 05:36 PM, Anshuman Khandual wrote:
>>> Let's say we had a CDM node with 100x more RAM than the rest of the
>>> system and it was just as fast as the rest of the RAM.  Would we still
>>> want it isolated like this?  Or would we want a different policy?
>>
>> But then the other argument being, don't we want to keep this 100X more
>> memory isolated for some special purpose, to be utilized by specific
>> applications?
>
> I was thinking that in this case, we wouldn't even want to bother with
> having "system RAM" in the fallback lists.  A device that got its memory
> usage off by 1% could start to starve the rest of the system.  A sane
> policy in this case might be to isolate the "system RAM" from the device's.

I also don't like having these policies hard-coded, and your 100x example above
helps clarify what can go wrong with it. It would be nicer if, instead, we could
better express the "distance" between nodes (bandwidth and latency, relative to
sysmem, perhaps), and let the NUMA system figure out the Right Thing To Do.
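
To make that concrete, here is a small userspace sketch that just dumps the
node-to-node distances the kernel already exposes through sysfs (the SLIT
values). Today that is a single scalar per node pair, so bandwidth and latency
would have to be expressed as additional attributes on top; nothing below is
CDM-specific, it only shows what we currently have to work with:

/*
 * Dump the kernel's node-to-node distance table from
 * /sys/devices/system/node/nodeN/distance.  Purely illustrative; assumes
 * nodes are numbered contiguously from 0.
 */
#include <stdio.h>

int main(void)
{
	int node;

	for (node = 0; ; node++) {
		char path[64];
		char line[256];
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/distance", node);
		f = fopen(path, "r");
		if (!f)
			break;	/* no more nodes */
		if (fgets(line, sizeof(line), f))
			printf("node %d distances: %s", node, line);
		fclose(f);
	}
	return 0;
}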

I realize that this is not quite possible with NUMA just yet, but I wonder if that's 
a reasonable direction to go with this?
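
In the meantime, applications that really want the device memory could opt in
explicitly, as long as the CDM memory is presented as an ordinary NUMA node.
A minimal sketch with libnuma (the node number here is purely an assumption
for illustration, not something defined by this series):

/*
 * Sketch only: allocate a buffer on a specific NUMA node.  If coherent
 * device memory shows up as node 1 (hypothetical), a device-aware
 * application can place its working set there while everything else keeps
 * the normal fallback behaviour.  Build with -lnuma.
 */
#include <numa.h>
#include <stdio.h>

int main(void)
{
	int cdm_node = 1;		/* hypothetical CDM node number */
	size_t sz = 1 << 20;
	void *buf;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	buf = numa_alloc_onnode(sz, cdm_node);	/* place allocation on that node */
	if (!buf) {
		perror("numa_alloc_onnode");
		return 1;
	}

	/* ... use the buffer ... */

	numa_free(buf, sz);
	return 0;
}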

thanks,
john h

>
>>> Why do we need this hard-coded along with the cpuset stuff later in the
>>> series?  Doesn't taking a node out of the cpuset also take it out of the
>>> fallback lists?
>>
>> There are two mutually exclusive approaches which are described in
>> this patch series.
>>
>> (1) zonelist modification based approach
>> (2) cpuset restriction based approach
>>
>> As mentioned in the cover letter,
>
> Well, I'm glad you coded both of them up, but now that we have them, how
> do we pick which one to throw to the wolves?  Or, do we just merge both
> of them and let one bitrot? ;)
>
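
To make approach (2) above a bit more concrete: a cpuset-style restriction can
already be approximated from userspace today, without touching the zonelists.
A rough sketch follows; the node numbers, CPU range, and the cgroup-v1 cpuset
mount point are assumptions for illustration, not taken from this series:

/*
 * Rough sketch: confine the current task (and its children) to allocating
 * from the "system RAM" node only, so nothing falls back onto the device
 * node by default.  Assumes a cgroup-v1 cpuset hierarchy mounted at
 * /sys/fs/cgroup/cpuset, system RAM on node 0, CPUs 0-63 -- all illustrative.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror(path);
		return -1;
	}
	if (write(fd, val, strlen(val)) < 0)
		perror(path);
	close(fd);
	return 0;
}

int main(void)
{
	char pid[16];

	mkdir("/sys/fs/cgroup/cpuset/sysram", 0755);
	write_str("/sys/fs/cgroup/cpuset/sysram/cpuset.cpus", "0-63");
	write_str("/sys/fs/cgroup/cpuset/sysram/cpuset.mems", "0");

	snprintf(pid, sizeof(pid), "%d", (int)getpid());
	write_str("/sys/fs/cgroup/cpuset/sysram/tasks", pid);

	/* From here on, this task only allocates from node 0. */
	return 0;
}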
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@...ck.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@...ck.org"> email@...ck.org </a>
>
