Message-ID: <580E62AB.8040303@intel.com>
Date:   Mon, 24 Oct 2016 12:36:11 -0700
From:   Dave Hansen <dave.hansen@...el.com>
To:     David Nellans <dnellans@...dia.com>,
        Anshuman Khandual <khandual@...ux.vnet.ibm.com>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Cc:     mhocko@...e.com, js1304@...il.com, vbabka@...e.cz, mgorman@...e.de,
        minchan@...nel.org, akpm@...ux-foundation.org,
        aneesh.kumar@...ux.vnet.ibm.com, bsingharora@...il.com
Subject: Re: [RFC 0/8] Define coherent device memory node

On 10/24/2016 11:32 AM, David Nellans wrote:
> On 10/24/2016 01:04 PM, Dave Hansen wrote:
>> If you *really* don't want a "cdm" page to be migrated, then why isn't
>> that policy set on the VMA in the first place?  That would keep "cdm"
>> pages from being made non-cdm.  And, why would autonuma ever make a
>> non-cdm page and migrate it into cdm?  There will be no NUMA access
>> faults caused by the devices that are fed to autonuma.
>>
> Pages are intended to be migratable, both into CDM (starting CPU
> ZONE_MOVABLE -> CDM) and out of it (CDM -> CPU ZONE_MOVABLE), but only
> through explicit migration, not via autonuma.
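
For reference, "explicit migration" driven from user space is normally
move_pages(2).  A minimal sketch of what that looks like -- the CDM node
number below is made up purely for illustration:

/*
 * Sketch: explicitly migrate one page of the current process to a
 * (hypothetical) CDM node.  Assumes the CDM memory shows up as node 2.
 * Build with -lnuma.
 */
#include <numaif.h>		/* move_pages(), MPOL_MF_MOVE */
#include <stdio.h>

int migrate_page_to_cdm(void *addr)
{
	void *pages[1]  = { addr };
	int   nodes[1]  = { 2 };	/* made-up CDM node number */
	int   status[1] = { -1 };

	/* pid 0 == calling process; MPOL_MF_MOVE only moves pages
	 * mapped exclusively by this process. */
	if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) < 0) {
		perror("move_pages");
		return -1;
	}

	/* status[0] is the node the page landed on, or a negative errno. */
	printf("page is now on node %d\n", status[0]);
	return 0;
}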

OK, and is there a reason that the existing mbind code plus NUMA
policies fails to give you this behavior?

Does autonuma somehow override strict NUMA binding?
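
Put differently, here is roughly what I would expect to already work with
the existing NUMA API -- a sketch only, and the node number for the CDM
node is made up:

/*
 * Sketch: allocate anonymous memory and bind it to a single
 * (hypothetical) CDM node with MPOL_BIND, so that nothing -- autonuma
 * included -- has anywhere else to migrate it to.  Build with -lnuma.
 */
#include <numaif.h>		/* mbind(), MPOL_BIND, MPOL_MF_STRICT */
#include <sys/mman.h>
#include <stdio.h>

#define CDM_NODE 2UL		/* made-up CDM node number */

void *alloc_on_cdm(size_t len)
{
	unsigned long nodemask = 1UL << CDM_NODE;
	void *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return NULL;

	/* MPOL_BIND is a strict VMA policy: allocations come only from
	 * the nodes in the mask. */
	if (mbind(p, len, MPOL_BIND, &nodemask,
		  sizeof(nodemask) * 8, MPOL_MF_STRICT)) {
		perror("mbind");
		munmap(p, len);
		return NULL;
	}
	return p;
}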

>  Other pages in the same
> VMA should still be migratable between CPU nodes via autonuma, however.

That's not the way the implementation here works, as I understand it.
See the VM_CDM patch and my responses to it.

> It's expected that a lot of these allocations are going to end up in
> THPs.  I'm not sure we need to explicitly disallow hugetlbfs support,
> but the identified use case is definitely THP, not hugetlbfs.

I think THP and hugetlbfs are implementations, not use cases. :)

Is it so hard to support hugetlbfs that we should complicate its code
to exclude it from this type of memory?  Why?
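
To be concrete about "implementations, not use cases": from user space,
THP is just a hint on ordinary anonymous memory, while hugetlbfs is an
explicitly requested mapping type.  Roughly (sketch only):

#define _GNU_SOURCE
#include <sys/mman.h>

/* THP: plain anonymous memory; huge pages are a transparent hint that
 * the kernel is free to ignore. */
void *alloc_thp_hint(size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p != MAP_FAILED)
		madvise(p, len, MADV_HUGEPAGE);
	return p;
}

/* hugetlbfs: huge pages are requested explicitly and come from a
 * separately reserved pool. */
void *alloc_hugetlb(size_t len)
{
	return mmap(NULL, len, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
}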
