Date:   Mon, 28 Nov 2016 04:41:30 -0500 (EST)
From:   Jerome Glisse <jglisse@...hat.com>
To:     Anshuman Khandual <khandual@...ux.vnet.ibm.com>
Cc:     akpm@...ux-foundation.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, John Hubbard <jhubbard@...dia.com>,
        Jatin Kumar <jakumar@...dia.com>,
        Mark Hairgrove <mhairgrove@...dia.com>,
        Sherry Cheung <SCheung@...dia.com>,
        Subhash Gutti <sgutti@...dia.com>
Subject: Re: [HMM v13 08/18] mm/hmm: heterogeneous memory management (HMM
 for short)

> On 11/27/2016 06:40 PM, Jerome Glisse wrote:
> > On Wed, Nov 23, 2016 at 09:33:35AM +0530, Anshuman Khandual wrote:
> >> On 11/18/2016 11:48 PM, Jérôme Glisse wrote:
> > 
> > [...]
> > 
> >>> + *
> >>> + *      hmm_vma_migrate(vma, start, end, ops);
> >>> + *
> >>> + * The ops struct provides two callbacks: alloc_and_copy(), which allocates
> >>> + * the destination memory and initializes it from the source memory.
> >>> + * Migration can still fail after this step, so the last callback,
> >>> + * finalize_and_map(), lets the device driver know which pages were
> >>> + * migrated successfully and which were not.
> >>
> >> So we have page->pgmap->free_devpage() to release an individual page back
> >> to the device driver's management during migration, and we also have this
> >> ops-based finalize_and_map() to check for failed instances within a single
> >> migration context, which can cover a set of pages at a time.
> >>
> >>> + *
> >>> + * This can easily be used outside of HMM's intended use case.
> >>
> >> Where do you think this can be used outside of HMM?
> > 
> > Well, on the radar is the new memory hierarchy that seems to be on every
> > CPU designer's roadmap, where you have a small, fast HBM-like memory
> > packaged with the CPU and then you have the regular memory.
> >
> > In the embedded world they want to migrate active processes to the fast
> > CPU memory and shut down the regular memory to save power.
> >
> > In the HPC world they want to migrate the hot data of hot processes to
> > this fast memory.
> >
> > In both cases we are talking about process-based memory migration, and in
> > the embedded case they also have a DMA engine they can use to offload the
> > copy operation itself.
> >
> > These are the useful cases I have in mind, but other people might see the
> > code and realise they could also use it for their own specific corner
> > cases.
> 
> If there are plans for HBM or a specialized type of memory which will be
> packaged inside the CPU (without any other device accessing it, as in the
> case of a GPU or network card), then I think using HMM in that case is not
> ideal. The CPU will be the only thing accessing this memory and there is
> never going to be any other device or context that can access it outside
> of the CPU. Hence the role of a device driver is redundant; the memory
> should be initialized and used as a basic platform component.

AFAIK no CPU can saturate the bandwidth of this memory, and thus it only
makes sense when there is something like a GPU on die. So in my mind this
kind of memory is always used preferably by a GPU but could still be used
by the CPU. In that context you also always have a DMA engine to offload
copies from the CPU. I was more selling the HMM migration code in that
context :)
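
To make the two-step flow in the quoted comment concrete, here is a rough
sketch of what a driver-side ops structure could look like. Only the names
hmm_vma_migrate(), alloc_and_copy() and finalize_and_map() come from the
patch; the callback signatures, the pfn arrays and the private cookie are
illustrative assumptions, not the actual HMM v13 prototypes.

#include <linux/mm.h>	/* for struct vm_area_struct */

/*
 * Illustrative only -- signatures are assumed for the example, not copied
 * from the patches. The core collects the source pages for [start, end),
 * the driver allocates and fills destination memory in alloc_and_copy()
 * (possibly using its DMA engine), the core then tries to remap, and
 * finalize_and_map() reports which pages actually migrated so the driver
 * can release the ones that did not.
 */
struct example_migrate_ops {
	/* Allocate destination pages and copy the source pages into them. */
	void (*alloc_and_copy)(struct vm_area_struct *vma,
			       const unsigned long *src_pfns,
			       unsigned long *dst_pfns,
			       unsigned long start,
			       unsigned long end,
			       void *private);
	/* Called after the remap attempt; inspect which entries made it and
	 * free any destination page that could not be mapped. */
	void (*finalize_and_map)(struct vm_area_struct *vma,
				 const unsigned long *src_pfns,
				 const unsigned long *dst_pfns,
				 unsigned long start,
				 unsigned long end,
				 void *private);
};

/*
 * A driver would then fill in an instance and call, as in the quoted
 * comment:
 *
 *	hmm_vma_migrate(vma, start, end, &ops);
 */

The point of splitting the work in two is that the copy can be offloaded
(e.g. to a DMA engine) before the CPU page tables are switched over, and a
page that fails to remap is simply reported back to the driver instead of
being silently lost.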

 
> In that case what we need is core-VM-managed memory with certain kinds of
> restrictions around allocation, and a way to allocate into it explicitly
> if required. Representing this memory as a CPU-less, restricted coherent
> device memory node is a better solution IMHO. The RFCs I have posted
> regarding the CDM representation are efforts in this direction.
> 
> [RFC Specialized Zonelists]    https://lkml.org/lkml/2016/10/24/19
> [RFC Restrictive mems_allowed] https://lkml.org/lkml/2016/11/22/339
> 
> I believe both HMM and CDM have their own use cases and will complement
> each other.

Yes, how this memory is represented is probably better handled by something
like CDM.
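
For what it's worth, if this memory ends up represented as a CPU-less CDM
node, explicit placement into it would presumably go through the existing
NUMA interfaces. A minimal userspace sketch, assuming libnuma and a made-up
node number (node 1 standing in for the device memory node; build with
"cc sketch.c -lnuma"):

#include <numa.h>
#include <stdio.h>

int main(void)
{
	/* Assumption: node 1 is the CPU-less coherent device memory node. */
	int node = 1;
	size_t len = 1 << 20;
	void *buf;

	if (numa_available() < 0) {
		fprintf(stderr, "NUMA is not available on this system\n");
		return 1;
	}

	/* Explicitly allocate the buffer from the device memory node. */
	buf = numa_alloc_onnode(len, node);
	if (!buf) {
		fprintf(stderr, "numa_alloc_onnode failed\n");
		return 1;
	}

	/* ... use buf ... */

	numa_free(buf, len);
	return 0;
}

Under the restrictive zonelist / mems_allowed proposals linked above, normal
allocations would stay off such a node, so explicit placement like this (or
via mbind()/set_mempolicy()) would be the way to opt in.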

Cheers,
Jérôme
