Message-ID: <20170717142743.GA9420@gmail.com>
Date:   Mon, 17 Jul 2017 10:27:44 -0400
From:   Jerome Glisse <j.glisse@...il.com>
To:     Yisheng Xie <xieyisheng1@...wei.com>
Cc:     Jean-Philippe Brucker <jean-philippe.brucker@....com>,
        "Wuzongyong (Cordius Wu, Euler Dept)" <wuzongyong1@...wei.com>,
        "iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "Wanzongshun (Vincent)" <wanzongshun@...wei.com>,
        "oded.gabbay@....com" <oded.gabbay@....com>, liubo95@...wei.com
Subject: Re: What differences and relations between SVM, HSA, HMM and Unified
 Memory?

On Mon, Jul 17, 2017 at 07:57:23PM +0800, Yisheng Xie wrote:
> Hi Jean-Philippe,
> 
> On 2017/6/12 19:37, Jean-Philippe Brucker wrote:
> > Hello,
> > 
> > On 10/06/17 05:06, Wuzongyong (Cordius Wu, Euler Dept) wrote:
> >> Hi,
> >>
> >> Could someone explain the differences and relations between SVM (Shared
> >> Virtual Memory, by Intel), HSA (Heterogeneous System Architecture, by AMD),
> >> HMM (Heterogeneous Memory Management, by Glisse) and UM (Unified Memory, by
> >> NVIDIA)? Are they substitutes for one another?
> >>
> >> As I understand it, these all aim to solve the same problem: sharing
> >> pointers between CPU and GPU (implemented with ATS/PASID/PRI/IOMMU
> >> support). So far, SVM and HSA can only be used by integrated GPUs, and
> >> Intel states that its root ports do not have the required TLP prefix
> >> support, so SVM can't be used by discrete devices. Could someone tell me
> >> what the required TLP prefix means, specifically?
> >>
> >> With HMM, we can use an allocator like malloc to manage host and device
> >> memory. Does this mean that there is no need for SVM and HSA when using
> >> HMM, or is HMM the basis on which SVM and HSA implement the Fine-Grained
> >> System SVM defined in the OpenCL spec?
> > 
> > I can't provide an exhaustive answer, but I have done some work on SVM.
> > Take it with a grain of salt, though; I am not an expert.
> > 
> > * HSA is an architecture that provides a common programming model for CPUs
> > and accelerators (GPGPUs, etc.). It does have an SVM requirement (I/O page
> > faults, PASID and compatible address spaces), though that is only a small
> > part of it.
> > 
> > * Similarly, OpenCL provides an API for dealing with accelerators. OpenCL
> > 2.0 introduced the concept of Fine-Grained System SVM, which allows
> > passing userspace pointers to devices. It is just one flavor of SVM; there
> > are also coarse-grained and non-system variants. But OpenCL might have
> > coined the name, and I believe that in the context of the Linux IOMMU,
> > when we talk about "SVM" we mean OpenCL's fine-grained system SVM.
> > [...]
> > 
> > While SVM is only about virtual address space,
> As you mentioned, SVM is only about the virtual address space. I'd like to
> know how the physical memory, especially the device's RAM, was managed
> before HMM.
> 
> When OpenCL allocates an SVM pointer like:
>     void* p = clSVMAlloc (
>         context, // an OpenCL context where this buffer is available
>         CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
>         size, // amount of memory to allocate (in bytes)
>         0 // alignment in bytes (0 means default)
>     );
> 
> where does this RAM come from, device RAM or host RAM?
> 

For SVM using ATS/PASID with FINE_GRAIN, your allocation can only be in
system memory (host RAM). You need a special system bus like CAPI or CCIX,
both of which go a step further than ATS/PASID, to allow fine-grained
allocations to use device memory.
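
To make that concrete, here is a rough userspace sketch of the FINE_GRAIN
buffer case. It is an illustration only, not taken from any spec: it assumes
a device that reports CL_DEVICE_SVM_FINE_GRAIN_BUFFER, and that the usual
context/queue/kernel setup and error checking are done elsewhere:

    #define CL_TARGET_OPENCL_VERSION 220
    #include <CL/cl.h>

    /* Sketch: the clSVMAlloc() buffer lives in host RAM; with ATS/PASID the
     * device reaches it through the CPU page tables, never device memory. */
    static void fine_grain_buffer_example(cl_context ctx,
                                          cl_command_queue queue,
                                          cl_kernel kernel)
    {
        size_t n = 1 << 20;
        float *p = clSVMAlloc(ctx,
                              CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
                              n * sizeof(float), 0);

        p[0] = 1.0f;                            /* CPU writes directly */
        clSetKernelArgSVMPointer(kernel, 0, p); /* GPU sees the same pointer */
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
        clFinish(queue);
        clSVMFree(ctx, p);
    }

The pointer is usable on both sides without map/unmap calls, but the backing
pages stay in host RAM the whole time.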

However, that is where HMM can be useful, as HMM is a software solution to
this problem. With HMM and a device that can work with HMM, fine-grained
allocations can also use device memory; any CPU access, however, will
happen in host RAM.
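
For the fine-grained *system* flavor Jean-Philippe mentioned (a plain
malloc() pointer handed to the device), the userspace side would look
roughly like the sketch below. Again this is just an illustration assuming
the device reports CL_DEVICE_SVM_FINE_GRAIN_SYSTEM; the comments describe
what an HMM-capable driver can do underneath:

    #define CL_TARGET_OPENCL_VERSION 220
    #include <CL/cl.h>
    #include <stdlib.h>

    static void fine_grain_system_example(cl_command_queue queue,
                                          cl_kernel kernel)
    {
        size_t n = 1 << 20;
        float *p = malloc(n * sizeof(float));   /* ordinary anonymous memory */

        p[0] = 1.0f;                            /* pages start in host RAM */
        clSetKernelArgSVMPointer(kernel, 0, p);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
        clFinish(queue);
        /* With HMM the driver may have migrated the pages backing p to device
         * memory while the GPU worked on them; the CPU read below faults them
         * back, so the CPU access itself always happens in host RAM. */
        float x = p[0];
        (void)x;
        free(p);
    }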

Jérôme
