Date: Tue, 12 Oct 2021 12:03:22 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: Alex Sierra <alex.sierra@....com>, Felix.Kuehling@....com,
	linux-mm@...ck.org, rcampbell@...dia.com, linux-ext4@...r.kernel.org,
	linux-xfs@...r.kernel.org, amd-gfx@...ts.freedesktop.org,
	dri-devel@...ts.freedesktop.org, hch@....de, jglisse@...hat.com,
	apopple@...dia.com
Subject: Re: [PATCH v1 00/12] MEMORY_DEVICE_COHERENT for CPU-accessible
 coherent device memory

On Tue, 12 Oct 2021 15:56:29 -0300 Jason Gunthorpe <jgg@...dia.com> wrote:

> > To what other uses will this infrastructure be put?
> >
> > Because I must ask: if this feature is for one single computer which
> > presumably has a custom kernel, why add it to mainline Linux?
>
> Well, it certainly isn't just "one single computer". Overall I know of
> about, hmm, ~10 *datacenters* worth of installations that are using
> similar technology underpinnings.
>
> "Frontier" is the code name for a specific installation, but as the
> technology is proven out there will be many copies made of that same
> approach.
>
> The previous program "Summit" was done with NVIDIA GPUs and PowerPC
> CPUs and also included a very similar capability. I think this is a
> good sign that this coherently attached accelerator will continue to
> be a theme in computing going forward. IIRC this was done using
> out-of-tree kernel patches and NUMA localities.
>
> Specifically with CXL now being standardized and on a path to
> ubiquity, I think we will see an explosion in deployments of
> coherently attached accelerator memory. This is the high end
> trickling down to wider usage.
>
> I strongly think many CXL accelerators are going to want to manage
> their on-accelerator memory in this way, as it makes universal sense
> to want to carefully manage memory access locality to optimize for
> performance.

Thanks. Can we please get something like the above into the [0/n]
changelog? Along with any other high-level info which is relevant?

It's rather important: "why should I review this", "why should we merge
this", etc.
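
[Editor's note: for readers unfamiliar with the feature named in the subject
line, the following is a minimal sketch of how a device driver might register
CPU-coherent device memory using the MEMORY_DEVICE_COHERENT pgmap type this
series proposes. It is not taken from the patches; the function names, the
no-op page_free callback, and the resource range handling are illustrative
assumptions layered on the existing devm_memremap_pages() interface.]

/*
 * Sketch: give coherent, CPU-accessible device memory struct pages so it
 * can be mapped and migrated like ordinary system RAM.  Assumes the
 * MEMORY_DEVICE_COHERENT type added by this series.
 */
#include <linux/memremap.h>
#include <linux/device.h>

/* Invoked when a device-coherent page's refcount drops to zero; a real
 * driver would hand the page back to its own allocator here. */
static void example_page_free(struct page *page)
{
}

static const struct dev_pagemap_ops example_pgmap_ops = {
	.page_free = example_page_free,
};

static int example_register_coherent_mem(struct device *dev,
					 struct dev_pagemap *pgmap,
					 u64 start, u64 size)
{
	void *addr;

	pgmap->type = MEMORY_DEVICE_COHERENT;	/* type proposed by this series */
	pgmap->range.start = start;		/* physical base of device memory */
	pgmap->range.end = start + size - 1;
	pgmap->nr_range = 1;
	pgmap->ops = &example_pgmap_ops;
	pgmap->owner = dev;

	/* Create struct pages backing the device memory range. */
	addr = devm_memremap_pages(dev, pgmap);
	return IS_ERR(addr) ? PTR_ERR(addr) : 0;
}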