Message-ID: <db82e52c-0159-777d-8fa9-7b5cf93eca7f@redhat.com>
Date: Sat, 31 Oct 2020 11:05:45 +0100
From: David Hildenbrand <david@...hat.com>
To: Christoph Hellwig <hch@...radead.org>, Mike Rapoport <rppt@...nel.org>
Cc: Sudarshan Rajagopalan <sudaraja@...eaurora.org>,
	Mark Rutland <mark.rutland@....com>,
	Catalin Marinas <catalin.marinas@....com>,
	Anshuman Khandual <anshuman.khandual@....com>,
	linux-kernel@...r.kernel.org, Steven Price <steven.price@....com>,
	Suren Baghdasaryan <surenb@...gle.com>,
	Greg Kroah-Hartman <gregkh@...gle.com>, Will Deacon <will@...nel.org>,
	linux-arm-kernel@...ts.infradead.org, Pratik Patel <pratikp@...eaurora.org>
Subject: Re: mm/memblock: export memblock_{start/end}_of_DRAM

On 31.10.20 10:18, Christoph Hellwig wrote:
> On Fri, Oct 30, 2020 at 10:38:42AM +0200, Mike Rapoport wrote:
>>
>> What do you mean by "system memory block"? There could be a lot of
>> interpretations if you take into account memory hotplug, the "mem="
>> option, and reserved and firmware memory.
>>
>> I'd suggest you describe the entire use case in more detail. Having
>> the complete picture would help in finding a proper solution.
> 
> I think we need the code for the driver trying to do this as an RFC
> submission. Everything else is rather pointless.

Sharing RFCs is most probably not what people want when developing
advanced hypervisor features :)

@Sudarshan, I recommend looking at the slides of the KVM Forum talk from
yesterday:

https://kvmforum2020.sched.com/event/eE40/towards-an-alternative-memory-architecture-joao-martins-oracle?iframe=no

It contains a nice summary of the state of the art, and of how "mem=",
devdax, and dax_hmat can be used to tackle the issue in a hypervisor.

-- 
Thanks,

David / dhildenb