Date:   Sat, 22 Jan 2022 11:18:14 +0100
From:   Thomas Schoebel-Theuer <tst@...oebel-theuer.de>
To:     Matthew Wilcox <willy@...radead.org>,
        "Longpeng (Mike, Cloud Infrastructure Service Product Dept.)" 
        <longpeng2@...wei.com>
Cc:     Khalid Aziz <khalid.aziz@...cle.com>,
        Barry Song <21cnbao@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Arnd Bergmann <arnd@...db.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        David Hildenbrand <david@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>, Mike Rapoport <rppt@...nel.org>,
        Suren Baghdasaryan <surenb@...gle.com>
Subject: Re: [RFC PATCH 0/6] Add support for shared PTEs across processes

On 1/22/22 2:41 AM, Matthew Wilcox wrote:
> On Sat, Jan 22, 2022 at 01:39:46AM +0000, Longpeng (Mike, Cloud Infrastructure Service Product Dept.) wrote:
>>>> Our use case is that we have some very large files stored on persistent
>>>> memory which we want to mmap in thousands of processes.  So the first
>> The memory overhead of PTEs would be significantly saved if we use
>> hugetlbfs in this case, but why not?
> Because we want the files to be persistent across reboots.
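
For context, the quoted pattern is roughly the following: each of those
thousands of processes mmap()s the same very large file, and today every
process carries its own page tables for that mapping. A minimal sketch
(path and sizes purely illustrative, not from this thread):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	/* Illustrative path to a large file on persistent memory. */
	int fd = open("/mnt/pmem0/bigfile", O_RDWR);
	if (fd < 0) { perror("open"); return 1; }

	struct stat st;
	if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

	void *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); return 1; }

	/* With 4 KiB pages and 8-byte PTEs, a 1 TiB mapping costs about
	 * 2 GiB of last-level page tables -- and that cost is currently
	 * replicated in every process that maps the file. Shared PTEs
	 * would keep a single copy instead. */
	printf("mapped %lld bytes at %p\n", (long long)st.st_size, p);

	munmap(p, st.st_size);
	close(fd);
	return 0;
}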

100% agree. There is another use case: geo-redundancy.

My view is publicly documented at
https://github.com/schoebel/mars/tree/master/docu (see
architecture-guide-geo-redundancy.pdf there).

In some scenarios, migrating block devices from hardware architecture A
to hardware architecture B, or having them (temporarily) co-exist across
both, might become a future requirement for me.

The current implementation does not yet use hugetlbfs, nor any of its
proposed lower-overhead, more fine-grained, or less
hardware-architecture-specific (future) alternatives.

For me, all of these are future options, in particular when they are (1)
abstractable so that architectural dependencies are reduced, and hopefully
(2) usable from both kernelspace and userspace.

It would be great if msharefs were not only low-footprint, but also
usable from kernelspace.

Reducing (or getting rid of) preallocation strategies would also be a
valuable feature for me.

Of course, I cannot decide now what I will prefer for any future
requirements. But some kind of mutual awareness and future collaboration
would be great.
