Date:   Tue, 25 Feb 2020 12:31:07 -0800 (PST)
From:   Hugh Dickins <hughd@...gle.com>
To:     David Hildenbrand <david@...hat.com>
cc:     Hugh Dickins <hughd@...gle.com>,
        Yang Shi <yang.shi@...ux.alibaba.com>,
        kirill.shutemov@...ux.intel.com, aarcange@...hat.com,
        akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [v2 PATCH] mm: shmem: allow split THP when truncating THP
 partially

On Tue, 25 Feb 2020, David Hildenbrand wrote:
> > 
> > I notice that this thread has veered off into QEMU ballooning
> > territory: which may indeed be important, but there's nothing at all
> > that I can contribute on that.  I certainly do not want to slow down
> > anything important, but remain convinced that the correct filesystem
> > implementation for punching a hole is to punch a hole.
> 
> I am not completely sure I follow all the shmem details (sorry!). But
> trying to "punch a partial hole" into a hugetlbfs page will result
> in the very same behavior as shmem shows today, no?

I believe so.

> 
> FALLOC_FL_PUNCH_HOLE: "Within the specified range, partial filesystem
> blocks are zeroed, and whole filesystem blocks are removed from the
> file." ... After a successful call, subsequent reads from this range
> will return zeros."
> 
> So, as long as we are talking about partial blocks the documented
> behavior seems to be to only zero the memory.
> 
> Does this patch fix "FALLOC_FL_PUNCH_HOLE does not free blocks if called
> in block granularity on shmem" (which would be a valid fix),

Yes. The block size of tmpfs (talking x86_64 for simplicity) is 4KiB;
but when mounted huge, it transparently takes advantage of 2MiB extents
when it can.  Rather like a disk-based filesystem that always presents a
4KiB block interface, but stores its data on disk in multisector extents.
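
(For concreteness, here is a minimal userspace sketch, not taken from
this thread: it assumes /dev/shm is a tmpfs mount, writes an 8MiB file,
punches one 2MiB-aligned, 2MiB-long hole, and prints st_blocks before
and after.  The path and sizes are illustrative assumptions only; on a
huge=always mount the aligned punch is expected to give back a whole
2MiB extent, while the question above concerns smaller, block-granularity
ranges inside a huge extent.)

/* punch-demo.c: illustrative sketch, build with: gcc -o punch-demo punch-demo.c */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/dev/shm/punch-demo";	/* assumed to be a tmpfs mount */
	const size_t len = 8 * 1024 * 1024;		/* 8MiB: four 2MiB extents */
	struct stat st;

	int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
	if (fd < 0) { perror("open"); return 1; }

	/* Populate the file so blocks are actually allocated. */
	char *buf = malloc(len);
	memset(buf, 0xaa, len);
	if (write(fd, buf, len) != (ssize_t)len) { perror("write"); return 1; }

	fstat(fd, &st);
	printf("before punch: st_blocks=%lld (512-byte units)\n",
	       (long long)st.st_blocks);

	/* Whole-extent hole: offset and length are both 2MiB. */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      2 * 1024 * 1024, 2 * 1024 * 1024))
		perror("fallocate");

	fstat(fd, &st);
	printf("after  punch: st_blocks=%lld (512-byte units)\n",
	       (long long)st.st_blocks);

	free(buf);
	close(fd);
	unlink(path);
	return 0;
}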

Whereas hugetlbfs is a different filesystem, which has always been
limited to supporting only certain larger block sizes.

> or does it
> try to implement something that is not documented? (removing partial
> blocks when called in sub-block granularity)

No.

> 
> I assume the latter, in which case I would interpret "punching a hole is
> to punch a hole" as "punching sub-blocks will not free blocks".
> 
> (if somebody could enlighten me which important piece I am missing or
> messing up, that would be great :) )
> 
> -- 
> Thanks,
> 
> David / dhildenb
