Message-ID: <pxeazv32rilu75pxzfw6ksx3fmlij5brih2247d73c3vq6mokc@qcnuqbqs4buj>
Date: Tue, 16 Jul 2024 13:11:23 +0000
From: Daniel Gomez <da.gomez@...sung.com>
To: Ryan Roberts <ryan.roberts@....com>
CC: David Hildenbrand <david@...hat.com>, Baolin Wang
	<baolin.wang@...ux.alibaba.com>, Matthew Wilcox <willy@...radead.org>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>, "hughd@...gle.com"
	<hughd@...gle.com>, "wangkefeng.wang@...wei.com"
	<wangkefeng.wang@...wei.com>, "ying.huang@...el.com" <ying.huang@...el.com>,
	"21cnbao@...il.com" <21cnbao@...il.com>, "shy828301@...il.com"
	<shy828301@...il.com>, "ziy@...dia.com" <ziy@...dia.com>,
	"ioworker0@...il.com" <ioworker0@...il.com>, Pankaj Raghav
	<p.raghav@...sung.com>, "linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v5 0/6] add mTHP support for anonymous shmem

On Tue, Jul 09, 2024 at 09:28:48AM GMT, Ryan Roberts wrote:
> On 07/07/2024 17:39, Daniel Gomez wrote:
> > On Fri, Jul 05, 2024 at 10:59:02AM GMT, David Hildenbrand wrote:
> >> On 05.07.24 10:45, Ryan Roberts wrote:
> >>> On 05/07/2024 06:47, Baolin Wang wrote:
> >>>>
> >>>>
> >>>> On 2024/7/5 03:49, Matthew Wilcox wrote:
> >>>>> On Thu, Jul 04, 2024 at 09:19:10PM +0200, David Hildenbrand wrote:
> >>>>>> On 04.07.24 21:03, David Hildenbrand wrote:
> >>>>>>>> shmem has two uses:
> >>>>>>>>
> >>>>>>>>      - MAP_ANONYMOUS | MAP_SHARED (this patch set)
> >>>>>>>>      - tmpfs
> >>>>>>>>
> >>>>>>>> For the second use case we don't want controls *at all*, we want the
> >>>>>>>> same heuristics used for all other filesystems to apply to tmpfs.
> >>>>>>>
> >>>>>>> As discussed in the MM meeting, Hugh had a different opinion on that.
> >>>>>>
> >>>>>> FWIW, I just recalled that I wrote a quick summary:
> >>>>>>
> >>>>>> https://lkml.kernel.org/r/f1783ff0-65bd-4b2b-8952-52b6822a0835@redhat.com
> >>>>>>
> >>>>>> I believe the meetings are recorded as well, but I never looked at the recordings.
> >>>>>
> >>>>> That's not what I understood Hugh to mean.  To me, it seemed that Hugh
> >>>>> was expressing an opinion on using shmem as shmem, not as using it as
> >>>>> tmpfs.
> >>>>>
> >>>>> If I misunderstood Hugh, well, I still disagree.  We should not have
> >>>>> separate controls for this.  tmpfs is just not that special.
> >>>
> >>> I wasn't at the meeting that's being referred to, but I thought we previously
> >>> agreed that tmpfs *is* special because in some configurations it's not backed by
> >>> swap, so it is locked in RAM?
> >>
> >> There are multiple things to that, like:
> >>
> >> * Machines only having limited/no swap configured
> >> * tmpfs can be configured to never go to swap
> >> * memfd/tmpfs files getting used purely for mmap(): there is no real
> >>   difference to MAP_ANON|MAP_SHARE besides the processes we share that
> >>   memory with.
> >>
> >> Especially when it comes to memory waste concerns and access behavior in
> >> some cases, tmpfs behaves much more like anonymous memory. But there are for
> >> sure other use cases where tmpfs is not that special.
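
To make David's third point concrete, here is a minimal userspace sketch
(illustrative only, not code from the series) showing that a memfd used
purely for mmap() behaves like MAP_ANONYMOUS|MAP_SHARED memory:

	#define _GNU_SOURCE
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		size_t len = 2 * 1024 * 1024;

		/* Variant 1: anonymous shared mapping (this patch set). */
		void *a = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_ANONYMOUS | MAP_SHARED, -1, 0);

		/* Variant 2: memfd backed by shmem, used purely for mmap(). */
		int fd = memfd_create("demo", 0);
		if (fd < 0 || ftruncate(fd, len))
			return 1;
		void *b = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);

		/* Both are shmem-backed; only the sharing mechanism differs. */
		return (a == MAP_FAILED || b == MAP_FAILED);
	}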
> > 
> > Having controls to select the allowable folio order allocations for
> > tmpfs does not address any of these issues. The suggested filesystem
> > approach [1] involves allocating in larger chunks (higher orders), but
> > always the same total amount you would allocate when using order-0 folios.
> 
> Well you can't know that you will never allocate more. If you allocate a 2M

In the fs large folio approach implementation [1], the allocation of a 2M folio
(or any non-order-0 folio) occurs when the size of the write/fallocate is 2M
(and the index is aligned).
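
A minimal sketch of that selection logic (pick_folio_order() is a
hypothetical name for illustration, not the actual code in [1]):

	/*
	 * Pick the highest folio order that is fully covered by this
	 * write/fallocate and naturally aligned at 'index'.
	 */
	static unsigned int pick_folio_order(pgoff_t index, size_t len)
	{
		unsigned int order = 0;

		if (len >= PAGE_SIZE)
			order = min_t(unsigned int,
				      ilog2(len >> PAGE_SHIFT),
				      MAX_PAGECACHE_ORDER);

		/* Shrink until the start index is aligned to the order. */
		while (order && (index & ((1UL << order) - 1)))
			order--;

		/* A 2M write at an aligned index yields order 9 (4k pages). */
		return order;
	}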

> block, you probably have some good readahead data that tells you you are likely
> to keep reading sequentially, but you don't know for sure that the application
> won't stop after just 4K.

Is shmem_file_read_iter() using readahead data to perform the read? Or what do
you mean exactly?

In [1], reads are performed in chunks of 4k, so I think this does not apply.

> 
> > So,
> > it's a conservative approach. Using mTHP knobs in tmpfs would cause:
> > * Over-allocation when using mTHP and/or THP under the 'always' flag.
> > * Allocation in bigger chunks in a non-optimal way when
> > not all mTHP and THP orders are enabled.
> > * Operation in a similar manner as in [1] when all mTHP and THP orders
> > are enabled and the 'within_size' flag is used (assuming we use patch 11
> > from [1]).
> 
> Large folios may still be considered scarce resources even if the amount of
> memory allocated is still the same. And if shmem isn't backed by swap then once
> you have allocated a large folio for shmem, it is stuck in shmem, even if it
> would be better used somewhere else.

Is that true for tmpfs as well? We have shmem_unused_huge_shrink(), which
reclaims unused large folios (on ENOSPC and via free_cached_objects()). Can't we
reuse that when the system is under memory pressure?
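
For reference, tmpfs already wires that reclaim path into the generic
superblock shrinker roughly like this (simplified sketch of the existing
mm/shmem.c plumbing; exact signatures may differ by tree):

	static long shmem_free_cached_objects(struct super_block *sb,
					      struct shrink_control *sc)
	{
		/* Split/reclaim unused tails of large folios. */
		return shmem_unused_huge_shrink(SHMEM_SB(sb), sc,
						sc->nr_to_scan);
	}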

> 
> And it's possible (likely even, in my opinion) that allocating lots of different
> folio sizes will exacerbate memory fragmentation, leading to more order-0
> fallbacks, which would hurt the overall system performance in the long run, vs
> restricting to a couple of folio sizes.

Since we are transitioning to large folios in other filesystems, the impact
of restricting the order here will only depend on the extent of tmpfs usage
relative to the rest of the system. Luis discussed the topic of mm fragmentation
and its measurement in a session at LSFMM this year [2].

[2] https://lore.kernel.org/all/ZkUOXQvVjXP1T6Nk@bombadil.infradead.org/

> 
> I'm starting some work to actually measure how limiting the folio sizes
> allocated for page cache memory can help reduce large folio allocation failure

It would be great to hear more about that effort.

> overall. My hypothesis is that the data will show us that in an environment like
> Android, where memory pressure is high, limiting everything to order-0 and
> order-4 will significantly improve the allocation success rate of order-4. Let's
> see.
> 
> > 
> > [1] Last 3 patches of these series:
> > https://lore.kernel.org/all/20240515055719.32577-1-da.gomez@samsung.com/
> > 
> > My understanding of why mTHP was preferred is to raise awareness in
> > user space and allow tmpfs mounts used at boot time to operate in
> > 'safe' mode (no large folios). Does it make more sense to have a
> > large-folio enable flag to control order allocation as in [1], instead
> > of one knob for every single possible order?
> 
> My intuition is towards every order possible, as per above. Let's see what the
> data tells us.
> 
> > 
> >>
> >> My opinion is that we need to let people configure orders (if you feel like
> >> it, configure all), but *select* the order to allocate based on readahead
> >> information -- in contrast to anonymous memory where we start at the highest
> >> order and don't have readahead information available.
> >>
> >> Maybe we need different "order allcoation" logic for read/write vs. fault,
> >> not sure.
> > 
> > I would suggest [1] using the size of the write for the write
> > and fallocate paths. But when does it make sense to use readahead
> > information? Maybe when swap is involved?
> > 
> >>
> >> But I don't maintain that code, so I can only give stupid suggestions and
> >> repeat what I understood from the meeting with Hugh and Kirill :)
> >>
> >> -- 
> >> Cheers,
> >>
> >> David / dhildenb
> 
