Message-ID: <eb93777b-cb74-41d9-80e9-a700bf693d4e@redhat.com>
Date: Fri, 16 May 2025 20:56:30 +0200
From: David Hildenbrand <david@...hat.com>
To: Claudio Imbrenda <imbrenda@...ux.ibm.com>
Cc: linux-kernel@...r.kernel.org, linux-s390@...r.kernel.org,
kvm@...r.kernel.org, linux-mm@...ck.org,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Janosch Frank <frankja@...ux.ibm.com>, Heiko Carstens <hca@...ux.ibm.com>,
Vasily Gorbik <gor@...ux.ibm.com>, Alexander Gordeev
<agordeev@...ux.ibm.com>, Sven Schnelle <svens@...ux.ibm.com>,
Thomas Huth <thuth@...hat.com>, Matthew Wilcox <willy@...radead.org>,
Zi Yan <ziy@...dia.com>, Sebastian Mitterle <smitterl@...hat.com>
Subject: Re: [PATCH v1 0/3] s390/uv: handle folios that cannot be split while
dirty
On 16.05.25 19:17, Claudio Imbrenda wrote:
> On Fri, 16 May 2025 14:39:43 +0200
> David Hildenbrand <david@...hat.com> wrote:
>
>> From patch #3:
>>
>> "
>> Currently, starting a PV VM on an iomap-based filesystem with large
>> folio support, such as XFS, will not work. We'll be stuck in
>> unpack_one()->gmap_make_secure(), because we can't seem to make progress
>> splitting the large folio.
>>
>> The problem is that we require a writable PTE, but a writable PTE under
>> such filesystems implies a dirty folio.
>>
>> So whenever we have a writable PTE, we'll have a dirty folio, and dirty
>> iomap folios cannot currently get split, because
>> split_folio()->split_huge_page_to_list_to_order()->filemap_release_folio()
>> will fail in iomap_release_folio().
>>
>> So we will not make any progress splitting such large folios.
>> "
>>
>> Let's fix one related problem during unpack first, and then handle such
>> folios by triggering writeback before immediately retrying the split.
>>
>> This makes it work on XFS with large folios again.
>>
>> Long-term, we should cleanly support splitting such folios even
>> without writeback, but that's a bit harder to implement and not a
>> quick fix.
>
> Picked for 6.16. I think it will survive the CI without issues, since
> I assume you tested this thoroughly.
I did test what was known to be broken, but our QE did not run any
bigger tests on it. So giving it some soaking time and waiting a bit
for more review might be a good idea!
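
For anyone skimming the thread, the retry logic is conceptually along
these lines. This is a minimal sketch of the idea only, not the actual
patch: the helper name, locking details, and error handling are
simplified assumptions.

#include <linux/pagemap.h>	/* filemap_write_and_wait_range() */
#include <linux/huge_mm.h>	/* split_folio() */

/*
 * Sketch only, not the actual patch: if split_folio() fails with
 * -EBUSY because the folio is dirty (iomap_release_folio() refuses
 * to release the private state of dirty folios), write the folio
 * back and retry the split once.
 *
 * The caller is assumed to hold a reference and the folio lock, as
 * required by split_folio().
 */
static int try_split_folio_with_writeback(struct folio *folio)
{
	struct address_space *mapping = folio_mapping(folio);
	loff_t pos = folio_pos(folio);
	size_t size = folio_size(folio);
	int ret;

	ret = split_folio(folio);
	if (ret != -EBUSY || !mapping || !folio_test_dirty(folio))
		return ret;

	/* Writeback takes the folio lock itself, so drop it first. */
	folio_unlock(folio);
	ret = filemap_write_and_wait_range(mapping, pos, pos + size - 1);
	folio_lock(folio);
	if (ret)
		return ret;

	/*
	 * The folio should be clean now, so iomap_release_folio()
	 * will no longer refuse and the split can succeed.
	 */
	return split_folio(folio);
}

A real implementation additionally has to cope with the folio being
truncated, freed, or concurrently split while the lock is dropped;
the sketch glosses over that.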
--
Cheers,
David / dhildenb