Message-ID: <e809a1f3-cf80-87f9-4337-03e92af73bc9@shipmail.org>
Date: Fri, 27 Sep 2019 11:27:06 +0200
From: Thomas Hellström (VMware)
<thomas_os@...pmail.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
dri-devel <dri-devel@...ts.freedesktop.org>,
Linux-MM <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Matthew Wilcox <willy@...radead.org>
Subject: Re: Ack to merge through DRM? WAS Re: [PATCH v2 1/5] mm: Add
write-protect and clean utilities for address space ranges

On 9/27/19 7:55 AM, Thomas Hellström (VMware) wrote:
> On 9/27/19 12:20 AM, Linus Torvalds wrote:
>> On Thu, Sep 26, 2019 at 1:55 PM Thomas Hellström (VMware)
>> <thomas_os@...pmail.org> wrote:
>>> Well, we're working on supporting huge puds and pmds in the graphics
>>> VMAs, although in the write-notify cases we're looking at here, we
>>> would probably want to split them down to PTE level.
>> Well, that's what the existing walker code does if you don't have that
>> "pud_entry()" callback.
>>
>> That said, I assume you would *not* want to do that if the huge
>> pud/pmd is already clean and read-only, but just continue.
>>
>> So you may want to have a special pud_entry() that handles that case.
>> Eventually. Maybe. Although honestly, if you're doing dirty tracking,
>> I doubt it makes much sense to use largepages.
>
> The approach we're looking at in this case is to keep huge entries
> write-protected and split them in the wp_huge_xxx() code's fallback
> path with the mmap_sem held. This means that there will actually be
> huge entries in the page-walking code soon, but as you say, only
> entries that we want to ignore and not split. So we'd also need a way
> to avoid the pagewalk splitting in the situation where someone faults
> a huge entry in just before the call to split_huge_xxx.
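
FWIW, the sort of pud_entry() I have in mind here is roughly the below
(untested sketch, the name is made up). It relies on huge entries in
these VMAs being kept write-protected, so the walk only ever needs to
skip them or detect the race:

static int wp_walk_pud_entry(pud_t *pud, unsigned long addr,
			     unsigned long end, struct mm_walk *walk)
{
	pud_t pudval = READ_ONCE(*pud);

	/* Huge entry that is already write-protected: nothing to do. */
	if (pud_trans_huge(pudval) && !pud_write(pudval))
		return 0;

	/*
	 * A writable huge entry means a huge fault raced in before our
	 * split; back off and let the caller retry after splitting.
	 */
	if (pud_trans_huge(pudval))
		return -EBUSY;

	return 0;
}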
>
>>
>>> Looking at zap_pud_range() which when called from unmap_mapping_pages()
>>> uses identical locking (no mmap_sem), it seems we should be able to get
>>> away with i_mmap_lock(), making sure the whole page table doesn't
>>> disappear under us. So it's not clear to me why the mmap_sem is
>>> strictly needed here. Better to sort those restrictions out now
>>> rather than when huge entries start appearing.
>> zap_pud_range() actually does have that
>>
>> VM_BUG_ON_VMA(!rwsem_is_locked(&tlb->mm->mmap_sem), vma);
>>
>> exactly for the case where it might have to split the pud entry.
>
> Yes. My take on this is that locking the PUD ptl can be done with
> either the mmap_sem or, if present, the i_mmap_lock held, and that we
> should update the asserts in xxx_trans_huge_lock to reflect that. But
> when
> actually splitting transhuge pages you don't want to race with
> khugepaged, so you need the mmap_sem. For the graphics VMAs
> (MIXEDMAP), khugepaged never touches them. Yet.
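
For the asserts, what I'm thinking of is along these lines in
pud_trans_huge_lock() (and the pmd equivalent), as a rough sketch:

	VM_BUG_ON_VMA(!rwsem_is_locked(&vma->vm_mm->mmap_sem) &&
		      !(vma->vm_file &&
			rwsem_is_locked(&vma->vm_file->f_mapping->i_mmap_rwsem)),
		      vma);

That is, accept either lock. It's a weaker check than a strict lockdep
annotation would be, but it matches the style of the existing assert.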
>
>>
>> It's why they've never gotten translated to use the generic walker code.
>
> OK. Yes there are a number of various specialized pagewalks all over
> the mm code.
>
> But another thing that worries me is that the page-table modifications
> done in the callbacks rely on functionality that is not guaranteed to
> be exported, and that mm people don't want exported, since drivers
> shouldn't be hacking around in page tables. That means the two
> callbacks used here would need to become a set of core helpers anyway.
>
> So I figure what I would end up with would actually be an extern
> __walk_page_range anyway, and slightly modified asserts. Do you think
> that could be acceptable?
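
Concretely, I figure that would mean making __walk_page_range()
non-static and declaring it somewhere like mm.h, roughly (sketch):

	int __walk_page_range(unsigned long start, unsigned long end,
			      struct mm_walk *walk);

with the callbacks themselves implemented in mm/ as the core helpers.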
Actually, I'll give your original suggestion a try and see what I come
up with.

Thanks,
Thomas