Message-Id: <2CED2F72-4D1C-4DBC-AC03-4B246E1673C2@gmail.com>
Date: Tue, 12 Oct 2021 10:31:45 -0700
From: Nadav Amit <nadav.amit@...il.com>
To: Peter Xu <peterx@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Andrew Cooper <andrew.cooper3@...rix.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Will Deacon <will@...nel.org>, Yu Zhao <yuzhao@...gle.com>,
Nick Piggin <npiggin@...il.com>, x86@...nel.org
Subject: Re: [PATCH 1/2] mm/mprotect: use mmu_gather

> On Oct 12, 2021, at 3:16 AM, Peter Xu <peterx@...hat.com> wrote:
>
> On Sat, Sep 25, 2021 at 01:54:22PM -0700, Nadav Amit wrote:
>> @@ -338,25 +344,25 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
>> struct mm_struct *mm = vma->vm_mm;
>> pgd_t *pgd;
>> unsigned long next;
>> - unsigned long start = addr;
>> unsigned long pages = 0;
>> + struct mmu_gather tlb;
>>
>> BUG_ON(addr >= end);
>> pgd = pgd_offset(mm, addr);
>> flush_cache_range(vma, addr, end);
>> inc_tlb_flush_pending(mm);
>> + tlb_gather_mmu(&tlb, mm);
>> + tlb_start_vma(&tlb, vma);
>
> Pure question:
>
> I actually have no idea why tlb_start_vma() is needed here, as the protection
> range can be just a single page, but anyway.. I do see that tlb_start_vma()
> contains a whole-vma flush_cache_range() when the arch needs it. Does that
> mean that, besides inc_tlb_flush_pending() being dropped, the other call to
> flush_cache_range() above should be dropped as well?

Good point.

tlb_start_vma() and tlb_end_vma() are required since some archs do not
batch TLB flushes across VMAs (e.g., ARM). I am not sure whether that’s
the best behavior for all archs, but I do not want to change it.
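
For reference, on architectures that use the generic helpers the per-VMA
hooks behave roughly as below. This is a paraphrase of the asm-generic/tlb.h
versions, not the exact code, and archs are free to override both:

	static inline void tlb_start_vma(struct mmu_gather *tlb,
					 struct vm_area_struct *vma)
	{
		if (tlb->fullmm)
			return;

		/* whole-VMA cache flush, on the archs that need one */
		flush_cache_range(vma, vma->vm_start, vma->vm_end);
	}

	static inline void tlb_end_vma(struct mmu_gather *tlb,
				       struct vm_area_struct *vma)
	{
		if (tlb->fullmm)
			return;

		/* flush the gathered range at the VMA boundary */
		tlb_flush_mmu_tlbonly(tlb);
	}

The flush in tlb_end_vma() at the VMA boundary is what keeps the flushes
from being batched across VMAs.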
Anyhow, you make a valid point: since tlb_start_vma() already does the
cache flush where needed, the explicit flush_cache_range() should be
dropped as well. I will do so in the next version.
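
Roughly, I would expect the function to end up looking something like the
following. This is only a sketch, not the actual next version, and it
assumes change_p4d_range() takes the mmu_gather as in the unquoted parts
of this patch:

	static unsigned long change_protection_range(struct vm_area_struct *vma,
			unsigned long addr, unsigned long end, pgprot_t newprot,
			unsigned long cp_flags)
	{
		struct mm_struct *mm = vma->vm_mm;
		pgd_t *pgd;
		unsigned long next;
		unsigned long pages = 0;
		struct mmu_gather tlb;

		BUG_ON(addr >= end);
		pgd = pgd_offset(mm, addr);

		/* inc_tlb_flush_pending() is done by tlb_gather_mmu() */
		tlb_gather_mmu(&tlb, mm);
		/* flush_cache_range() is done by tlb_start_vma() where needed */
		tlb_start_vma(&tlb, vma);
		do {
			next = pgd_addr_end(addr, end);
			if (pgd_none_or_clear_bad(pgd))
				continue;
			pages += change_p4d_range(&tlb, vma, pgd, addr, next,
						  newprot, cp_flags);
		} while (pgd++, addr = next, addr != end);
		tlb_end_vma(&tlb, vma);
		/* dec_tlb_flush_pending() is done by tlb_finish_mmu() */
		tlb_finish_mmu(&tlb);

		return pages;
	}
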
Regards,
Nadav