Message-ID: <CALCETrXftGB02iTtmkEe2gdjeRdkU9ZZCDmON_4W0+psr1FLpw@mail.gmail.com>
Date: Tue, 9 May 2017 15:54:49 -0700
From: Andy Lutomirski <luto@...nel.org>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Andy Lutomirski <luto@...nel.org>, X86 ML <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Borislav Petkov <bpetkov@...e.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Rik van Riel <riel@...hat.com>,
Nadav Amit <namit@...are.com>, Michal Hocko <mhocko@...e.com>,
Sasha Levin <sasha.levin@...cle.com>
Subject: Re: [RFC 03/10] x86/mm: Make the batched unmap TLB flush API more generic
On Tue, May 9, 2017 at 10:13 AM, Dave Hansen <dave.hansen@...el.com> wrote:
> On 05/09/2017 06:02 AM, Andy Lutomirski wrote:
>> On Mon, May 8, 2017 at 8:34 AM, Dave Hansen <dave.hansen@...el.com> wrote:
>>> On 05/07/2017 05:38 AM, Andy Lutomirski wrote:
>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>> index f6838015810f..2e568c82f477 100644
>>>> --- a/mm/rmap.c
>>>> +++ b/mm/rmap.c
>>>> @@ -579,25 +579,12 @@ void page_unlock_anon_vma_read(struct anon_vma *anon_vma)
>>>> void try_to_unmap_flush(void)
>>>> {
>>>> struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
>>>> - int cpu;
>>>>
>>>> if (!tlb_ubc->flush_required)
>>>> return;
>>>>
>>>> - cpu = get_cpu();
>>>> -
>>>> - if (cpumask_test_cpu(cpu, &tlb_ubc->cpumask)) {
>>>> - count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
>>>> - local_flush_tlb();
>>>> - trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
>>>> - }
>>>> -
>>>> - if (cpumask_any_but(&tlb_ubc->cpumask, cpu) < nr_cpu_ids)
>>>> - flush_tlb_others(&tlb_ubc->cpumask, NULL, 0, TLB_FLUSH_ALL);
>>>> - cpumask_clear(&tlb_ubc->cpumask);
>>>> tlb_ubc->flush_required = false;
>>>> tlb_ubc->writable = false;
>>>> - put_cpu();
>>>> }
>>>>
>>>> /* Flush iff there are potentially writable TLB entries that can race with IO */
>>>> @@ -613,7 +600,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
>>>> {
>>>> struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
>>>>
>>>> - cpumask_or(&tlb_ubc->cpumask, &tlb_ubc->cpumask, mm_cpumask(mm));
>>>> + arch_tlbbatch_add_mm(&tlb_ubc->arch, mm);
>>>> tlb_ubc->flush_required = true;
>>>>
>>>> /*
>>>
>>> Looking at this patch in isolation, how can this be safe? It removes
>>> TLB flushes from the generic code. Do other patches in the series fix
>>> this up?
>>
>> Hmm? Unless I totally screwed this up, this patch just moves the
>> flushes around -- it shouldn't remove any flushes.
>
> This takes a flush out of try_to_unmap_flush(). It adds code for
> arch_tlbbatch_flush(), but not *calls* to arch_tlbbatch_flush() that I
> can see.
>
> I actually don't see _any_ in the whole series in a quick grepping. Am
> I just missing them?
Oops! I must have stared at that function for so long that I started
seeing the invisible call. I'll fix that.
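
Concretely, something like this (untested sketch; it assumes arch_tlbbatch_flush()
takes the per-task arch batch, mirroring the arch_tlbbatch_add_mm() call in
set_tlb_ubc_flush_pending() above):

void try_to_unmap_flush(void)
{
	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;

	if (!tlb_ubc->flush_required)
		return;

	/* Hand the batched CPU set to the arch code to do the actual flush. */
	arch_tlbbatch_flush(&tlb_ubc->arch);

	tlb_ubc->flush_required = false;
	tlb_ubc->writable = false;
}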