Message-ID: <533AB741.5080508@redhat.com>
Date: Tue, 01 Apr 2014 08:55:29 -0400
From: Rik van Riel <riel@...hat.com>
To: Ingo Molnar <mingo@...nel.org>
CC: linux-kernel@...r.kernel.org, linux-mm@...ck.org, shli@...nel.org,
akpm@...ux-foundation.org, hughd@...gle.com, mgorman@...e.de,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH] x86,mm: delay TLB flush after clearing accessed bit
On 04/01/2014 06:53 AM, Ingo Molnar wrote:
>
> The speedup looks good to me!
>
> I have one major concern (see the last item), plus a few minor nits:
I will address all the minor issues. Let me explain the major one :)
>> @@ -196,6 +201,13 @@ static inline void reset_lazy_tlbstate(void)
>> this_cpu_write(cpu_tlbstate.active_mm, &init_mm);
>> }
>>
>> +static inline void tlb_set_force_flush(int cpu)
>> +{
>> + struct tlb_state *percputlb= &per_cpu(cpu_tlbstate, cpu);
>
> s/b= /b = /
>
>> + if (percputlb->force_flush == false)
>> + percputlb->force_flush = true;
>> +}
>> +
>> #endif /* SMP */
This code tests the flag before setting it, so even under heavy pageout
scanning activity each cache line is grabbed exclusively only once.
>> @@ -399,11 +400,13 @@ int pmdp_test_and_clear_young(struct vm_area_struct *vma,
>> int ptep_clear_flush_young(struct vm_area_struct *vma,
>> unsigned long address, pte_t *ptep)
>> {
>> - int young;
>> + int young, cpu;
>>
>> young = ptep_test_and_clear_young(vma, address, ptep);
>> - if (young)
>> - flush_tlb_page(vma, address);
>> + if (young) {
>> + for_each_cpu(cpu, vma->vm_mm->cpu_vm_mask_var)
>> + tlb_set_force_flush(cpu);
>
> Hm, just to play the devil's advocate - what happens when we have a va
> that is used on a few dozen, a few hundred or a few thousand CPUs?
> Will the savings be dwarved by the O(nr_cpus_used) loop overhead?
>
> Especially as this is touching cachelines on other CPUs and likely
> creating the worst kind of cachemisses. That can really kill
> performance.
flush_tlb_page() does the same O(nr_cpus_used) loop, but it sends an
IPI to each CPU every time, instead of dirtying a cache line once per
pageout run (or until the next context switch).
Does that address your concern?
--
All rights reversed