Message-ID: <CALCETrWEGrVJj3Jcc3U38CYh01GKgGpLqW=eN_-7nMo4t=V5Mg@mail.gmail.com>
Date:   Wed, 21 Jun 2017 19:46:05 -0700
From:   Andy Lutomirski <luto@...nel.org>
To:     Borislav Petkov <bp@...en8.de>
Cc:     Andy Lutomirski <luto@...nel.org>, X86 ML <x86@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Mel Gorman <mgorman@...e.de>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        Nadav Amit <nadav.amit@...il.com>,
        Rik van Riel <riel@...hat.com>,
        Dave Hansen <dave.hansen@...el.com>,
        Arjan van de Ven <arjan@...ux.intel.com>,
        Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH v3 05/11] x86/mm: Track the TLB's tlb_gen and update the
 flushing algorithm

On Wed, Jun 21, 2017 at 11:44 AM, Borislav Petkov <bp@...en8.de> wrote:
> On Tue, Jun 20, 2017 at 10:22:11PM -0700, Andy Lutomirski wrote:
>> +     this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, next->context.ctx_id);
>> +     this_cpu_write(cpu_tlbstate.ctxs[0].tlb_gen,
>> +                    atomic64_read(&next->context.tlb_gen));
>
> Just let it stick out:
>
>         this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id,  next->context.ctx_id);
>         this_cpu_write(cpu_tlbstate.ctxs[0].tlb_gen, atomic64_read(&next->context.tlb_gen));
>
> Should be a bit better readable this way.

Done

>> +     if (local_tlb_gen == mm_tlb_gen) {
>
>         if (unlikely(...
>
> maybe?
>
> Sounds to me like the concurrent flushes case would be the
> uncommon one...

Agreed.

>> +
>> +     WARN_ON_ONCE(local_tlb_gen > mm_tlb_gen);
>> +     WARN_ON_ONCE(f->new_tlb_gen > mm_tlb_gen);
>> +
>> +     /*
>> +      * If we get to this point, we know that our TLB is out of date.
>> +      * This does not strictly imply that we need to flush (it's
>> +      * possible that f->new_tlb_gen <= local_tlb_gen), but we're
>> +      * going to need to flush in the very near future, so we might
>> +      * as well get it over with.
>> +      *
>> +      * The only question is whether to do a full or partial flush.
>> +      *
>> +      * A partial TLB flush is safe and worthwhile if two conditions are
>> +      * met:
>> +      *
>> +      * 1. We wouldn't be skipping a tlb_gen.  If the requester bumped
>> +      *    the mm's tlb_gen from p to p+1, a partial flush is only correct
>> +      *    if we would be bumping the local CPU's tlb_gen from p to p+1 as
>> +      *    well.
>> +      *
>> +      * 2. If there are no more flushes on their way.  Partial TLB
>> +      *    flushes are not all that much cheaper than full TLB
>> +      *    flushes, so it seems unlikely that it would be a
>> +      *    performance win to do a partial flush if that won't bring
>> +      *    our TLB fully up to date.
>> +      */
>> +     if (f->end != TLB_FLUSH_ALL &&
>> +         f->new_tlb_gen == local_tlb_gen + 1 &&
>> +         f->new_tlb_gen == mm_tlb_gen) {
>
> I'm certainly still missing something here:
>
> We have f->new_tlb_gen and mm_tlb_gen to control the flushing, i.e., we
> do once
>
>         bump_mm_tlb_gen(mm);
>
> and once
>
>         info.new_tlb_gen = bump_mm_tlb_gen(mm);
>
> and in both cases, the bumping is done on mm->context.tlb_gen.
>
> So why isn't that enough to do the flushing and we have to consult
> info.new_tlb_gen too?

The issue is a possible race.  Suppose we start at tlb_gen == 1 and
then two concurrent flushes happen.  The first flush is a full flush
that sets tlb_gen to 2.  The second is a partial flush that sets
tlb_gen to 3.  If the second flush gets propagated to a given CPU
first and that CPU did an actual partial flush (INVLPG) and set its
percpu tlb_gen to 3, then the first flush would do nothing and we'd
fail to flush all the pages we need to flush.

My solution was to say that we're only allowed to do INVLPG if we're
making exactly the same change to the local tlb_gen that the requester
made to context.tlb_gen.

I'll add a comment to this effect.

>
>> +             /* Partial flush */
>>               unsigned long addr;
>>               unsigned long nr_pages = (f->end - f->start) >> PAGE_SHIFT;
>
> <---- newline here.

Yup.
