[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <CAHk-=wjocTzo+8OMwyKPX0MCVV=N4wtU8ifwSZ_qJJnDBgKJ8Q@mail.gmail.com>
Date: Thu, 7 Nov 2019 13:44:47 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Brian Gerst <brgerst@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
LKML <linux-kernel@...r.kernel.org>,
"the arch/x86 maintainers" <x86@...nel.org>,
Stephen Hemminger <stephen@...workplumber.org>,
Willy Tarreau <w@....eu>, Juergen Gross <jgross@...e.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [patch 5/9] x86/ioport: Reduce ioperm impact for sane usage further
On Thu, Nov 7, 2019 at 1:00 PM Brian Gerst <brgerst@...il.com> wrote:
>
> There wouldn't have to be a flush on every task switch.
No. But we'd have to flush on any switch that currently does that memcpy.
And my point is that a tlb flush (even the single-page case) is likely
more expensive than the memcpy.
> Going a step further, we could track which task is mapped to the
> current cpu like proposed above, and only flush when a different task
> needs the IO bitmap, or when the bitmap is being freed on task exit.
Well, that's exactly my "track the last task" optimization for copying
the thing.
IOW, it's the same optimization as avoiding the memcpy.
Which I think is likely very effective, but also makes it fairly
pointless to then try to be clever..
So the basic issue remains that playing VM games has almost
universally been slower and more complex than simply not playing VM
games. TLB flushes - even invlpg - tends to be pretty slow.
Of course, we probably end up invalidating the TLB's anyway, so maybe
in this case we don't care. The ioperm bitmap is _technically_
per-thread, though, so it should be flushed even if the VM isn't
flushed...
Linus
Powered by blists - more mailing lists