Message-ID: <CA+55aFykNULx-b6M6FmUYdK2cn-OJKKfjaPwLN5xZGK+bioGaA@mail.gmail.com>
Date: Thu, 29 Jun 2017 21:03:42 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Marcelo Tosatti <mtosatti@...hat.com>
Cc: h@....cnet, Thomas Gleixner <tglx@...utronix.de>,
Greg KH <gregkh@...uxfoundation.org>,
"Luis R. Rodriguez" <mcgrof@...nel.org>,
Martin Fuzzey <mfuzzey@...keon.com>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Dmitry Torokhov <dmitry.torokhov@...il.com>,
Daniel Wagner <wagi@...om.org>,
David Woodhouse <dwmw2@...radead.org>,
jewalt@...innovations.com, rafal@...ecki.pl,
Arend Van Spriel <arend.vanspriel@...adcom.com>,
"Rafael J. Wysocki" <rjw@...ysocki.net>,
"Li, Yi" <yi1.li@...ux.intel.com>, atull@...nel.org,
Moritz Fischer <moritz.fischer@...us.com>,
Petr Mladek <pmladek@...e.com>,
Johannes Berg <johannes.berg@...el.com>,
Emmanuel Grumbach <emmanuel.grumbach@...el.com>,
"Coelho, Luciano" <luciano.coelho@...el.com>,
Kalle Valo <kvalo@...eaurora.org>,
Andrew Lutomirski <luto@...nel.org>,
Kees Cook <keescook@...omium.org>,
"AKASHI, Takahiro" <takahiro.akashi@...aro.org>,
David Howells <dhowells@...hat.com>,
Peter Jones <pjones@...hat.com>,
Hans de Goede <hdegoede@...hat.com>,
Alan Cox <alan@...ux.intel.com>,
"Theodore Ts'o" <tytso@....edu>,
Michael Kerrisk <mtk.manpages@...il.com>,
Paul Gortmaker <paul.gortmaker@...driver.com>,
Matthew Wilcox <mawilcox@...rosoft.com>,
Linux API <linux-api@...r.kernel.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"stable # 4 . 6" <stable@...r.kernel.org>
Subject: Re: [PATCH 2/4] swait: add the missing killable swaits

On Thu, Jun 29, 2017 at 12:15 PM, Marcelo Tosatti <mtosatti@...hat.com> wrote:
> On Thu, Jun 29, 2017 at 09:13:29AM -0700, Linus Torvalds wrote:
>>
>> swait uses special locking and has odd semantics that are not at all
>> the same as the default wait queue ones. It should not be used without
>> very strong reasons (and honestly, the only strong enough reason seems
>> to be "RT").
>
> Performance shortcut:
>
> https://lkml.org/lkml/2016/2/25/301

Yes, I know why kvm uses it, I just don't think it's necessarily the
right thing.

That kvm commit is actually a great example: it uses swake_up() from
an interrupt, and that's in fact the *reason* it uses swake_up().

But that also fundamentally means that it cannot use swake_up_all(),
so it basically *relies* on there only ever being one single entry
that needs to be woken up.

And as far as I can tell, it really is because the queue only ever has
one entry (ie it's per-vcpu, and when the vcpu is blocked, it's
blocked - so no other user will be waiting there).

So it isn't that you might queue multiple entries and then just wake
them up one at a time. There really is just one entry at a time,
right?

And that means that swait is actually completely the wrong thing to
do. It's more expensive and more complex than just saving the single
process pointer away and just doing "wake_up_process()".
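
Just to make the cost comparison concrete, this is roughly the per-vcpu
state each approach needs (the swait_queue_head layout is what's in
<linux/swait.h>; the single-pointer version is only an illustration,
the field name is made up):

        /* swait: a raw spinlock plus a list head, and prepare_to_swait()
         * and swake_up() both end up taking that lock */
        struct swait_queue_head {
                raw_spinlock_t          lock;
                struct list_head        task_list;
        };

        /* single-waiter alternative: one pointer, no queue, no queue lock */
        struct task_struct *blocked_task;       /* NULL when nobody is waiting */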

Now, it really is entirely possible that I'm missing something, but it
does look like that to me.

We've had wake_up_process() since pretty much day #1. THAT is the
fastest and simplest direct wake-up there is, not some "simple
wait-queue".

Now, admittedly I don't know the code and really may be entirely off,
but looking at the commit (no need to go to the lkml archives - it's
commit 8577370fb0cb ("KVM: Use simple waitqueue for vcpu->wq") in
mainline), I really think the swait() use is simply not correct if
there can be multiple waiters, exactly because swake_up() only wakes
up a single entry.

So either there is only a single entry, or *all* the code like

                dvcpu->arch.wait = 0;

        -       if (waitqueue_active(&dvcpu->wq))
        -               wake_up_interruptible(&dvcpu->wq);
        +       if (swait_active(&dvcpu->wq))
        +               swake_up(&dvcpu->wq);

is simply wrong. If there are multiple blockers, and you just cleared
"arch.wait", I think they should *all* be woken up. And that's not
what swake_up() does.

So I think that kvm_vcpu_block() could easily have instead done

        vcpu->process = current;

as the "prepare_to_wait()" part, and "finish_wait()" would be to just
clear vcpu->process. No wait-queue, just a single pointer to the
single blocking thread.
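
IOW, something like this (purely a sketch, using that made-up "process"
field, with kvm_vcpu_check_block() being the check kvm_vcpu_block()
already does inside its wait loop):

        /* the "prepare_to_wait()" part: publish who is blocking */
        WRITE_ONCE(vcpu->process, current);

        for (;;) {
                set_current_state(TASK_INTERRUPTIBLE);
                if (kvm_vcpu_check_block(vcpu) < 0)
                        break;
                schedule();
        }
        __set_current_state(TASK_RUNNING);

        /* the "finish_wait()" part: nobody is blocking any more */
        WRITE_ONCE(vcpu->process, NULL);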

(Of course, you still need serialization, so that
"wake_up_process(vcpu->process)" doesn't end up using a stale value,
but since processes are already freed with RCU because of other things
like that, the serialization is very low-cost: you only need to be
RCU-read safe when waking up).
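
So the waking side would be roughly (again, just a sketch):

        struct task_struct *task;

        rcu_read_lock();
        task = READ_ONCE(vcpu->process);
        if (task)
                /* task_struct is RCU-freed, so this can't be use-after-free */
                wake_up_process(task);
        rcu_read_unlock();
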
See what I'm saying?
Note that "wake_up_process()" really is fairly widely used. It's
widely used because it's fairly obvious, and because that really *is*
the lowest-possible cost: a single pointer to the sleeping thread, and
you can often do almost no locking at all.
And unlike swake_up(), it's obvious that you only wake up a single thread.
Linus