Message-ID: <20150507150845.GA20608@gmail.com>
Date:	Thu, 7 May 2015 17:08:46 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Andy Lutomirski <luto@...capital.net>
Cc:	fweisbec@...hat.com, Paolo Bonzini <pbonzini@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>, X86 ML <x86@...nel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>,
	Heiko Carstens <heiko.carstens@...ibm.com>,
	Mike Galbraith <umgwanakikbuti@...il.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Rik van Riel <riel@...hat.com>, williams@...hat.com
Subject: Re: [PATCH 3/3] context_tracking,x86: remove extraneous irq disable
 & enable from context tracking on syscall entry


* Andy Lutomirski <luto@...capital.net> wrote:

> I think one or both of us is missing something or we're just talking 
> about different things.

That's very much possible!

I think part of the problem is that I called the 'remote CPU' the RT 
CPU, while you seem to be calling it the CPU that does the 
synchronize_rcu().

So let's start again, calling the CPU that does the synchronize_rcu() 
the 'remote CPU', and the one running the RT workload the 'RT CPU':

> If the nohz/RT cpu is about to enter user mode and stay there for a 
> long time, it does:
> 
>   this_cpu_inc(rcu_qs_ctr);
> 
> or whatever.  Maybe we add:
> 
>   this_cpu_set(rcu_state) = IN_USER;
> 
> or however it's spelled.
> 
> The remote CPU wants to check our state.  If this happens just 
> before the IN_USER write or rcu_qs_ctr increment becomes visible, 
> then it'll think we're in kernel mode.  Now it either has to poll 
> (which is fine) or try to get us to tell the RCU core when we become 
> quiescent by setting TIF_RCU_THINGY.

So do you mean:

   this_cpu_set(rcu_state) = IN_KERNEL;
   ...
   this_cpu_inc(rcu_qs_ctr);
   this_cpu_set(rcu_state) = IN_USER;

?

So in your proposal we'd have an INC and two MOVs. I think we can make 
it just two simple stores into a byte flag, one on entry and one on 
exit:

   this_cpu_set(rcu_state) = IN_KERNEL;
   ...
   this_cpu_set(rcu_state) = IN_USER;

plus the rare but magic TIF_RCU_THINGY that tells a waiting 
synchronize_rcu() about the next quiescent state.
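In plain C, the two-store scheme could be sketched roughly like this 
(a userspace mock-up: IN_KERNEL/IN_USER and rcu_state are the names 
from above, while the global atomic and helper names stand in for a 
real per-CPU variable and this_cpu_write() and are purely 
illustrative):

```c
#include <stdatomic.h>

/* Names from the discussion above; everything else is a userspace
 * stand-in, not real kernel code. */
enum rcu_cpu_state { IN_KERNEL, IN_USER };

/* Per-CPU byte flag; a real kernel would use this_cpu_write() on a
 * per-CPU variable rather than a global atomic. */
static _Atomic int rcu_state = IN_USER;

/* Kernel entry on the RT CPU: one plain store. */
static void rt_cpu_enter_kernel(void)
{
	atomic_store(&rcu_state, IN_KERNEL);
}

/* Return to user mode: the second store doubles as the quiescent-state
 * report that a polling synchronize_rcu() can observe. */
static void rt_cpu_exit_to_user(void)
{
	atomic_store(&rcu_state, IN_USER);
}
```

The point of the scheme is that the RT CPU's fast path is just these 
two stores; all the expensive coordination lives on the remote side.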

> The problem is that I don't see how TIF_RCU_THINGY can work 
> reliably. If the remote CPU sets it, it'll be too late and we'll 
> still enter user mode without seeing it.  If it's just an 
> optimization, though, then it should be fine.

Well, after setting it, the remote CPU has to re-check whether the RT 
CPU has entered user mode, before it goes to wait.

If the RT CPU has entered user mode then the remote CPU has observed a 
quiescent state and is done. If the RT CPU is still in kernel mode 
then the remote CPU waits until the TIF_RCU_THINGY work signals its 
completion.
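The remote-side check-flag-recheck sequence could look something like 
the sketch below (again a userspace mock-up: remote_check_qs(), the 
qs_result values and the tif_rcu_thingy variable are hypothetical 
names, not existing kernel APIs):

```c
#include <stdatomic.h>

enum rcu_cpu_state { IN_KERNEL, IN_USER };

/* Simulated per-CPU state of the RT CPU and its TIF flag. */
static _Atomic int rt_rcu_state = IN_KERNEL;
static _Atomic int tif_rcu_thingy = 0;

enum qs_result { QS_SEEN_FIRST_CHECK, QS_SEEN_RECHECK, QS_MUST_WAIT };

/* Remote (synchronize_rcu()) side: check, arm the flag, re-check. */
static enum qs_result remote_check_qs(void)
{
	if (atomic_load(&rt_rcu_state) == IN_USER)
		return QS_SEEN_FIRST_CHECK;	/* already quiescent */

	atomic_store(&tif_rcu_thingy, 1);	/* ask for a QS report */

	/* The mandatory re-check: the RT CPU may have entered user
	 * mode before our flag write became visible; without this we
	 * could wait for a completion that nobody will ever signal. */
	if (atomic_load(&rt_rcu_state) == IN_USER)
		return QS_SEEN_RECHECK;

	return QS_MUST_WAIT;	/* block until the TIF work completes us */
}
```

The re-check after the flag write is what closes the race Andy is 
worried about: either the remote CPU sees IN_USER, or the RT CPU is 
guaranteed to see the flag before its next return to user mode.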

> I still feel like this is all overengineered.  Shouldn't rcu_qs_ctr 
> be enough for all of the above?

How would just INC rcu_qs_ctr (without the IN_KERNEL/IN_USER flag) 
work?
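For reference, a counter-only primitive would presumably amount to the 
following (userspace sketch, with the ambiguity that prompts the 
question spelled out in the comment):

```c
#include <stdatomic.h>
#include <stdbool.h>

static _Atomic unsigned long rcu_qs_ctr = 0;

/* RT CPU: bump the counter at each user/kernel transition. */
static void rt_cpu_note_transition(void)
{
	atomic_fetch_add(&rcu_qs_ctr, 1);
}

/* Remote side: a changed counter proves a quiescent state has passed
 * since the snapshot was taken. An unchanged counter is ambiguous:
 * the RT CPU may have been in user mode the whole time (quiescent)
 * or stuck in the kernel (not quiescent) -- which is exactly what the
 * IN_KERNEL/IN_USER byte would disambiguate. */
static bool qs_seen_since(unsigned long snap)
{
	return atomic_load(&rcu_qs_ctr) != snap;
}
```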

Thanks,

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
