Message-ID: <alpine.DEB.2.02.1311261117100.30673@ionos.tec.linutronix.de>
Date:	Tue, 26 Nov 2013 11:51:33 +0100 (CET)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Eliezer Tamir <eliezer.tamir@...ux.intel.com>
cc:	Peter Zijlstra <peterz@...radead.org>,
	Arjan van de Ven <arjan@...ux.intel.com>, lenb@...nel.org,
	rjw@...ysocki.net, Eliezer Tamir <eliezer@...ir.org.il>,
	David Miller <davem@...emloft.net>, rui.zhang@...el.com,
	jacob.jun.pan@...ux.intel.com,
	Mike Galbraith <bitbucket@...ine.de>,
	Ingo Molnar <mingo@...nel.org>, hpa@...or.com,
	linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org
Subject: Re: [PATCH 6/7] sched: Clean up preempt_enable_no_resched() abuse

On Tue, 26 Nov 2013, Eliezer Tamir wrote:

> On 22/11/2013 13:30, Peter Zijlstra wrote:
> > On Fri, Nov 22, 2013 at 08:56:00AM +0200, Eliezer Tamir wrote:
> >> On 21/11/2013 15:39, Peter Zijlstra wrote:
> >>> On Thu, Nov 21, 2013 at 03:26:17PM +0200, Eliezer Tamir wrote:
> > 
> > Please use local_clock(), yes its slightly more expensive, but I doubt
> > you can actually measure the effects on sane hardware.
> 
> If we limit the discussion to sane hardware, I should mention that on
> current Intel CPUs TSC is guaranteed to be monotonic for anything up to
> 8 sockets. Even on slightly older HW, TSC skew is very small and should
> not be an issue for this use case.

> Modern sane HW does not have this issue.

That's wrong to begin with. There is no such thing which qualifies as
"sane hardware". Especially not if we are talking about timers.

> The people that do busy polling typically pin tasks to cores anyway.

This is completely irrelevant. If things fall apart when the task is not
pinned, then you have lost regardless.

> You need cap_net_admin to use this setting.

And how is that relevant? cap_net_admin does not change the fact that
you violate your constraints.

> There is no real damage if the issue happens.

You're violating the constraints, which is not fatal, but not desired
either.

> This is fast-low-latency-path so we are very sensitive to adding even
> a small cost.
> Linus really didn't like adding to the cost of poll/select when busy
> polling is not being used.
 
And that justifies exposing those who do not have access to "sane"
hardware and/or did not pin their tasks to constraint violation?

> Having said that, since we need to fix the timeout issue you pointed
> out, we will test the use of local_clock() and see if it matters or
> not.

If the hardware provides an indicator that the TSC is sane to use,
then sched_clock_stable is 1, so local_clock() will not do the slow
update dance at all. So for "sane" hardware the overhead is minimal
and on crappy hardware the correctness is still ensured with more
overhead.

If you are really concerned about the minimal overhead in the
sched_clock_stable == 1 case, then you better fix that (it's doable
with some brain) instead of hacking broken crap, based on even more
broken assumptions, into the networking code.

It's not the kernel's fault that we need to deal with
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK at all. And we have to deal with it
no matter what, so we cannot make it go away with magic assumptions.

Complain to those who forced us to do this. Hint: It's only ONE CPU
vendor who thought that providing useless timestamps is a brilliant
idea.

Thanks,

	tglx
