Message-ID: <AANLkTi=fKQB1sYzh=rfPRMMmjXEOd9REvg+J05bfkwFv@mail.gmail.com>
Date:	Thu, 10 Feb 2011 09:24:11 -0800
From:	Venkatesh Pallipadi <venki@...gle.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Suresh Siddha <suresh.b.siddha@...el.com>,
	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
	Paul Turner <pjt@...gle.com>, Mike Galbraith <efault@....de>,
	Nick Piggin <npiggin@...il.com>
Subject: Re: Misc sd_idle related fixes

On Wed, Feb 9, 2011 at 1:29 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> On Tue, 2011-02-08 at 10:13 -0800, Venkatesh Pallipadi wrote:
>> Here are the 3 sd_idle related changes I tested with, for reference. Among
>> the three, the third patch is the one that helps us in reducing idle cycles
>> with one of our workloads and thus improves the latency response.
>
> Have you tried what happens if you simply rip all that SMT stuff out and
> simplify the code? Afaict much of the capacity stuff we have should have
> a similar effect and is less confusing, no?
>

Among the benchmarks I looked at (tbench and the internal workload that
showed a benefit with these fixes), removing sd_idle entirely and
sd_idle+fixes have a similar effect. So, I do not see any problem with
ripping out sd_idle altogether.

We may still need to change the first_idle_cpu logic a bit for SMT
though. Changing it can prevent 2-hop migrations in cases like a
{ [ (A B) (C D) ] [ (E F) (G H) ] } grouping, where B is busy,
E F G H are busy and A C D are idle.
As A happens to be the first idle CPU, it will be the one bringing in
the load from socket EFGH, and then C or D has to pull that load from A.
If C or D were nominated to pull the task from the other socket instead,
we could save one hop.
I do not see the capacity logic handling this case. But this is more of
a micro-optimization and would mostly matter for workloads like SPECjbb
at low utilization. I haven't seen it affect the workloads we care
about.
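
Something like the following is roughly what I have in mind. This is an
untested sketch, not against any particular tree: find_pull_cpu() is a
made-up name, and I am assuming idle_cpu() and an SMT-sibling mask
helper (topology_sibling_cpumask() here) behave the usual way.

/*
 * Untested sketch: when nominating the CPU that pulls load from a
 * remote socket, prefer an idle CPU whose SMT siblings are also idle
 * over the plain "first idle CPU in the group".
 */
static int find_pull_cpu(const struct cpumask *group_cpus)
{
	int cpu, sib, first_idle = -1;

	for_each_cpu(cpu, group_cpus) {
		bool core_idle = true;

		if (!idle_cpu(cpu))
			continue;
		if (first_idle < 0)
			first_idle = cpu;	/* current first_idle_cpu choice */

		/* Check whether every SMT sibling of this CPU is idle too. */
		for_each_cpu(sib, topology_sibling_cpumask(cpu)) {
			if (!idle_cpu(sib)) {
				core_idle = false;
				break;
			}
		}
		if (core_idle)
			return cpu;	/* pull straight to a fully idle core */
	}
	return first_idle;		/* fall back to current behaviour */
}

With that, A in the example above is skipped (its sibling B is busy)
and C or D gets nominated, so the pulled task reaches its final CPU in
one hop instead of two.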

Thanks,
Venki
