Message-ID: <20100616113253.24017.qmail@science.horizon.com>
Date:	16 Jun 2010 07:32:53 -0400
From:	"George Spelvin" <linux@...izon.com>
To:	avi@...hat.com, mingo@...e.hu
Cc:	linux@...izon.com, linux-kernel@...r.kernel.org, npiggin@...e.de
Subject: Re: [PATCH 0/4] Really lazy fpu

> But on busy servers where most wakeups are IRQ based the chance of being on 
> the right CPU is 1/nr_cpus - i.e. decreasing with every new generation of 
> CPUs.

That doesn't seem right.  If the server is busy with FPU-using tasks, then
the FPU state has already been swapped out, and no IPI is necessary.

The worst-case seems to be a lot of non-FPU CPU hogs, and a few FPU-using tasks
that get bounced around the CPUs like pinballs.

It is an explicit scheduler goal to keep tasks on the same CPU across
schedules, so they get to re-use their cache state.  The IPI only happens
when that goal is not met, *and* the FPU state has not been forced out
by another FPU-using task.

Not completely trivial to arrange.


(A halfway version of this optimization, which would avoid the need for
an IPI, would be to *save* the FPU state but mark it "clean", so the
re-load can be skipped if we're lucky.  If the code supported this as
well as the IPI alternative, you could make a heuristic guess at
switch-out time whether to save immediately, or hope the odds of needing
the IPI are less than the fxsave/IPI cost ratio.)
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/