Message-ID: <s5h60w48po7.wl-tiwai@suse.de>
Date:	Wed, 30 Mar 2016 08:07:04 +0200
From:	Takashi Iwai <tiwai@...e.de>
To:	Andy Lutomirski <luto@...capital.net>
Cc:	Luis Rodriguez <mcgrof@...e.com>,
	Konstantin Ozerkov <kozerkov@...allels.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	ALSA development <alsa-devel@...a-project.org>
Subject: Re: Getting rid of inside_vm in intel8x0

On Tue, 29 Mar 2016 23:37:32 +0200,
Andy Lutomirski wrote:
> 
> Would it be possible to revert:
> 
> commit 228cf79376f13b98f2e1ac10586311312757675c
> Author: Konstantin Ozerkov <kozerkov@...allels.com>
> Date:   Wed Oct 26 19:11:01 2011 +0400
> 
>     ALSA: intel8x0: Improve performance in virtual environment
> 
> Presumably one or more of the following is true:
> 
> a) The inside_vm == true case is just an optimization and should apply
> unconditionally.
> 
> b) The inside_vm == true case is incorrect and should be fixed or disabled.
> 
> c) The inside_vm == true case is a special case that makes sense then
> IO is very very slow but doesn't make sense when IO is fast.  If so,
> why not literally measure the time that the IO takes and switch over
> to the "inside VM" path when IO is slow?

The more important condition is rather that the register updates of CIV
and PICB are atomic.  This is mostly satisfied only on VMs, and it
can't be measured easily, unlike the IO read speed.
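
(For concreteness, the measurement proposed in (c) could be sketched
roughly like this; io_read_is_slow(), picb_port and
SLOW_IO_THRESHOLD_NS are invented names for illustration, not actual
intel8x0 driver code.)

#include <linux/io.h>
#include <linux/ktime.h>
#include <linux/types.h>

/* Arbitrary example threshold; a real patch would need tuning. */
#define SLOW_IO_THRESHOLD_NS	10000

/* Time a single 16-bit port read and report whether it was "slow". */
static bool io_read_is_slow(unsigned long picb_port)
{
	ktime_t start = ktime_get();
	u16 picb = inw(picb_port);
	s64 elapsed = ktime_to_ns(ktime_sub(ktime_get(), start));

	(void)picb;
	return elapsed > SLOW_IO_THRESHOLD_NS;
}

(As noted above, though, the IO latency is measurable in this way,
while the CIV/PICB atomicity property is not.)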

> There are a pile of nonsensical "are we in a VM" checks of various
> sorts scattered throughout the kernel, they're all a mess to maintain
> (there are lots of kinds of VMs in the world, and Linux may not even
> know it's a guest), and, in most cases, it appears that the correct
> solution is to delete the checks.  I just removed a nasty one in the
> x86_32 entry asm, and this one is written in C so it should be a piece
> of cake :)

This cake looks sweet, but a worm is hidden behind the cream.
The loop in the code itself is already a kludge for the buggy hardware,
where the inconsistent read happens only rarely (at the boundary, and
in a racy way).  It would be nice if we could have a more reliable way
to detect the hardware bugginess, but it's difficult, unsurprisingly.
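
(The kludge in question is, roughly, a re-read-until-consistent loop
along the following lines.  This is a simplified sketch; struct chip,
read_civ() and read_picb() are placeholders, not the real driver
accessors.)

/* Placeholder type and accessors, for illustration only. */
struct chip;
unsigned int read_civ(struct chip *chip);   /* current index value */
unsigned int read_picb(struct chip *chip);  /* position in current buffer */

/*
 * Read CIV and PICB as a pair, retrying when the two reads race with a
 * hardware update: accept the values only if both registers read back
 * unchanged, and give up after a bounded number of attempts.
 */
static void read_civ_picb(struct chip *chip,
			  unsigned int *civ, unsigned int *picb)
{
	int timeout = 100;

	do {
		*civ  = read_civ(chip);
		*picb = read_picb(chip);
		if (*civ == read_civ(chip) && *picb == read_picb(chip))
			return;
	} while (--timeout);
	/* fall back to the last (possibly inconsistent) pair */
}

(On hardware where the CIV/PICB updates are atomic, as on many VMs,
the retry condition never triggers and the loop only costs one extra
pair of register reads per call.)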


thanks,

Takashi
