Date:	Wed, 09 Jan 2008 16:27:29 +0100
From:	Rene Herman <rene.herman@...access.nl>
To:	Christer Weinigel <christer@...nigel.se>
CC:	Zachary Amsden <zach@...are.com>,
	"David P. Reed" <dpreed@...d.com>, Avi Kivity <avi@...amnet.com>,
	Ondrej Zary <linux@...nbow-software.org>,
	"H. Peter Anvin" <hpa@...or.com>, Bodo Eggert <7eggert@....de>,
	Ingo Molnar <mingo@...e.hu>,
	Alan Cox <alan@...rguk.ukuu.org.uk>,
	Paul Rolland <rol@...917.net>, Pavel Machek <pavel@....cz>,
	Thomas Gleixner <tglx@...utronix.de>,
	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
	rol <rol@...be.net>
Subject: Re: Re: [PATCH] x86: provide a DMI based port 0x80 I/O delay override.

On 09-01-08 06:30, Christer Weinigel wrote:

> On Tue, 08 Jan 2008 18:52:42 -0800
> Zachary Amsden <zach@...are.com> wrote:

>> What is the outcome of this thread?  Are we going to use timing-based
>> port delays, or can we finally drop these things entirely on 64-bit
>> architectures?
>>
>> I have a doubly vested interest in this, both as the owner of an
>> affected HP dv9210us laptop and as a maintainer of paravirt code - and
>> would like 64-bit Linux code to stop using I/O to port 0x80 in both
>> cases (as I suspect would every other person involved with
>> virtualization).
>>
>> I've tried to follow this thread, but with all the jabs, 1-ups, and
>> obscure legacy hardware pageantry going on, it isn't clear what we're
>> really doing.
> 
> I believe Alan Cox is doing a review of some drivers, to see if they
> actually need the I/O port delay.  A lot of drivers probably use outb_p
> just because it was copy-pasted from some other driver, and the delay
> can simply be removed.  Alan's review has also brought to light a lack
> of locking in some drivers, so I think Alan has been adding proper
> locking to some of the watchdog drivers.

Yes, Alan should be considered to be in the driver seat here (and current 
x86.git changes should be tossed).

> Most old ISA-only device drivers can keep using OUT 80h.  They are not
> used on modern machines and it's better to keep them unchanged to avoid
> unnecessary incompatibilities.
> 
> As far as I know, the 8253 PIT timer code needs outb_p on some older
> platforms, and this is one of the most troublesome cases since the same
> PIT controller (or a register-compatible one) has been used since the
> original IBM PC, and it is frequently executed code.  Ingo Molnar has
> done an alternate implementation of the PIT clock source which uses
> udelay instead of OUT 80h to delay accesses to the ports. The kernel
> could make a choice of which variant to use based on the DMI year, if
> compiling for x86_64, or something similar.  Maybe have a command line
> option too.

Just udelay() should be fine after "fixing" udelay() to be somewhat usefully 
defined pre-calibration.
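
As a sketch of what I mean (the function name and the 10 GHz ceiling are
made up here, this is not a real patch):

/*
 * Give udelay() a defined meaning before calibrate_delay() has run by
 * presetting loops_per_jiffy as if the CPU were the fastest one we ever
 * expect to see.  The delay loop runs at most ~1 iteration per cycle,
 * so on any real CPU an early udelay(1) then over-delays, which is
 * safe; it can never under-delay.
 */
#define MAX_EXPECTED_CPU_HZ	10000000000ULL	/* assumption: <= 10 GHz */

void __init preset_loops_per_jiffy(void)
{
	if (!loops_per_jiffy)		/* calibrate_delay() not run yet */
		loops_per_jiffy = MAX_EXPECTED_CPU_HZ / HZ;
}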

> The keyboard controller on some platforms needs the delay, and the same
> driver is used on both ancient and modern systems.  I think it can be
> changed to udelay since it's not particularly time-critical code.
> 
> The 8259 interrupt controller on some platforms needs the delay; I think
> it can be changed to udelay since it's only some setup code that uses
> outb_p.  I guess there are time-critical accesses to the interrupt
> controller from assembly code somewhere to acknowledge interrupts, and
> that code needs a review.

I'd not expect those to be very time-critical. The current outb_p use gives a 
delay somewhere between .5 and 2 microseconds as per the earlier survey, 
meaning a udelay(1) or 2 would be enough -- again, at the point that udelay() 
is sensible.
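
Concretely, for the non-critical setup accesses something like this should do 
(register and mask names as in the kernel's i8259 code; the udelay(2) value 
is just the pessimistic end of that survey):

	/* before: the delay is an implicit dummy write to port 0x80 */
	outb_p(cached_master_mask, PIC_MASTER_IMR);

	/* after: the same access, with the delay made explicit */
	outb(cached_master_mask, PIC_MASTER_IMR);
	udelay(2);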

New machines don't use the legacy PIC anymore anyway.

> The floppy controller code uses outb_p.  Even though there might be
> floppy controllers on modern systems, I'd rather leave the floppy code
> alone since it's supposed to be very fragile.  If you still use
> floppies you deserve what you get.

Floppies forever. In practice, leaving it alone isn't going to matter, but 
in that same practice changing it to udelay() probably doesn't either. The 
ones to leave alone are the ones that are clumsy/impossible to test and the 
ones such as in NIC drivers that were specifically tuned.

> Some specific drivers, such as drivers for 8390 or 8390-clone based
> network cards, are also a bit troublesome: they do need outb_p (and
> the delay for the original 8390 chip is specified in bus cycles), and
> there can be a big performance loss if pessimistic udelays are used for
> the delay.  There are still a bunch of PCMCIA cards based on that chip,
> which means that those cards can be used with modern machines.  There
> are also PCI and memory-mapped variants of the 8390, some of them new
> designs which are only register compatible, some other designs using
> a real 8390 with an FPGA as glue logic.  I think Alan
> suggested compiling two versions of that driver, one with OUT 80h, and
> one with udelay.  Old machines can choose the old driver, and new
> machines can use the new one.  Other drivers can probably do the same
> thing, or if not time critical, always use a pessimistic udelay.

Not sure what the final suggestion for those was either...
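
As a sketch of the two-build idea as I read it (the config symbol and the
helper names are invented):

/*
 * Compile the 8390 core twice, differing only in the delay primitive.
 * Old ISA machines select the port 0x80 build; everything else gets a
 * pessimistic but port-free udelay.
 */
#ifdef CONFIG_NE2K_ISA_DELAY
#define ei_delay()	slow_down_io()	/* legacy OUT 80h timing */
#else
#define ei_delay()	udelay(2)	/* pessimistic but port-free */
#endif

static inline void ei_outb_p(unsigned char value, unsigned long port)
{
	outb(value, port);
	ei_delay();
}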

> As for the implementation, I like the suggestion to split outb_p into
> two calls, one to outb and one to isa_slow_down_io.  It makes it very
> obvious that it is really two function calls, and that it needs
> locking.  For those uses that are not ISA port accesses,
> isa_slow_down_io should be changed to an appropriate udelay instead.

... or simply deleted. The current outb_p is "outb; slow_down_io" as a macro, 
so with this split you also get no binary changes, making it rather easy to 
prove that timing does not change in the cases where you keep the delay.

(they're not so much function calls though -- they're inlined).
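
For reference, the expansion is roughly this (from memory, not verbatim from
the headers):

static inline void slow_down_io(void)
{
	asm volatile ("outb %%al, $0x80" : : );	/* the dummy write itself */
}

static inline void outb_p(unsigned char value, unsigned short port)
{
	outb(value, port);
	slow_down_io();
}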

> The goal is anyway that a modern machine should not do OUT 80h, and old
> machines keep doing it since it has been working well for some 15-odd
> years, both in DOS device drivers and on Linux.  Using an alternate
> port may be a workaround, but it's probably not a good idea since
> alternate ports have received less testing and there's bound to be some
> platform out there that has problems with any alternate port we
> might choose.

Based on specific DMI strings this can be limited to tested machines (as in 
current x86.git), but yes, that's not particularly pleasing.
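
Something along these lines (the match strings and the io_delay_type flag
are illustrative here, not the actual x86.git table):

#include <linux/dmi.h>

static int __init dmi_io_delay_udelay(const struct dmi_system_id *id)
{
	printk(KERN_INFO "using udelay io_delay for %s\n", id->ident);
	io_delay_type = IO_DELAY_UDELAY;	/* assumed global */
	return 0;
}

static struct dmi_system_id __initdata io_delay_dmi_table[] = {
	{
		.callback = dmi_io_delay_udelay,
		.ident = "HP Pavilion dv9000",
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv9000"),
		},
	},
	{ }
};

/* called from early setup: dmi_check_system(io_delay_dmi_table); */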

> Allowing an alternate port will also add code bloat (OUT 80h, AL becomes
> MOV DX, alternate_port; OUT DX, AL) for a dubious gain.

... destroying DX while it's at it, meaning this might (will) also need DX 
reloads. Not an argument versus a function call, but an argument versus the 
current and proposed manual "outb; slow_down_io" split.
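
To make the clobber point concrete (the helper names are mine; byte counts
from the x86 manuals):

static inline void io_delay_port80(void)
{
	/* "out %al, $0x80": e6 80, two bytes, clobbers no register */
	asm volatile ("outb %%al, $0x80" : : );
}

static inline void io_delay_alt_port(unsigned short port)
{
	/*
	 * The compiler first loads 'port' into %dx to satisfy the "d"
	 * constraint, then the one-byte "out %al, %dx" follows -- more
	 * bytes overall, and %dx has to be reloaded if it was live.
	 */
	asm volatile ("outb %%al, %w0" : : "d" (port));
}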

> Did I miss anything?

Not so much, it seems. Only that the one real reason for the outb_p split is 
an API one: Molnar wants the API cleaned up to make sure no new users of 
outb_p creep in, and being explicit about what it is that you're doing is 
going to take care of that.

If simple outb_p() deprecation is considered enough instead, there's no need 
to touch anything in drivers/; only uses outside drivers/ change to 
"outb(); udelay()".

I'd let Alan decide here.

Thanks for the roundup.

Rene.
