Message-ID: <20090622133222.3a689f22@lxorguk.ukuu.org.uk>
Date: Mon, 22 Jun 2009 13:32:22 +0100
From: Alan Cox <alan@...rguk.ukuu.org.uk>
To: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Cc: greg@...ah.com, torvalds@...ux-foundation.org,
Kevin.Huang@....com.tw
Subject: Re: Staging: add pata_rdc driver
> +static struct pci_bits ATA_Decode_Enable_Bits[] = { // see ATA Host Adapters Standards.
> + { 0x41U, 1U, 0x80UL, 0x80UL }, /* port (Channel) 0 */
> + { 0x43U, 1U, 0x80UL, 0x80UL }, /* port (Channel) 1 */
> +};
> +
Decode bits 0x80 in 0x41/0x43 - same as ata_piix
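(For readers unfamiliar with the pattern: this is the standard ata_piix-style
decode-enable probe, where one config byte per channel is read and bit 0x80
tells you whether the BIOS enabled that channel. A minimal user-space sketch
of the shape of the check, with `cfg_space` standing in for PCI config space
and `channel_enabled` a hypothetical name, not the driver's actual function:)

```c
#include <stdint.h>

/* Same table layout the quoted driver copies from ata_piix:
 * read `width` bytes at `reg`, mask, and compare against `val`. */
struct pci_bits {
	unsigned int reg;    /* config space offset */
	unsigned int width;  /* bytes to read */
	unsigned long mask;
	unsigned long val;   /* expected value after masking */
};

static const struct pci_bits rdc_enable_bits[] = {
	{ 0x41, 1, 0x80, 0x80 }, /* channel 0 */
	{ 0x43, 1, 0x80, 0x80 }, /* channel 1 */
};

/* Hypothetical helper: returns 1 if decode is enabled for the
 * channel in the given (simulated) 256-byte config space. */
static int channel_enabled(const uint8_t *cfg_space, int channel)
{
	const struct pci_bits *b = &rdc_enable_bits[channel];

	return (cfg_space[b->reg] & b->mask) == b->val;
}
```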
> + /* no hotplugging support (FIXME) */ // why???
copied from the piix driver
> + Mask = ATAConfiguration_IDEIOConfiguration_PrimaryDeviceCable80Report;
Cable bits at 0x54: same format as ATA_PIIX
and this continues throughout the driver
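(Again for context: the cable check being duplicated is the usual
PIIX/ICH-style one, where a byte at config offset 0x54 carries per-device
"80-wire cable present" report bits. The exact bit positions in this sketch
are illustrative assumptions, not taken from the quoted driver; the point is
only the one-byte-read-and-mask shape of the test:)

```c
#include <stdint.h>

#define IOCFG 0x54  /* cable-report config byte, as in ata_piix */

enum cable { CBL_PATA40, CBL_PATA80 };

/* Hypothetical helper: `mask` selects the 80-wire report bits for
 * one channel; any of them set means an 80-conductor cable. */
static enum cable cable_type(const uint8_t *cfg_space, uint8_t mask)
{
	return (cfg_space[IOCFG] & mask) ? CBL_PATA80 : CBL_PATA40;
}
```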
So it seems the following occurred:
- take ata_piix
- remove all the innards of it
- replace them with identically functional but convoluted vendor code for
the same actual hardware interface
- submit as new driver
Would someone please tell me wtf is going on here, and why, if the hardware
is so close to ata_piix, it doesn't either use the piix driver or, if it's
merely very similar, just use bits of it as-is (as efar, mpiix and oldpiix do)?
What, if anything, actually differs between Intel PIIX and the new RDC
controllers? Why can't we just cp ata_piix.c ata_rdc.c and remove all the
Intel-specific casing?
Alan
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/