Message-ID: <20080904125219.GJ2772@parisc-linux.org>
Date: Thu, 4 Sep 2008 06:52:19 -0600
From: Matthew Wilcox <matthew@....cx>
To: Stephen Hemminger <shemminger@...tta.com>
Cc: Ben Hutchings <bhutchings@...arflare.com>,
Jesse Barnes <jbarnes@...tuousgeek.org>,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
linux-pci@...r.kernel.org
Subject: Re: [PATCH 1/3] pci: VPD access timeout increase
On Wed, Sep 03, 2008 at 03:57:13PM -0700, Stephen Hemminger wrote:
> Accessing the VPD area can take a long time. There are comments in the
> SysKonnect vendor driver that it can take up to 25ms. The existing vpd
> access code fails consistently on my hardware.
Wow, that's slow. If you were to try to read all 32k, it'd take more
than three minutes! (I presume it doesn't actually have as much as 32k).
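(For the arithmetic, assuming the usual one-dword-at-a-time VPD data register
access: 32768 bytes / 4 bytes per access = 8192 accesses, and at 25 ms each
that is roughly 205 seconds, i.e. about 3.4 minutes.)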
> Change the access routines to:
> * use a mutex rather than spinning with IRQs disabled and lock held
> * have a longer timeout
> * call schedule while spinning to provide some responsiveness
I agree with your approach, but have one minor comment:
> - spin_lock_irq(&vpd->lock);
> + mutex_lock(&vpd->lock);
This should be:
+ if (mutex_lock_interruptible(&vpd->lock))
+ return -EINTR;
> @@ -231,7 +232,7 @@ static int pci_vpd_pci22_write(struct pc
> val |= ((u8) *buf++) << 16;
> val |= ((u32)(u8) *buf++) << 24;
>
> - spin_lock_irq(&vpd->lock);
> + mutex_lock(&vpd->lock);
And the same here, of course.
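To make the pattern concrete, here is a minimal sketch (not the actual patch)
of what the read side looks like with an interruptible mutex, a bounded poll
of the VPD address flag, and a voluntary reschedule between polls.  The
struct pci_vpd_pci22 fields and the PCI_VPD_* constants follow the code being
patched; the helper names, the HZ/8 budget and the single-dword read are
placeholders of my own (the usual linux/pci.h, linux/mutex.h and
linux/jiffies.h headers are assumed):

static int pci_vpd_wait_sketch(struct pci_dev *dev, struct pci_vpd_pci22 *vpd)
{
	unsigned long timeout = jiffies + HZ / 8;	/* well beyond 25 ms per word */
	u16 status;
	int ret;

	for (;;) {
		ret = pci_user_read_config_word(dev, vpd->cap + PCI_VPD_ADDR,
						&status);
		if (ret < 0)
			return ret;
		if ((status & PCI_VPD_ADDR_F) == vpd->flag)
			return 0;
		if (time_after(jiffies, timeout))
			return -ETIMEDOUT;
		cond_resched();		/* "call schedule while spinning" */
	}
}

static int pci_vpd_read_dword_sketch(struct pci_dev *dev,
				     struct pci_vpd_pci22 *vpd, int pos, u32 *val)
{
	int ret;

	/* Sleepable lock instead of spin_lock_irq(); bail out on a signal. */
	if (mutex_lock_interruptible(&vpd->lock))
		return -EINTR;

	/* Write the word address with the flag bit clear to start a read. */
	ret = pci_user_write_config_word(dev, vpd->cap + PCI_VPD_ADDR, pos & ~3);
	if (ret == 0) {
		vpd->flag = PCI_VPD_ADDR_F;	/* hardware sets F when data is ready */
		ret = pci_vpd_wait_sketch(dev, vpd);
	}
	if (ret == 0)
		ret = pci_user_read_config_dword(dev, vpd->cap + PCI_VPD_DATA, val);

	mutex_unlock(&vpd->lock);
	return ret;
}

The interruptible variant matters because a device that never raises the flag,
combined with a generous per-word timeout, could otherwise leave anyone else
waiting on the mutex stuck in an unkillable sleep.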
--
Matthew Wilcox Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."