Message-ID: <CA+55aFxZ7H=x2-rgQgRE9b=ky-h0WM1x-48Zs8a0dVt0NvkB_g@mail.gmail.com>
Date: Thu, 12 Jan 2012 16:13:30 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Rafał Miłecki <zajec5@...il.com>
Cc: Arend van Spriel <arend@...adcom.com>,
Larry Finger <Larry.Finger@...inger.net>,
Alwin Beukers <alwin@...adcom.com>,
Roland Vossen <rvossen@...adcom.com>,
"John W. Linville" <linville@...driver.com>,
Network Development <netdev@...r.kernel.org>,
"Franky (Zhenhui) Lin" <frankyl@...adcom.com>
Subject: Re: brcm80211 breakage..
2012/1/12 Rafał Miłecki <zajec5@...il.com>:
>
> Have you tried booting with bcma & brcmsmac blacklisted? Does
> suspend&resume work then?
>
> Have you tried blacklisting just brcmsmac (letting bcma load)? Does
> s&r work then?
If I unload brcmsmac, I can suspend/resume. Once. It can't suspend a
second time.
I did see some message flash about "does not have a release()
function", but don't know if that was bcma or something else.
I do notice that the bcma suspend/resume seems quite broken.
It's using the legacy suspend/resume stuff and does the PCI resume on
its own (with no matching suspend!). That *really* isn't a good idea
these days.
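Roughly, the legacy model I mean looks like this - a sketch with made-up
mydrv_* names, not the actual bcma/brcmsmac code - where the driver itself
does the generic PCI power dance by hand:

#include <linux/pci.h>

static int mydrv_legacy_suspend(struct pci_dev *pdev, pm_message_t state)
{
	/* driver doing the generic PCI work itself */
	pci_save_state(pdev);
	pci_disable_device(pdev);
	pci_set_power_state(pdev, pci_choose_state(pdev, state));
	return 0;
}

static int mydrv_legacy_resume(struct pci_dev *pdev)
{
	pci_set_power_state(pdev, PCI_D0);
	pci_restore_state(pdev);
	return pci_enable_device(pdev);
}

static struct pci_driver mydrv_legacy_driver = {
	.name		= "mydrv",
	.suspend	= mydrv_legacy_suspend,	/* legacy hooks */
	.resume		= mydrv_legacy_resume,
};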
The way to do it these days is to have a struct dev_pm_ops embedded in
the struct pci_driver (".driver.pm"), and let the PCI layer handle all
the generic PCI suspend/resume details - you only handle the
device-specific ones (ie in this case suspending/resuming the bcma bus
itself).
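Something like this minimal sketch (again with hypothetical mydrv_* names,
only the PM-related bits shown):

#include <linux/pci.h>
#include <linux/pm.h>

struct mydrv {
	void __iomem *regs;	/* whatever the driver keeps around */
};

static int mydrv_suspend(struct device *dev)
{
	struct mydrv *priv = dev_get_drvdata(dev);

	/* Quiesce device-specific state only (in this case the bcma bus).
	 * No pci_save_state()/pci_set_power_state() here - the PCI core
	 * does the generic PCI part. */
	(void)priv;
	return 0;
}

static int mydrv_resume(struct device *dev)
{
	struct mydrv *priv = dev_get_drvdata(dev);

	/* Re-initialize device-specific state only. */
	(void)priv;
	return 0;
}

static SIMPLE_DEV_PM_OPS(mydrv_pm_ops, mydrv_suspend, mydrv_resume);

static struct pci_driver mydrv_pci_driver = {
	.name		= "mydrv",
	/* .id_table / .probe / .remove as usual, omitted here */
	.driver.pm	= &mydrv_pm_ops,
};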
The generic PCI layer will do all the PCI stuff correctly, including
all the nasty races with shared interrupts etc - races that no driver
ever got right on its own. And it simplifies the driver too.
And the brcms driver does suspend/resume *completely* wrong, and seems
to actually re-suspend and re-resume the PCI device.
I'm surprised it has ever worked for anybody. It certainly doesn't work for me.
Linus