Message-ID: <90844cba1cb64571a8597a6e0afee01d@realtek.com>
Date: Tue, 22 Feb 2022 08:48:30 +0000
From: Ricky WU <ricky_wu@...ltek.com>
To: "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>
CC: "ulf.hansson@...aro.org" <ulf.hansson@...aro.org>,
"kai.heng.feng@...onical.com" <kai.heng.feng@...onical.com>,
"tommyhebb@...il.com" <tommyhebb@...il.com>,
"linux-mmc@...r.kernel.org" <linux-mmc@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] mmc: rtsx: add 74 Clocks in power on flow
> -----Original Message-----
> From: gregkh@...uxfoundation.org <gregkh@...uxfoundation.org>
> Sent: Tuesday, February 22, 2022 3:42 PM
> To: Ricky WU <ricky_wu@...ltek.com>
> Cc: ulf.hansson@...aro.org; kai.heng.feng@...onical.com;
> tommyhebb@...il.com; linux-mmc@...r.kernel.org;
> linux-kernel@...r.kernel.org
> Subject: Re: [PATCH] mmc: rtsx: add 74 Clocks in power on flow
>
> On Tue, Feb 22, 2022 at 07:27:52AM +0000, Ricky WU wrote:
> > After the 1 ms voltage-stabilization delay, add the spec-defined
> > requirement that the "Host provides at least 74 Clocks before
> > issuing first command"
>
> You do have 72 columns to use here, no need to wrap this so tightly.
>
Ok...
So should I send a follow-up patch to fix this formatting?
> >
> > Signed-off-by: Ricky Wu <ricky_wu@...ltek.com>
> > ---
> > drivers/mmc/host/rtsx_pci_sdmmc.c | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/drivers/mmc/host/rtsx_pci_sdmmc.c
> > b/drivers/mmc/host/rtsx_pci_sdmmc.c
> > index 2a3f14afe9f8..e016d720e453 100644
> > --- a/drivers/mmc/host/rtsx_pci_sdmmc.c
> > +++ b/drivers/mmc/host/rtsx_pci_sdmmc.c
> > @@ -940,10 +940,17 @@ static int sd_power_on(struct realtek_pci_sdmmc
> *host)
> > if (err < 0)
> > return err;
> >
> > + mdelay(1);
>
> What is this delay for?
>
It is a spec definition: the host needs to wait 1 ms for the voltage to
become stable, and the mdelay(5) below covers the time our device takes
to send the 74 clocks.
> thanks,
>
> greg k-h