Message-ID: <1348240166.13371.100.camel@smile>
Date: Fri, 21 Sep 2012 18:09:26 +0300
From: Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
To: viresh kumar <viresh.kumar@...aro.org>
Cc: Vinod Koul <vinod.koul@...el.com>, spear-devel@...t.st.com,
linux-kernel@...r.kernel.org, Hein Tibosch <hein_tibosch@...oo.es>
Subject: Re: [PATCHv2 4/6] dw_dmac: autoconfigure block_size or use platform
data
On Fri, 2012-09-21 at 19:30 +0530, viresh kumar wrote:
> On Fri, Sep 21, 2012 at 5:35 PM, Andy Shevchenko
> <andriy.shevchenko@...ux.intel.com> wrote:
> > diff --git a/drivers/dma/dw_dmac.c b/drivers/dma/dw_dmac.c
> > @@ -1385,6 +1375,7 @@ static int __devinit dw_probe(struct platform_device *pdev)
>
> > + /* get hardware configuration parameters */
> > + if (autocfg)
> > + max_blk_size = dma_readl(dw, MAX_BLK_SIZE);
> > +
>
> Why don't you do the above together with the ++ code below?
>
> > /* Calculate all channel mask before DMA setup */
> > dw->all_chan_mask = (1 << nr_channels) - 1;
> >
> > @@ -1470,6 +1465,16 @@ static int __devinit dw_probe(struct platform_device *pdev)
> > INIT_LIST_HEAD(&dwc->free_list);
> >
> > channel_clear_bit(dw, CH_EN, dwc->mask);
> > +
> > + /* hardware configuration */
> > + if (autocfg)
> > + /* Decode maximum block size for given channel. The
> > + * stored 4 bit value represents blocks from 0x00 for 3
> > + * up to 0x0a for 4095. */
>
> i.e. here?
Because there is only one register, but we loop over the channels. Should
we read the same register as many times as the number of channels we have?
>
> > + dwc->block_size =
> > + (4 << ((max_blk_size >> 4 * i) & 0xf)) - 1;
> > + else
> > + dwc->block_size = pdata->block_size;
> > }
--
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>
Intel Finland Oy