Message-ID: <SN1PR0301MB15504A8672AFF152F56D48C19B230@SN1PR0301MB1550.namprd03.prod.outlook.com>
Date: Mon, 26 Oct 2015 03:15:40 +0000
From: Zhao Qiang <qiang.zhao@...escale.com>
To: Scott Wood <scottwood@...escale.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linuxppc-dev@...ts.ozlabs.org" <linuxppc-dev@...ts.ozlabs.org>,
"lauraa@...eaurora.org" <lauraa@...eaurora.org>,
Xiaobo Xie <X.Xie@...escale.com>,
"benh@...nel.crashing.org" <benh@...nel.crashing.org>,
Li Leo <LeoLi@...escale.com>,
"paulus@...ba.org" <paulus@...ba.org>
Subject: RE: [PATCH v12 3/6] CPM/QE: use genalloc to manage CPM/QE muram
On Sat, 2015-10-24 at 04:59 AM, Wood Scott-B07421 <scottwood@...escale.com> wrote:
> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Saturday, October 24, 2015 4:59 AM
> To: Zhao Qiang-B45475 <qiang.zhao@...escale.com>
> Cc: linux-kernel@...r.kernel.org; linuxppc-dev@...ts.ozlabs.org;
> lauraa@...eaurora.org; Xie Xiaobo-R63061 <X.Xie@...escale.com>;
> benh@...nel.crashing.org; Li Yang-Leo-R58472 <LeoLi@...escale.com>;
> paulus@...ba.org
> Subject: Re: [PATCH v12 3/6] CPM/QE: use genalloc to manage CPM/QE muram
>
> Don't send HTML e-mail.
>
> On Fri, 2015-10-23 at 02:06 -0500, Zhao Qiang-B45475 wrote:
> > On Fri, 2015-10-23 at 11:00 AM, Wood Scott-B07421 <scottwood@...escale.com> wrote:
> > > -----Original Message-----
> > > From: Wood Scott-B07421
> > > Sent: Friday, October 23, 2015 11:00 AM
> > > To: Zhao Qiang-B45475 <qiang.zhao@...escale.com>
> > > Cc: linux-kernel@...r.kernel.org; linuxppc-dev@...ts.ozlabs.org;
> > > lauraa@...eaurora.org; Xie Xiaobo-R63061 <X.Xie@...escale.com>;
> > > benh@...nel.crashing.org; Li Yang-Leo-R58472 <LeoLi@...escale.com>;
> > > paulus@...ba.org
> > > Subject: Re: [PATCH v12 3/6] CPM/QE: use genalloc to manage CPM/QE
> > > muram
> > >
> > > On Wed, 2015-10-14 at 15:16 +0800, Zhao Qiang wrote:
> > > > -/**
> > > > +/*
> > > > * cpm_muram_alloc - allocate the requested size worth of multi-user ram
> > > > * @size: number of bytes to allocate
> > > > * @align: requested alignment, in bytes
> > > > @@ -141,59 +151,102 @@ out:
> > > > */
> > > > unsigned long cpm_muram_alloc(unsigned long size, unsigned long align)
> > > > {
> > > > - unsigned long start;
> > > > unsigned long flags;
> > > > -
> > > > + unsigned long start;
> > > > + static struct genpool_data_align muram_pool_data;
> > > > spin_lock_irqsave(&cpm_muram_lock, flags);
> > > > - cpm_muram_info.alignment = align;
> > > > - start = rh_alloc(&cpm_muram_info, size, "commproc");
> > > > - memset(cpm_muram_addr(start), 0, size);
> > > > + muram_pool_data.align = align;
> > > > + gen_pool_set_algo(muram_pool, gen_pool_first_fit_align,
> > > > + &muram_pool_data);
> > > > + start = cpm_muram_alloc_common(size, &muram_pool_data);
> > > > spin_unlock_irqrestore(&cpm_muram_lock, flags);
> > > > -
> > > > return start;
> > > > }
> > > > EXPORT_SYMBOL(cpm_muram_alloc);
> > >
> > > Why is muram_pool_data static? Why is it being passed to
> > > gen_pool_set_algo()?
> > cpm_muram uses both the align algo and the fixed algo, so we need to
> > set the corresponding algo and algo data.
>
> The data gets passed in via gen_pool_alloc_data(). The point was to allow it to
> be on the caller's stack, not a long-lived data structure shared by all callers and
> needing synchronization.
Do you mean it is not necessary to point pool->data to the data, and that just passing the data to gen_pool_alloc_data() is enough?
The algo still needs to be set, though.
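To make sure I understand, here is a minimal sketch of that reading. It assumes the gen_pool_alloc_data(pool, size, data) prototype proposed earlier in this series and that the per-call data is what gets handed to the algo; muram_pool, cpm_muram_lock and genpool_data_align are the names already in the patch, and the memset/error handling from the real function are omitted:

/*
 * Sketch only, not the patch as posted: the align data lives on the
 * caller's stack and is passed per call, so no static muram_pool_data
 * is shared between callers.  The algo itself is still selected through
 * gen_pool_set_algo(), which is the part I am asking about.
 */
unsigned long cpm_muram_alloc(unsigned long size, unsigned long align)
{
	struct genpool_data_align data = { .align = align };
	unsigned long flags;
	unsigned long start;

	spin_lock_irqsave(&cpm_muram_lock, flags);
	gen_pool_set_algo(muram_pool, gen_pool_first_fit_align, NULL);
	start = gen_pool_alloc_data(muram_pool, size, &data);
	spin_unlock_irqrestore(&cpm_muram_lock, flags);
	return start;
}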
>
> > >The whole reason we're adding gen_pool_alloc_data() is to avoid
> > >that. Do we need gen_pool_alloc_algo() too?
> >
> > We add gen_pool_alloc_data() to pass data to the algo, because the
> > align and fixed algos need specific data.
>
> And my point is that because of that, it seems like we need a version that
> accepts an algorithm as well.
If the user uses only one algo, there is no need to set the algo each time.
However, qe_muram uses two algos, through an alloc_align function and an
alloc_fixed function.
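If a per-call variant that also takes the algorithm is acceptable, something along these lines would cover both users. The gen_pool_alloc_algo() name and prototype below are only an assumption based on your suggestion, not an existing genalloc API, and genpool_data_fixed/gen_pool_fixed_alloc are the fixed-offset counterparts from this series:

/*
 * Sketch only: a per-call interface that takes both the algorithm and
 * its data, so neither gen_pool_set_algo() nor a shared static data
 * struct is needed.  Name and prototype are assumed here.
 */
unsigned long gen_pool_alloc_algo(struct gen_pool *pool, size_t size,
				  genpool_algo_t algo, void *data);

/* Align path: per-call alignment data on the caller's stack. */
unsigned long cpm_muram_alloc(unsigned long size, unsigned long align)
{
	struct genpool_data_align data = { .align = align };

	return gen_pool_alloc_algo(muram_pool, size,
				   gen_pool_first_fit_align, &data);
}

/* Fixed path: same helper, different algo and data. */
unsigned long cpm_muram_alloc_fixed(unsigned long offset, unsigned long size)
{
	struct genpool_data_fixed data = { .offset = offset };

	return gen_pool_alloc_algo(muram_pool, size,
				   gen_pool_fixed_alloc, &data);
}

With something like that, cpm_muram_lock would no longer be needed just to keep the algo and its data paired, only for whatever other bookkeeping the driver does around the allocation.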
-Zhao