Date:	Tue, 24 Mar 2015 17:30:06 -0700
From:	Maxime Ripard <maxime.ripard@...e-electrons.com>
To:	Vinod Koul <vinod.koul@...el.com>
Cc:	dmaengine@...r.kernel.org, linux-kernel@...r.kernel.org,
	Laurent Pinchart <laurent.pinchart@...asonboard.com>,
	Ludovic Desroches <ludovic.desroches@...el.com>
Subject: Re: [PATCH RFC 1/2] dmaengine: Introduce scheduled DMA framework

On Tue, Mar 24, 2015 at 09:50:43PM +0530, Vinod Koul wrote:
> On Mon, Mar 23, 2015 at 02:51:40PM -0700, Maxime Ripard wrote:
> > > > +	if (vd) {
> > > > +		lli = desc->v_lli;
> > > > +		while (true) {
> > > > +			bytes += sdma->ops->lli_size(lli);
> > > > +
> > > > +			if (!sdma->ops->lli_has_next(lli))
> > > > +				break;
> > > > +
> > > > +			lli = sdma->ops->lli_next(lli);
> > >
> > > wouldn't it be nicer to just use a link list here which is embedded
> > > in a generic struct which in turn can be embedded in a driver
> > > specific one, so we actually don't care what driver specific
> > > implementation is...
> > 
> > I'm not sure I get what you mean here. We will always have to care
> > about the driver-specific implementation.
>
> by providing a callback the driver can do whatever it wants, whereas
> the framework does the generic stuff

That's kind of the whole point. LLIs can be actual lists, maps, or
tables, or even not supported by the hardware at all, in which case
they have to be emulated.

So letting the driver do whatever is needed here seems like a good
thing.

> > > that way we can get rid of the lli_size, lli_has_next and lli_next
> > > ones. Further, that will allow drivers/framework to break a txn into
> > > multiple smaller txns internally.
> > 
> > I chose this accessors/iterators pattern because it makes it very
> > easy to add new features to the LLI handling (like debugging: we
> > just have to add a new lli_dump callback), and that works.
> > 
> > We can also manage the LLI more easily, by allocating the LLI items
> > ourselves and freeing them afterwards, which is something that is
> > also often done wrong, or at least addressed in different ways from
> > one driver to another (for example, the Atmel drivers have their own
> > descriptor allocation logic, while the others mostly rely on the
> > dma_pools. Which one is right, I don't know, but the right one
> > should be shared by everyone).
> > 
> > That would also make it much easier to add software-emulated LLI,
> > since you would just have to set the LLI management functions to
> > some common library functions, and we would be done.
> > 
> > I feel like it would be much more flexible, and would remove logic
> > from the driver itself, which is the point of this whole
> > "sub-framework".
>
> Yes, the intent is right. I was thinking that using a kernel linked
> list frees us from managing the list ourselves, with fewer chances of
> bugs that way

A generic structure embedded in a driver-specific structure wouldn't
even cover all cases. For example, in tx_status, we do want to access
the hardware descriptor size, and possibly iterate over the LLI
itself, so we would still need some accessors and iterators.

Given that, I'm not sure that duplicating this info in several places
is a good idea.

> > > typically the driver would do the calculations for the descriptor,
> > > setting up the register values which can be programmed, so a driver
> > > callback would be nice here. That way there is less work in the
> > > start function, which is good as I am assuming start should be done
> > > from irq context
> > 
> > I'm not really sure that would make sense. You don't have any
> > guarantee that the dma_chan pointer you are passing to it is
> > actually mapped to a physical channel at this time.
> > 
> > And I thought that the prep functions could be called from irq context
> > too?
>
> Yes, that is true, but if you have the descriptor prepared and only
> need to program the hw, that is the fastest way. Plus the HW is idle,
> so the faster we can submit a new transaction, the better it is.

You don't really know at this point whether the hardware is idle, or
on which channel the transfer is actually going to be scheduled.

If it's just to prepare and store some register values, I guess it
might make sense. I don't really know what we should attach this kind
of precomputed value to, though: the request? The descriptor? The LLI
itself?

> Btw the recommendation is that a new txn should be submitted from the
> ISR and not wait for the tasklet; most drivers do the latter, with
> few doing the former.

Yes, that's one of the things that my driver took into account, since
most drivers get this wrong (including the one I wrote).

sdma_report is expected to be called from interrupt context, and will
try to find a new descriptor to run on the channel which just got
reported as free. If such a descriptor is available, it will be
returned to the driver, which is expected to run it on that channel.

Otherwise, we just put the channel back in the available list for
future use.

> The computation part of registers to be programmed should be anyway
> independent of channel, while actual programming and starting of channel we
> need to ensure right channel and thereby offsets are used :)

Yep, that part is hopefully covered :)

> > > Also this needs to take into account the dma_slave_config, which
> > > is allowed to be modified for subsequent txns
> > 
> > memcpy doesn't rely on dma_slave_config, does it?
> Nope, but slave does :) This comment is for slave txns :)

You were actually commenting on prep_memcpy. slave_sg actually takes
it into account.

Thanks!
Maxime

-- 
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com

