Message-ID: <20150513074638.GR10961@lukather>
Date: Wed, 13 May 2015 09:46:38 +0200
From: Maxime Ripard <maxime.ripard@...e-electrons.com>
To: Shaohua Li <shli@...nel.org>
Cc: Neil Brown <neilb@...e.de>, linux-raid@...r.kernel.org,
linux-kernel@...r.kernel.org, Lior Amsalem <alior@...vell.com>,
Thomas Petazzoni <thomas@...e-electrons.com>,
Gregory Clement <gregory.clement@...e-electrons.com>,
Boris Brezillon <boris@...e-electrons.com>
Subject: Re: Possible RAID6 regression with ASYNC_TX_DMA enabled in 4.1

Hi,

On Tue, May 12, 2015 at 03:59:07AM -0700, Shaohua Li wrote:
> On Tue, May 12, 2015 at 02:55:46PM +0200, Maxime Ripard wrote:
> > Hi Shaohua,
> >
> > On Sun, May 10, 2015 at 11:26:38PM -0700, Shaohua Li wrote:
> > > On Thu, May 07, 2015 at 02:57:02PM +0200, Maxime Ripard wrote:
> > > > Hi,
> > > >
> > > > I'm currently trying to add support for the PQ operations on the
> > > > marvell XOR engine, in dmaengine, obviously to be able to use async_tx
> > > > to offload these operations.
> > > >
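> > > > (For context, supporting PQ in a dmaengine driver mostly means
> > > > advertising the DMA_PQ capability and wiring up the prep hook; a
> > > > minimal sketch, where mv_xor_prep_dma_pq is a hypothetical name for
> > > > our callback and the source limit is illustrative:
> > > >
> > > > 	dma_cap_set(DMA_PQ, dma_dev->cap_mask);
> > > > 	dma_set_maxpq(dma_dev, 8, 0);
> > > > 	dma_dev->device_prep_dma_pq = mv_xor_prep_dma_pq;
> > > >
> > > > async_tx then routes async_gen_syndrome() calls to that hook.)
> > > >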
> > > > I'm testing these patches with a RAID6 array with 4 disks.
> > > >
> > > > However, since commit 59fc630b8b5f ("RAID5: batch adjacent full
> > > > stripe write"), every write to that array fails with the following
> > > > stacktrace.
> > > >
> > > > http://code.bulix.org/eh8iew-88342?raw
> > > >
> > > > It seems to be generated by that warning here:
> > > >
> > > > http://lxr.free-electrons.com/source/crypto/async_tx/async_tx.c#L173
> > > >
> > > > And indeed, if we dump the status of depend_tx here, it's already been
> > > > acked.
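> > > >
> > > > (For reference, this is the dependency sanity check at the top of
> > > > async_tx_submit(); roughly, from that era's tree:
> > > >
> > > > 	if (depend_tx) {
> > > > 		/* sanity check the dependency chain:
> > > > 		 * 1/ if ack is already set then we cannot be sure
> > > > 		 * we are referring to the correct operation
> > > > 		 */
> > > > 		BUG_ON(async_tx_test_ack(depend_tx) || txd_next(depend_tx) ||
> > > > 		       txd_parent(tx));
> > > > 		...
> > > > 	}
> > > >
> > > > so an already-acked depend_tx trips the first condition.)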
> > > >
> > > > That doesn't happen if ASYNC_TX_DMA is disabled, hence using the
> > > > software implementation instead of relying on our XOR engine. It
> > > > doesn't happen on any commit prior to the one mentioned above, with
> > > > the exact same changes applied. These changes are meant to be
> > > > contributed, so I can definitely push them somewhere if needed.
> > > >
> > > > I don't really know where to look, though. The change causing this
> > > > is probably the one in ops_run_reconstruct6, but I'm not sure that
> > > > a partial revert of it alone would work with regard to the rest of
> > > > the patch.
> > >
> > > I don't have a machine with a dmaengine, so it's likely there is an
> > > error on that side. Could you please make stripe_can_batch() always
> > > return false and check whether the error disappears? That should
> > > narrow down whether it's related to the batching.
> >
> > The error indeed disappears if stripe_can_batch always returns false.
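> >
> > (I.e. with a debug-only hack along these lines in drivers/md/raid5.c,
> > not meant for merging:
> >
> > 	static bool stripe_can_batch(struct stripe_head *sh)
> > 	{
> > 		/* force-disable stripe batching to isolate the failure */
> > 		return false;
> > 	}
> >
> > which takes the whole batching path out of the picture.)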
>
> Does this fix it?
>
>
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 77dfd72..5e820fc 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -1825,7 +1825,7 @@ ops_run_reconstruct6(struct stripe_head *sh, struct raid5_percpu *percpu,
>  		} else
>  			init_async_submit(&submit, 0, tx, NULL, NULL,
>  					  to_addr_conv(sh, percpu, j));
> -		async_gen_syndrome(blocks, 0, count+2, STRIPE_SIZE, &submit);
> +		tx = async_gen_syndrome(blocks, 0, count+2, STRIPE_SIZE, &submit);
>  		if (!last_stripe) {
>  			j++;
>  			sh = list_first_entry(&sh->batch_list, struct stripe_head,

It does, thanks!
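
That makes sense: in the batch loop, each iteration's init_async_submit()
takes the previous descriptor as its dependency, so when the return value
of async_gen_syndrome() is dropped, the next iteration keeps chaining on a
stale tx that may already have been acked, which is exactly what the
async_tx_submit() check catches. The intended pattern, as a sketch (with
more_stripes_in_batch standing in for the real loop condition):

	do {
		init_async_submit(&submit, 0, tx, NULL, NULL,
				  to_addr_conv(sh, percpu, j));
		tx = async_gen_syndrome(blocks, 0, count+2, STRIPE_SIZE,
					&submit);
	} while (more_stripes_in_batch);
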
Feel free to add my Tested-by if you submit this patch.

Maxime
--
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com