Message-ID: <m2obigf9xa.fsf@firstfloor.org>
Date: Thu, 29 Nov 2012 12:09:37 -0800
From: Andi Kleen <andi@...stfloor.org>
To: Jim Kukunas <james.t.kukunas@...ux.intel.com>
Cc: Linux Raid <linux-raid@...r.kernel.org>,
Linux Kernel <linux-kernel@...r.kernel.org>,
Neil Brown <neilb@...e.de>, "H. Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH] lib/raid6: Add AVX2 optimized recovery functions
Jim Kukunas <james.t.kukunas@...ux.intel.com> writes:
> +
> + /* ymm0 = x0f[16] */
> + asm volatile("vpbroadcastb %0, %%ymm7" : : "m" (x0f));
> +
> + while (bytes) {
> +#ifdef CONFIG_X86_64
> + asm volatile("vmovdqa %0, %%ymm1" : : "m" (q[0]));
> + asm volatile("vmovdqa %0, %%ymm9" : : "m" (q[32]));
> + asm volatile("vmovdqa %0, %%ymm0" : : "m" (p[0]));
> + asm volatile("vmovdqa %0, %%ymm8" : : "m" (p[32]));
It is somewhat dangerous to assume that registers are not changed
between assembler statements, or that the assembler statements are not
reordered. Better to always put such values into explicit variables, or
to merge them into a single asm statement.

asm volatile alone is also not enough to prevent reordering; if
anything you would need a "memory" clobber.
-Andi
--
ak@...ux.intel.com -- Speaking for myself only