Message-ID: <20251206202959.084b8b3f@jic23-huawei>
Date: Sat, 6 Dec 2025 20:29:59 +0000
From: Jonathan Cameron <jic23@...nel.org>
To: Andy Shevchenko <andriy.shevchenko@...el.com>
Cc: Petre Rodan <petre.rodan@...dimension.ro>, David Lechner
 <dlechner@...libre.com>, Nuno Sá <nuno.sa@...log.com>, Andy Shevchenko
 <andy@...nel.org>, Rob Herring <robh@...nel.org>, Krzysztof Kozlowski
 <krzk+dt@...nel.org>, Conor Dooley <conor+dt@...nel.org>,
 linux-iio@...r.kernel.org, devicetree@...r.kernel.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] iio: pressure: add Honeywell ABP2 driver


> > > We have drivers in the kernel with two buffers in the same structure.  
> > 
> > Yup. Some use __aligned twice just like in my example, some use __aligned just
> > once, and some use a simple buffer placed on the stack with no apparent
> > alignment when sending the data.
> > 
> > I've been told in an earlier review that both buffers need to be aligned due to
> > DMA-related requirements.  
> 
> Correct, and this should be done for the buffers that are subject to DMA; PIO is
> fine without it. Consideration of the Tx/Rx split varies, indeed. And I'm not sure
> that the authors of the drivers fully understand when and what alignment is
> required. Jonathan once explained to me why only a single alignment is needed and
> a second is not, but I forgot the details and can't reproduce them right now. It's
> available on the lore archive somewhere.

The logic behind this __aligned(IIO_DMA_MINALIGN) is all about who can mess with
stuff in a given cacheline whilst DMA is ongoing.

There are DMA masters that, when only actually using a small part of a cacheline,
will copy the whole thing to local storage, run the DMA, and update the bits they
are using before copying the whole cacheline back again.  Note there are other
DMA master designs that don't do this - so you may never see it!
IIRC I only hit this once, many years ago - it is very, very hard to debug though!

So if anyone else (e.g. the CPU running something in parallel) touches other data in
the cacheline (64 bytes or so), that data will be overwritten with the stale values
when the DMA master writes the cacheline back.

The iio_priv() structure embedded in the larger allocation has forced alignment,
which then lets us use an embedded __aligned(IIO_DMA_MINALIGN) to force padding
between the other elements of the iio_priv() structure and the start of our new buffer.
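In driver terms the typical pattern looks something like this (a sketch - the
example_state name and its fields are made up, but devm_iio_device_alloc() and
iio_priv() are the real helpers):

#include <linux/iio/iio.h>
#include <linux/spi/spi.h>

struct example_state {
	struct spi_device *spi;
	/* other driver state... */
	u8 buf[4] __aligned(IIO_DMA_MINALIGN);
};

static int example_probe(struct spi_device *spi)
{
	struct iio_dev *indio_dev;
	struct example_state *st;

	/*
	 * iio_priv() data starts suitably aligned within the allocation,
	 * so the __aligned() marking above pads buf onto a cacheline of
	 * its own at the end of the structure.
	 */
	indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(*st));
	if (!indio_dev)
		return -ENOMEM;
	st = iio_priv(indio_dev);
	st->spi = spi;

	/* rest of probe elided */
	return 0;
}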

We have to be careful not to put anything that might be concurrently accessed by a
different element (e.g. the CPU) after these. E.g.
struct bob {
	int a;

	u8 rx[4] __aligned(IIO_DMA_MINALIGN);
	int vitaldata;
};
could result in stale values in vitaldata.
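The fix is just to keep everything the CPU may touch ahead of the aligned buffer,
so the buffer sits alone at the end of the structure (same example, rearranged):
struct bob {
	int a;
	int vitaldata;

	u8 rx[4] __aligned(IIO_DMA_MINALIGN);	/* nothing after this */
};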

The assumption behind being able to do

struct bob {
	int a;
	u8 rx[4] __aligned(IIO_DMA_MINALIGN);
	u8 tx[4]; /* no marking */
};
is that both rx and tx are handed to the DMA engine for an operation (some lock
or similar protects them anyway), and a DMA engine that managed to corrupt data in
the copy of the cacheline that was entirely in its control is horribly broken anyway;
it's up to the SPI layer or similar to bounce buffers if that's the case.  Short
of randomly writing stuff it should never touch in its own local copy of the cacheline,
I have no idea how this would happen.
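To sketch that assumption (bob_txrx() is made up; spi_sync_transfer() is the real
API), both buffers go to the controller in a single message:

static int bob_txrx(struct spi_device *spi, struct bob *st)
{
	/* Caller holds whatever lock protects rx/tx. */
	struct spi_transfer xfers[] = {
		{
			.tx_buf = st->tx,
			.len = sizeof(st->tx),
		}, {
			.rx_buf = st->rx,
			.len = sizeof(st->rx),
		},
	};

	/*
	 * Both buffers belong to the DMA engine for the duration of the
	 * message, so rx and tx sharing a cacheline is fine.
	 */
	return spi_sync_transfer(spi, xfers, ARRAY_SIZE(xfers));
}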


Jonathan
