Date:	Tue, 11 May 2010 02:37:21 -0400
From:	Mike Frysinger <vapier.adi@...il.com>
To:	Dmitry Torokhov <dmitry.torokhov@...il.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Oskar Schirmer <os@...ix.com>,
	Michael Hennerich <Michael.Hennerich@...log.com>,
	linux-input@...r.kernel.org, linux-kernel@...r.kernel.org,
	Daniel Glöckner <dg@...ix.com>,
	Oliver Schneidewind <osw@...ix.com>,
	Johannes Weiner <jw@...ix.com>
Subject:	Re: [PATCH v3] ad7877: keep dma rx buffers in separate cache lines

On Tue, May 11, 2010 at 02:33, Dmitry Torokhov wrote:
> On Tue, May 11, 2010 at 02:23:34AM -0400, Mike Frysinger wrote:
>> On Tue, May 11, 2010 at 02:21, Dmitry Torokhov wrote:
>> > On Tue, May 11, 2010 at 02:11:41AM -0400, Mike Frysinger wrote:
>> >> On Tue, May 11, 2010 at 02:05, Dmitry Torokhov wrote:
>> >> > Dmitry
>> >> >
>> >> > Input: ad7877 - keep dma rx buffers in separate cache lines
>> >> >
>> >> > From: Oskar Schirmer <os@...ix.com>
>> >> >
>> >> > With DMA-based SPI transmission, data corruption is observed
>> >> > occasionally. With the DMA buffers located right next to the msg
>> >> > and xfer fields, cache lines correctly flushed in preparation for
>> >> > DMA use may be polluted again when writing to fields in the same
>> >> > cache line.
>> >> >
>> >> > Make sure the fields used with DMA do not share cache lines with
>> >> > fields changed during DMA handling. As both fields are part of a
>> >> > struct that is allocated via kzalloc, and thus cache aligned,
>> >> > moving the fields to the first position and inserting padding for
>> >> > alignment does the job.
>> >> >
>> >> > Signed-off-by: Oskar Schirmer <os@...ix.com>
>> >> > Signed-off-by: Daniel Glöckner <dg@...ix.com>
>> >> > Signed-off-by: Oliver Schneidewind <osw@...ix.com>
>> >> > Signed-off-by: Johannes Weiner <jw@...ix.com>
>> >> > Acked-by: Mike Frysinger <vapier@...too.org>
>> >> > [dtor@...l.ru - changed to use ____cacheline_aligned at suggestion
>> >> >  of akpm]
>> >> > Signed-off-by: Dmitry Torokhov <dtor@...l.ru>
>> >> > ---
>> >> >
>> >> >  drivers/input/touchscreen/ad7877.c |   15 ++++++++++++---
>> >> >  1 files changed, 12 insertions(+), 3 deletions(-)
>> >> >
>> >> >
>> >> > diff --git a/drivers/input/touchscreen/ad7877.c b/drivers/input/touchscreen/ad7877.c
>> >> > index e019d53..0d2d7e5 100644
>> >> > --- a/drivers/input/touchscreen/ad7877.c
>> >> > +++ b/drivers/input/touchscreen/ad7877.c
>> >> > @@ -156,9 +156,14 @@ struct ser_req {
>> >> >        u16                     reset;
>> >> >        u16                     ref_on;
>> >> >        u16                     command;
>> >> > -       u16                     sample;
>> >> >        struct spi_message      msg;
>> >> >        struct spi_transfer     xfer[6];
>> >> > +
>> >> > +       /*
>> >> > +        * DMA (thus cache coherency maintenance) requires the
>> >> > +        * transfer buffers to live in their own cache lines.
>> >> > +        */
>> >> > +       u16 sample ____cacheline_aligned;
>> >> >  };
>> >> >
>> >> >  struct ad7877 {
>> >> > @@ -182,8 +187,6 @@ struct ad7877 {
>> >> >        u8                      averaging;
>> >> >        u8                      pen_down_acc_interval;
>> >> >
>> >> > -       u16                     conversion_data[AD7877_NR_SENSE];
>> >> > -
>> >> >        struct spi_transfer     xfer[AD7877_NR_SENSE + 2];
>> >> >        struct spi_message      msg;
>> >> >
>> >> > @@ -195,6 +198,12 @@ struct ad7877 {
>> >> >        spinlock_t              lock;
>> >> >        struct timer_list       timer;          /* P: lock */
>> >> >        unsigned                pending:1;      /* P: lock */
>> >> > +
>> >> > +       /*
>> >> > +        * DMA (thus cache coherency maintenance) requires the
>> >> > +        * transfer buffers to live in their own cache lines.
>> >> > +        */
>> >> > +       u16 conversion_data[AD7877_NR_SENSE] ____cacheline_aligned;
>> >> >  };
>> >>
>> >> I'm not sure this is correct.  The ____cacheline_aligned attribute
>> >> makes sure the member starts on a cache boundary, but it doesn't make
>> >> sure it pads out to one.  So it might work more of the time, but I
>> >> don't think it's guaranteed.
>> >
>> > The buffers are moved to the end of the structure - there is nothing
>> > else there.
>>
>> What guarantee exactly do you have for that statement?
>
> The data is kmalloced, and kmalloc aligns on a cacheline boundary AFAIK,
> which means that the next kmalloc'ed data chunk will not share "our"
> cacheline.
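
A minimal userspace sketch of the layout in question (not from the
original thread; the 32-byte line size and the placeholder sizes standing
in for spi_message/spi_transfer are assumptions, only the alignment rules
matter):

/* layout-sketch.c: build with `gcc -Wall layout-sketch.c && ./a.out` */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed line size for illustration; the kernel's L1_CACHE_BYTES is
 * architecture specific. */
#define FAKE_L1_CACHE_BYTES 32

/* Rough stand-in for struct ser_req after the patch; the sizes of the
 * msg/xfer placeholders are made up. */
struct ser_req_like {
	uint16_t reset;
	uint16_t ref_on;
	uint16_t command;
	unsigned char msg[32];      /* placeholder for struct spi_message     */
	unsigned char xfer[6][24];  /* placeholder for struct spi_transfer[6] */

	/* In the driver this is `u16 sample ____cacheline_aligned;` */
	uint16_t sample __attribute__((aligned(FAKE_L1_CACHE_BYTES)));
};

int main(void)
{
	size_t off = offsetof(struct ser_req_like, sample);
	size_t sz  = sizeof(struct ser_req_like);

	/* The attribute guarantees the DMA buffer starts on a line boundary. */
	printf("offsetof(sample) = %zu, %% %d = %zu\n",
	       off, FAKE_L1_CACHE_BYTES, off % FAKE_L1_CACHE_BYTES);

	/* The member's alignment also raises the struct's alignment, so
	 * sizeof() is rounded up to a multiple of the line size and no
	 * later member can share the buffer's cache line. */
	printf("sizeof(struct)   = %zu, %% %d = %zu\n",
	       sz, FAKE_L1_CACHE_BYTES, sz % FAKE_L1_CACHE_BYTES);
	return 0;
}

Whether the next, separately allocated object can share the buffer's
cache line is the kmalloc alignment question answered in the quote above.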

So obvious once you say it out loud.  As long as kmalloc() guarantees
it, your patch sounds fine to me.  Once Oskar double-checks, ship it!

thanks :)
-mike
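
A userspace analogue of the kmalloc argument, again not from the thread:
aligned_alloc() stands in for kmalloc(), and 32 bytes stands in for the
architecture's cache line size. In the kernel the corresponding guarantee
comes from kmalloc's minimum alignment (ARCH_KMALLOC_MINALIGN, which on
DMA-incoherent architectures is typically the cache line size,
ARCH_DMA_MINALIGN).

/* alloc-sketch.c: build with `gcc -Wall alloc-sketch.c && ./a.out` */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define ASSUMED_LINE 32		/* assumed cache line size */

int main(void)
{
	/* Two back-to-back "kmalloc-like" allocations; the sizes are
	 * multiples of the alignment, as aligned_alloc() requires. */
	unsigned char *a = aligned_alloc(ASSUMED_LINE, 224);
	unsigned char *b = aligned_alloc(ASSUMED_LINE, 224);
	if (!a || !b)
		return 1;

	/* Each allocation starts on a fresh cache line and occupies whole
	 * lines, so the line holding the end of 'a' can never hold the
	 * start of 'b': CPU writes to one object cannot dirty a cache
	 * line that DMA is using in the other. */
	printf("a starts in cache line %#lx, b starts in cache line %#lx\n",
	       (unsigned long)((uintptr_t)a / ASSUMED_LINE),
	       (unsigned long)((uintptr_t)b / ASSUMED_LINE));

	free(a);
	free(b);
	return 0;
}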
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
