Message-Id: <200907151612.20240.siarhei.siamashka@nokia.com>
Date:	Wed, 15 Jul 2009 16:12:19 +0300
From:	Siarhei Siamashka <siarhei.siamashka@...ia.com>
To:	ext Jamie Lokier <jamie@...reable.org>
Cc:	"Kirill A. Shutemov" <kirill@...temov.name>,
	ARM Linux Mailing List 
	<linux-arm-kernel@...ts.arm.linux.org.uk>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] ARM: copy_page.S: take into account the size of the cache line

On Saturday 11 July 2009 02:51:23 ext Jamie Lokier wrote:
> Kirill A. Shutemov wrote:
> > From: Kirill A. Shutemov <kirill@...temov.name>
> >
> > The optimized version of copy_page() was written with the assumption
> > that the cache line size is 32 bytes. On Cortex-A8 the cache line size
> > is 64 bytes.
> >
> > This patch tries to generalize copy_page() to work with any cache line
> > size, as long as the cache line size is a multiple of 16 and the page
> > size is a multiple of two cache lines.
> >
> > Unfortunately, the kernel doesn't provide a macro with the correct cache
> > line size. L1_CACHE_SHIFT is 5 on any ARM, so we have to define a macro
> > for this purpose ourselves.
>
> Why don't you fix L1_CACHE_SHIFT for Cortex-A8?

That's the plan.

Right now Kirill is on vacation, but I think he can continue investigating
this when he is back and will come up with a clean solution.
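For reference, one possible shape of such a fix, just a sketch under the
assumption that a per-CPU Kconfig option would select the right shift value
in arch/arm/include/asm/cache.h (the CONFIG_ARM_L1_CACHE_SHIFT name below is
hypothetical, not taken from an actual patch):

/* arch/arm/include/asm/cache.h -- sketch only, not the actual patch */
#define L1_CACHE_SHIFT	CONFIG_ARM_L1_CACHE_SHIFT	/* e.g. 6 on Cortex-A8 */
#define L1_CACHE_BYTES	(1 << L1_CACHE_SHIFT)

With something like that in place, copy_page.S and everything else built on
top of L1_CACHE_BYTES would pick up the correct line size automatically.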

Fixing L1_CACHE_SHIFT may open a whole can of worms (fixing some old
bugs, or breaking some things that might work only when incorrectly
assuming that the cache line is always 32 bytes). For example, it looks
like this code in 'arch/arm/include/asm/dma-mapping.h' may be dangerous
for ARM cores which have a cache line size different from 32 bytes:

static inline int dma_get_cache_alignment(void)
{
     return 32;
}
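A minimal sketch of what a cache-line-aware variant could look like, assuming
L1_CACHE_SHIFT/L1_CACHE_BYTES get fixed first (an illustration, not a
proposed patch):

static inline int dma_get_cache_alignment(void)
{
	/* report the real cache line size instead of a hard-coded 32 */
	return L1_CACHE_BYTES;
}

Until something like that happens, a driver relying on this value could end
up under-aligning DMA buffers on a CPU with 64-byte cache lines.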

-- 
Best regards,
Siarhei Siamashka
