Message-ID: <4A3B3633.3000507@snapgear.com>
Date:	Fri, 19 Jun 2009 16:54:43 +1000
From:	Greg Ungerer <gerg@...pgear.com>
To:	Christoph Hellwig <hch@...radead.org>
CC:	linux-kernel@...r.kernel.org, gerg@...inux.org,
	linux-m68k@...r.kernel.org
Subject: Re: [PATCH] m68k: merge the mmu and non-mmu versions of checksum.h

Hi Christoph,

Christoph Hellwig wrote:
> On Wed, Jun 17, 2009 at 05:11:15PM +1000, Greg Ungerer wrote:
>> +#ifdef CONFIG_MMU
>>  /*
>>   *	This is a version of ip_compute_csum() optimized for IP headers,
>>   *	which always checksum on 4 octet boundaries.
>> @@ -59,6 +61,9 @@ static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
>>  		 : "memory");
>>  	return (__force __sum16)~sum;
>>  }
>> +#else
>> +__sum16 ip_fast_csum(const void *iph, unsigned int ihl);
>> +#endif
> 
> Any good reason this is inline for all mmu processors and out of line
> for nommu, independent of the actual cpu variant?

I don't recall if the simple (and thus non-mmu) m68k variants
support all the instructions used in this optimized version.
I will check that. It might be that this is misplaced and
should actually be conditional on the CPU type.

The C code version is significantly bigger; I think that is why
it was not inlined here (see arch/m68knommu/lib/checksum.c).


>>  static inline __sum16 csum_fold(__wsum sum)
>>  {
>>  	unsigned int tmp = (__force u32)sum;
>> +#ifdef CONFIG_COLDFIRE
>> +	tmp = (tmp & 0xffff) + (tmp >> 16);
>> +	tmp = (tmp & 0xffff) + (tmp >> 16);
>> +	return (__force __sum16)~tmp;
>> +#else
>>  	__asm__("swap %1\n\t"
>>  		"addw %1, %0\n\t"
>>  		"clrw %1\n\t"
>> @@ -74,6 +84,7 @@ static inline __sum16 csum_fold(__wsum sum)
>>  		: "=&d" (sum), "=&d" (tmp)
>>  		: "0" (sum), "1" (tmp));
>>  	return (__force __sum16)~sum;
>> +#endif
>>  }
> 
> I think this would be cleaner by having totally separate functions
> for both cases, e.g.
> 
> #ifdef CONFIG_COLDFIRE
> static inline __sum16 csum_fold(__wsum sum)
> {
> 	unsigned int tmp = (__force u32)sum;
> 
> 	tmp = (tmp & 0xffff) + (tmp >> 16);
> 	tmp = (tmp & 0xffff) + (tmp >> 16);
> 
> 	return (__force __sum16)~tmp;
> }
> #else
> ...
> #endif

Ok, I will change that.

Thanks
Greg


------------------------------------------------------------------------
Greg Ungerer  --  Principal Engineer        EMAIL:     gerg@...pgear.com
SnapGear Group, McAfee                      PHONE:       +61 7 3435 2888
825 Stanley St,                             FAX:         +61 7 3891 3630
Woolloongabba, QLD, 4102, Australia         WEB: http://www.SnapGear.com
