Message-ID: <201912060941.f3Qi9xtV%lkp@intel.com>
Date: Fri, 6 Dec 2019 09:45:01 +0800
From: kbuild test robot <lkp@...el.com>
To: David Laight <David.Laight@...LAB.COM>
Cc: kbuild-all@...ts.01.org,
linux-kernel <linux-kernel@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH] x86: Optimise x86 IP checksum code
Hi David,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on tip/auto-latest]
[also build test WARNING on tip/x86/core linus/master v5.4 next-20191202]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest using the '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]
url: https://github.com/0day-ci/linux/commits/David-Laight/x86-Optimise-x86-IP-checksum-code/20191203-211313
base: https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git e445033e58108a9891abfbc0dea90b066a75e4a9
reproduce:
# apt-get install sparse
# sparse version: v0.6.1-91-g817270f-dirty
make ARCH=x86_64 allmodconfig
make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'
If you fix the issue, kindly add the following tag
Reported-by: kbuild test robot <lkp@...el.com>
sparse warnings: (new ones prefixed by >>)
>> arch/x86/lib/csum-partial_64.c:141:23: sparse: sparse: incorrect type in return expression (different base types) @@ expected restricted __wsum @@ got unsigned int @@
>> arch/x86/lib/csum-partial_64.c:141:23: sparse: expected restricted __wsum
>> arch/x86/lib/csum-partial_64.c:141:23: sparse: got unsigned int
vim +141 arch/x86/lib/csum-partial_64.c
126
127 /*
128 * computes the checksum of a memory block at buff, length len,
129 * and adds in "sum" (32-bit)
130 *
131 * returns a 32-bit number suitable for feeding into itself
132 * or csum_tcpudp_magic
133 *
134 * this function must be called with even lengths, except
135 * for the last fragment, which may be odd
136 *
137 * it's best to have buff aligned on a 64-bit boundary
138 */
139 __wsum csum_partial(const void *buff, int len, __wsum sum)
140 {
> 141 return do_csum(buff, len, (__force u32)sum);
142 }
143 EXPORT_SYMBOL(csum_partial);
144
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org Intel Corporation