Message-Id: <20180129123849.304970272@linuxfoundation.org>
Date: Mon, 29 Jan 2018 13:56:43 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Pablo Neira Ayuso <pablo@...filter.org>,
Michal Kubecek <mkubecek@...e.cz>
Subject: [PATCH 4.4 38/74] netfilter: fix IS_ERR_VALUE usage
4.4-stable review patch. If anyone has any objections, please let me know.

------------------

From: Pablo Neira Ayuso <pablo@...filter.org>

commit 92b4423e3a0bc5d43ecde4bcad871f8b5ba04efd upstream.
This is a forward-port of the original patch from Andrzej Hajda,
who said:

"IS_ERR_VALUE should be used only with the unsigned long type.
Otherwise it can work incorrectly. To achieve this, the function
xt_percpu_counter_alloc is modified to return unsigned long, and
its result is assigned to a temporary variable to perform the
error check before being assigned to the .pcnt field.
The patch follows the conclusion of the discussion on LKML [1][2].

[1]: http://permalink.gmane.org/gmane.linux.kernel/2120927
[2]: http://permalink.gmane.org/gmane.linux.kernel/2150581"
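
As an aside for readers less familiar with the pitfall, here is a minimal
userspace sketch, not kernel code: the IS_ERR_VALUE() below is a simplified
stand-in loosely modelled on the kernel macro (treat the topmost MAX_ERRNO
values of unsigned long as encoded errnos), and demo_alloc() is a hypothetical
helper in the style of xt_percpu_counter_alloc(). It demonstrates the
analogous narrowing failure on a 64-bit build; the netfilter case was a u64
.pcnt field on 32-bit, but the failure class is the same: the check is only
well-defined when the value stays in unsigned long.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* Same value as MAX_ERRNO in include/linux/err.h. */
#define MAX_ERRNO	4095

/* Simplified stand-in for the kernel's IS_ERR_VALUE() (illustration only). */
#define IS_ERR_VALUE(x)	((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

/* Hypothetical allocator: returns either a pointer or a negative errno,
 * both encoded in an unsigned long, like xt_percpu_counter_alloc(). */
static unsigned long demo_alloc(int fail)
{
	static int object;

	if (fail)
		return (unsigned long)-ENOMEM;
	return (unsigned long)&object;
}

int main(void)
{
	/* Pattern the patch adopts: keep the result in an unsigned long
	 * temporary, test it, and only then store it in the final field. */
	unsigned long pcnt = demo_alloc(1);

	/* Broken pattern: squeeze the value through a type that is not
	 * unsigned long.  On a 64-bit build the truncated copy is no longer
	 * >= (unsigned long)-MAX_ERRNO, so -ENOMEM slips past the check. */
	uint32_t narrow = (uint32_t)demo_alloc(1);

	printf("unsigned long temporary: IS_ERR_VALUE = %d\n",
	       IS_ERR_VALUE(pcnt));
	printf("truncated 32-bit copy  : IS_ERR_VALUE = %d\n",
	       IS_ERR_VALUE(narrow));
	return 0;
}

On a typical x86_64 build this prints 1 for the unsigned long temporary and 0
for the truncated copy, i.e. a failed allocation is silently treated as valid;
the temporary-variable dance in the hunks below avoids that by never letting
the value leave unsigned long before the check.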
The original patch from Andrzej is here:

http://patchwork.ozlabs.org/patch/582970/

This patch clashed with the input validation fixes for x_tables.
Signed-off-by: Pablo Neira Ayuso <pablo@...filter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Acked-by: Michal Kubecek <mkubecek@...e.cz>
---
 include/linux/netfilter/x_tables.h |    6 +++---
 net/ipv4/netfilter/arp_tables.c    |    6 ++++--
 net/ipv4/netfilter/ip_tables.c     |    6 ++++--
 net/ipv6/netfilter/ip6_tables.c    |    6 ++++--
 4 files changed, 15 insertions(+), 9 deletions(-)
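
To make the hunks below easier to follow, this is roughly what the patched
4.4 code looks like once they are applied, pieced together from the diff
itself (surrounding context abbreviated, caller shown in its arp_tables.c
flavour); consult the actual tree for the authoritative version.

/* include/linux/netfilter/x_tables.h, after the patch */
static inline unsigned long xt_percpu_counter_alloc(void)
{
	if (nr_cpu_ids > 1) {
		void __percpu *res = __alloc_percpu(sizeof(struct xt_counters),
						    sizeof(struct xt_counters));

		if (res == NULL)
			return -ENOMEM;

		return (__force unsigned long) res;
	}

	return 0;
}

/* caller pattern from find_check_entry(), net/ipv4/netfilter/arp_tables.c */
	unsigned long pcnt;

	pcnt = xt_percpu_counter_alloc();
	if (IS_ERR_VALUE(pcnt))
		return -ENOMEM;
	e->counters.pcnt = pcnt;

The functional point is that the value now stays in an unsigned long from
allocation through the error check; the u64 .pcnt field is only written once
IS_ERR_VALUE() has ruled out an errno.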
--- a/include/linux/netfilter/x_tables.h
+++ b/include/linux/netfilter/x_tables.h
@@ -381,16 +381,16 @@ static inline unsigned long ifname_compa
  * allows us to return 0 for single core systems without forcing
  * callers to deal with SMP vs. NONSMP issues.
  */
 -static inline u64 xt_percpu_counter_alloc(void)
 +static inline unsigned long xt_percpu_counter_alloc(void)
  {
  	if (nr_cpu_ids > 1) {
  		void __percpu *res = __alloc_percpu(sizeof(struct xt_counters),
  						    sizeof(struct xt_counters));
  
  		if (res == NULL)
 -			return (u64) -ENOMEM;
 +			return -ENOMEM;
  
 -		return (u64) (__force unsigned long) res;
 +		return (__force unsigned long) res;
  	}
  
  	return 0;
--- a/net/ipv4/netfilter/arp_tables.c
+++ b/net/ipv4/netfilter/arp_tables.c
@@ -511,11 +511,13 @@ find_check_entry(struct arpt_entry *e, c
  {
  	struct xt_entry_target *t;
  	struct xt_target *target;
 +	unsigned long pcnt;
  	int ret;
  
 -	e->counters.pcnt = xt_percpu_counter_alloc();
 -	if (IS_ERR_VALUE(e->counters.pcnt))
 +	pcnt = xt_percpu_counter_alloc();
 +	if (IS_ERR_VALUE(pcnt))
  		return -ENOMEM;
 +	e->counters.pcnt = pcnt;
  
  	t = arpt_get_target(e);
  	target = xt_request_find_target(NFPROTO_ARP, t->u.user.name,
--- a/net/ipv4/netfilter/ip_tables.c
+++ b/net/ipv4/netfilter/ip_tables.c
@@ -653,10 +653,12 @@ find_check_entry(struct ipt_entry *e, st
  	unsigned int j;
  	struct xt_mtchk_param mtpar;
  	struct xt_entry_match *ematch;
 +	unsigned long pcnt;
  
 -	e->counters.pcnt = xt_percpu_counter_alloc();
 -	if (IS_ERR_VALUE(e->counters.pcnt))
 +	pcnt = xt_percpu_counter_alloc();
 +	if (IS_ERR_VALUE(pcnt))
  		return -ENOMEM;
 +	e->counters.pcnt = pcnt;
  
  	j = 0;
  	mtpar.net = net;
--- a/net/ipv6/netfilter/ip6_tables.c
+++ b/net/ipv6/netfilter/ip6_tables.c
@@ -666,10 +666,12 @@ find_check_entry(struct ip6t_entry *e, s
  	unsigned int j;
  	struct xt_mtchk_param mtpar;
  	struct xt_entry_match *ematch;
 +	unsigned long pcnt;
  
 -	e->counters.pcnt = xt_percpu_counter_alloc();
 -	if (IS_ERR_VALUE(e->counters.pcnt))
 +	pcnt = xt_percpu_counter_alloc();
 +	if (IS_ERR_VALUE(pcnt))
  		return -ENOMEM;
 +	e->counters.pcnt = pcnt;
  
  	j = 0;
  	mtpar.net = net;