Date: Thu, 10 Aug 2023 11:05:52 -0400
From: Sishuai Gong <sishuai.system@...il.com>
To: Julian Anastasov <ja@....bg>
Cc: horms@...ge.net.au,
 Linux Kernel Network Developers <netdev@...r.kernel.org>,
 lvs-devel@...r.kernel.org
Subject: Re: Race over table->data in proc_do_sync_threshold()

Hello,

I am not familiar with the code but I would like to give it a try :).

It seems to me that replacing the second memcpy with WRITE_ONCE()
is not necessary as long as we still hold the lock. Other than that, is the
diff below close to what you suggested?

diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
index 62606fb44d02..b4e22e30b896 100644
--- a/net/netfilter/ipvs/ip_vs_ctl.c
+++ b/net/netfilter/ipvs/ip_vs_ctl.c
@@ -1876,6 +1876,7 @@ static int
 proc_do_sync_threshold(struct ctl_table *table, int write,
                       void *buffer, size_t *lenp, loff_t *ppos)
 {
+      struct netns_ipvs *ipvs = table->extra2;
        int *valp = table->data;
        int val[2];
        int rc;
@@ -1885,6 +1886,7 @@ proc_do_sync_threshold(struct ctl_table *table, int write,
                .mode = table->mode,
        };

+      mutex_lock(&ipvs->sync_mutex);
        memcpy(val, valp, sizeof(val));
        rc = proc_dointvec(&tmp, write, buffer, lenp, ppos);
        if (write) {
@@ -1894,6 +1896,7 @@ proc_do_sync_threshold(struct ctl_table *table, int write,
                else
                        memcpy(valp, val, sizeof(val));
        }
+      mutex_unlock(&ipvs->sync_mutex);
        return rc;
 }

@@ -4321,6 +4324,7 @@ static int __net_init ip_vs_control_net_init_sysctl(struct netns_ipvs *ipvs)
        ipvs->sysctl_sync_threshold[0] = DEFAULT_SYNC_THRESHOLD;
        ipvs->sysctl_sync_threshold[1] = DEFAULT_SYNC_PERIOD;
        tbl[idx].data = &ipvs->sysctl_sync_threshold;
+      tbl[idx].extra2 = ipvs;
        tbl[idx++].maxlen = sizeof(ipvs->sysctl_sync_threshold);
        ipvs->sysctl_sync_refresh_period = DEFAULT_SYNC_REFRESH_PERIOD;
        tbl[idx++].data = &ipvs->sysctl_sync_refresh_period;
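
For anyone following along, here is a minimal user-space sketch of the same
read/modify/write-back pattern (not kernel code; the file name, starting
values and iteration count are made up). Two unsynchronized writers lose and
interleave each other's updates to the two-int pair, and wrapping the whole
sequence in a mutex, as in the diff above, serializes them:

/*
 * Build with: gcc -O2 -pthread race_demo.c -o race_demo
 * Illustration only -- a user-space analogue of the racy memcpy pair in
 * proc_do_sync_threshold(), not the kernel code itself.
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define ITERS 1000000

static int shared[2] = { 3, 50 };	/* stands in for table->data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int use_lock;			/* 0: racy, 1: serialized */

static void *writer(void *arg)
{
	int val[2];
	int i;

	(void)arg;
	for (i = 0; i < ITERS; i++) {
		if (use_lock)
			pthread_mutex_lock(&lock);
		memcpy(val, shared, sizeof(val));	/* snapshot, as in the handler */
		val[0]++;				/* pretend new sysctl values */
		val[1]++;
		memcpy(shared, val, sizeof(val));	/* write back */
		if (use_lock)
			pthread_mutex_unlock(&lock);
	}
	return NULL;
}

static void run(int locked)
{
	pthread_t t1, t2;

	shared[0] = 3;
	shared[1] = 50;
	use_lock = locked;
	pthread_create(&t1, NULL, writer, NULL);
	pthread_create(&t2, NULL, writer, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("%-10s shared = { %d, %d }, expected { %d, %d }\n",
	       locked ? "locked:" : "unlocked:",
	       shared[0], shared[1], 3 + 2 * ITERS, 50 + 2 * ITERS);
}

int main(void)
{
	run(0);		/* updates are lost when the writers race */
	run(1);		/* with the mutex the counts come out exact */
	return 0;
}

On a typical run the unlocked pass usually ends up short of the expected
counts, which is the same lost/mixed-update problem the mutex in the patch
is meant to close.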

> On Aug 10, 2023, at 2:20 AM, Julian Anastasov <ja@....bg> wrote:
> 
> 
> Hello,
> 
> On Wed, 9 Aug 2023, Sishuai Gong wrote:
> 
>> Hi,
>> 
>> We observed races over (struct ctl_table *) table->data when two threads
>> are running proc_do_sync_threshold() in parallel, as shown below:
>> 
>> Thread-1                                Thread-2
>> memcpy(val, valp, sizeof(val));         memcpy(valp, val, sizeof(val));
>> 
>> This race would probably corrupt table->data. Would it be better to add a lock?
> 
> We can put mutex_lock(&ipvs->sync_mutex) before the first
> memcpy and use two WRITE_ONCE() calls instead of the second memcpy. But
> this requires extra2 = ipvs in ip_vs_control_net_init_sysctl():
> 
> tbl[idx].data = &ipvs->sysctl_sync_threshold;
> + tbl[idx].extra2 = ipvs;
> 
> Will you provide a patch?
> 
> Regards
> 
> --
> Julian Anastasov <ja@....bg>
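
For completeness, here is a user-space sketch of the writer/lockless-reader
pairing described above: the writer, still serialized by the mutex,
publishes each int with a single volatile store (a simplified stand-in for
the kernel's WRITE_ONCE()) instead of a memcpy() over the pair, while a
reader samples the values without taking the mutex. Names and values are
again made up for illustration; this is only my reading of the suggestion,
not the actual kernel change.

/* Build with: gcc -O2 -pthread write_once_demo.c -o write_once_demo */
#include <pthread.h>
#include <stdio.h>

/* simplified user-space stand-ins for the kernel macros */
#define WRITE_ONCE(x, v)  (*(volatile __typeof__(x) *)&(x) = (v))
#define READ_ONCE(x)      (*(volatile __typeof__(x) *)&(x))

static int threshold[2] = { 3, 50 };	/* stands in for sysctl_sync_threshold */
static pthread_mutex_t sync_mutex = PTHREAD_MUTEX_INITIALIZER;
static int stop;

static void *writer(void *arg)
{
	int i;

	(void)arg;
	for (i = 0; i < 100000; i++) {
		pthread_mutex_lock(&sync_mutex);	/* writers stay serialized */
		/* publish each int with one volatile store, so a reader that
		 * does not take the mutex sees either the old or the new
		 * value of that int, never a half-written one */
		WRITE_ONCE(threshold[0], i);
		WRITE_ONCE(threshold[1], i + 47);
		pthread_mutex_unlock(&sync_mutex);
	}
	WRITE_ONCE(stop, 1);
	return NULL;
}

static void *reader(void *arg)
{
	long samples = 0;

	(void)arg;
	while (!READ_ONCE(stop)) {
		/* lockless read, mirroring code that samples the thresholds
		 * without holding sync_mutex */
		int t = READ_ONCE(threshold[0]);
		int p = READ_ONCE(threshold[1]);

		(void)t;
		(void)p;
		samples++;
	}
	printf("reader took %ld lockless samples\n", samples);
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&r, NULL, reader, NULL);
	pthread_create(&w, NULL, writer, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}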


