lists.openwall.net — Open Source and information security mailing list archives
Date: Mon, 31 Jul 2017 07:47:43 -0700
From: John Fastabend <john.fastabend@...il.com>
To: Daniel Borkmann <daniel@...earbox.net>, "Levin, Alexander (Sasha Levin)" <alexander.levin@...izon.com>
CC: "davem@...emloft.net" <davem@...emloft.net>, "ast@...com" <ast@...com>, "netdev@...r.kernel.org" <netdev@...r.kernel.org>, "brouer@...hat.com" <brouer@...hat.com>, "andy@...yhouse.net" <andy@...yhouse.net>
Subject: Re: [net-next PATCH 11/12] net: add notifier hooks for devmap bpf map

On 07/31/2017 01:55 AM, Daniel Borkmann wrote:
> On 07/30/2017 03:28 PM, Levin, Alexander (Sasha Levin) wrote:
>> On Mon, Jul 17, 2017 at 09:30:02AM -0700, John Fastabend wrote:
>>> @@ -341,9 +368,11 @@ static int dev_map_update_elem(struct bpf_map *map, void *key, void *value,
>>>  	 * Remembering the driver side flush operation will happen before the
>>>  	 * net device is removed.
>>>  	 */
>>> +	mutex_lock(&dev_map_list_mutex);
>>>  	old_dev = xchg(&dtab->netdev_map[i], dev);
>>>  	if (old_dev)
>>>  		call_rcu(&old_dev->rcu, __dev_map_entry_free);
>>> +	mutex_unlock(&dev_map_list_mutex);
>>>
>>>  	return 0;
>>>  }
>>
>> This function gets called under an RCU critical section, where we can't grab mutexes:
>
> Agree, and the same goes for the delete callback; that mutex is not
> allowed in this context. If I recall, this was for the devmap netdev
> notifier, in order to check whether we need to purge dev entries from
> the map so that the device can be unregistered gracefully. Given that
> devmap ops like update/delete are only allowed from user space, we
> could look into whether this map type actually needs to hold RCU at
> all here, or the other option is to try to get rid of the mutex
> altogether.
> John, could you take a look for a fix?
>
> Thanks a lot,
> Daniel

I'll work up a fix today/tomorrow. Thanks.