Open Source and information security mailing list archives
Date: Wed, 4 Nov 2015 11:26:14 +0900
From: Namhyung Kim <namhyung@...nel.org>
To: Tom Zanussi <tom.zanussi@...ux.intel.com>
CC: rostedt@...dmis.org, daniel.wagner@...-carit.de,
	masami.hiramatsu.pt@...achi.com, josh@...htriplett.org,
	andi@...stfloor.org, mathieu.desnoyers@...icios.com,
	peterz@...radead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v11 08/28] tracing: Add lock-free tracing_map

Hi Tom,

On Tue, Nov 03, 2015 at 07:47:52PM -0600, Tom Zanussi wrote:
> Hi Namhyung,
>
> On Mon, 2015-11-02 at 16:08 +0900, Namhyung Kim wrote:
> > I thought it'd be better if users could see which ones are the real
> > drops. IOW if the drop count is much smaller than the normal event
> > count, [s]he might want to ignore the occasional drops. Otherwise,
> > [s]he should restart with a bigger table. This requires accurate
> > counts of events and drops, though.
>
> OK, how about the below - it basically moves the drops set/test/inc into
> tracing_map_insert(), as well as a total hits count. So those values
> will be available for users to use in deciding whether to use the data
> or restart with a bigger table, and the loop is bailed out of only if no
> matching keys are found and there are drops, so callers can continue
> updating existing entries.

But if a key didn't get a desired index, it would still fail to update..
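The quoted policy - ignore occasional drops when they are a small fraction of total events, otherwise restart with a bigger table - could be sketched as a caller-side check. This is a hypothetical helper, not part of the patch; the threshold parameter is an assumption:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical policy helper: accept the collected data if drops are a
 * negligible fraction of all insert attempts, otherwise signal that the
 * user should restart tracing with a bigger table. The ratio threshold
 * is an assumption for illustration, not something from the patch. */
static bool map_data_usable(uint64_t hits, uint64_t drops,
			    double max_drop_ratio)
{
	uint64_t total = hits + drops;

	if (total == 0)
		return true;	/* no events at all: nothing to distrust */
	return (double)drops / (double)total <= max_drop_ratio;
}
```

This is exactly why the accounting has to be accurate: the decision is only as good as the hits/drops counts it is based on.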
> Users who want the original behavior still get the NULL return and can
> stop calling tracing_map_insert() as before:
>
> struct tracing_map_elt *tracing_map_insert(struct tracing_map *map, void *key)
> {
> 	u32 idx, key_hash, test_key;
> 	struct tracing_map_entry *entry;
>
> 	key_hash = jhash(key, map->key_size, 0);
> 	if (key_hash == 0)
> 		key_hash = 1;
> 	idx = key_hash >> (32 - (map->map_bits + 1));
>
> 	while (1) {
> 		idx &= (map->map_size - 1);
> 		entry = TRACING_MAP_ENTRY(map->map, idx);
> 		test_key = entry->key;
>
> 		if (test_key && test_key == key_hash && entry->val &&
> 		    keys_match(key, entry->val->key, map->key_size)) {
> 			atomic64_inc(&map->hits);
> 			return entry->val;
> 		}
>
> 		if (atomic64_read(&map->drops)) {
> 			atomic64_inc(&map->drops);
> 			break;
> 		}

IMHO this check should be removed.

> 		if (!test_key && !cmpxchg(&entry->key, 0, key_hash)) {
> 			struct tracing_map_elt *elt;
>
> 			elt = get_free_elt(map);
> 			if (!elt) {
> 				atomic64_inc(&map->drops);

And reset entry->key here..

> 				break;
> 			}
> 			memcpy(elt->key, key, map->key_size);
> 			entry->val = elt;
>
> 			atomic64_inc(&map->hits);
> 			return entry->val;
> 		}
> 		idx++;
> 	}
>
> 	return NULL;
> }

Then tracing_map_lookup() can be implemented like tracing_map_insert(),
but bailing out of the loop when test_key is 0. The caller might then do:

	if (!atomic64_read(&map->drops))
		elt = tracing_map_insert(...);
	else
		elt = tracing_map_lookup(...);

Thanks,
Namhyung
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/