Message-ID: <cbdd45b2-4a03-f11c-f00f-5da90d5dd2e7@meta.com>
Date: Tue, 27 Jun 2023 17:54:14 -0700
From: Alexei Starovoitov <ast@...a.com>
To: Hou Tao <houtao@...weicloud.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
daniel@...earbox.net, andrii@...nel.org, void@...ifault.com,
paulmck@...nel.org
Cc: tj@...nel.org, rcu@...r.kernel.org, netdev@...r.kernel.org,
bpf@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH v2 bpf-next 12/13] bpf: Introduce bpf_mem_free_rcu() similar to kfree_rcu().
On 6/24/23 12:53 AM, Hou Tao wrote:
> Hi,
>
> On 6/24/2023 11:13 AM, Alexei Starovoitov wrote:
>> From: Alexei Starovoitov <ast@...nel.org>
>>
>> Introduce bpf_mem_[cache_]free_rcu() similar to kfree_rcu().
>> Unlike bpf_mem_[cache_]free(), which links objects into the per-cpu
>> free list for immediate reuse, the _rcu() flavor waits for an RCU grace
>> period and then moves objects onto the free_by_rcu_ttrace list, where
>> they wait for an RCU tasks trace grace period before being freed into slab.
>>
> SNIP
>> +static void check_free_by_rcu(struct bpf_mem_cache *c)
>> +{
>> + struct llist_node *llnode, *t;
>> +
>> + if (llist_empty(&c->free_by_rcu) && llist_empty(&c->free_llist_extra_rcu))
>> + return;
>> +
>> + /* drain free_llist_extra_rcu */
>> + llist_for_each_safe(llnode, t, llist_del_all(&c->free_llist_extra_rcu))
>> + if (__llist_add(llnode, &c->free_by_rcu))
>> + c->free_by_rcu_tail = llnode;
>
> Just like add_obj_to_free_list(), we should do conditional
> local_irq_save(flags) and local_inc_return(&c->active) for
> free_by_rcu as well; otherwise free_by_rcu may be corrupted by a bpf
> program running in NMI context.
Good catch. Will do.