Message-ID: <4E6A1738-4138-4F48-95ED-BD139A72B296@fb.com>
Date: Fri, 13 Nov 2020 20:48:07 +0000
From: Song Liu <songliubraving@...com>
To: Roman Gushchin <guro@...com>
CC: bpf <bpf@...r.kernel.org>, Alexei Starovoitov <ast@...nel.org>,
"Daniel Borkmann" <daniel@...earbox.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Andrii Nakryiko <andrii@...nel.org>,
Shakeel Butt <shakeelb@...gle.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Kernel Team <Kernel-team@...com>
Subject: Re: [PATCH bpf-next v5 06/34] bpf: prepare for memcg-based memory
accounting for bpf maps
> On Nov 13, 2020, at 11:40 AM, Roman Gushchin <guro@...com> wrote:
>
> On Fri, Nov 13, 2020 at 09:46:49AM -0800, Song Liu wrote:
>>
>>
>>> On Nov 12, 2020, at 2:15 PM, Roman Gushchin <guro@...com> wrote:
>>
>> [...]
>>
>>>
>>> +#ifdef CONFIG_MEMCG_KMEM
>>> +static __always_inline int __bpf_map_update_elem(struct bpf_map *map, void *key,
>>> +						 void *value, u64 flags)
>>> +{
>>> +	struct mem_cgroup *old_memcg;
>>> +	bool in_interrupt;
>>> +	int ret;
>>> +
>>> +	/*
>>> +	 * If an update from an interrupt context results in a memory allocation,
>>> +	 * the memory cgroup to charge can't be determined from the context
>>> +	 * of the current task. Instead, we charge the memory cgroup that
>>> +	 * contained the process that created the map.
>>> +	 */
>>> +	in_interrupt = in_interrupt();
>>> +	if (in_interrupt)
>>> +		old_memcg = set_active_memcg(map->memcg);
>>
>> set_active_memcg() checks in_interrupt() again. Maybe we can introduce another
>> helper to avoid checking it twice? Something like
>>
>> static inline struct mem_cgroup *
>> set_active_memcg_int(struct mem_cgroup *memcg)
>> {
>> 	struct mem_cgroup *old;
>>
>> 	old = this_cpu_read(int_active_memcg);
>> 	this_cpu_write(int_active_memcg, memcg);
>> 	return old;
>> }
>
> Yeah, it's a good idea!
>
> The in_interrupt() check is very cheap (like checking some bits in a per-cpu variable),
> so I don't think there will be any measurable difference. So I suggest implementing
> it later as an enhancement on top (maybe in the next merge window), to avoid another
> delay. Otherwise I'll need to send a patch to mm@, wait for reviews and for inclusion
> into the mm tree, etc. Does it work for you?
Yeah, that works.
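
For completeness, here is roughly the shape I had in mind for the caller,
with the double in_interrupt() check folded away. This is just a sketch:
set_active_memcg_int() is only a suggested name, not an existing API, and
the body of the wrapper beyond the snippet quoted above is my guess.

static __always_inline int __bpf_map_update_elem(struct bpf_map *map, void *key,
						 void *value, u64 flags)
{
	struct mem_cgroup *old_memcg;
	bool in_interrupt;
	int ret;

	/*
	 * set_active_memcg() already branches on in_interrupt() internally,
	 * so calling an interrupt-only variant here avoids testing the same
	 * condition twice on this path.
	 */
	in_interrupt = in_interrupt();
	if (in_interrupt)
		old_memcg = set_active_memcg_int(map->memcg);

	ret = map->ops->map_update_elem(map, key, value, flags);

	if (in_interrupt)
		set_active_memcg_int(old_memcg);

	return ret;
}
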
Acked-by: Song Liu <songliubraving@...com>