Message-ID: <20201121025904.GA478375@carbon.DHCP.thefacebook.com>
Date: Fri, 20 Nov 2020 18:59:04 -0800
From: Roman Gushchin <guro@...com>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
CC: <bpf@...r.kernel.org>, <ast@...nel.org>, <daniel@...earbox.net>,
<netdev@...r.kernel.org>, <andrii@...nel.org>,
<akpm@...ux-foundation.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <kernel-team@...com>
Subject: Re: [PATCH bpf-next v7 32/34] bpf: eliminate rlimit-based memory
accounting infra for bpf maps
On Fri, Nov 20, 2020 at 06:52:27PM -0800, Alexei Starovoitov wrote:
> On Thu, Nov 19, 2020 at 09:37:52AM -0800, Roman Gushchin wrote:
> > static void bpf_map_put_uref(struct bpf_map *map)
> > @@ -619,7 +562,7 @@ static void bpf_map_show_fdinfo(struct seq_file *m, struct file *filp)
> > "value_size:\t%u\n"
> > "max_entries:\t%u\n"
> > "map_flags:\t%#x\n"
> > - "memlock:\t%llu\n"
> > + "memlock:\t%llu\n" /* deprecated */
> > "map_id:\t%u\n"
> > "frozen:\t%u\n",
> > map->map_type,
> > @@ -627,7 +570,7 @@ static void bpf_map_show_fdinfo(struct seq_file *m, struct file *filp)
> > map->value_size,
> > map->max_entries,
> > map->map_flags,
> > - map->memory.pages * 1ULL << PAGE_SHIFT,
> > + 0LLU,
>
> The set looks great to me overall, but the above change is problematic.
> There are tools out there that read this value.
> Returning zero might cause on-call alarms to trigger.
> I think we can be more accurate here.
> Instead of zero, the kernel can return
> round_up(max_entries * round_up(key_size + value_size, 8), PAGE_SIZE)
> It's not the same as before, but at least the numbers won't suddenly
> go to zero, and comparisons between maps will still be meaningful.
> Of course, we could introduce a per-map-type callback to calculate the
> page size, but IMO that would be overkill. These monitoring tools don't
> care about the precise number, but rather about the relative value and
> its growth from one version of the application to another.
>
> If Daniel doesn't find other issues this can be fixed in the follow up.
Makes total sense. I'll prepare a follow-up patch.
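
Something along these lines, perhaps (rough and untested; the helper name
bpf_map_memory_footprint() is just a placeholder I'm using here, not an
existing symbol, while round_up() and PAGE_SIZE are the usual kernel
helpers):

static unsigned long bpf_map_memory_footprint(const struct bpf_map *map)
{
	unsigned long size;

	/* One 8-byte-rounded record per entry, as suggested above. */
	size = round_up(map->key_size + map->value_size, 8);

	return round_up(map->max_entries * size, PAGE_SIZE);
}

Then bpf_map_show_fdinfo() would print bpf_map_memory_footprint(map)
instead of the hardcoded 0LLU. For, say, max_entries = 10000,
key_size = 4 and value_size = 8, that gives
round_up(10000 * 16, 4096) = 163840 bytes.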
Thanks!