Message-ID: <CA+khW7iq+UKsfQxdT3QpSqPUFN8gQWWDLoQ9zxB=uWTs63AZEA@mail.gmail.com>
Date:   Mon, 28 Mar 2022 10:46:15 -0700
From:   Hao Luo <haoluo@...gle.com>
To:     Yonghong Song <yhs@...com>
Cc:     Alexei Starovoitov <ast@...nel.org>,
        Andrii Nakryiko <andrii@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        KP Singh <kpsingh@...nel.org>, Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, bpf@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC bpf-next 0/2] Mmapable task local storage.

On Mon, Mar 28, 2022 at 10:39 AM Hao Luo <haoluo@...gle.com> wrote:
>
> Hi Yonghong,
>
> On Fri, Mar 25, 2022 at 12:16 PM Yonghong Song <yhs@...com> wrote:
> >
> > On 3/24/22 4:41 PM, Hao Luo wrote:
> > > Some map types support the mmap operation, which allows userspace to
> > > communicate with BPF programs directly. Currently, only arraymap
> > > and ringbuf have mmap implemented.
> > >
> > > However, in some use cases, when multiple program instances can
> > > run concurrently, globally shared mmapable memory can cause races.
> > > In that case, userspace needs to provide the necessary synchronization
> > > to coordinate the use of the mapped global data. This can become a
> > > bottleneck.
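
For reference, this is roughly how such globally shared mmapable memory
is used from userspace today (a minimal sketch assuming libbpf; the map
name "global_cfg" and the single __u64 value are made up, and error
handling is trimmed). Every program instance and every process see the
same page, which is where the race comes from:

#include <bpf/bpf.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	LIBBPF_OPTS(bpf_map_create_opts, opts, .map_flags = BPF_F_MMAPABLE);
	__u64 *val;
	int fd;

	/* One global value shared by every BPF program instance. */
	fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "global_cfg",
			    sizeof(__u32), sizeof(__u64), 1, &opts);
	if (fd < 0)
		return 1;

	/* Map the value area directly; no further bpf() syscalls needed. */
	val = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE,
		   MAP_SHARED, fd, 0);
	if (val == MAP_FAILED)
		return 1;

	*val = 42;	/* visible to all programs at once, hence the race */
	return 0;
}
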
> >
> > I can see your use case here. Each calling process can get its
> > corresponding bpf program task local storage data through the
> > mmap interface. As you mentioned, there is a tradeoff between
> > using more memory and synchronizing access to global data.
> >
> > I am thinking that a bpf_iter approach could achieve a similar
> > result. We could implement a bpf_iter for the task local storage
> > map; optionally, it could take a tid to retrieve the data for that
> > particular tid. This way, user space needs an explicit syscall, but
> > does not need to allocate more memory than necessary.
> >
> > WDYT?
> >
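
A rough sketch of that direction, for concreteness (assuming an
iter/task program plus bpf_task_storage_get(); the map "tsk_map", the
__u64 value, and the tid filter via the global "target_tid" are all
illustrative, not from this series):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, __u64);
} tsk_map SEC(".maps");

const volatile pid_t target_tid;	/* set by userspace before attaching */

SEC("iter/task")
int dump_task_storage(struct bpf_iter__task *ctx)
{
	struct task_struct *task = ctx->task;
	__u64 *val;

	if (!task || (target_tid && task->pid != target_tid))
		return 0;

	/* Emit this task's storage, if any, to the seq_file that
	 * userspace reads via read() on the iterator fd. */
	val = bpf_task_storage_get(&tsk_map, task, 0, 0);
	if (val)
		BPF_SEQ_PRINTF(ctx->meta->seq, "%d %llu\n", task->pid, *val);

	return 0;
}

char LICENSE[] SEC("license") = "GPL";
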
>
> Thanks for the suggestion. I have two thoughts about bpf_iter + tid and mmap:
>
> - mmap prevents the calling task from reading other tasks' values.
> Using bpf_iter, one can pass another task's tid to get its value. I
> assume there are two potential ways of passing a tid to bpf_iter: one
> is to use global data in the bpf prog, the other is to add a
> tid-parameterized iter_link. The first is not easy for unprivileged
> tasks to use. For the second, we need to create one iter_link object
> for each tid of interest. Neither may be easy to use.
>
> - Regarding adding an explicit syscall: I thought about adding
> write/read syscall support for task local storage maps, similar to
> reading values from an iter_link. Writing to or reading from a task
> local storage map would update or read the current task's value. I
> think this could achieve the same effect as mmap.
>

Actually, my use case for mmap on task local storage is to allow
userspace to pass FDs into bpf progs. Some of the helpers I want to add
need to take an FD as a parameter, and the bpf progs can run
concurrently, so using global data is racy. Mmapable task local
storage is the best solution I can find for this purpose.
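
To make that concrete, here is a hypothetical sketch of the BPF side
(the map "fd_map", the "struct fd_slot" layout, and the attach point
are all made up, and how mmap-ability would be flagged at map creation
isn't shown in this cover letter):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct fd_slot {
	int fd;		/* written by userspace through its mmaped slot */
};

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, struct fd_slot);
} fd_map SEC(".maps");

SEC("tp_btf/sys_enter")
int BPF_PROG(on_sys_enter, struct pt_regs *regs, long id)
{
	struct fd_slot *slot;

	/* The slot belongs to the task making the syscall, so no
	 * coordination with other tasks is needed. */
	slot = bpf_task_storage_get(&fd_map, bpf_get_current_task_btf(), 0, 0);
	if (slot)
		/* slot->fd would be handed to the new fd-taking helper here. */
		bpf_printk("fd from userspace: %d", slot->fd);

	return 0;
}

char LICENSE[] SEC("license") = "GPL";
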

Song also mentioned to me offline that mmapable task local storage
may be useful for his use case.

I am actually open to other proposals.

> > >
> > > It would be great to have mmapable local storage in that case.
> > > This patchset adds that.
> > >
> > > Mmap isn't a BPF syscall, so unprivileged users can also use it to
> > > interact with maps.
> > >
> > > Currently, the only way of allocating a mmapable map area is using
> > > vmalloc(), and it is only used at map allocation time. vmalloc()
> > > may sleep, so it is not suitable for maps that may allocate memory
> > > in an atomic context, such as local storage. Local storage uses
> > > kmalloc() with GFP_ATOMIC, which doesn't sleep. This patch uses
> > > kmalloc() with GFP_ATOMIC for the mmapable map area as well.
> > >
> > > Allocating mmapable memory has a page-alignment requirement, so we
> > > have to deliberately allocate more memory than necessary to obtain
> > > an address where sdata->data is aligned on a page boundary (see the
> > > sketch after the list below). The calculation of the mmapable
> > > allocation size and the actual allocation/deallocation are packaged
> > > in three functions:
> > >
> > >   - bpf_map_mmapable_alloc_size()
> > >   - bpf_map_mmapable_kzalloc()
> > >   - bpf_map_mmapable_kfree()
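
As a sketch of the size calculation and alignment described above (the
names only mirror the cover letter, they are not the patch's code;
PAGE_ALIGN, round_up and kzalloc are the usual kernel primitives; the
real code also has to remember the raw pointer so the matching kfree
helper can free it):

#include <linux/mm.h>
#include <linux/slab.h>

static size_t mmapable_alloc_size(size_t hdr_size, size_t data_size)
{
	/* Worst case, the first page-aligned address past the header sits
	 * PAGE_SIZE - 1 bytes after (allocation start + hdr_size). */
	return hdr_size + (PAGE_SIZE - 1) + round_up(data_size, PAGE_SIZE);
}

static void *mmapable_kzalloc(size_t hdr_size, size_t data_size,
			      gfp_t gfp, void **raw_out)
{
	void *raw, *obj;

	/* Plain kmalloc-family allocation, so GFP_ATOMIC works and this
	 * can run where local storage allocates today. */
	raw = kzalloc(mmapable_alloc_size(hdr_size, data_size), gfp);
	if (!raw)
		return NULL;

	/* Shift the object so that its data area (hdr_size bytes into the
	 * object) starts exactly on a page boundary. */
	obj = (void *)PAGE_ALIGN((unsigned long)raw + hdr_size) - hdr_size;

	*raw_out = raw;		/* needed later to kfree() the allocation */
	return obj;
}
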
> > >
> > > BPF local storage uses them to provide a generic mmap API:
> > >
> > >   - bpf_local_storage_mmap()
> > >
> > > And task local storage adds the mmap callback:
> > >
> > >   - task_storage_map_mmap()
> > >
> > > When an application calls mmap on a task local storage map, it gets
> > > its own local storage.
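
From userspace, usage would then look roughly like this (a sketch
against the behavior described above; the pinned path, the int-sized
slot, and the page-sized mmap length are assumptions, not taken from
the patches):

#include <bpf/bpf.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	int map_fd;
	int *slot;

	/* Hypothetical pin path; any fd for the task local storage map works. */
	map_fd = bpf_obj_get("/sys/fs/bpf/fd_map");
	if (map_fd < 0)
		return 1;

	/* Per the cover letter, mmap() hands the calling task its own
	 * storage, so no coordination with other tasks is required. */
	slot = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, map_fd, 0);
	if (slot == MAP_FAILED)
		return 1;

	/* Publish an fd number for the BPF program running in this task. */
	*slot = STDIN_FILENO;
	return 0;
}
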
> > >
> > > Overall, mmapable local storage trades memory for flexibility and
> > > efficiency. It brings memory fragmentation but can make programs
> > > stateless, so it is useful in some cases.
> > >
> > > Hao Luo (2):
> > >    bpf: Mmapable local storage.
> > >    selftests/bpf: Test mmapable task local storage.
> > >
> > >   include/linux/bpf.h                           |  4 +
> > >   include/linux/bpf_local_storage.h             |  5 +-
> > >   kernel/bpf/bpf_local_storage.c                | 73 +++++++++++++++++--
> > >   kernel/bpf/bpf_task_storage.c                 | 40 ++++++++++
> > >   kernel/bpf/syscall.c                          | 67 +++++++++++++++++
> > >   .../bpf/prog_tests/task_local_storage.c       | 38 ++++++++++
> > >   .../bpf/progs/task_local_storage_mmapable.c   | 38 ++++++++++
> > >   7 files changed, 257 insertions(+), 8 deletions(-)
> > >   create mode 100644 tools/testing/selftests/bpf/progs/task_local_storage_mmapable.c
> > >
