Message-ID: <a634037bb023973b8263a65b93fa73a7a5c0dc52.camel@intel.com>
Date:   Tue, 24 May 2022 17:40:53 +0000
From:   "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>
To:     "hch@....de" <hch@....de>, "mcgrof@...nel.org" <mcgrof@...nel.org>,
        "dave@...olabs.net" <dave@...olabs.net>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "daniel@...earbox.net" <daniel@...earbox.net>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "ast@...nel.org" <ast@...nel.org>,
        "bpf@...r.kernel.org" <bpf@...r.kernel.org>,
        "kernel-team@...com" <kernel-team@...com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "song@...nel.org" <song@...nel.org>,
        "Torvalds, Linus" <torvalds@...ux-foundation.org>,
        "arnd@...db.de" <arnd@...db.de>
Subject: Re: [PATCH v3 bpf-next 5/8] bpf: use module_alloc_huge for
 bpf_prog_pack

On Sat, 2022-05-21 at 13:06 -0700, Luis Chamberlain wrote:
> On Sat, May 21, 2022 at 03:20:28AM +0000, Edgecombe, Rick P wrote:
> > On Fri, 2022-05-20 at 18:00 -0700, Luis Chamberlain wrote:
> > > although VM_FLUSH_RESET_PERMS is rather new, my concern here is
> > > that we're essentially enabling sloppy users to grow without also
> > > addressing what happens if we have to take the leash back to
> > > support VM_FLUSH_RESET_PERMS properly. If the hack to support this
> > > on architectures other than x86 is as simple as the one you have
> > > in vm_remove_mappings() today:
> > > 
> > >         if (flush_reset && !IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
> > >                 set_memory_nx(addr, area->nr_pages);
> > >                 set_memory_rw(addr, area->nr_pages);
> > >         }
> > > 
> > > then I suppose this isn't a big deal. I'm just concerned that this
> > > is a slippery slope of sloppiness leading to something we will
> > > regret later.
> > > 
> > > My intuition tells me this shouldn't be a big issue, but I just
> > > want to confirm.
> > 
> > Yea, I commented the same concern on the last thread:
> > 
> > https://lore.kernel.org/lkml/83a69976cb93e69c5ad7a9511b5e57c402eee19d.camel@intel.com/
> > 
> > Song said he plans to make kprobes and ftrace work with this new
> > allocator. If that happens VM_FLUSH_RESET_PERMS would only have one
> > user - modules. Care to chime in with your plans for modules?
> 
> My plans are to not break things and to slowly tidy things up. If
> you look at linux-next, things are at least starting to be split into
> nicer pieces. With time, I'll clean that up further without breaking
> things. You were the one who added VM_FLUSH_RESET_PERMS, wasn't that
> to deal with secmem stuff? So wouldn't you know best what to
> recommend for it?

It was originally added to correct some W^X issues: if a vmalloc
allocation was freed while still mapped executable, it left a window
of exposure. The security side could be fixed with copious
set_memory() calls in just the right order, but there was a suggestion
to make vmalloc handle it, so it could be done more efficiently and
callers would not have to know the details for at least that part of
the operation. The prog pack allocator is already more efficient with
respect to TLB flushes, so while VM_FLUSH_RESET_PERMS could still
improve it slightly, the situation is now probably better than it was
pre-VM_FLUSH_RESET_PERMS anyway. That mostly leaves the problem of
some special knowledge leaking back into the callers.
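
For context, here is a minimal sketch of the pattern
VM_FLUSH_RESET_PERMS enables. The vmalloc/set_memory helpers are the
real APIs; the wrapper function itself is illustrative only:

	#include <linux/vmalloc.h>
	#include <linux/set_memory.h>
	#include <linux/moduleloader.h>

	/* Illustrative: allocate text, make it RO+X, and rely on
	 * VM_FLUSH_RESET_PERMS so that vfree() resets the direct map
	 * permissions and does the TLB flushing in one shot. */
	static void *alloc_text_demo(unsigned long size)
	{
		void *p = module_alloc(size);

		if (!p)
			return NULL;

		/* Mark the vm_struct so permissions are reset on free */
		set_vm_flush_reset_perms(p);

		set_memory_ro((unsigned long)p, PAGE_ALIGN(size) >> PAGE_SHIFT);
		set_memory_x((unsigned long)p, PAGE_ALIGN(size) >> PAGE_SHIFT);
		return p;
	}

	/* Freeing is then just vfree(p); the caller does not need the
	 * set_memory_nx()/set_memory_rw() dance before freeing. */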

In a future solution it would hopefully be handled differently still,
using the unmapped page stuff Mike Rapoport was working on.

> 
> Seeing all this, and given that module_alloc() users are growing and
> use in this space keeps expanding a bit, I think we should rename
> module_alloc() to vmalloc_exec(), and likewise module_memfree() to
> vmalloc_exec_free(). But it would be our first __weak vmalloc, and
> I'm not sure if that's looked down upon.

A rename seems good to me. Module space is really just dynamically
allocated text space now. There used to be a vmalloc_exec() that
allocated text in plain vmalloc space, so maybe the name should have
something to denote that it goes into the special arch-specific text
space.
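
As a sketch of what the rename could look like, this is essentially
today's generic __weak module_alloc() with the proposed names swapped
in (vmalloc_exec()/vmalloc_exec_free() are hypothetical names from
this thread, not an existing API):

	#include <linux/vmalloc.h>

	/* Hypothetical: the generic __weak default, renamed. Arches
	 * with their own text region would override it, just as they
	 * override module_alloc() today. */
	void * __weak vmalloc_exec(unsigned long size)
	{
		return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END,
				GFP_KERNEL, PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS,
				NUMA_NO_NODE, __builtin_return_address(0));
	}

	void __weak vmalloc_exec_free(void *region)
	{
		vfree(region);
	}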

> 
> > If there
> > are actual near term plans to keep working on this,
> > VM_FLUSH_RESET_PERMS might be changed again or turn into something
> > else. Like if we are about to re-think everything, then it doesn't
> > matter as much to fix what would then be old.
> 
> I think it's up to you as you added it, and I'm not looking to add
> any bells or whistles, just tidy things up *slowly*.
> 
> > Besides not fixing VM_FLUSH_RESET_PERMS/hibernate though, I think
> > this allocator still feels a little rough. For example, I don't
> > think we actually know how much the huge mappings are helping.
> 
> Right, 100% agreed. The performance numbers provided are nice, but
> they are not anything folks can reproduce at all. I hinted at perf
> tooling that could be used, which would also let other users gather
> similar stats later to showcase the value if they want to move to
> huge pages.
> 
> It is a side note, and perhaps a stupid question, as I don't grok mm,
> but I'm perplexed by this: if the value of huge pages is seen as so
> high for exec stuff in the kernel, wouldn't a few folks want to try
> this for regular userspace exec stuff too? Wouldn't there be much
> bigger gains there?

Core kernel text is already 2MB mapped, on x86 at least, and it indeed
helps performance. I'd like to look into 2MB mappings for module text
as well. I can only assume that it would help performance, though;
some people wiser than me in performance work suggested it should
actually be measured to know.

> 
> > It is also allocating memory in a big chunk from a single node and
> > reusing it, where before we were allocating based on NUMA node for
> > each JIT. Would some users suffer from that? Maybe it's obvious to
> > others, but I would have expected to see more discussion of MM
> > things like that.
> 
> Curious, why was it moved to use a single node?

To allocate from the closest node you need per-node caches. When I
tried something similar with the grouped page cache, it was suggested
that per-node caches should be required. I never benchmarked the
difference, though.
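
To illustrate, a minimal sketch of what per-node caching could look
like, assuming a pack-style allocator like bpf_prog_pack; the pack_*
helpers are hypothetical, while numa_node_id() and MAX_NUMNODES are
the real NUMA primitives:

	#include <linux/numa.h>
	#include <linux/topology.h>
	#include <linux/list.h>

	/* Hypothetical per-node variant of a pack-style cache. Each
	 * node keeps its own list of partially used packs (assumed
	 * initialized elsewhere) so a JIT allocation can stay local
	 * to the requesting CPU's node. */
	static struct list_head pack_caches[MAX_NUMNODES];

	static void *pack_alloc(size_t size)
	{
		int nid = numa_node_id();	/* node of the calling CPU */
		void *p;

		/* Try to carve the request out of an existing local pack... */
		p = pack_alloc_from(&pack_caches[nid], size);	/* hypothetical */
		if (p)
			return p;

		/* ...otherwise start a new pack backed by memory on this
		 * node, e.g. via a vmalloc_node()-style allocation. */
		return pack_new(&pack_caches[nid], size, nid);	/* hypothetical */
	}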

> 
> > But I like the general direction of caching and using text_poke()
> > to write the JITs a lot. However it works, it seems to make a big
> > impact in at least some workloads.
> > 
> > So yea, it seems sloppy, but probably (...I guess?) more good for
> > users than sloppy for us.
> 
> The impact of sloppiness lies in possible odd bugs later and in
> trying to decipher what was being done. So I do have concerns about
> the immediate tribal knowledge incurred by the current
> implementation.

I am also bothered by it. I'm glad to hear someone else cares. I can
think about doing it more incrementally. The problem is that you kind
of need to know whether you can integrate with all the module_alloc()
users and get sane behavior on the back end before you can tell
whether a new interface is actually any good.

This RFC is pretty much how I think we can:
 - remove all special knowledge from callers
 - support all module_alloc() callers
 - do things more efficiently on x86
 - support all the arch specific extra capabilities that I know about

https://lore.kernel.org/lkml/20201120202426.18009-1-rick.p.edgecombe@intel.com/#r

It's why I shrug a little about writing caller code with special
knowledge in it. It's not really possible to avoid it completely with
the current interfaces IMO.

> What is your own roadmap for VM_FLUSH_RESET_PERMS? Sounds like a
> possible future re-do?

If it were me, I would start back from that RFC and try to move the
allocation side forward too. I haven't seen anything since that makes
me think it was the wrong direction. But I have employer tasks that
take priority, unfortunately. If anyone else wants to take a shot at
it, I can help review. Otherwise, hopefully I can get back to it
someday.
