Message-ID: <31d6e8d6-0747-a282-746b-5c144a9970bb@canonical.com>
Date: Mon, 26 Jun 2023 17:31:37 -0700
From: John Johansen <john.johansen@...onical.com>
To: Anil Altinay <aaltinay@...gle.com>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
LKLM <linux-kernel@...r.kernel.org>,
Sergey Senozhatsky <senozhatsky@...omium.org>,
Peter Zijlstra <peterz@...radead.org>,
Tomasz Figa <tfiga@...omium.org>,
linux-security-module@...r.kernel.org
Subject: Re: [PATCH v3] apparmor: global buffers spin lock may get contended
On 6/26/23 16:33, Anil Altinay wrote:
> Hi John,
>
> I was wondering if you've had a chance to work on patch v4. Please let me know if you need help with testing.
>
yeah, testing help is always much appreciated. I have a v4, and I am working on 3 alternate versions to compare against, to help give a better sense of whether we can get away with simplifying or tweaking the scaling. I should be able to post them out some time tonight.
> Best,
> Anil
>
> On Tue, Feb 21, 2023 at 1:27 PM Anil Altinay <aaltinay@...gle.com> wrote:
>
> I can test the patch with 5.10 and 5.15 kernels in different machines.
> Just let me know which machine types you would like me to test.
>
> On Mon, Feb 20, 2023 at 12:42 AM John Johansen <john.johansen@...onical.com> wrote:
> >
> > On 2/17/23 02:44, Sebastian Andrzej Siewior wrote:
> > > On 2023-02-16 16:08:10 [-0800], John Johansen wrote:
> > >> --- a/security/apparmor/lsm.c
> > >> +++ b/security/apparmor/lsm.c
> > >> @@ -49,12 +49,19 @@ union aa_buffer {
> > >> char buffer[1];
> > >> };
> > >> +struct aa_local_cache {
> > >> + unsigned int contention;
> > >> + unsigned int hold;
> > >> + struct list_head head;
> > >> +};
> > >
> > > if you stick a local_lock_t into that struct, then you could replace
> > > cache = get_cpu_ptr(&aa_local_buffers);
> > > with
> > > local_lock(&aa_local_buffers.lock);
> > > cache = this_cpu_ptr(&aa_local_buffers);
> > >
> > > You would get the preempt_disable() based locking for the per-CPU
> > > variable (as with get_cpu_ptr()) and additionally some lockdep
> > > validation which would warn if it is used outside of task context (IRQ).
> > >
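
Roughly, the local_lock_t variant suggested above might look like the sketch
below. The struct layout comes from the quoted diff; the per-CPU variable
name follows the snippet above, and the init-time list setup is an
assumption, not code from the actual patch.

#include <linux/list.h>
#include <linux/local_lock.h>
#include <linux/percpu.h>

struct aa_local_cache {
	local_lock_t lock;	/* replaces the get_cpu_ptr() pinning */
	unsigned int contention;
	unsigned int hold;
	struct list_head head;	/* list head set up at init time */
};

static DEFINE_PER_CPU(struct aa_local_cache, aa_local_buffers) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

	/* at the old get_cpu_ptr()/put_cpu_ptr() call sites: */
	local_lock(&aa_local_buffers.lock);
	cache = this_cpu_ptr(&aa_local_buffers);
	/* ... use cache ... */
	local_unlock(&aa_local_buffers.lock);

On non-RT kernels this gives the same preempt_disable() based protection as
get_cpu_ptr(), with lockdep coverage added; on PREEMPT_RT the local_lock
becomes a per-CPU lock, so the section stays preemptible.
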
> > I did look at local_locks and there was a reason I didn't use them. I
> > can't recall why, as the original iteration of this is over a year old now.
> > I will have to dig into it again.
> >
> > > I didn't completely parse the hold/contention logic, but it seems to work
> > > ;)
> > > You check "cache->count >= 2" twice, but I don't see an inc/dec of it,
> > > nor is it part of aa_local_cache.
> > >
> > sadly I messed up the reordering of this and the debug patch. This will be
> > fixed in v4.
> >
> > > I can't parse how many items can end up on the local list if the global
> > > list is locked. My guess would be more than 2 due to the ->hold parameter.
> > >
> > So this iteration forces pushing buffers back to the global list if there
> > are already two on the local list. The hold parameter just affects how
> > long buffers remain on the local list before we try to place them back on
> > the global list.
> >
> > Originally, before the count was added, more than 2 buffers could end up
> > on the local list, and having too many local buffers is a waste of
> > memory. The count was added to address this. The value of 2 (which should
> > be switched to a define) was chosen because no mediation routine currently
> > uses more than 2 buffers.
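
To make the above concrete, here is a rough sketch of the put path being
described. The helper name, the global list/lock names, the ->count field
(which Sebastian notes is checked but not yet in the struct), and the
list_head member of union aa_buffer are assumptions for illustration, not
code from the actual patch.

#define AA_MAX_LOCAL_FREE 2	/* the "2" that should become a define */

/* assumed globals for the sketch */
static LIST_HEAD(aa_global_buffers);
static DEFINE_SPINLOCK(aa_buffers_lock);

static void aa_put_buffer_sketch(union aa_buffer *aa_buf)
{
	struct aa_local_cache *cache;

	/* ->count assumed to be added to struct aa_local_cache */
	cache = get_cpu_ptr(&aa_local_buffers);
	if (cache->count >= AA_MAX_LOCAL_FREE || !cache->hold) {
		/* local free list is full, or contention is no longer
		 * expected: hand the buffer back to the global pool */
		spin_lock(&aa_buffers_lock);
		list_add(&aa_buf->list, &aa_global_buffers);
		spin_unlock(&aa_buffers_lock);
	} else {
		/* keep it on the cheap per-CPU list while ->hold says the
		 * global lock is likely to be contended */
		list_add(&aa_buf->list, &cache->head);
		cache->count++;
	}
	put_cpu_ptr(&aa_local_buffers);
}
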
> >
> > Note that this doesn't mean that no more than two buffers can be allocated
> > to a task on a cpu. It's possible in some cases for a task to have
> > allocated buffers and still have buffers on the local cache list.
> >
> > > Do you have any numbers on the machine and the performance it improved? It
> > > sure would be a good selling point.
> > >
> >
> > I can include some supporting info for a 16 core machine, but it will
> > take some time for me to get access to a bigger machine, where this
> > is much more important. Hence the call for some of the other people
> > on this thread to test.
> >
> > thanks for the feedback
> >
>