Date:   Tue, 23 Aug 2016 14:48:11 +0100
From:   Sudeep K N <sudeepholla.maillist@...il.com>
To:     Vladimir Davydov <vdavydov@...tuozzo.com>
Cc:     Eric Dumazet <eric.dumazet@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "David S. Miller" <davem@...emloft.net>,
        Johannes Weiner <hannes@...xchg.org>,
        Michal Hocko <mhocko@...nel.org>, linux-mm@...ck.org,
        linux-fsdevel@...r.kernel.org, netdev <netdev@...r.kernel.org>,
        x86@...nel.org, open list <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Sudeep Holla <sudeep.holla@....com>
Subject: Re: [PATCH RESEND 8/8] af_unix: charge buffers to kmemcg

On Tue, May 24, 2016 at 5:36 PM, Vladimir Davydov
<vdavydov@...tuozzo.com> wrote:
> On Tue, May 24, 2016 at 06:02:06AM -0700, Eric Dumazet wrote:
>> On Tue, 2016-05-24 at 11:49 +0300, Vladimir Davydov wrote:
>> > Unix sockets can consume a significant amount of system memory, hence
>> > they should be accounted to kmemcg.
>> >
>> > Since unix socket buffers are always allocated from process context,
>> > all we need to do to charge them to kmemcg is set __GFP_ACCOUNT in
>> > sock->sk_allocation mask.
>>
>> I have two questions:
>>
>> 1) What happens when a buffer allocated from socket <A> lands in a
>> different socket <B>, maybe owned by another user/process?
>>
>> Who owns it now, in terms of kmemcg accounting?
>
> We never move memcg charges. E.g. if two processes from different
> cgroups are sharing a memory region, each page will be charged to the
> process which touched it first. Or if two processes are working with the
> same directory tree, inodes and dentries will be charged to the first
> user. The same holds for unix socket buffers - they will be charged to
> the sender.
>
>>
>> 2) Has the performance impact been evaluated?
>
> I ran netperf STREAM_STREAM with default options in a kmemcg on
> a 4 core x 2 HT box. The results are below:
>
>  # clients           bandwidth (10^6 bits/sec)
>                     base              patched        delta
>          1      67643 +-  725      64874 +-  353    - 4.0 %
>          4     193585 +- 2516     186715 +- 1460    - 3.5 %
>          8     194820 +-  377     187443 +- 1229    - 3.7 %
>
> So the accounting doesn't come for free - it costs ~4% of throughput.
> I believe we could optimize it by using per-cpu batching not only on
> charge, but also on uncharge in the memcg core, but that's beyond the
> scope of this patch set - I'll take a look at this later.
>
> Anyway, if the performance impact is found to be unacceptable, it is always
> possible to disable kmem accounting at boot time (cgroup.memory=nokmem)
> or not use memory cgroups at runtime at all (thanks to jump labels
> there'll be no overhead even if they are compiled in).
>
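
[For context, the change being discussed amounts to tagging the unix
socket's allocation mask so that its buffer allocations are accounted.
The snippet below is only an illustrative sketch of that idea, not the
literal hunk from commit 3aa9799e1364:]

/*
 * Sketch only, not the exact diff from commit 3aa9799e1364 ("af_unix:
 * charge buffers to kmemcg"). AF_UNIX skb allocations always happen in
 * process context, so adding __GFP_ACCOUNT to the socket's allocation
 * mask makes the buffers allocated on its behalf get charged to the
 * allocating task's kmemcg.
 */
static struct sock *unix_create1(struct net *net, struct socket *sock, int kern)
{
	struct sock *sk;

	sk = sk_alloc(net, PF_UNIX, GFP_KERNEL, &unix_proto, kern);
	if (!sk)
		return NULL;

	sock_init_data(sock, sk);
	/* GFP_KERNEL_ACCOUNT is GFP_KERNEL | __GFP_ACCOUNT */
	sk->sk_allocation = GFP_KERNEL_ACCOUNT;
	/* ... rest of unix_create1() unchanged ... */
	return sk;
}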

I started seeing an almost 10% degradation in the hackbench score with
v4.8-rc1. Bisecting it pointed to this patch, i.e. commit 3aa9799e1364
("af_unix: charge buffers to kmemcg") in mainline.

As per the commit log, some impact seems to be expected, but I was not
sure about the margin. I also see that the hackbench scores are more
inconsistent after this patch, though I may be wrong as that is based on
limited observation.

Is this something we can ignore, on the grounds that hackbench is
synthetic compared to the real workloads where this patch provides a
gain?
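
For anyone who wants to poke at this without pulling in hackbench itself,
a rough stand-in for the traffic pattern is sketched below (a hypothetical
test program, not hackbench): a parent and a forked child ping-pong
100-byte messages over an AF_UNIX socketpair. Comparing a run inside a
memory cgroup against a run with kmem accounting disabled
(cgroup.memory=nokmem) should isolate the per-buffer charge/uncharge cost
discussed in this thread.

/*
 * Simplified stand-in for the AF_UNIX traffic hackbench generates; this
 * is NOT hackbench itself. Build with: cc -O2 -o unixping unixping.c
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/wait.h>

#define MSG_SIZE 100      /* hackbench-style small messages */
#define NR_MSGS  1000000

int main(void)
{
	int sv[2];
	char buf[MSG_SIZE] = { 0 };
	struct timeval start, end;
	pid_t pid;

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
		perror("socketpair");
		return 1;
	}

	pid = fork();
	if (pid < 0) {
		perror("fork");
		return 1;
	}

	if (pid == 0) {
		/* child: echo every message back to the parent */
		for (int i = 0; i < NR_MSGS; i++) {
			if (read(sv[1], buf, MSG_SIZE) != MSG_SIZE ||
			    write(sv[1], buf, MSG_SIZE) != MSG_SIZE)
				_exit(1);
		}
		_exit(0);
	}

	/* parent: send a message and wait for the echo, NR_MSGS times */
	gettimeofday(&start, NULL);
	for (int i = 0; i < NR_MSGS; i++) {
		if (write(sv[0], buf, MSG_SIZE) != MSG_SIZE ||
		    read(sv[0], buf, MSG_SIZE) != MSG_SIZE) {
			perror("ping");
			return 1;
		}
	}
	gettimeofday(&end, NULL);
	waitpid(pid, NULL, 0);

	double secs = (end.tv_sec - start.tv_sec) +
		      (end.tv_usec - start.tv_usec) / 1e6;
	printf("%d round trips in %.2f s (%.0f msgs/sec)\n",
	       NR_MSGS, secs, NR_MSGS / secs);
	return 0;
}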

-- 
Regards,
Sudeep
