Date: Wed, 14 Feb 2024 14:59:05 -0800
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: Suren Baghdasaryan <surenb@...gle.com>, Yosry Ahmed
 <yosryahmed@...gle.com>
Cc: akpm@...ux-foundation.org, kent.overstreet@...ux.dev, mhocko@...e.com, 
 vbabka@...e.cz, hannes@...xchg.org, roman.gushchin@...ux.dev,
 mgorman@...e.de,  dave@...olabs.net, willy@...radead.org,
 liam.howlett@...cle.com, corbet@....net,  void@...ifault.com,
 peterz@...radead.org, juri.lelli@...hat.com,  catalin.marinas@....com,
 will@...nel.org, arnd@...db.de, tglx@...utronix.de,  mingo@...hat.com,
 dave.hansen@...ux.intel.com, x86@...nel.org, peterx@...hat.com, 
 david@...hat.com, axboe@...nel.dk, mcgrof@...nel.org, masahiroy@...nel.org,
  nathan@...nel.org, dennis@...nel.org, tj@...nel.org,
 muchun.song@...ux.dev,  rppt@...nel.org, paulmck@...nel.org,
 pasha.tatashin@...een.com, yuzhao@...gle.com,  dhowells@...hat.com,
 hughd@...gle.com, andreyknvl@...il.com, keescook@...omium.org, 
 ndesaulniers@...gle.com, vvvvvv@...gle.com, gregkh@...uxfoundation.org, 
 ebiggers@...gle.com, ytcoode@...il.com, vincent.guittot@...aro.org, 
 dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com, 
 bristot@...hat.com, vschneid@...hat.com, cl@...ux.com, penberg@...nel.org, 
 iamjoonsoo.kim@....com, 42.hyeyoo@...il.com, glider@...gle.com,
 elver@...gle.com,  dvyukov@...gle.com, shakeelb@...gle.com,
 songmuchun@...edance.com,  jbaron@...mai.com, rientjes@...gle.com,
 minchan@...gle.com, kaleshsingh@...gle.com,  kernel-team@...roid.com,
 linux-doc@...r.kernel.org, linux-kernel@...r.kernel.org, 
 iommu@...ts.linux.dev, linux-arch@...r.kernel.org,
 linux-fsdevel@...r.kernel.org,  linux-mm@...ck.org,
 linux-modules@...r.kernel.org, kasan-dev@...glegroups.com, 
 cgroups@...r.kernel.org
Subject: Re: [PATCH v3 00/35] Memory allocation profiling

On Wed, 2024-02-14 at 12:30 -0800, Suren Baghdasaryan wrote:
> On Wed, Feb 14, 2024 at 12:17 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
> > 
> > > > > Performance overhead:
> > > > > To evaluate performance we implemented an in-kernel test executing
> > > > > multiple get_free_page/free_page and kmalloc/kfree calls with allocation
> > > > > sizes growing from 8 to 240 bytes with CPU frequency set to max and CPU
> > > > > affinity set to a specific CPU to minimize the noise. Below are results
> > > > > from running the test on Ubuntu 22.04.2 LTS with a 6.8.0-rc1 kernel on
> > > > > a 56-core Intel Xeon:
> > > > > 
> > > > >                         kmalloc                 pgalloc
> > > > > (1 baseline)            6.764s                  16.902s
> > > > > (2 default disabled)    6.793s (+0.43%)         17.007s (+0.62%)
> > > > > (3 default enabled)     7.197s (+6.40%)         23.666s (+40.02%)
> > > > > (4 runtime enabled)     7.405s (+9.48%)         23.901s (+41.41%)
> > > > > (5 memcg)               13.388s (+97.94%)       48.460s (+186.71%)
> > > 
> > > (6 default disabled+memcg)    13.332s (+97.10%)       48.105s (+184.61%)
> > > (7 default enabled+memcg)     13.446s (+98.78%)       54.963s (+225.18%)
> > 
> > I think these numbers are very interesting for folks that already use
> > memcg. Specifically, the difference between 6 & 7, which seems to be
> > ~0.85% and ~14.25%. IIUC, this means that the extra overhead is
> > relatively much lower if someone is already using memcgs.
> 
> Well, yes, percentage-wise it's much lower. If you look at the
> absolute difference between 6 & 7 vs 2 & 3, it's quite close.
> 
> > 
> > > 
> > > (6) shows a bit better performance than (5) but it's probably noise. I
> > > would expect them to be roughly the same. Hope this helps.
> > > 
> > > > 

Thanks for the data.  It does show that with memcg already turned on, the
profiling does not add much extra overhead percentage-wise.

Tim
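
[Editorial note: for reference, the row 6 vs row 7 deltas discussed above
work out, from the quoted timings, to (13.446 - 13.332) / 13.332 ~= 0.86%
for kmalloc and (54.963 - 48.105) / 48.105 ~= 14.26% for pgalloc, matching
Yosry's figures. The corresponding absolute deltas are 0.114s and 6.858s,
versus 0.404s and 6.659s between rows 2 and 3 without memcg.]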

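[Editorial note: for readers who want to reproduce a comparable measurement,
below is a minimal sketch of the kind of in-kernel microbenchmark the quoted
cover letter describes. It is not the actual test used for the numbers above;
the module name, iteration count, and size stepping are assumptions, and
pinning to one CPU and fixing the CPU frequency are assumed to be done from
userspace before loading the module.]

/* allocbench.c - hypothetical sketch, not the test from the patch series.
 * Times repeated kmalloc/kfree and __get_free_page/free_page calls with
 * allocation sizes stepping from 8 to 240 bytes.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/gfp.h>
#include <linux/ktime.h>

#define ITERS 10000000UL	/* assumed iteration count */

static int __init allocbench_init(void)
{
	unsigned long i, page;
	size_t size;
	ktime_t t0, t1;
	void *p;

	/* kmalloc/kfree loop, sizes cycling 8, 16, ..., 240 bytes */
	t0 = ktime_get();
	for (i = 0; i < ITERS; i++) {
		size = 8 + (i % 30) * 8;
		p = kmalloc(size, GFP_KERNEL);
		if (p)
			kfree(p);
	}
	t1 = ktime_get();
	pr_info("allocbench: kmalloc/kfree: %lld ns\n",
		ktime_to_ns(ktime_sub(t1, t0)));

	/* page allocator loop */
	t0 = ktime_get();
	for (i = 0; i < ITERS; i++) {
		page = __get_free_page(GFP_KERNEL);
		if (page)
			free_page(page);
	}
	t1 = ktime_get();
	pr_info("allocbench: __get_free_page/free_page: %lld ns\n",
		ktime_to_ns(ktime_sub(t1, t0)));

	return 0;
}

static void __exit allocbench_exit(void)
{
}

module_init(allocbench_init);
module_exit(allocbench_exit);
MODULE_LICENSE("GPL");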