Message-Id: <20190107182224.32380-1-shakeelb@google.com>
Date:   Mon,  7 Jan 2019 10:22:24 -0800
From:   Shakeel Butt <shakeelb@...gle.com>
To:     Manfred Spraul <manfred@...orfullife.com>
Cc:     Dmitry Vyukov <dvyukov@...gle.com>,
        syzbot+1145ec2e23165570c3ac@...kaller.appspotmail.com,
        Andrew Morton <akpm@...ux-foundation.org>,
        David Howells <dhowells@...hat.com>,
        "Eric W . Biederman" <ebiederm@...ssion.com>,
        ktsanaktsidis@...desk.com, LKML <linux-kernel@...r.kernel.org>,
        Michal Hocko <mhocko@...e.com>,
        Mike Rapoport <rppt@...ux.vnet.ibm.com>,
        Stephen Rothwell <sfr@...b.auug.org.au>,
        syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
        Matthew Wilcox <willy@...radead.org>,
        Davidlohr Bueso <dave@...olabs.net>,
        Shakeel Butt <shakeelb@...gle.com>
Subject: Re: general protection fault in put_pid


On Mon, Jan 7, 2019 at 10:04 AM Manfred Spraul
<manfred@...orfullife.com> wrote:
>
> On 1/3/19 11:18 PM, Shakeel Butt wrote:
> > Hi Manfred,
> >
> > On Sun, Dec 23, 2018 at 4:26 AM Manfred Spraul
> > <manfred@...orfullife.com> wrote:
> >> Hello Dmitry,
> >>
> >> On 12/23/18 10:57 AM, Dmitry Vyukov wrote:
> >>> I can reproduce this infinite memory consumption with the C
> >>> program:
> >>> https://gist.githubusercontent.com/dvyukov/03ec54b3429ade16fa07bf8b2379aff3/raw/ae4f654e279810de2505e8fa41b73dc1d77778e6/gistfile1.txt
> >>>
> >>> But this is working as intended, right? It just creates an
> >>> infinite number of large semaphore sets, which reasonably consumes
> >>> an infinite amount of memory.
> >>> Except that it also violates the memcg bound and a process can
> >>> have an effectively unlimited amount of such "drum memory" in
> >>> semaphores.
> >> Yes, this is as intended:
> >>
> >> If you call semget(), then you can use memory, up to the limits in
> >> /proc/sys/kernel/sem.
> >>
> >> Memcg is not taken into account; an admin must set
> >> /proc/sys/kernel/sem.
> >>
> >> The defaults are "infinite amount of memory allowed", as this is the
> >> most sane default: We had logic that tried to autotune (i.e. a new
> >> namespace "inherits" a fraction of the parent namespace's memory
> >> limits), but this was more or less always wrong.
> >>
> >>
> > What's the disadvantage of setting the limits in
> > /proc/sys/kernel/sem high and letting the task's memcg limit the
> > number of semaphores a process can create? Please note that the
> > memory underlying shmget and msgget is already accounted to memcg.
>
> Nothing, it is just a question of implementing it.
>
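For context, the per-namespace limits Manfred refers to live in
/proc/sys/kernel/sem and can be inspected (or raised) from userspace;
the field meanings below are per proc(5), and the example values are
illustrative, not universal defaults:

```shell
# The four fields are SEMMSL SEMMNS SEMOPM SEMMNI:
#   max semaphores per set, system-wide max semaphores,
#   max operations per semop() call, max number of sets.
cat /proc/sys/kernel/sem
# e.g. 32000 1024000000 500 32000 (defaults vary by kernel and distro)
```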

I think it should be something like the following:

---
 ipc/sem.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index 745dc6187e84..ad63df2658aa 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -494,7 +494,7 @@ static struct sem_array *sem_alloc(size_t nsems)
 		return NULL;
 
 	size = sizeof(*sma) + nsems * sizeof(sma->sems[0]);
-	sma = kvmalloc(size, GFP_KERNEL);
+	sma = kvmalloc(size, GFP_KERNEL_ACCOUNT);
 	if (unlikely(!sma))
 		return NULL;
 
@@ -1897,7 +1897,8 @@ static struct sem_undo *find_alloc_undo(struct ipc_namespace *ns, int semid)
 	rcu_read_unlock();
 
 	/* step 2: allocate new undo structure */
-	new = kzalloc(sizeof(struct sem_undo) + sizeof(short)*nsems, GFP_KERNEL);
+	new = kzalloc(sizeof(struct sem_undo) + sizeof(short)*nsems,
+		      GFP_KERNEL_ACCOUNT);
 	if (!new) {
 		ipc_rcu_putref(&sma->sem_perm, sem_rcu_free);
 		return ERR_PTR(-ENOMEM);
-- 
2.20.1.97.g81188d93c3-goog
