Message-Id: <1172831630.5264.139.camel@roc-desktop>
Date: Fri, 02 Mar 2007 18:33:49 +0800
From: "Wu, Bryan" <bryan.wu@...log.com>
To: linux-kernel@...r.kernel.org
Subject: Questions about SYSVIPC shared memory on the NOMMU uClinux architecture
Hi folks,
Recently, I have been struggling with a bug involving shm->nattch. The
test case is kernel/syscall/ipc/shmctl/shmctl01.c from LTP, which we
ported to the uClinux-blackfin platform.
The algorithm is very simple:
a) the parent process creates a shared memory segment
b) the parent vfork()s and execlp()s 4 child processes
c) each child calls shmat() to attach to the shared memory (shm->nattch
should be incremented), then calls pause()
d) the parent calls shmctl() to read the segment's nattch; if nattch != 4,
the test case fails (a reduced sketch of this follows below).
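For reference, here is a minimal single-file version of the scenario. This
is not the actual shmctl01.c: the real test vfork()s and execlp()s a helper
binary (NOMMU has no fork()), and NCHILD, the sleep()-based synchronization,
and the trimmed error checking are my simplifications to keep the sketch
self-contained:
================================================================================
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

#define NCHILD 4

int main(void)
{
	struct shmid_ds buf;
	pid_t pids[NCHILD];
	int shmid, i;

	/* a) parent creates the shared memory segment */
	shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
	if (shmid < 0) {
		perror("shmget");
		return 1;
	}

	for (i = 0; i < NCHILD; i++) {
		pids[i] = fork();	/* b) vfork()+execlp() in the real test */
		if (pids[i] == 0) {
			/* c) child attaches, then waits forever */
			if (shmat(shmid, NULL, 0) == (void *) -1)
				_exit(1);
			pause();
			_exit(0);
		}
	}

	sleep(1);	/* crude: give the children time to attach */

	/* d) parent reads back shm_nattch; NCHILD is expected */
	if (shmctl(shmid, IPC_STAT, &buf) == 0)
		printf("shm_nattch = %lu (expected %d)\n",
		       (unsigned long) buf.shm_nattch, NCHILD);

	for (i = 0; i < NCHILD; i++)
		kill(pids[i], SIGTERM);
	while (wait(NULL) > 0)
		;
	shmctl(shmid, IPC_RMID, NULL);
	return 0;
}
================================================================================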
On our uClinux-blackfin platform, however, nattch = 1.
So I dug into the source code of ipc/shm.c, and I have some questions
about it.
a) In do_shmat(), nattch is incremented earlier in the function; why is it
decremented again afterwards, as in the following:
================================================================================
	user_addr = (void *) do_mmap(file, addr, size, prot, flags, 0);
	/* note: no return or goto here, so the decrement below always runs */

invalid:
	up_write(&current->mm->mmap_sem);

	mutex_lock(&shm_ids(ns).mutex);
	shp = shm_lock(ns, shmid);
	BUG_ON(!shp);
	shp->shm_nattch--;	/* Why??? */
	if (shp->shm_nattch == 0 &&
	    shp->shm_perm.mode & SHM_DEST)
		shm_destroy(ns, shp);
	else
		shm_unlock(shp);
	mutex_unlock(&shm_ids(ns).mutex);

	*raddr = (unsigned long) user_addr;
	err = 0;
	if (IS_ERR(user_addr))
		err = PTR_ERR(user_addr);
out:
	return err;
================================================================================
b) do_mmap() -> do_mmap_pgoff() in mm/nommu.c:
When a new vma structure is created, shm_open() and shm_inc() are called,
so nattch is incremented again. The nattch accounting therefore looks
disordered to me.
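To make the question concrete, my reading (which may be wrong) is that the
nattch++ done earlier in do_shmat() is only a temporary pin that keeps the
segment alive while mmap_sem is dropped around do_mmap(), and the lasting
increment is supposed to come from shm_inc() via the vma open callback. A
small sketch of that pattern, with names that are mine rather than the
kernel's:
================================================================================
#include <stdio.h>

struct fake_seg {
	int nattch;
};

/* analogue of shm_open()/shm_inc(): called through vm_ops->open
 * when a new vma for the segment is set up by the mmap path */
static void fake_vma_open(struct fake_seg *s)
{
	s->nattch++;		/* the attach count that should persist */
}

static void fake_do_shmat(struct fake_seg *s)
{
	s->nattch++;		/* temporary pin: keeps the segment from being
				 * destroyed while the lock is dropped around
				 * do_mmap() */
	fake_vma_open(s);	/* done inside do_mmap() on MMU; does
				 * nommu.c always do this too? */
	s->nattch--;		/* drop the pin; net effect is +1 */
}

int main(void)
{
	struct fake_seg s = { 0 };

	fake_do_shmat(&s);
	printf("nattch = %d\n", s.nattch);	/* prints 1 */
	return 0;
}
================================================================================
If do_mmap_pgoff() in mm/nommu.c can ever set up the mapping without going
through the open callback, the pin/unpin above would net to zero, which
would match the nattch = 1 we observe. Is that what is happening here?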
Please give me some hints about this. This test case does pass on the x86 platform.
Thanks
-Bryan Wu