Message-Id: <1173058801.5264.143.camel@roc-desktop>
Date: Mon, 05 Mar 2007 09:40:01 +0800
From: "Wu, Bryan" <bryan.wu@...log.com>
To: bryan.wu@...log.com, ebiederm@...ssion.com
Cc: linux-kernel@...r.kernel.org
Subject: Re: Questions about the SYSVIPC shared memory on NOMMU uClinux architecture
On Fri, 2007-03-02 at 05:33 -0500, Wu, Bryan wrote:
> Hi folks,
>
> Recently, I have been struggling with a bug involving shm->nattch. The
> test case is LTP's kernel/syscall/ipc/shmctl/shmctl01.c, which we
> ported to the uClinux-blackfin platform.
>
Sorry, I dropped the kernel version information: I see this with the
2.6.19 kernel and the 2.6.20-mm2 kernel.
> The algorithm is very simple:
> a) the parent process creates a shared memory segment
> b) the parent vfork()/execlp()s 4 child processes
> c) each child calls shmat() to attach to the shared memory segment
> (shm->nattch should be incremented), then calls pause()
> d) the parent calls shmctl() to read the segment's nattch; if
> nattch != 4, the test case fails.
>
> In our uClinux-blackfin platform, nattch = 1.
>
> So I dug into the source code, ipc/shm.c, and I have some questions
> about the code.
>
> a)
> in do_shmat(), after the nattch++, why is there a nattch--, as in the
> following:
> ================================================================================
> 	user_addr = (void *) do_mmap(file, addr, size, prot, flags, 0);
>
> 	/* note: no return or goto here, so the success path also
> 	   falls through to "invalid:" below */
>
> invalid:
> 	up_write(&current->mm->mmap_sem);
>
> 	mutex_lock(&shm_ids(ns).mutex);
> 	shp = shm_lock(ns, shmid);
> 	BUG_ON(!shp);
> 	shp->shm_nattch--;	/* Why??? */
> 	if (shp->shm_nattch == 0 &&
> 	    shp->shm_perm.mode & SHM_DEST)
> 		shm_destroy(ns, shp);
> 	else
> 		shm_unlock(shp);
> 	mutex_unlock(&shm_ids(ns).mutex);
>
> 	*raddr = (unsigned long) user_addr;
> 	err = 0;
> 	if (IS_ERR(user_addr))
> 		err = PTR_ERR(user_addr);
> out:
> 	return err;
> ================================================================================
>
> b) do_mmap() -> mm/nommu.c:do_mmap_pgoff()
> When a new vma structure is created, shm_open() and shm_inc() are
> called, so nattch is incremented again.
> As a result, the nattch accounting gets out of balance.
>
> Please give me some hints about this. This test case passes on the
> x86 platform.
>
Is there any help available?
Thanks a lot
-Bryan