Message-Id: <20250912003054.2564842-1-tiwei.bie@linux.dev>
Date: Fri, 12 Sep 2025 08:30:54 +0800
From: Tiwei Bie <tiwei.bie@...ux.dev>
To: benjamin@...solutions.net
Cc: richard@....at,
anton.ivanov@...bridgegreys.com,
johannes@...solutions.net,
arnd@...db.de,
linux-um@...ts.infradead.org,
linux-kernel@...r.kernel.org,
tiwei.btw@...group.com,
tiwei.bie@...ux.dev
Subject: Re: [PATCH v2 04/10] um: Turn signals_* into thread-local variables
Hi,
On Thu, 11 Sep 2025 10:06:53 +0200, Benjamin Berg wrote:
> On Thu, 2025-09-11 at 09:37 +0200, Benjamin Berg wrote:
> > On Thu, 2025-09-11 at 12:34 +0800, Tiwei Bie wrote:
> > > On Wed, 10 Sep 2025 14:15:28 +0200, Johannes Berg wrote:
> > > > On Sun, 2025-08-10 at 13:51 +0800, Tiwei Bie wrote:
> > > > > From: Tiwei Bie <tiwei.btw@...group.com>
> > > > >
> > > > > Turn signals_enabled, signals_pending and signals_active into
> > > > > thread-local variables. This enables us to control and track
> > > > > signals independently on each CPU thread. This is a preparation
> > > > > for adding SMP support.
> > > >
> > > > [...]
> > > >
> > > > > +static __thread int signals_enabled;
> > > >
> > > > How much glibc infrastructure does __thread rely on? More
> > > > specifically:
> > > > Some time ago we had a discussion about building UML as a nolibc
> > > > binary,
> > > > what would that mean for the __thread usage here?
> > >
> > > We would need to parse TLS data (PT_TLS) from the ELF file
> > > ourselves
> > > and properly set up TLS when creating threads using clone().
> >
> > I guess right now we cannot use PER_CPU variables in these files.
> > However, my expectation is that this is possible when using nolibc,
> > and then it should be simple enough to replace the __thread usage.
Good idea!
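FWIW, just to illustrate what I had in mind for the nolibc case (a rough,
untested sketch for x86_64 with the variant II TLS layout, not actual UML
code): copy the PT_TLS image into a per-thread block and point %fs at a
TCB placed right after it. tls_image/tls_filesz/tls_memsz are assumed to
have been read from the PT_TLS program header at startup:

#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <asm/prctl.h>

/* Assumed to be filled in from the PT_TLS program header at startup. */
extern const void *tls_image;
extern size_t tls_filesz, tls_memsz;

static int setup_thread_tls(void)
{
        /*
         * Variant II layout: the TLS block sits below the thread
         * pointer, and the TCB's first word points to itself.
         * Alignment handling is omitted for brevity.
         */
        size_t size = tls_memsz + sizeof(void *);
        char *block = mmap(NULL, size, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        void **tcb;

        if (block == MAP_FAILED)
                return -errno;

        memcpy(block, tls_image, tls_filesz);
        /* The .tbss part is already zeroed by MAP_ANONYMOUS. */
        tcb = (void **)(block + tls_memsz);
        *tcb = tcb;

        /* Make %fs point at the TCB for this thread. */
        return syscall(SYS_arch_prctl, ARCH_SET_FS, tcb);
}

When creating the thread with clone() directly, CLONE_SETTLS could be used
instead of the arch_prctl() call.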
>
> That said, I do believe that the allocations from the libc itself are
> problematic. A lot of the mappings from UML are there already (i.e. the
> physical memory is mapped). However, I believe the vmalloc area for
> example is not guarded.
>
> So when pthread allocates the thread specific memory (stack, TLS, ...),
> we really do not know where this will be mapped into the address space.
> If it happens to be in an area that UML wants to use later, then UML
> could map e.g. vmalloc data over it.
>
> Now, it could be that (currently) the addresses picked by pthread (or
> the host kernel) do not actually clash with anything. However, I do not
> think there is any guarantee for that.
Indeed. The mappings created by libc (pthread stacks, shared libs, ...)
can potentially conflict with UML's own address space. The reason it has
been working on x86_64 so far might be that we do this in linux_main():
task_size = task_size & PGDIR_MASK;
The current layout is:
shared libs and pthreads are located at 7ffxxxxxxxxx
TASK_SIZE = 7f8000000000
VMALLOC_END = 7f7fffffe000 (which is TASK_SIZE-2*PAGE_SIZE)
However, on i386, the risk of conflicts looks much higher:
TASK_SIZE = ffc00000
VMALLOC_END = ffbfe000
......
f7c00000-f7c20000 r--p 00000000 08:01 9114 /usr/lib32/libc.so.6
f7c20000-f7d9e000 r-xp 00020000 08:01 9114 /usr/lib32/libc.so.6
f7d9e000-f7e23000 r--p 0019e000 08:01 9114 /usr/lib32/libc.so.6
f7e23000-f7e24000 ---p 00223000 08:01 9114 /usr/lib32/libc.so.6
f7e24000-f7e26000 r--p 00223000 08:01 9114 /usr/lib32/libc.so.6
f7e26000-f7e27000 rw-p 00225000 08:01 9114 /usr/lib32/libc.so.6
f7e27000-f7e31000 rw-p 00000000 00:00 0
f7fbe000-f7fc0000 rw-p 00000000 00:00 0
f7fc0000-f7fc4000 r--p 00000000 00:00 0 [vvar]
f7fc4000-f7fc6000 r-xp 00000000 00:00 0 [vdso]
f7fc6000-f7fc7000 r--p 00000000 08:01 9107 /usr/lib32/ld-linux.so.2
f7fc7000-f7fec000 r-xp 00001000 08:01 9107 /usr/lib32/ld-linux.so.2
f7fec000-f7ffb000 r--p 00026000 08:01 9107 /usr/lib32/ld-linux.so.2
f7ffb000-f7ffd000 r--p 00034000 08:01 9107 /usr/lib32/ld-linux.so.2
f7ffd000-f7ffe000 rw-p 00036000 08:01 9107 /usr/lib32/ld-linux.so.2
fffdd000-ffffe000 rw-p 00000000 00:00 0 [stack]
Ideally, we could completely eliminate the dependency on libc. Until then,
perhaps we could reserve a region of address space for UML early on with
mmap(PROT_NONE).
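E.g. something along these lines (untested sketch; the function name is
made up, and start/size are placeholders that would need to be derived
from TASK_SIZE and the physmem/vmalloc layout):

#define _GNU_SOURCE
#include <errno.h>
#include <sys/mman.h>

/*
 * Reserve [start, start + size) early, before libc/pthread get a
 * chance to place anything there. UML's real mappings can later
 * replace pieces of the reservation with MAP_FIXED.
 */
static int reserve_um_address_space(unsigned long start, unsigned long size)
{
        /*
         * MAP_FIXED_NOREPLACE (Linux 4.17+) makes this fail instead
         * of silently clobbering anything already mapped there.
         */
        void *p = mmap((void *)start, size, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE |
                       MAP_FIXED_NOREPLACE, -1, 0);

        if (p == MAP_FAILED)
                return -errno;

        return 0;
}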
Regards,
Tiwei