Date:	Thu, 2 Jun 2011 14:46:23 -0700 (PDT)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	linux-kernel@...r.kernel.org
Subject: [RFC] "mustnotsleep"

While developing RAMster, I have frequently been bitten
by indirect use of existing kernel subsystems that
unexpectedly sleep.  So I have hacked together the
following "debug" code fragments for use where I need to
ensure that doesn't happen.

#include <linux/percpu.h>
#include <linux/smp.h>

DEFINE_PER_CPU(int, mustnotsleep_count);

/* Mark the start of a region in which this CPU must not sleep. */
void mustnotsleep_start(void)
{
	int cpu = smp_processor_id();

	per_cpu(mustnotsleep_count, cpu)++;
}

/* Mark the end of a must-not-sleep region on this CPU. */
void mustnotsleep_done(void)
{
	int cpu = smp_processor_id();

	per_cpu(mustnotsleep_count, cpu)--;
}

and in kernel/sched.c, in schedule():

if (per_cpu(mustnotsleep_count, smp_processor_id()))
	panic("scheduler called in mustnotsleep code");

This has enabled me to start identifying code that
is causing me problems.  (I know this is a horrible
hack, but that's OK right now.)
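
For reference, a call site looks something like this
(a sketch only; do_synchronous_tmem_op() is a made-up
stand-in for any operation that must complete without
sleeping):

	mustnotsleep_start();
	/* anything called in here must never sleep */
	ret = do_synchronous_tmem_op(page);
	mustnotsleep_done();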

Rather than panicking, an alternative would be for the
scheduler to check mustnotsleep_count and simply always
reschedule the same thread (i.e. wake it instantly).
I wasn't sure how to do that.

I know this is unusual, but I am still wondering: is there
an existing kernel mechanism for doing this?

Rationalization: Historically, CPUs were king, and an OS
was designed so that, if there was any work to do,
kernel code would yield (sleep) to keep those precious
CPUs free to do it.  With modern many-core CPUs and
inexpensive servers, it is often the case that CPU
availability is no longer the bottleneck; some other
resource is.

The design of Transcendent Memory ("tmem") assumes
that RAM is the bottleneck and that CPU cycles are
abundant and can be wasted as necessary.  Specifically, tmem
interfaces are assumed to be synchronous: a CPU that is
performing a tmem operation (e.g. in-kernel compression,
access to hypervisor memory, or access to RAM on a different
physical machine) must NOT sleep and so must busy-wait
(in some cases with irqs and bottom halves enabled) for events
to occur.
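
Concretely, the busy-wait amounts to something like the
following (illustrative only, not actual tmem code; "op->done"
is a made-up completion flag set from interrupt context):

	/*
	 * Spin until the event completes, burning this CPU
	 * rather than sleeping; irqs and bottom halves can
	 * stay enabled while we spin.
	 */
	while (!ACCESS_ONCE(op->done))
		cpu_relax();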

Comments welcome!
Dan

---

Thanks... for the memory!
I really could use more / my throughput's on the floor
The balloon is flat / my swap disk's fat / I've OOM's in store
Overcommitted so much
(with apologies to Bob Hope)

