Date: Thu, 8 Jul 2004 07:59:09 -0700
From: Michael Wojcik <Michael.Wojcik@...rofocus.com>
To: BUGTRAQ@...urityfocus.com
Cc: Andrew Daviel <advax@...umf.ca>
Subject: RE: Suggestion: erase data posted to the Web


> From: Andrew Daviel [mailto:advax@...umf.ca] 
> Sent: Wednesday, July 07, 2004 2:30 PM
> 
> A recent New Scientist article referred to the fact that
> "sensitive data" may persist in computer memory, and be 
> swapped to disk and persist after a power-down.
>
> I had observed a while ago that text such as credit card numbers
> entered into a form in Netscape could persist in RAM after the
> application exits, and this seems to be still true for Mozilla.
> 
> As discussed earlier in Bugtraq ("When scrubbing secrets in 
> memory doesn't work", 19 Nov 2002), in Linux/Unix the mlock() call can
> be used to discourage swapping (MmLockPagableSectionByHandle ? in
> Win32), while overwriting can be used to erase freed memory (as is done
> in Gnupg).

This conflates two related but different security issues: sensitive data
left in RAM, and sensitive data left on disk after having been paged out.
While the underlying problem (sensitive data outlasting the scope of its
proper use) is the same, the parameters are quite different (for example,
power-cycling generally takes care of the former but doesn't affect the
latter), so they need to be analyzed and addressed separately.

> It occurs to me that, while an unprivileged process cannot read system
> memory directly, that a simple allocation of a large chunk of 
> memory might get data freed up or abandoned by previously running 
> processes.

That shouldn't happen on any modern general-purpose OS.  There's an Orange
Book requirement called "object reuse" that requires all systems certified
at C2 or above to remove old data from an object (such as a region of
memory) before it is reallocated to a new process.  That's a simple
guarantee for a virtual-memory OS to provide when allocating memory, so all
the significant players do.
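
For illustration, here's a rough sketch of what that guarantee looks like
from user space (C, POSIX-style; MAP_ANONYMOUS is a common extension rather
than strict POSIX): freshly mapped pages always read back as zero, never as
some other process's leftovers.

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 1u << 20;             /* one megabyte of fresh pages */
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    /* Object reuse: the kernel must not hand us another process's old
     * data, so every byte of the new mapping is zero. */
    size_t nonzero = 0;
    for (size_t i = 0; i < len; i++)
        if (p[i] != 0)
            nonzero++;

    printf("non-zero bytes in fresh mapping: %zu\n", nonzero);
    munmap(p, len);
    return 0;
}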

> Given the now common practice of leaving computers powered on with
> "high-speed" internet access, and the recent appearance of 
> trojans such as Bankhook.A and Pwsteal.Refest, I suggest that best
> practice be updated to include the erasure and protection of "sensitive
> data".

Many people would say they already do include that.

However, I think simply naming "best practices" is not particularly useful
in the long run.  What you need is a weighted threat model, so you can
address threats in an appropriate order.  (The exact metric is debatable,
but it should probably combine attack probability, likely degree of damage,
and, at a lesser weight, the effort of implementing a defense.  And, of
course, where it's trivial to protect against a threat, it's worth adding
that protection even if the threat is unlikely.)
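
Purely as a toy illustration (the formula and the weights below are
placeholders of my own, not any kind of standard), such a metric might look
like:

/* Toy sketch only.  Higher score = address sooner. */
struct threat {
    const char *name;
    double probability;    /* estimated likelihood of the attack, 0..1 */
    double damage;         /* estimated cost if the attack succeeds */
    double defense_cost;   /* effort needed to implement a defense */
};

static double threat_score(const struct threat *t)
{
    /* Defense effort counts, but at a lesser weight than probability
     * and damage. */
    const double defense_weight = 0.25;
    return t->probability * t->damage - defense_weight * t->defense_cost;
}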

For the general populace, I suspect "phishing" and other technically
unsophisticated social-engineering attacks are likely more prevalent than,
and just as damaging as, the considerably more difficult class of attacks
that involves gaining unfettered read access to physical memory and disk
paging areas and trolling them for sensitive data.

Of course, it's trivial to memset over a sensitive area when you're done
with it, so programs ought to do so.  Locking pages to prevent them from
being written to disk may be more difficult: if it doesn't require special
privilege, it's a potential denial of service against physical memory
resources, and if it does, you may have to grant programs more privilege
than they should have, creating a worse security hole.
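
Here's a minimal sketch of that combination in C (POSIX mlock(); the buffer
and its size are placeholders).  The volatile function pointer is one common
way of keeping the final overwrite from being optimized out, which is the
problem the "scrubbing secrets" thread described; explicit_bzero() or
Win32's SecureZeroMemory() are alternatives where available.

#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Calling memset through a volatile pointer discourages the compiler from
 * treating the final overwrite as a dead store and removing it. */
static void *(*const volatile scrub)(void *, int, size_t) = memset;

int handle_secret(void)
{
    size_t len = 256;                  /* placeholder size */
    char *secret = malloc(len);
    if (secret == NULL)
        return -1;

    /* Keep the secret out of the paging file.  mlock() can fail without
     * privilege or when the memory-lock limit is exhausted (the
     * resource/privilege trade-off noted above), and some systems want
     * page-aligned addresses. */
    if (mlock(secret, len) != 0) {
        free(secret);
        return -1;
    }

    /* ... obtain and use the secret here ... */

    scrub(secret, 0, len);             /* erase as soon as it's done with */
    munlock(secret, len);
    free(secret);
    return 0;
}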

> One (probably very CPU-intensive, for some apps) way to enforce this
> behaviour for malloc'd memory would be to make free() do an erase
> operation as a system option. Creating "secure_free()" would 
> be better.

I doubt you'd notice the CPU cost for nearly any normal application.
However, as I noted above, the "object reuse" C2 requirement is widely
satisfied and makes clearing memory when it's freed largely unnecessary.  A
much better approach is to have each program clear its own sensitive data as
soon as possible.
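
For what it's worth, the proposed secure_free() is easy enough to sketch.
The names and the size-recording header below are purely my own
illustration; note also that a plain memset() right before free() is
exactly the kind of store a compiler may decide is dead, so the same scrub
trick as above applies.

#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical secure_malloc()/secure_free() pair.  A small header records
 * the usable size so secure_free() knows how much to overwrite; the union
 * keeps the pointer returned to the caller suitably aligned. */
typedef union {
    size_t n;
    max_align_t align;
} sec_hdr;

void *secure_malloc(size_t n)
{
    sec_hdr *h = malloc(sizeof *h + n);
    if (h == NULL)
        return NULL;
    h->n = n;
    return h + 1;
}

void secure_free(void *ptr)
{
    if (ptr == NULL)
        return;
    sec_hdr *h = (sec_hdr *)ptr - 1;
    memset(ptr, 0, h->n);   /* better: a scrub call the compiler won't drop */
    free(h);
}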

-- 
Michael Wojcik
Principal Software Systems Developer, Micro Focus

