Date:	Sun, 19 Feb 2012 21:55:23 +0100
From:	Egmont Koblinger <>
To:	Bruno Prémont <>
Cc:	Pavel Machek <>, Greg KH <>,
Subject: Re: PROBLEM: Data corruption when pasting large data to terminal

Hi Bruno,

Unfortunately the lost tail is a different thing: the terminal is in
cooked mode by default, so the kernel intentionally keeps the data in
its buffer until it sees a complete line.  A quick-and-dirty way of
switching to byte-based transmission (I'm too lazy to look up the actual
system calls, apologies for the terribly ugly way of doing this) is:
                 pty = open(ptsdname, O_RDWR);
                 if (pty == -1) { ... }
+                char cmd[100];
+                sprintf(cmd, "stty raw <>%s", ptsdname);
+                system(cmd);
                 ptmx_slave_test(pty, line, rsz);

Anyway, thanks very much for your test program, I'll try to modify it
to trigger the data corruption bug.


On Fri, Feb 17, 2012 at 22:57, Bruno Prémont <> wrote:
> Hi,
> On Fri, 17 February 2012 Pavel Machek <> wrote:
>> > > Sorry, I didn't emphasize the point that makes me suspect it's a kernel issue:
>> > >
>> > > - strace reveals that the terminal emulator writes the correct data
>> > > into /dev/ptmx, and the kernel reports no short writes(!), all the
>> > > write(..., ..., 68) calls actually return 68 (the length of the
>> > > example file's lines incl. newline; I'm naively assuming I can trust
>> > > strace here.)
>> > > - strace reveals that the receiving application (bash) doesn't receive
>> > > all the data from /dev/pts/N.
>> > > - so: the data gets lost after writing to /dev/ptmx, but before
>> > > reading it out from /dev/pts/N.
>> >
>> > Which it will, if the reader doesn't read fast enough, right?  Is the
>> > data somewhere guaranteed to never "overrun" the buffer?  If so, how do
>> > we handle not just running out of memory?
>> Start blocking the writer?
> I quickly wrote a small test program (attached). It forks a reader child
> and sends data over to it; at the end both write their copy of the buffer
> to a /tmp/ptmx_{in,out}.txt file for manual comparison of results (in
> addition to basic output of the line where the mismatch starts).
> From the time it takes the writer to write larger buffers (as seen using strace)
> it seems there *is* some kind of blocking, but it does not block long enough,
> or unblocks too early, when the reader does not keep up.
> For quick and dirty testing of effects of buffer sizes, tune "rsz", "wsz"
> and "line" in main() as well as total size with BUFF_SZ define.
> The effect for me is that the writer writes all the data but the reader never
> sees the tail of it (how much is seen varies, probably a matter of
> scheduling, frequency scaling and similar racing factors).
> My test system is a single-core Centrino laptop (32bit x86) with a
> 3.2.5 kernel.
> Bruno