Message-ID: <4947DBFA.9050108@redhat.com>
Date:	Tue, 16 Dec 2008 17:48:58 +0100
From:	Gerd Hoffmann <kraxel@...hat.com>
To:	Ralf Baechle <ralf@...ux-mips.org>
CC:	linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
	linux-api@...r.kernel.org
Subject: Re: [PATCH v3 0/3] preadv & pwritev syscalls.

Ralf Baechle wrote:
> On Mon, Dec 15, 2008 at 09:57:24PM +0100, Gerd Hoffmann wrote:
> 
>>> It fixes the alignment issue but still won't work; on MIPS 32-bit userspace
>>> will pass the 64-bit argument in two registers but the 64-bit kernel code
>>> will assume it to be passed in a single register.  It'd be ugly but passing
>>> a pointer to a 64-bit argument would solve the issue; something like this:
>>>
>>> sys_preadv(unsigned long fd, const struct iovec __user *vec,
>>>                   unsigned long vlen, loff_t __user *pos);
>>> compat_sys_preadv(unsigned long fd, const struct compat_iovec __user *vec,
>>>                   unsigned long vlen, loff_t __user *pos);
>> Suggestion from the s390 front was to explicitly pass high and low part
>> of pos as two arguments.  A bit ugly too, but should work fine as well
>> and it avoids the user pointer dereference.  What do you think about this?
> 
> That's what the wrapper which you deleted was doing ;-)  So yes, I like
> it.

Yep, but I'm trying to find a way to have it work without per-arch
wrappers ...

> It just raises one new problem, endianness - are arguments being passed
> as low/high or high/low?  On MIPS we've been solving the issue with the
> merge_64() macro which is defined depending on the byte order:
> 
> #ifdef __MIPSEB__
> #define merge_64(r1, r2) ((((r1) & 0xffffffffUL) << 32) + ((r2) & 0xffffffffUL))
> #endif
> #ifdef __MIPSEL__
> #define merge_64(r1, r2) ((((r2) & 0xffffffffUL) << 32) + ((r1) & 0xffffffffUL))
> #endif
> 
> The actual syscall wrapper could use it like:
> 
> asmlinkage int compat_sys_pwritev(unsigned long fd,
>        const struct compat_iovec __user *vec,
>        unsigned a3, unsigned a4, unsigned long vlen)
> {
> 	loff_t offset = merge_64(a3, a4);
> ...

i.e. the ordering of the split argument depends on the architecture's
endianness?  What is the reason for this?

I'd prefer to have the ordering coded explicitly instead, like this:

asmlinkage int compat_sys_pwritev(unsigned long fd,
       const struct compat_iovec __user *vec, unsigned long vlen,
       unsigned pos_low, unsigned pos_high)
{
	loff_t pos = pos_low | (loff_t)pos_high << 32;
	[ ... ]

cheers,
  Gerd


