Thursday, March 27, 2014

Weapons of mass-pty considered harmful (trickery!)

Fixed a bug in enabler, which is part of pam_schroedinger,
that made it exit() when no more pty's could be allocated.
That's wrong of course; we just need to continue dictumerating
(enumerating via dictionary) the account. 500 parallel
su/sudo instances are no problem.

enabler allows you to mount dictionary attacks using su,
sudo, passwd or the like. You can stop this by using
pam_schroedinger, or by something like introducing an
enforced RLIMIT_PTY and having su, sudo etc. call
isatty(0), since otherwise socketpairs etc. could be used too.

I also went ahead and signed my github stuff with
this key. Any release tag containing an s at the end
of the version is a signed tag. Also, all commits will
be signed in the future.
You can verify this via git log --show-signature or
git tag --verify TAG after having imported the above
DSA key into your gpg keyring.

Friday, March 7, 2014

crypto shell trickery!

I recently imported crash into my github. It features an
IPv6-ready SSH-like remote shell, using strong public key
authentication and TLS-encrypted transport. It does not
rely on the SSL/TLS-internal X509 checking but compares
hostkeys bit-wise. It runs on Linux and embedded derivatives,
Android, BSD, Solaris and OSX/Darwin. It does not require root
and has back-connect and trigger modes built in. It can
also be invoked as a CGI.

Update: Pushed a fix into git to use SHA512 rather than
SHA1 for signing authentication requests. That makes
it incompatible with earlier versions. Also fixed a bug
where crashc did not properly distribute SIGWINCH to the
remote peer. Now you can use your ncurses porn and resize
your xterm, and it gets properly adjusted! I also tested
RSA authentication keys of up to 7500 bits in size. That
should resist upcoming (TS//SI//REL) QUANTUMFUCK computers.
I need to find the time to enforce cipher-lists and add
ephemeral keying though. (done)
Also good news: crash also integrates with sshttp!

Friday, February 28, 2014

lophttpd OSX trickery

I ported lophttpd to OSX/Darwin (10.8 tested).

As OSX/Darwin is almost POSIX-compliant (live_free() || die())
and I had already separated the low-level stuff into the
-flavor files, this was not overly complicated.
Now it pays off that I chose to do it that way, rather than
having a dozen #ifdefs stacked around.
lophttpd now cleanly builds on BSD (untested for some time),
Linux, Android and OSX/Darwin.

What annoys most are the various integer-size issues you
have with size_t, off_t, suseconds_t etc. and the corresponding
format specifiers for the *printf() family. However you do it,
one OS shouts at you for passing wrong-sized parameters to
*printf(). Standards notwithstanding. Live free or die.

You can easily build it on OSX/Darwin by installing Xcode
and then installing its command-line tools. That's not
gcc AFAIS, but it should also build if you manage to install
a gcc toolchain there.

I had to disable warnings about the deprecated use of OpenSSL
on OSX, and I have a hard time not commenting on that in light
of gotofail.
Live free and die. :)

Thursday, January 16, 2014

Fernmelder to the rescue

I've been experimenting with mass DNS resolving lately.

Imagine you have some large list of DNS names (FQDNs)
which you want to map to their IPv4 or IPv6 addresses.
That could be a GB-sized zone-file or an enumerated list of
names for some double-flux network when you research
how to take down a botnet. Either way, sometimes
you need to do that in finite time, and clearly
gethostbyname() in a loop is not the way to go!

For asynchronous resolving, glibc already has
getaddrinfo_a(), but it turns out that this function is
entirely useless because it's using threads. So, for
every request you send, a thread is clone()d, which does
not scale well. [The glibc aio_ functions also use threads;
it's a pity that glibc async support is so toast!]

So I hacked up something from scratch that works for me.
It's on my github. The output resembles that of dig
and of the zone files you know.

The problem is to find the right parameters for the amount
of requests to send in a row and the amount of usecs you
want to usleep() before doing that again. Otherwise you will
just hammer the DNS server and get no responses. The default
values are sane enough to yield good results.
The better your uplink to your recursive DNS, the
smaller the amount of time you need to usleep().
You can also distribute the requests across multiple DNS
servers by using more than one -N switch. The more reliable
DNS servers you have, the better, because you do not run
into any rate limiting.
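The batch-and-sleep pacing might be sketched like this in
Python (function names, defaults and the round-robin spread
across -N servers are my illustration, not fernmelder's
actual code):

```python
import socket
import struct
import time

def build_query(name, qtype=1, qid=0):
    """Build a minimal DNS query packet for `name` (A record by default)."""
    # header: id, flags (RD set), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)  # IN class

def resolve_batched(names, servers, batch=100, usecs=10000):
    """Fire queries in batches of `batch`, sleeping `usecs` in between,
    spread round-robin across `servers` (a list of (ip, port) tuples)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setblocking(False)
    for i, name in enumerate(names):
        s.sendto(build_query(name, qid=i & 0xffff), servers[i % len(servers)])
        if (i + 1) % batch == 0:
            time.sleep(usecs / 1e6)   # pace, or the server just drops us
    return s   # caller poll()s this socket and collects the replies
```

Tuning `batch` up and `usecs` down is exactly the trade-off
described above: too aggressive and the recursive server
rate-limits you into silence.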

Friday, November 8, 2013

Killing Schrödinger's Cat

This post is about the so called FoxAcid/QI system apparently
used by an agency to exploit browser sessions.

I first read about FoxAcid in an article by Bruce Schneier,
who made the distinction between Man in the Middle (MiM) and
Man on the Side (MoS) attacks. Although the referenced
slide, if properly implemented, shows a setup where no
packet race exists (therefore a MiM), there seem to be
use-cases for MoS attacks.

I am only discussing the HTTP/HTTPS case here, as for VPNs
etc. you clearly need MiM, and the aim of FoxAcid seems to
be the exploitation of web browser client sessions.

Deploying MiM on the backbone requires quite large and
expensive (both financially and technically) setups. In most
cases you require the cooperation of the ISP or of someone
who made the firmware of the routers along the path.
Nevertheless, if possible, MiM is clearly
the way to go, as it allows you to intercept and 'handle'
encrypted communication channels. MoS on the other hand fails
to 'handle' SSL connections, as it's not possible to spoof
a HTTP redirect into the session.

But MiM is easy to detect and hard to deploy
in foreign networks at large scale,
since you basically try to add a new router
(or even a transparent proxy) to the network infrastructure.

So you have some kind of lightweight MiM, called MoS.
Since most connections will either be HTTP or
initiated by HTTP, even if 'upgraded' to HTTPS later on,
MoS buys you a lot of benefit.

MoS does not require new router hardware or firmware to be
deployed, or routes to be added to the running configuration.
It works by simply plugging the MoS box into a port that
mirrors all packets seen, for 'diagnostic purposes'.

You need a second, normal uplink plug in case the mirror
port does not accept packets for sending, but that's doable.
I am not familiar with backbone routers and their mirror port
capabilities, but I guess that's easily done.

The MoS attack can then act upon seen SYN packets
(completing the handshake) or upon seen GET requests. The
latter requires tracking the connection and therefore
synchronous routes back to the client (to see the SYN|ACK).
The former does not, but in turn does not allow redirecting
to the expected location in some cases, as it's missing the
Host: information from the client request.
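The bookkeeping for the GET-based case is plain TCP
arithmetic; a sketch of how the injected reply's fields
derive from the observed request (my illustration of the
technique, not the actual framework's code):

```python
def spoofed_reply_fields(pkt):
    """Given the TCP fields of an observed client GET request, compute
    the fields a Man-on-the-Side box must use so that its injected
    reply, winning the packet race, is accepted by the client's stack.
    `pkt` is a dict with src, dst, sport, dport, seq, ack and
    payload_len of the observed segment."""
    return {
        # the reply travels in the reverse direction ...
        "src": pkt["dst"], "dst": pkt["src"],
        "sport": pkt["dport"], "dport": pkt["sport"],
        # ... acknowledges the full GET request ...
        "ack": (pkt["seq"] + pkt["payload_len"]) & 0xffffffff,
        # ... and continues the server's sequence space, so the
        # client merges it into the stream as if it came from the
        # real server.
        "seq": pkt["ack"],
    }
```

The real server's reply arrives later with the same sequence
numbers and is discarded as a duplicate, which is the whole
trick of the race.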

I implemented both cases here. At least this is how I
would implement a QI/FoxAcid framework; there might be
different ways. However, acting in a packet-race (you
cannot modify replies) leaves not too many options.
It can be easily tested in your home (W)LAN, and the
FoxAcid will show you by color which requests it handled.
The captured GET request is sent Base64-encoded (green) to the
FoxAcid server, which uses this info (blue) to properly
reconstruct the path and Host: parameters. The red part
is sent to the client in order to exploit, and to redirect
the browser to the original destination afterwards.
(No, I am not using this browser, and the green part is smudged
in order to prevent accidental info-leaks, as I cannot
read Base64 on the fly, but it's the Base64 encode of the blue
part.)

MoS is also interesting if you have the capability of
breaking 3G or 4G (or wifi) crypto in realtime, since it allows
you to spoof the replies to the sending station directly,
circumventing the network structure entirely (as opposed
to deploying a MiM somewhere behind the BTS/AP or
replacing them). If you are on foreign ground, that might
be easier with good RX/TX equipment and a laptop than
setting up and integrating a whole BTS on the roof top of an
embassy. :)

Monday, October 28, 2013

sshttp IPv6 trickery

During last hackweek, I added IPv6 support to
sshttp. Courtesy of IPV6_TRANSPARENT in recent Linux
kernels, it works as you know it from IPv4.

Besides that, it's now also possible to add IPv6 backends
to the frontend reverse proxy which is part of lophttpd,
and to run it on an IPv6 address to the outside.

Thursday, September 19, 2013

lophttpd seccomp trickery

Hey guys, you know what?

I added a seccomp sandbox to lophttpd. It is an experimental
Linux-only feature, enabled by the -DUSE_SANDBOX compile-time
switch.

I really should add that feature to the frontend reverse
proxy too as well as getting in touch with FreeBSD's
capsicum in order to support multiple platforms.

The benefit is that, even though lophttpd already runs
unprivileged in a read-only chroot, the impact of potential
RCE vulnerabilities is restricted even further. The sandbox
also covers the OpenSSL code, so it is not necessary to use
SSL privilege separation any longer.

To my knowledge lophttpd is the only webserver that supports
a seccomp sandbox.

Additionally, I removed any EC- or RC4-based cryptography
from the SSL code. Basically what you get now is RSA+AES+SHA,
which is believed to be a cipher suite secure from the NSA,
unlike NIST-based ciphers, or probably ECC entirely, not just
the NIST curves.

Thursday, August 1, 2013

PrivSeb trickery

During the recent annual 743c PrivSeb conference meeting, I
ported some of my tools to Android. Some of them require root
and some have been ported to work as shell user.

The former allow you to bridge out interfaces so you can use
your tools as-they-are locally in the lab to access devices
that appear somewhere on a remote smartphone. The benefit
is obvious.

The latter tool is an sshd-like management
for devices. I still need to figure out some details for
mass administration and stronger theft-protection.

We also enjoyed discussing the good old times, tried
to estimate how exploit development on Android will look
in future and exchanged some SHA hashes for the good. :)

After that, the (meanwhile well established part of the Con)
outdoor event Race-Condition was fun too. I might publish
some pictures of that later this year.

The ported tools won't be published, but if you want to
discuss them, feel free to send me an email.

Thursday, July 4, 2013

Portshell crypto trickery

I re-polished some of my old tools and imported them
to github. This time it was psc.

The port-shell crypter (psc) allows you to 'upgrade'
previously unencrypted sessions, no matter whether
it's a multi-hop SSH session, a portshell without a tty,
telnet, rlogin or minicom.
It gives you a full pty with end-to-end encryption.

You can also use it as a kind of two-factor authentication
with SSH, if you add psc-remote as shell inside
/etc/passwd or use OpenSSH's ForcedCommand option.
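The OpenSSH side of that could look like the following
sshd_config fragment (the user name and the path to
psc-remote are my assumptions for illustration):

```
# force psc-remote for one account, whatever command was requested;
# the second factor is then the psc key material, on top of SSH auth
Match User backup
        ForcedCommand /usr/local/bin/psc-remote
```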

Friday, June 28, 2013

Tunnel trickery

I just added fraud-bridge to my github. It was worth
coding, even though a lot of DNS and ICMP tunneling
tools already exist. Its features:

o tunneling of TCP-connections, keeping TCP-state
o via DNS: on UDP or UDP on IPv6
o via ICMP or ICMPv6
o HMAC (MD5) protection of the tunnel content
o transparently patching the announced TCP-MSS to prevent
  fragmentation or DNS packet splitting
o using the EDNS0 extension for DNS-tunneling to achieve good
  throughput (larger DNS TXT replies fit into one reply, honouring
  the announced MSS)
o coping with bind9 limits/quota while still having good
  latency for interactive sessions and good throughput
o once started as root, continues to run as unprivileged user
  inside a chroot
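The MSS patching boils down to walking the TCP options of a
SYN and rewriting the kind-2 option in place; roughly (a
sketch of the technique, not fraud-bridge's code, and the TCP
checksum must of course be recomputed afterwards):

```python
import struct

def clamp_mss(tcp_options, limit):
    """Walk the TCP options of a SYN and clamp any MSS option
    (kind 2) to `limit`, so the tunneled TCP payload always fits
    into one DNS reply without fragmentation or splitting.
    Returns the rewritten options bytes."""
    out = bytearray(tcp_options)
    i = 0
    while i < len(out):
        kind = out[i]
        if kind == 0:          # end-of-options
            break
        if kind == 1:          # NOP is a single byte
            i += 1
            continue
        length = out[i + 1]
        if kind == 2 and length == 4:      # MSS option
            mss = struct.unpack_from(">H", out, i + 2)[0]
            struct.pack_into(">H", out, i + 2, min(mss, limit))
        i += length
    return bytes(out)
```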

If you want to know what a fraud-bridge looks like, check
the current blog entry's picture, taken during one of my
lost-places tours.