Friday, July 6, 2018

A Path to Extreme Longevity

Standard Disclaimer:  I'm a computer scientist, which of course also makes me an expert in the fields of optics, holography, and cryobiology.

At a high level, I propose the infrared holography technologies being developed by Dr. Mary Lou Jepsen at Openwater as a possible solution to the "unfreezing problem" being worked on by Alcor, a cryonics lab that suspends human heads with the hope of one day reanimating them.  This is, of course, only one small piece of the greater puzzle, and even after we record a full human head at the appropriate resolution, the process of digitally reanimating the head will still be a massive open problem, though tangible enough that it can be worked on at scale.

I believe this solution is ideal because the infrared holography scanning can be done in a completely noninvasive, nondestructive way.  It's also a great fit for Openwater because, as I understand it, their technology has two major limitations: it has a lot of trouble with movement, and the infrared holography has trouble with blood absorbing the scanning radiation.  Movement should not be a problem due to the freezing, and since Alcor replaces blood with medical grade antifreeze, that should vastly improve the potential depth of the infrared scans.

Milestone 1: Calling for Volunteers

A key difficulty for this process is finding a volunteer to be the first head to get scanned.  Alcor has a number of members signed up for cryopreservation, but for this project, volunteers would need to understand that in the very best case scenario, every thought, every memory, every detail of their life would be public domain, and picked over by countless scientists, potentially for generations to come.

Milestone 2: Developing the Scanning Process

A proper scanning harness would need to be developed to perform the most detailed scan possible, while also ensuring minimal damage to the subject.  The subject head(s) may need to be retrieved from cold storage and scanned regularly as scanning technology improves, so minimizing damage is essential to having progressively more detailed scans.

Milestone 3: Open Source a Head

Once the head scan is complete, the full scan data should be published as widely as possible.  I imagine this effort could be similar to this generation's Human Genome Project, with researchers around the world digging into the available data to make sense of it all.  This publishing of head data should ideally happen at some regular cadence, or any time there is a major imaging breakthrough.

Milestone 4: Memory Reconstruction

We have no idea how much precision is required to reconstruct a human memory, but we do know that they're in there somewhere.  Maybe the first memory researchers manage to extract will be along the lines of "Arizona is hot", but as the technology improves, researchers might be able to recreate the full scenery of memories as they were remembered.  The subject could even be asked to memorize some string of data, like a password, and researchers could be challenged to reconstruct that memory.

Milestone 5: Full Network Reconstruction

I imagine that eventually researchers will build a compiler to transform raw imagery into a representative neural structure.  It will be hard to know at first how much lossiness is acceptable to recreate full human cognition.

Milestone 6: Reanimation

So, now what?  We have a massive neural network representing a full human consciousness, and at some point we have to hit the "go" button and see what happens.  This will likely require an unimaginable amount of compute resources, and will probably at first run orders of magnitude slower than typical human interactions, but what we learn from these experiments will likely inform subsequent generations of "brain compilers" and compute architectures.

Sunday, July 23, 2017

Crawling and Exploiting Forms with SQLMap

Exploiting POST variables with sqlmap has always been a bit of a pain in the ass.  Here's a great one-liner to very quickly crawl, detect, and exploit forms with sqlmap:

sqlmap -u <target> --forms --batch --crawl 1

Of course, you can crawl deeper if you want, but if you know where the login form is, there's no need to waste time.  This is almost always much more convenient than digging through the form code and using --data with sqlmap.  Enjoy!

Friday, April 28, 2017

Ants Don't Have Blood

Ants have something called "hemolymph" which is a clear fluid that flows without the assistance of a circulatory system, but that's probably the least derpy thing to come out of the latest chapter in the anti-Jihan drama war.

There is no blood here.

Now, I love me some good bug hype as much as anyone, so when http://www.antbleed.com came online a couple days ago, I took notice.  Especially since I've got some ANTMINER S9s sitting in my garage that may be vulnerable to the issue.  Unfortunately, the number of red pixels on that website isn't really justified for this class of bug.  First off, the "bleed" suffix, referencing the old Heartbleed bug, has been reserved since then for memory disclosures.  That means vulnerable systems that you can hit in a funny way to make them disclose important information to you.  Recent examples are bugs like SSHBleed and CloudBleed.  There's no blood here though, both metaphorically in terms of urgency, and literally in terms of memory disclosure.  These miners have a feature built in that checks a web service to see if they are stolen, and if they are, they refuse to mine.  That's it.

Anti-Theft Telemetry

So what is Anti-Theft Telemetry?  This is a technology built into most phones, many new cars, and all sorts of embedded electronic gadgets that phone home regularly to determine if they are stolen, and if they are, the devices can be disabled.  If a mining rig gets stolen, the owner can report the theft to BITMAIN, who can flip a switch so your average thief will have a hard time getting the device working for them.
Now, I'll be the first to say that telemetry technologies are stupid, and in many ways, invasive, but it's also an extremely common, and often requested, theft deterrence feature.


Central Control

The thing that a lot of people have been freaking out over is the idea that Jihan, owner of BITMAIN, could shut down a huge part of the Bitcoin mining network if he wanted to, since a large portion of it is running on BITMAIN hardware.  While it is true that yes, he could screw over all his customers if he wanted to, it would damage his company irreparably, and for what?  The vast majority of the affected customers would be back online within a couple hours.  This, however, is no different from a large mining pool deciding to divert hashing power, or block users, except a mining pool could then steal its customers' bitcoins as well.

The Man in the Middle

What I think is a much more serious concern is the Man in the Middle problem.  A malicious actor (and we've seen quite a few recently) could hijack the telemetry service and use it to make a political statement.  The derp-de-doo who implemented this feature didn't use HTTPS for the telemetry connection, which opens it up to several points of attack.  Still though, the worst case scenario is denial of service, and since no one uses TLS for their mining traffic either, these points of attack are exactly the same as those available for hijacking mining traffic itself, like the attacks we saw in 2014 that are still just as possible today.  Again, those attacks would net actual bitcoins, and are therefore much more likely for a profit driven attacker to go after.  The only threat (and it is a serious one) would be from those who would want to hurt BITMAIN's reputation.


Who done it?

One question that I feel isn't getting asked enough is, who did this?  We all know that Codenomicon found Heartbleed, Qualys found SshBleed, and Tavis found CloudBleed, but the AntBleed website has a distinct lack of identifying markers.

Besides there being nothing on the actual site, a quick whois will tell you that the site was registered with Namecheap, a registrar that allows you to register domains with Bitcoin.  It's also WhoisGuard protected, so whoever registered the domain didn't want anyone to know who they are.  The site is also being hosted on GitHub under an anonymous "antbleed" account which was used exclusively for setting up this site.  Luckily someone cloned the repo before the antbleed user deleted all their history, or we wouldn't even have that.

Clearly, whoever is promoting AntBleed doesn't want to be identified, which solidifies the suspicions that this was less of a bug report, and more of a pure political hit piece.  Jihan, owner of BITMAIN, upset a lot of people a couple months ago when he started speaking out against how the core Bitcoin developers were behaving, and began pointing the hashing power of his mining pool towards an alternative implementation, undermining the current core development team.  The retaliation has been swift, and strong, and most of all, shocking.

Wednesday, April 12, 2017

MyriadCoin : The Untold Story of the Invincible Blockchain

One of my favorite security mechanisms built into a cryptocurrency is the concept of the Multi-Algorithm PoW.  This means multiple proof of work algorithms with separate, floating difficulties that automatically adjust to ensure that miners on all algorithms get paid out equally.
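As a rough sketch of the idea (this is not Myriad's actual retargeting code, and the target spacing and difficulty numbers here are made up for illustration), each algorithm carries its own difficulty that scales with how fast that algorithm's miners are actually finding blocks, leaving the other algorithms untouched:

```python
# Toy sketch: independent per-algorithm difficulty retargeting.
# If one algorithm's miners find blocks faster than the target
# rate, only that algorithm's difficulty rises.

TARGET_SPACING = 150  # hypothetical: seconds per block, per algorithm

def retarget(difficulty, actual_spacing, target_spacing=TARGET_SPACING):
    """Scale difficulty by how fast blocks actually arrived."""
    return difficulty * (target_spacing / actual_spacing)

# Each algorithm carries its own difficulty (values invented).
difficulties = {"sha256": 1e12, "scrypt": 1e6, "groestl": 1e7,
                "skein": 1e7, "yescrypt": 1e3}

# Suppose a wave of new SHA256 ASICs halves the sha256 block
# spacing; only the sha256 difficulty doubles in response.
difficulties["sha256"] = retarget(difficulties["sha256"], actual_spacing=75)
print(difficulties["sha256"])  # 2e12: doubled
```

The point is that hashpower flooding in on one algorithm can't starve miners on the other four; each difficulty floats on its own.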

The first coin I ever saw do this was RuCoin, released back in 2011, which worked with sha256 and scrypt.  I first heard about it in 2013 at the Bitcoin conference in San Jose.  ASIC miners for Bitcoin were finally starting to ship, and there was quite a bit of discussion around whether or not Bitcoin should switch up its Proof of Work algorithm, and exactly how it should do that.  As a huge proponent of trust agility, I was blown away by the idea that a cryptocoin could have two PoW algorithms with independently floating difficulties.  Extending on that, I really liked the idea of pluggable PoW algorithms that could be added and removed from a blockchain as needed, and could be reincentivized on a schedule.

Enter the Myriad

For some reason, RuCoin never released its source, and eventually it died, but in early 2014, the hero of our story arrived, and it was called MyriadCoin.


Myriad was really interesting to me because it didn't just have two PoW algorithms, it had *five*, all floating independently.  To understand why this is a massive security advancement, you first need to understand how 51% attacks are executed.

How to perform an effective 51% attack:

ASIC Coin: One of the key threat actors in this scenario is a nation state able to manufacture custom ASICs to attack a network.  Algorithms that are ASIC friendly are, by design, extremely cheap to implement in hardware.  This also means that the security of the network is 100% dependent on the production lines of ASIC manufacturers who may or may not be open to the public about their product.

GPU Coin: These coins have algorithms that are a bit more expensive to implement in hardware.  The cheapest way for an attacker to attack a GPU Coin would likely be to spin up the required number of GPUs in Amazon's EC2 environment for just long enough to perform a double spend.

CPU Coin:  These coins bring cryptocurrencies back to their roots, as the most efficient way to mine them is on a CPU.  Usually this means that they require significant amounts of memory, or memory bandwidth, that isn't generally available on GPUs.  Unfortunately, this means an attacker with a large botnet would not have a lot of trouble dominating the network for short periods of time, since botnet operators can often control tens of millions of CPUs at any given time.

How MyriadCoin defends against 51% attacks:

Here's where the magic happens.  MyriadCoin has five different proof of work algorithms that all adjust difficulty dynamically.  Two of them are ASIC algorithms, two are GPU algorithms, and one is a CPU algorithm.

sha256/scrypt: First up, sha256 and scrypt, our favorite PoW algorithms from Bitcoin and Litecoin.  MyriadCoin can also be merge mined with Bitcoin and Litecoin, so you can use your ASICs to mine Bitcoin and Litecoin, and also get some extra MYR on the side for free.

groestl/skein: Groestl and Skein were actually both SHA-3 finalists, and the SHA-3 competition required that the algorithms could be cheaply implemented in hardware.  This means that they could be mined with FPGAs, or even ASICs some day, but they are currently being mined with GPUs.

yescrypt: I take some personal pride in this one.  YesCrypt is a CPU centric hashing algorithm created by SolarDesigner, infosec legend and creator of the John The Ripper password cracking toolset.  It was created for the Password Hashing Competition, where it was a finalist.  It was heavily inspired by scrypt, with a lot of extra defenses against time-memory trade-off (TMTO) attacks.  When MyriadCoin launched, an algorithm called Qubit was sitting in this spot, but I pushed for yescrypt pretty heavily on IRC and Reddit for a long time, and it finally got included.

The important part of this is that for any feasible 51% attack, an attacker would need to pin down at least three of the five algorithms, and very few attackers are capable of such feats.  Nation states who might be able to attack ASIC algorithms, and corporations who might be able to attack GPU algorithms, typically don't have the ability to operate large botnets, and those with the ability to operate large botnets generally don't have the physical presence required to operate large GPU or ASIC mines.  Furthermore, for any attack of extended duration, any algorithms that are getting pinned down would be deprioritized during the regular difficulty adjustments, so even pinning down three algorithms would not work for long.
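The back-of-envelope math behind "at least three of the five" is simple enough to write down (the equal-share assumption is mine; real hashrate distribution is messier, but the difficulty retargeting pushes toward it):

```python
# With five independently-retargeting algorithms each producing an
# equal share of blocks, an attacker who fully dominates k of the
# algorithms (and holds none of the remaining hashpower) controls
# k/5 of block production.

def attacker_share(dominated_algorithms, total_algorithms=5):
    return dominated_algorithms / total_algorithms

for k in range(6):
    print(k, attacker_share(k), attacker_share(k) > 0.5)
# Only k >= 3 crosses the 50% threshold.
```

So dominating one or even two entire algorithm classes still leaves the attacker a minority of block production.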

The Tragedy of Vertcoin

An interesting case study in this area is that of Vertcoin.  It has had not one, but two major PoW changes in its history: the first to run from ASIC mining, and the second to curb the threat of botnet takeovers.  Vertcoin was marketed, from the beginning, as the GPU-forever coin.  It launched with a modified version of the scrypt algorithm that used an N-value of 11 rather than the N-value of 10 used by Litecoin, Doge, etc.  They also had the ability to easily bump to higher N-values as needed, adding extra memory requirements to presumably avoid the impending ASIC apocalypse that Litecoin and family were facing.

Unfortunately, in a move that no one expected, when KnC released their Titan ASIC miner for Litecoin, they included with it hardware support for a TMTO attack (mentioned earlier when discussing yescrypt) that effectively made the Titan miner work on scrypt coins of any N-value, notably targeting Vertcoin.

At that point, Vertcoin had no choice but to fork, and switch to a Password Hashing Competition finalist, Lyra2.  This worked well, but it wasn't the end of their problems.  Lyra2 was designed to be run on CPUs, so it became so popular with botnets that they needed to fork once again in 2015, after it became clear that a single botnet was controlling more than 50% of the mining power.

Lessons for Bitcoin

With all this AsicBoost drama, and renewed talk of PoW switching, I still think that if this ever were to happen on the Bitcoin blockchain, there would need to be a gradual transition.  Maybe at first 99% of the mining rewards would still go to the SHA256 miners, and 1% would go to MAGICHASH, the magical perfect PoW algorithm that everyone wants to switch to.  The cut received by MAGICHASH could be gradually increased, and after a year it could be 50/50 rewards, and after two years, SHA256 could be phased out completely.  Of course, if the Bitcoin community can't even agree how to scale block size, it's hard to imagine that they'll be modifying their PoW algorithm any time soon.
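To make that schedule concrete, here's a hypothetical sketch of the ramp (the linear interpolation is my own choice; the anchor points are the 1% start, 50/50 at one year, and full phase-out at two years described above):

```python
# Hypothetical reward schedule for a gradual PoW transition from
# SHA256 to "MAGICHASH".  The curve shape is invented; only the
# milestones (1% at launch, 50/50 at 12 months, 100% at 24 months)
# come from the post.

def magichash_share(months):
    """Fraction of mining rewards paid to MAGICHASH miners."""
    if months <= 0:
        return 0.01
    if months <= 12:
        # ramp 1% -> 50% over the first year
        return 0.01 + (0.50 - 0.01) * months / 12
    if months <= 24:
        # ramp 50% -> 100% over the second year
        return 0.50 + 0.50 * (months - 12) / 12
    return 1.0  # SHA256 fully phased out

for m in (0, 6, 12, 18, 24):
    print(m, round(magichash_share(m), 3))
```

SHA256 miners get the complement at every step, so neither side faces a cliff-edge loss of income on any single day.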

Tuesday, March 28, 2017

LaceNet : A set of suggested implementations of Neural Lace technology

Well, he's done it.  Elon has finally launched the venture he's calling "Neuralink".

http://www.theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs

Prerequisites:  If you have not heard of Neural Lace, you have homework:

https://gizmodo.com/scientists-just-invented-the-neural-lace-1711540938
https://www.technologyreview.com/s/602488/injectable-wires-for-fixing-the-brain/
http://www.teslarati.com/elon-musk-teases-neural-lace-announcement

Basically, there exists a technology such that a tiny mesh can be injected into a human brain.  Neurons are attracted to, and grow on to the mesh.  Neurons that have grown to the mesh can then be individually addressed by a wire coming out of the brain.  As of last July, it was announced that a research group was able to do the injection without causing any harm to the test subject, and without the test subject rejecting the mesh by forming scar tissue around it, which has been a huge problem with previous iterations of Deep Brain Stimulation setups.

Disclaimer: I am a Computer Scientist who likes to get lost in Wikipedia articles.  I have no formal training in human brain anatomy, nor do I have any inside information on how Neural Lace technology functions beyond the articles mentioned above.

The LaceNet Module:

I'd like to share my vision for what I'm calling the LaceNet Module.  The LaceNet module is a small Bluetooth micro-controller, approximately the size of a dime, with an on board battery that should last a week or longer.  It will be capable of communicating with other LaceNet Modules, storing configuration data, and pairing to a cellphone, where it can be configured with a custom LaceNet App.

The LaceNet Module should be attachable via neodymium magnets to the LaceNet Base, which is a dumb analog multiplexer attached to the back of the skull.  Any number of Neural Lace meshes can be attached to the LaceNet Base.  Probably just a few at first, and then more as new utility is explored.

The theory is that LaceNet Modules can be easily detached, and the user will simply revert back to Human Brain 1.0.  The modules will also be easy to upgrade, and so long as there are standards for how the modules communicate with the base, you can even have competing companies building lighter weight, longer lasting, or more capable modules without needing any sort of intrusive surgical procedures.

The easiest way to program early versions of LaceNet modules will be with a regular smartphone, and the app.  It shouldn't necessarily need to always be connected with the phone, but if you want to schedule something like recurring stimulation montages at certain times of day, you would be able to easily control timing from the app, which would then load the schedule into the module.
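Purely as a speculative illustration (no LaceNet software exists; every name and field below is invented), the schedule the app loads into the module might look something like:

```python
# Hypothetical sketch of a stimulation schedule the LaceNet app
# could load into the module.  All names and fields are invented.
from dataclasses import dataclass

@dataclass
class Montage:
    name: str
    start_hour: int    # 24h clock
    duration_min: int
    channels: list     # which mesh electrodes to drive

schedule = [
    Montage("morning-focus", start_hour=8, duration_min=20, channels=[1, 4]),
    Montage("evening-wind-down", start_hour=21, duration_min=10, channels=[2]),
]

def due(schedule, hour):
    """Return the montages that should start at the given hour,
    so the module can run them without the phone connected."""
    return [m for m in schedule if m.start_hour == hour]

print([m.name for m in due(schedule, 8)])  # ['morning-focus']
```

The key design point from the post is that the module stores the schedule itself, so the phone only needs to be present when you change it.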

The Shared Neuron Architecture:

The obvious first application of Neural Lace technology is Deep Brain Stimulation.  The important parts of the brain are well mapped, and with this new technology, stimulation can be cleaner and more controlled than it has ever been in the past.  What I'd like to talk about instead is something I haven't heard anyone else talking about, and what I'm calling the Shared Neuron Architecture.

Imagine having a mesh implanted in your prefrontal cortex in an area commonly used for problem solving.  Now imagine that a colleague has done the same.  Now imagine that any neural stimulation recorded from the neurons on your mesh gets sent to the mesh of your colleague, and vice versa.  It might take your two brains some time to adapt to the confusion, maybe even years, but once it does, problems that you are thinking about could inspire solutions from your colleague, and problems your colleague is thinking about could inspire solutions from you.  In time, this would allow the two of you to directly draw off each other's experiences.

Access control would be critical for a system like this, as well as training.  These things could be managed in the LaceNet App, but ideally a recording of per-user neural affinity would be stored in the module itself.  I'd imagine when you're at work you'd be more interested in sharing neurons with coworkers, and maybe at different times of the day you would prefer to share more with friends or family.

Human brain latency is relatively high.  High enough that, should you feel like it, you could share neurons across the internet, though early use cases would probably be sharing neurons module to module.  Sharing the same neurons with multiple people should also be completely possible, so long as you adjust for relative amounts of influence.

Now imagine a 5-8 graph, that is, a large, connected graph where everyone is connected to somewhere between 5 and 8 people.  I don't think it would be feasible to share neurons with more than that many people, as it would get extremely noisy.  However, if your connections were arranged in a graph to filter out less interesting problems and relay more interesting problems, you could, in a sense, be connected to thousands, or millions, or billions of people simultaneously.  At that point the LaceNet becomes a globally distributed problem solving machine able to take advantage of the entirety of human consciousness to solve any problem that comes up.
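As a toy sketch of the relay idea (the threshold, decay, and topology are all invented; a 10-node ring with 2 peers each stands in for the 5-8 peers described above):

```python
# Toy sketch of the "interesting problems get relayed" idea:
# a signal spreads from peer to peer, decaying at each hop, and
# is only relayed while it stays above an interest threshold.

def relay(graph, start, strength, threshold=1.0, decay=0.5):
    """Flood a signal from `start`; it keeps spreading while its
    decayed strength stays at or above `threshold`."""
    reached, frontier = {start}, [(start, strength)]
    while frontier:
        node, s = frontier.pop()
        if s * decay < threshold:
            continue  # too boring/weak to relay any further
        for peer in graph[node]:
            if peer not in reached:
                reached.add(peer)
                frontier.append((peer, s * decay))
    return reached

# A tiny 10-node ring where everyone has exactly two peers.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
print(len(relay(ring, 0, strength=16.0)))  # 9 of the 10 nodes
print(len(relay(ring, 0, strength=1.0)))   # 1: never leaves the start node
```

A real network would presumably gate relaying on something far richer than raw signal strength, but the filtering structure is the point: weak chatter stays local, strong signals propagate.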

Training Audio Channels:

Maybe I should save this for the next blog post, but I have a number of ideas around how to solve a much more difficult problem, direct communication.  The Shared Neuron Architecture is great for expressing bursts of thought in the form of analog pulses, but it's useless for any sort of detailed expression.

I feel like the most straightforward way to deal with this problem is to re-purpose the parts of the brain used for transmitting and receiving audio.  This is going to require a lot of training at first, but will enable us to communicate at high speeds with the people and machines around us, and it can all go over the same LaceNet architecture described above.

Saturday, March 11, 2017

Now you're thinking with Qubes!

So I've finally done it.  This may be my fourth attempt to use QubesOS, and I think it's really going to stick this time.  After yet another Ubuntu boot failure due to their inability to QA day-to-day LUKS usage, I've spent an uncomfortable week forcing myself to adapt to all the fun little quirks that come with an ultra secure operating system, and I think I'm finally getting the hang of it.

I thought it would be fun to write up a summary of the problems I encountered in my first week of using Qubes, and how I realigned my way of thinking to come up with a workable solution.

Quick Architecture Summary:

For those of you not super familiar with QubesOS, it's essentially a desktop Linux distro that makes heavy use of the Xen hypervisor to compartmentalize the ever-loving crap out of every activity that you would normally want to do on a computer.  There exist Template VMs which contain the base operating systems, and there are App VMs which are made to contain your apps.  In your App VMs, anything you touch outside of your home directory gets wiped at reboot, so if you want to install stuff, you need to install it in your Template VMs.  Template VMs are also firewalled such that they can't touch anything on the internet except for update servers.  There are also Service VMs that manage your network and firewall configurations, but you typically don't need to touch them.


Problem #1  Decoration time!

One thing that I actually *really really* like about Qubes is that every compartmentalized VM uses colorized window decorations to give you an instant, visceral understanding of the privilege level of the window you're typing into.  For example, all of your work windows could be blue, and all your personal windows could be green.  The mapping of which colors go to which VMs is configurable at any time.

I also need to figure out how I want to compartmentalize my data, which is the key feature of Qubes.  This machine is my personal desktop system at home which I use for things such as:
  1. Shitposting on reddit
  2. Browsing random onion sites
  3. Playing with interesting new crypto-currencies
  4. Playing around with weird machine learning stuff
  5. Managing random servers via SSH
  6. Downloading and serving up TV shows to my various devices
Why have I been doing this all on one machine for so many years you ask?  Shut up, that's why.  There are obviously some clear security wins to be had by breaking some of this out.  I started by making a VM called "browsing", which I colored green.  Then I made a VM called "media" which I made purple.  Then I decided to make my cryptocoin VMs yellow (It's a nice, golden yellow), and figured I'd make my SSH VM and weird code VMs all blue.  This is where all my SSH keys go.  It's nice to know that if my browser gets popped, I don't lose all my keys too.

An awesome thing about the latest release is that QubesOS 3.2 now comes by default with a VM called "anon-whonix" for Tor browsing, which is colored red.  It uses the same workstation/gateway model that Whonix uses, and it all works just beautifully out of the box.

Problem #2 Redhat Sucks!

I dunno, for one reason or another, I've never been a RedHat fan.  I know Joanna loves it, but it's just not my thing.  This has killed me in my previous attempts at running Qubes, but this time, there was a simple, one line solution.

[user@dom0 ~]$ sudo qubes-dom0-update qubes-template-debian-8

If you want something else, check out the template documentation on the Qubes documentation page : https://www.qubes-os.org/doc/  You can actually install Ubuntu, Kali, Arch, even Windows as a template VM.  Debian works fine for me now though, so moving on.  Another neat thing here is that I didn't need to rebuild my browsing VM.  Since my AppVM is really just a home directory, I easily swapped the underlying template from Fedora to Debian, and everything was good to go.

Problem #3 USB is hard.

The last time I tried QubesOS was the previous release, 3.1.  Back then, if you wanted to use a USB stick, it was open heart surgery time.  USB sticks had to be wired to AppVMs manually on the command line, and if you forgot to detach any USB devices, and rebooted, you'd get some crazy cascading failures that would prevent even the Service VMs from coming up.  I'm happy to say, that's all changed now in 3.2.  Now you just plug in a USB device, right click on the App VM you want to attach it to, and you're golden.  It even works nicely with my LUKS encrypted USB sticks.  I just open the file browser from the drop-down menu, and I can see my encrypted device.  When I click on it, it prompts for my LUKS passphrase in a nice, graphical, password prompt.  How handy!

Problem #4 Fighting with Plex

I use Plex as my primary media server, which led to some complications.  First of all, there isn't really a Debian version of Plex available, so I ended up just using a Fedora template.  Remember that things you install in your AppVM don't stick around on reboot.  Also, if you install Plex in your Fedora template, then any time you boot any AppVM that uses Fedora as a base, you'll be running a happy little Plex server in that AppVM too, which is undesirable.

I ended up creating my own Template VM for Plex.  Sure, it seems a bit silly to have a Template VM in use with only one App VM, but if it's stupid, and it works, it's not stupid.

This worked pretty well, but I had another problem.  Plex likes to store all its data in /var/lib/plexmediaserver, so it gets wiped at each reboot, requiring me to reconfigure the server each time I restart the media VM.  I originally solved this by just configuring Plex in the Template VM.  One problem is that I needed internet access to set up Plex with my online account  (remember Template VMs typically only access update servers).  There's actually a button in the Firewall Configuration for the VM to allow full internet access for five minutes.

Still though, my viewing data was not being saved across reboots, so everything would need to re-thumbnail, re-encode, and show up as new.  The solution that finally hit me was simple.  From the Template VM:

cp -avr /var/lib/plexmediaserver /home/user/plexmediaserver
mv /var/lib/plexmediaserver /var/lib/plexmediaserver-bak
ln -s /home/user/plexmediaserver /var/lib/plexmediaserver

Since the Plex data is now in my home directory, it persists when I configure it from my App VM.  Everything works exactly as expected now.

Problem #5 It's getting crowded in here!

I like to install my base operating systems on SSDs, but unfortunately, the SSD in this system is only a few hundred gigs.  The bitcoin blockchain alone is over 100 gigs these days, and media servers tend to fill up quick.  I've got this nice 6tb drive sitting right here too, but none of my AppVMs can access it, except one at a time.

I'll admit that I spent way too long scheming on fancy bindmount setups, or some shared filesystem situation.  I also almost broke down and re-partitioned my 6tb drive so I could share out mount points individually, but I didn't want to risk data loss.

My eventual obvious breakthrough was symlinks.  I ran this from Dom0 after shutting down all running AppVMs.

cp -avr /var/lib/qubes/appvms /mnt/6t/appvms
mv /var/lib/qubes/appvms /var/lib/qubes/appvms-bak
ln -s /mnt/6t/appvms /var/lib/qubes/appvms

I actually came up with this solution before the final step of my Plex configuration, and you'll note how similar they look, but I figured it would be weird to revisit the Plex thing later.

I also edited my crypttab and fstab to make sure the 6tb drive gets attached at boot.  After this, I was able to go into the AppVM settings for each AppVM and set the disk storage to be as large as the AppVM would need to get.  There seems to be a 1tb maximum unfortunately.  I don't know if that's a Xen limitation, or just an arbitrary value that some Qubes developer thought would be more data than anyone would need, but I do have a VM that wants 4tb, though I can make do without it for a while.

Problem #6 OMG the CIA is hacking everyone!

Conveniently, the Vault 7 Wikileaks leak happened this week.  A few clicks later, and I had my shiny new red Wikileaks AppVM.  This was pretty neat, because in the AppVM, I was able to apt-get install qbittorrent, download the torrent file in my browsing VM, send it over to the wikileaks VM, and download it there.

Once the passphrase was announced, I could dig through it, without fear of any browser exploits or PDF exploits.  Not that Wikileaks would ever release malicious files, but it felt really good to be able to dig through them in a completely isolated environment.

This is also when I got really used to how copy and paste works in Qubes.  This is actually super neat.  From my green browsing VM, I could go to twitter, pull up the wikileaks tweet with the crazy long 7zip password, and then "ctrl+c" like normal.  This put the password into my browsing VM's copy buffer.  Then, with the browsing window up front, I pressed "ctrl+shift+c", which indicated to Qubes that I wanted to pass my copy buffer to another VM.  Once I alt-tabbed over to my wikileaks VM, I pressed "ctrl+shift+v" to load the password into the copy buffer on the wikileaks VM.  Then I right clicked and selected "paste" in the context menu.  This was so much slicker than how VMware does copy and paste, when that even works, and much less frustrating than trying to hand type data across VMs.  It's also pretty damn secure.  A shared copy buffer across all AppVMs would be a disaster, but this method is very precise, and simple enough that after doing it a few times you start to do it without thinking.

Problem #7 Oh crap, I probably just said too much

#YOLO.  I'm a firm believer in Kerckhoffs's principle.  That is to say, knowledge of how I deploy my security should only serve to discourage any potential attackers.  Even a potential exposure on my end in the name of education is a net win for everyone.  If you understood most of what the hell I was talking about in this post, you're probably ready to try out Qubes.


Friday, March 10, 2017

EmpirePanel : Hack with Friends!

Intro:

You may be familiar with what was once called "PowerShell Empire", and is now referred to simply as "Empire".  It's the hot new post-exploitation framework with a lot of fancy features.  One major drawback, however, is that Empire lacks any real multiplayer support.  I noticed that Empire is, at its core, a Flask app, so why the hell not extend it into a fully functional web interface for collaborative hacking?


Goals:

  • A nice, pretty, web interface for Empire
  • Multiplayer support
  • Functional parity with command line version
  • Minimal invasiveness to the existing Empire code base


High Level Architecture:

EmpirePanel is based on AdminLTE, and is decorated with AngularJS.  It is implemented entirely in HTML and JavaScript except for a minor tweak to enable the new routes in the core Empire code.  All interaction with Empire is done with JavaScript via the provided Empire API.


Walkthrough:

git clone https://github.com/pierce403/EmpirePanel.git
cd EmpirePanel/setup/
./setup_database.sh
./cert.sh
cd ..
./empire --rest --username admin --password admin

Then from a browser, visit https://127.0.0.1:1337/, and log in with the user/pass you set.

You should then be presented with a page looking like this:


Awesome, now let's start hacking.  First we need to create a listener.  I'm using 192.168.174.1 for this listener because that's the IP for my VMware host.


Great.  Now once we have the listener, we click into it, and generate a launcher:


This launcher is the command we run on our target system.  Once we run it, we see an agent pop up.



Okay, let's click into it and see what sorts of things we can do :





Things That Work:

  • Creating and destroying listeners
  • Generating launchers
  • Collecting agents
  • Running shell commands on agents
  • Running modules on agents

Things That Don't Work Yet:

  • Some agent commands, like rename, ps, etc
  • UI layout consistency
  • AngularJS syncing issues

Fixing Things That Don't Work:

Of course, everything is on github (https://github.com/pierce403/EmpirePanel) and I accept pull requests.  All of my EmpirePanel work is concentrated on two files, empire.js, and index.html in the /static/ directory.

Future Plans:

This demo is only compatible with the 1.5 version of Empire.  Hopefully the API for 2.0 will stabilize soon and I'll be able to port it over, and hopefully, one day get the code upstream.  Maybe this work will simply inspire someone who knows what they're doing to come along and do it all the right way, who knows?  I guess that's all part of the magic of Open Source.  Enjoy!