Pwnagotchi and MultiPass news…

So I spent more time than I care to admit this weekend trying to prep a Pwnagotchi platform (RPiZW and e-paper). The code (by @evilsocket) hasn’t been released yet, but I want to be ready. My mistake was only ordering one e-paper HAT from Waveshare; I should have taken failure rates into account. At this point I’m 75% certain it’s a DOA unit. I tried numerous approaches based on write-ups from randos on the internet, and I couldn’t get that damned thing to display SQUAT. If it fed me back any data, it was just “e-paper busy.” Not helpful, Waveshare. Not helpful.

On the plus side, I was watching a video comparing different levels of USB microscopes for soldering (GreatScott!), and noticed that it was sponsored by JLCPCB, and they were offering a too-good-to-be-true deal on new PCB orders. So I went ahead and placed an order for ten MultiPass boards. Some of those will be up for grabs when they arrive. WAY cheaper than expected. I remain cautiously optimistic.

BadgeBuilding

So a few of us reached a consensus: we want to work on Hackerbox #0046 “Persistence” at the upcoming September meeting. If you want in, bring one, and bring soldering gear. I have the specialty stuff, like an electric solder removal tool and a hot air tool, but bring your basics — iron, solder, etc.

I am REALLY tempted to order boards and parts to assemble the DC27 MultiPass badge, since the Gerber/Eagle files have been released along with the software. It will be a bit of a challenge for some — but we’re all about challenges, right? There are around 70 0603-sized SMD parts on it. I have more than half of them in my lab already, but the rest I’ll need to order. Bare boards run about $36 each from OSH Park, ordered in sets of three. If there’s enough interest, I’ll price out the BOM and order everything in time for the October meeting, and you can decide if it’s worth it to you to play.

In case you’re not following the Meetup Group…

It has been decided that for the foreseeable future, meetings will be held in my hackerspace basement (hackerspasement?) just a few blocks from the Gum Spring Library.

I (Bob) am looking to grow this group and its membership. I am also looking to transition it into more of a cooperative, and less of me being the main driver. I love hosting these things, and I’m more than happy to keep doing so, but I thrive on entropy. So going forward, I would love to see:

  • Someone step up to help out with comms for the group. A social media presence maintainer, so to speak.
  • Someone (or hopefully more than one) step up to offer to teach us something new. Along those lines, maybe for the September meeting we can collect a list of our collective weak points, and move forward from there in the act of bolstering them. Examples:
    • I’m reasonably strong on Linux exploits, server hardening, and network device hardening, and I’m getting there on hardware hacking.
    • I’m weak on Windows exploits, buffer/stack overflows, and reverse engineering. Anything that makes that knowledge more easily transmissible (shortcuts) is a good thing.

WOPR Jr update: It’s All Good.

The crossover cable came late today, and it confirmed my suspicions: with my laptop connected through the gigabit switch directly into the WOPR (as opposed to over the wifi), and a blank VM created with BRIDGED networking through the ethernet cable, I’m able to deploy a fully-updated Kali instance to the VM (via FOG) in just over two minutes.

Shutdown and startup scripts are reliable. A quick shutdown script, run over ssh with key authentication, tells WOPR to first SUSPEND all the VMs and then shut itself down. All the machines start up automatically when I power on the unit, and the network is usable within a minute of powering up.
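
For the curious, the shutdown script boils down to something like this (a minimal Python sketch; the real script isn’t published, so this assumes libvirt/KVM with virsh on the path and passwordless sudo for poweroff, and the filename is illustrative):

```python
#!/usr/bin/env python3
"""wopr-shutdown.py (hypothetical): suspend every running VM, then power
off the host. Assumes libvirt/KVM (virsh) and passwordless sudo for
poweroff. Kicked off remotely with a key, e.g.:
    ssh -i ~/.ssh/wopr_key wopr ./wopr-shutdown.py
"""
import subprocess

def running_vms():
    """Names of the currently running libvirt domains."""
    out = subprocess.run(["virsh", "list", "--name"],
                         capture_output=True, text=True, check=True).stdout
    return [name for name in out.splitlines() if name.strip()]

def shutdown_plan(vms):
    """Ordered command list: suspend each VM first, halt the host last."""
    cmds = [["virsh", "suspend", vm] for vm in vms]
    cmds.append(["sudo", "systemctl", "poweroff"])
    return cmds

def main():
    for cmd in shutdown_plan(running_vms()):
        subprocess.run(cmd, check=True)
```

Suspending (rather than shutting down) each guest is what keeps the turnaround quick: on the next power-up the VMs resume where they left off.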

Packing everything up for tomorrow’s meeting now…

  • WOPR Jr (check)
  • Wifi router (check)
  • Gigabit switch (check)
  • TPLink Wifi Extender (for attempting to piggyback on Library internet) (check)
  • Entertainment (check)
  • Swag (check)
  • …what am i forgetting?…

See you at the meeting. Or at the pregame. I’m going to hit the hot tub and then hit the sack. It’s been a busy weekend.

WOPR Jr, Part 4: It ties the room together.

This is a status report. It’s two days before the August meeting, and I’m working hard to get the most reward for my efforts.

I realized late Thursday night that converting all the VMs to KVM was a strategy I shouldn’t have tried to squeeze into such a tight timeline. Fortunately, the system has two 2TB M.2 SSDs, so temporarily shelving the KVM strategy didn’t disrupt my efforts at all: I just installed ESXi 6.7 onto a USB thumb drive, booted from it, and made the second SSD its storage volume. After that, a quick SCP brought all the working VMs over, and then came some enhancements:

  • I built a FOG server and captured a fresh Kali install to it. Then I updated that Kali image (over 1000 packages) and captured that too. So now, if you show up at a meeting and want to play on the CTF but don’t have a Kali VM, you can quickly get a current one (assuming you can create a VM and PXE-boot it). It’s not fast over the wifi (up to 30 minutes to deploy), but I have a crossover adapter coming tomorrow and should be able to route through a small gigabit switch so that wired imaging will be lightning fast.
  • Automated startup, and an easy shutdown script to suspend the VMs and avoid data loss.
  • Most importantly, to combat the issue from last month, the system has been tested to restore from complete power loss, and comes up in a fully usable state. So I have reduced prep time for the meeting to the following:
    • Plug in, power up the three devices (WOPR Jr, wifi router and gigabit switch)
    • Set up TP-Link Wifi Extender using the laptop, piggybacking on the library’s wifi
    • Plug the DC540 wifi router into that for egress.
    • Clamp the defcon flag for display

Interestingly, this graph shows the performance of the NUC with lots of VMs running. The higher memory line represents 17 Linux VMs and 5 Windows VMs running simultaneously; the drop in memory is when I shut down the Windows VMs.

WOPR Jr, Part 3: “If you don’t eat your meat, you can’t have any pudding!”

I know, I know. I’m excitable. I see this new, delicious toy in front of me and start daydreaming of all the magic I can do with it. And yes, it’s all magic. So right now, all day, I’m copying the .vmdk and .vmx files from the old server and getting them ready for importing.

Because Phase 1 is actually “Getting The Thing To Work Exactly As Well As The Thing It’s Replacing.” Which is a self-contained VM server with two networks connected to a wifi router.

Phase 2 is having the wifi bridged into the VM server and eliminating the wifi router. WITH (hopefully) the ability to piggyback off of locally-available Internet wifi.

Phase 3 is for the innovations, like the stuff mentioned earlier. Maybe integrating the Skull LED controls into the CTF scoreboard software itself, so that visual feedback is provided when a challenge is solved, or when ALL of the challenges are solved. Maybe adding Mac VMs.

So that’s my day. I’m at work all day getting things done while hundreds of gigs of VM flat files are being migrated to the NUC.

Oh yeah, and BLING. There will be bling.

WOPR Jr, Part 2: “Dad, you’re so extra.”

Because what good is doing something if you’re not pushing the envelope?

Controlling the NUC Hades Canyon Skull LED from Ubuntu

I initially found a set of libraries and got excited about it, but they didn’t work — then I came upon this write-up, which indicated that a rewrite was needed for the Hades Canyon — and lo, there it was. With some handy code to help understand the awkward method of setting skull and eye colors and making things blinky.
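
For reference, driving it looks something like this (a sketch only: both the proc path and the “led,brightness,behavior,color” command format here are assumptions carried over from the original Skull Canyon nuc_led module; check the Hades Canyon rewrite’s README for its exact syntax):

```python
"""Blinky-skull sketch. ASSUMPTIONS: a Hades Canyon-compatible build of the
nuc_led kernel module is loaded and exposes /proc/acpi/nuc_led with the
original one-line command format. Verify against the module's README,
since the HC rewrite may differ."""

def led_command(led, brightness, behavior, color):
    """Build a one-line nuc_led command, e.g. 'skull,80,blink_medium,red'."""
    return f"{led},{brightness},{behavior},{color}"

def set_led(command, proc_path="/proc/acpi/nuc_led"):
    """Write the command to the module's proc interface (needs root)."""
    with open(proc_path, "w") as f:
        f.write(command)

# e.g. set_led(led_command("skull", 80, "blink_medium", "red"))
```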

To come: let’s use those eyes to trigger visual feedback for events… Or maybe blink morse code clues during a challenge, see if anyone notices…

Speaking of triggering…

You know, VMs can be precarious, and the last thing you want to do is yank power on a machine that’s running a bunch of them. So maybe let’s take advantage of some of this amazing computing power and, for example, use udev to trigger a startup of all VMs (in order, of course) when a particular USB key is inserted, and a shutdown of the same VMs (and perhaps the entire machine!) when the key is pulled… complete with windows on the screen confirming what’s going on.
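
A rough sketch of how that could wire together (the rule file, key serial, and VM names are all hypothetical; the matching attributes for your particular key come from udevadm info):

```python
#!/usr/bin/env python3
"""USB-key "deadman switch" sketch. udev runs this with "add" when a
specific key is inserted and "remove" when it's pulled. Hypothetical rule:

  # /etc/udev/rules.d/99-wopr-key.rules
  ACTION=="add",    SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="WOPRKEY01", RUN+="/usr/local/bin/wopr-key.py add"
  ACTION=="remove", SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="WOPRKEY01", RUN+="/usr/local/bin/wopr-key.py remove"
"""
import subprocess
import sys

BOOT_ORDER = ["fogserver", "kali-gold", "dc540-ctf"]  # hypothetical VM names

def commands_for(action):
    """Map the udev action to the virsh commands to run, in order."""
    if action == "add":
        return [["virsh", "start", vm] for vm in BOOT_ORDER]
    if action == "remove":
        # Tear down in reverse boot order, then halt the host itself.
        cmds = [["virsh", "suspend", vm] for vm in reversed(BOOT_ORDER)]
        cmds.append(["sudo", "systemctl", "poweroff"])
        return cmds
    return []

if __name__ == "__main__":
    for cmd in commands_for(sys.argv[1] if len(sys.argv) > 1 else ""):
        subprocess.run(cmd, check=False)
```

One caveat worth noting: udev RUN handlers are expected to return quickly, so in practice the script would probably hand the slow virsh work off to a systemd unit or a detached process rather than doing it inline.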

Or maybe adapting a bluetooth camera remote to do the same thing. Anything to not have to physically log in to initiate a shutdown script, right?

WOPR Jr, Part 1: Configuring a NUC Hades Canyon as a CTF Server

If you’ve been following along, we had a working CTF server on a beefy retired desktop server. It was brought to the July meeting in hopes of its debut, but a dead CMOS battery thwarted us.

Since I never like to do anything halfway, I got to thinking about what it would take to make that system more portable. This series of posts will document the process of bringing this to fruition.

Hardware

The old system was an i7-920 (4 cores, 8 threads, 2.66GHz). It was about nine years old, hence the dead CMOS battery. It was loaded with 16GB of RAM and a 500GB HDD. Skimpy by today’s server standards, but plenty of horsepower to run a dozen vulnerable VMs and the infrastructure needed to support CTFs.

I set out to get the most possible bang for my buck, with the primary factor being portability. I looked at SFF desktops and USFF desktops, and then someone I work with suggested a NUC. I started looking at specs, and realized that the latest NUC, the Hades Canyon, could FAR exceed the capabilities of that giant desktop, while fitting into a messenger bag. I started a GoFundMe to help make the purchase, since this is for the group, not for personal use, and several members donated. Not being someone who is big on patience, I jumped the gun and made a purchase, but I’m leaving the GoFundMe up. Hopefully as members and meeting attendees see the benefit of this platform, more of them will step up and offset the costs.

What I ended up with, I bought in kit form. I bought the 100W NUC8i7HVK barebones chassis, which includes the case, the motherboard, an i7-8809G processor (4 cores, 8 threads, 3.2GHz, overclockable to 4.2GHz — 82% faster according to UserBenchmark), two ethernet ports, and lots of options for video and USB. Max RAM for these is 32GB, which I ordered as Corsair Vengeance modules — so, twice the RAM of the retired desktop. Lastly, the HC comes with two M.2 slots (three, actually, but one is taken up by the wireless card). I populated the remaining two with a pair of Intel 660P NVMe SSD units in the 2TB version. I’m not even going to estimate the improvements of NVMe SSD over platter drives.

I did a bit of research before choosing an operating system. I would have been perfectly comfortable with ESXi, like the old machine, but the HC has a really gorgeous color-controllable skull LED on the top cover that probably wouldn’t be controllable from a VM. I know that’s kind of a vain reason to choose a virtualization platform, but KVM performance these days, on the right hardware, is probably right on par with ESXi, and arguably easier to write management scripts for. Combine that with the benefit of having a front-end OS for display, monitoring, LED management, and configuration of external connections, and I think I came out on the right side.

I ended up choosing Ubuntu for the base OS, because enough adventurous nerds have done the research on getting the most out of the NUC with it that it wouldn’t be a long pull to get it rolling, and I think I made the right choice there. I followed these instructions to get Ubuntu 18.04 LTS installed onto the NUC and make use of the Vega M integrated graphics (I also had to upgrade the firmware of the NUC itself, and add nomodeset to the boot options, both during install and permanently). I read elsewhere that Ubuntu 18.10 would install with much less effort, but I really wanted an LTS release for this project.
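
Making nomodeset permanent boils down to appending it to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and regenerating the boot config. A small sketch of that edit (a hypothetical helper, not from the linked instructions):

```python
"""Patch GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub to include
nomodeset. Hypothetical helper: after writing the patched text back to
/etc/default/grub, run `sudo update-grub` to regenerate the boot config."""
import re

def add_nomodeset(grub_text):
    """Return the grub config with nomodeset appended to the default
    cmdline. Idempotent: a second pass leaves the line unchanged."""
    def patch(match):
        args = match.group(2).split()
        if "nomodeset" not in args:
            args.append("nomodeset")
        return '{}"{}"'.format(match.group(1), " ".join(args))
    return re.sub(r'(GRUB_CMDLINE_LINUX_DEFAULT=)"([^"]*)"', patch, grub_text)
```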

I purchased a tiny monitor to continue portability and allow troubleshooting on-the-fly, along with a folding keyboard.

The install went well, and the libvirt/KVM install was completely uneventful. Stay tuned for Part 2, where things start to come together, take shape, and get really exciting…

The WOPR Jr is coming…

We are replacing the clunky desktop ESXi server that failed at last month’s meeting with a Hades Canyon NUC. i7-8809G, 32GB RAM, 2TB SSD. Should literally run everything we can throw at it.

We’re up to thirteen vulnerable Linux VMs of varying sorts, most of which have been tested. If anyone has experience with CTF management software, let me know; otherwise we might have to write something. I’d like to track users and time-to-solve for each of the VMs to establish some baselines.

There are also a few Windows VMs. They aren’t specifically designed for vulnerability practice, just basic unpatched Windows machines, though I hope to have Metasploitable 3 soon.

I doubt the WOPR Jr will be in play by the August meeting, but most assuredly it will be in play in September. It takes time to build these things right, and I’m really considering building it out not just for vulnerable VMs, but as a full-on virtualization environment, with multiple virtual hosts, VSAN, and all that. Really it depends on when it arrives and how much spare time I have.

I am happy to be able to provide this benefit to our group. I am also happy to accept donations to help pay for it. If you or your employer would like to become a sponsor of the WOPR Jr. CTF platform, please talk to me. Don’t be under any illusions that Defcon groups are subsidized in any way. Until we find sponsors, this stuff is coming out of my own pocket. Here, have a GoFundMe.

https://www.gofundme.com/f/hack-my-shiz