My love-hate relationship with 3D printers

Ever since the first time I read about 3D printers, I knew I had to have one. Something about creating things out of filament, and imagining and designing those things, has always appealed to me.

If you’ve read anything here, you know that my first was the Anet A8. Someone (probably in an Anet forum) said “Don’t buy an Anet if you want a 3D printer. Buy an Anet if you want to learn how to build a 3D printer.” Boy was that true.

Now I have the Creality Ender 5. The price came within a comfortable reach, and the reviews have been stellar, both public and via word-of-mouth. It’s been rocking pretty steady since I got it, around the beginning of this year.

But the more you drill down into a 3D printer, the more you see how things can be improved. I soon realized that the questionable first-layer problems I have occasionally might not be a problem with my practices after all… It might just be about barely-perceptible warps in the bed, which are common. So I thought, what the hell, the BL-Touch bed leveling sensor is cheap, I’ll just get one, install it, and all my troubles will be gone.

Except that 3D printing, like most amazing things in technology, is based on rickety scaffolding and band-aids, and just about everything is more complicated than it looks. Here’s what I encountered, in semi-sensible order.

  • Adding BL-Touch support requires making tweaks to the Marlin build.
  • Making tweaks to the Marlin build requires re-flashing the firmware.
  • Since the Ender 5 (Creality V1.1.4 motherboard) does not include a bootloader, I had to flash a bootloader prior to flashing firmware. The bootloader allows for flashing firmware via USB.
  • Flashing the bootloader requires using a USB ISP programmer or an Arduino Uno flashed with ArduinoISP to connect to the ICSP programming headers on the motherboard.
  • I used an Arduino Uno. I lost significant time before learning that a 10uF capacitor needs to be connected across the Arduino’s RESET and GND pins so that (as I understand it) the reset signal isn’t interpreted by the inline Arduino but passed forward to the target device. But yay, once that was done, I was able to reliably upload the bootloader. More importantly, I could then take my laptop out of the equation: upload the firmware to OctoPrint and use the Firmware Updater plugin to apply it. Much more streamlined, because there’s no constant disconnecting and reconnecting of cables.
  • Meticulously following a guide for my specific printer, I was surprised when the orientation changed. Home (0,0) used to be in the back left on this printer with the stock Marlin 1.1.6 firmware. Imagine my surprise when my first test print came out as if it were oriented in the diagonally opposite corner. This has cascading effects, from my calculation of the nozzle-to-probe offset, to the move-out-of-the-way-to-pose-for-a-photo behavior for Octolapse, to the location of my Wyze Cam Pan mount.
  • The whole concept of direction and home on a 3D printer, once it becomes disrupted, is extraordinarily confusing, and if you get it wrong, your shit will try to slide out of range and make a bunch of noise, and cause unnecessary wear on your parts.
  • The good news is, the probe “works” — as in it deploys and retracts, and senses surfaces. I still have more work to do with directions, inversions, and home locations before it correctly knows what’s going on, though.
  • Oh, and I also had to learn how to navigate VSCode/PlatformIO, because there are “issues” compiling this firmware via the Arduino IDE, which I had always used in the past.
  • I also had to disable certain less-than-necessary components of Marlin to build a firmware image that would fit in my ancient-ass 8-bit board. This probably means it’s time to replace the board with the fancy new 32-bit board, but as long as I can get this one to print, I think I can wait on that one.
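For the record, once the Uno is running ArduinoISP (with the 10uF capacitor in place) and wired to the ICSP header, the burn itself is one avrdude command. This is a sketch, not my exact invocation: the port and bootloader hex filename are placeholders, and the part name assumes the ATmega1284P these Creality V1.1.x boards use:

```shell
# Burn a bootloader through an Arduino Uno running ArduinoISP.
# ArduinoISP speaks the stk500v1 protocol at 19200 baud.
# /dev/ttyUSB0 and the hex filename are placeholders for your setup.
avrdude -c stk500v1 -p m1284p -P /dev/ttyUSB0 -b 19200 \
        -U flash:w:optiboot_atmega1284p.hex:i
```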

All in all, I think I’ve dumped about six hours so far into this “improved experience” modification.

Improvements so far:

  • Incomplete BL-Touch support
  • Disrupted orientation that still isn’t fixed
  • Marlin firmware went from 1.1.6 to 2.0.8.1. That’s got to be good, right?

Fun with the Pico and C

When I finally got my hands on some Pico microcontrollers, I was excited to see what they could do. But I was used to the Arduino infrastructure and wanted to explore the Pico on its own terms.

Obviously MicroPython is the easy choice. Once the Pico is flashed with MicroPython firmware, you can write your code in Thonny and save it straight to the Pico from within Thonny. Easy peasy.

Coding in C requires a bit more effort, especially at the start. You need to get the compiler toolchain installed, which varies by OS, and since I develop on an M1 (Apple Silicon) Mac, it’s even more obscure for me. Fortunately, it’s all right there in the docs. Specifically, the “Getting Started” document, chapter 9, page 37. An architecture flag allowed the brew command to work, and I was off to the races.
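For anyone following along, here’s roughly what that workflow boils down to once the toolchain is in place. The paths and the blink target are from the standard pico-examples repo; adjust to taste:

```shell
# Typical pico-sdk C build: CMake out-of-tree build, then copy the
# resulting .uf2 to the Pico while it's in BOOTSEL mode.
export PICO_SDK_PATH=~/pico/pico-sdk      # wherever you cloned the SDK
cd ~/pico/pico-examples
mkdir -p build && cd build
cmake ..
make -j4 blink
# hold BOOTSEL while plugging the Pico in, then:
cp blink/blink.uf2 /Volumes/RPI-RP2/      # macOS mount point
```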

Pin stuff is easy. I know I can do that. I wanted to skip to some wacky complicated stuff, so I started with the Waveshare demo for their Pico-LCD-1.14 display hat with buttons. I decided I wanted to modify that demo to display my own image instead of the stock image. After some trial and error, I was sort of able to do so.

The image is stored as a C hex array. Getting that file exactly right was time-consuming. I didn’t see an image converter in their examples arsenal, so I found this one online. By tweaking the settings a bit I was able to get an image to display… sort of. The color mappings are off somehow. Maybe 65K colors isn’t that many after all. I’ll have to mess with it using a more legible photo.
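“65K colors” is 16-bit RGB565 (5 bits red, 6 green, 5 blue = 65,536 colors), and in my experience the usual way colors go wrong with these C-array images is channel order or byte order, not the color depth. A quick sanity check of the packing math (plain shell arithmetic, nothing Waveshare-specific):

```shell
# Pack one RGB888 pixel into RGB565 (16 bits -> 65,536 colors).
r=200; g=100; b=50
v=$(( ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3) ))
printf 'packed:  0x%04X\n' "$v"
# If the display's colors look scrambled, the classic fix is
# swapping the two bytes of each pixel:
s=$(( ((v & 0xFF) << 8) | (v >> 8) ))
printf 'swapped: 0x%04X\n' "$s"
```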

Original photo for reference:

The display is 240×135 and 65K colors. I’ll update this post when I figure out more. This is just to make it a little bit easier for those who want to get into C programming on the Pico but don’t know where to start.

Also, these kids will show you the complete build environment and compile process if you live in Ubuntu-land. Amazing.

Prepping for 2021’s first in-person meetup

Next week’s meetup is slated to be DC540’s first in-person meetup this year. We decided to schedule the in-person meetup in the backyard. Since we made that decision, the CDC advised that vaccinated people should be able to gather indoors maskless. But we’re fine with outdoors. Especially since, when we scheduled it, it was looking like it might rain Monday evening, but since then the probability has decreased steadily. Looking very promising now.

We’ll have enough Raspberry Pi Pico microcontrollers to go around. I’m trying to pre-solder a bunch of headers so you don’t have to waste time soldering and can get started right away on deploying programs. I’ll also have some breadboards and LEDs to play with if you’re so inclined. If you want to be prepared to play with that, install Thonny on your laptop and consider bringing a standard MicroUSB cable. MicroPython is WAY easier to get started with on the Pico than C. Plug the Pico in with Thonny running, Thonny will prompt you to flash MicroPython firmware onto it, and then you can save your Python programs straight to the USB-connected Pico and run them. Easy as Pi.

More fun with Raspberry Pi Pico

Our next meeting will again focus on the Raspberry Pi Pico. We are looking at, weather permitting, an outdoor in-person meeting next Monday. Stay tuned in the Discord to see if it’s happening.

So, our regulars will know that one of our founding members, in a moment of extremely questionable judgment, purchased an entire REEL of Raspberry Pi Pico microcontrollers. If you’re out of the loop on this device, it’s closer to an Arduino than the previous iterations of the Raspberry Pi. While the Raspberry Pi 2, 3, 4 and Zero are all tiny computers onto which you install an operating system, the Raspberry Pi Pico is a microcontroller, onto which you flash firmware and code.

The Pico, as a microcontroller, has a lot of things going for it. It’s very small, it’s lightweight, and the castellated edges provide a lot of mounting flexibility: you can either mount it through-hole or surface-mount it on pads!

The easiest way to use it is with MicroPython. Flash the Pico with MicroPython firmware (hold the boot button while plugging it in and it mounts on your computer as a storage device, ready to accept the firmware file), and you get an environment that lets you very easily drop new MicroPython programs onto the Pico — programs that can easily interact with LEDs, sensors, servos, etc… basically anything that a microcontroller can do by sending and receiving data on its I/O pins, this little baby can do. See the recent post on the vintage powered breadboard for an example of the Pico in action.

This also includes using tiny displays. For example, the Waveshare 1.14″ LCD for Pico — this is available in a “hat” format, meaning it sits right on top of the Pico once the headers are installed in the correct direction. At that point you can easily build a 3D-printed housing for it or include it in a larger project’s design. It conveniently includes four buttons to drive any menus you come up with or provide some other sort of input.

Then there’s the GPIO expander, also by Waveshare. It’s a single board, on which you mount the Pico, and it splits all of your GPIO pins into left and right versions. So if you have two different devices you want to connect (and there are no conflicting pins), it’s pretty easy to do that.

If you were around a few weeks ago, our man Kevin provided a really cool demo of reverse engineering using the Pico.

If the weather holds out, our next meeting may be in person, outdoors, and we’ll have some cool Pico stuff to demo, build, and play with.

LED 8x8x8 Matrix: I’m Very Confused

If you followed my previous post, you’ll know that I’ve been battling with one of those $20 LED 8x8x8 matrix kits recently. It was very frustrating: there’s little support out there and lots of confusing information, and when you’re in that state you start to doubt yourself and assume you must have done something wrong. But I tested everything I could possibly test, and I was convinced I had done everything right.

At that point, all signs pointed to a chip that needed programming, and I tried with numerous sets of instructions and at least four separate programming adapters. Nothing was working, and the display stayed in its broken state.

I came up with two final plans. First, I would buy a new STC12C5A60S2 IC and see if that one would program correctly using the various adapters I have. Failing that, I would fork over the $20 for an entire new kit, build the base, and test it extensively before attaching the LED faces to it.

The replacement chip was $5 shipped, so I pulled the trigger and waited. The chip arrived last night, but I wasn’t motivated last night, so I hit it first thing this morning.

Keep in mind, this is a “new” chip, allegedly.

I pried out the old chip and carefully inserted the new one into the socket. I powered the unit up, thinking I’d spin around in my chair to access the programming interface (I’m using a Pi 4 with the USB programming interface because Linux supports the drivers natively). Imagine my surprise when the animations started scrolling on the LEDs.

What the everloving fuck? I bought what I thought was a brand new chip, assuming I would have to program it. Only to install it and discover it already has the animation firmware I need for my project?

Best as I can guess, this project is by far the most popular reason for people buying these chips, and either it’s a return that someone programmed, or they preprogram all of them as a burn-in test. I might actually have to follow up with the eBay seller, because this has broken my brain.

Anyhow, I’m glad it’s finally working.

Analog Archaeology: Elite 3 Circuit Design Test System

A few weeks ago, I came across this vintage powered protoboard system on FB Marketplace. It seemed reasonably priced for its functionality, and the owner stated that it was working. So I snagged it. I admit I was tired of working with individual breadboard strips and cheap Chinese power supplies, and wanted something larger and more stable. The fact that it provides 12V and 5V plus a number of lamps, switches, and buttons made it much more attractive.

I started looking online for a manual. It’s not a complicated unit, but I wanted to know what use case and workflow the manufacturer intended. So far I have been unable to find much beyond a brochure on archive.org and a YouTube video from “IMSAI Guy”, who picked one up for free at a junk drop-off location in Santa Clara. IMSAI Guy’s video was extremely helpful, as it gave me some clues about usage and expectations. Much of the board is unlabeled, but fairly intuitive. The +12V and -12V terminals on the lower right, and two bare GND terminals near the bottom of the unit, are the only terminals that are labeled.

In the video, IMSAI Guy shows two pairs of red terminals near the top, one on the left side and one on the right, and it sounded like he was saying they are 5V terminals tied together. Here’s where mine starts to differ. Rather than two pairs of red, I have two red/black pairs. I powered it up and grabbed the multimeter. Measuring from red to ground gives the expected 5V. Black to ground gives zero. Is this another ground? Hmmm, not quite. Red to black gives 4.7V. At this point I’m a bit confused. It’s definitely the same model as in the video, but it’s different.

So I went through and tested the other features. All twelve lamps are functional and pre-grounded with a 4.7K resistor inline; all you need to do is tie them to anything above 1.5V and they light up. The ten switches are nicely configured: they provide patch points for both normally-open and normally-closed. Same with the four momentary button switches.

There are edge PCB connectors on each side which I can’t imagine using at this point in time, and a pair of BNC jacks on the left side. Those could be interesting.

So I built a couple of Raspberry Pi Pico example circuits and powered them up, just for the photo op, and to put the thing through its paces.

But I was still perplexed about the black terminals. Why does mine have different terminals than IMSAI Guy’s? I made a mental note to open it up later and figure this out.

Then last night I looked at it from a different angle and noticed something new: there’s a DIN-5 connector on the left face of the unit. What possible use would this thing have for a DIN-5 connector? I opened up IMSAI Guy’s video again and watched as he spun the unit. Nope, I don’t think his has this. Now I HAVE to open it up.

WOW. So mine is a modified unit. Getting the docs probably wouldn’t help at this point, unless this is a later model and that’s how the manufacturer released it. (shrug)

I found another expired auction listing for one of these, and it did NOT have the DIN-5. So either it’s not stock or it’s a later model.

Opening up the unit, I find another board that’s not in the other two examples I’ve found. Over to the right is a more modern power supply than the brick transformer it came with: an E59712 board. And now the light bulb in my head goes on. Since this board seems to have adjustable voltage at R21, maybe it’s feeding the black terminals somehow, providing an alternative to just +/-12V or +5V. Further research required.

Add to that the fact that this subassembly is tied to the main power supply, the DIN-5 connector AND the surface board, and I’m starting to get a picture of things. I’m going to have to test more, but I feel like either this is a supplemental board designed to give more flexibility to the unit, or the main power supply died and this is a more modern replacement that was shoved in. I really am curious about the DIN-5 use case, though.

Mine had a SUNY asset tag on it, in case you’re curious. Anyhow, more digging later.

Update: Here’s a PDF of the original brochure. Not as good as a manual, but useful just the same.

TNM_Elite_1_2_3_dynamic_breadboarding_systems_-_E_20170907_0185

Bashbunny — still fun in 2021? (part 1)

I decided to dust off my Hak5 field kit and refamiliarize myself with all the tools. I have the Bashbunny, the LAN Turtle, the Rubber Ducky, and a bunch of utility adapters. I also have a WiFi Cactus in there, but I’m pretty sure I picked that up separately.

I started with the Bashbunny, since it’s so versatile. I won’t address advanced topics like locked PCs in this post; this is very basic Bashbunny talk. So the scope here is “some dumbass left me unmonitored access to a PC.” Either unattended, or “here, you drive while I go get a drink.” Yeah, don’t do that with someone who might have these tools and tendencies.

So the first thing I noticed was that it was out of date. Fortunately, Hak5 has very usable instructions and tools for making it current.

So I went through all that process, bringing my payloads and firmware up to current levels. It was a fun exercise.

The first script I ran was recon/MacProfiler. I set the Bashbunny to Arm, copied the payload.txt into switch1/, ejected it, switched the Bashbunny to position 1, and reinserted it.

The first run left the Bashbunny mounted. The second time I ran it, it successfully ejected itself, which is important if you’re trying to be a bit stealthy. At some point I’ll investigate that further.

It worked well. It gathered a list of all of the /Applications on my MacBook Air, a list of all users, and all the networking information I might need. Oh, and a list of things that start up automatically. All of this is tremendously useful for recon, so you can craft an attack for the next time you have access to the same PC.

Next, I tried macinfograbber. Similar concept, but it’s specifically crafted to grab a copy of any spreadsheets (xls/xlsx) in the user’s Documents directory. By extension, of course, this could mean whatever type of files you’re specifically aiming for.

(arm) (eject) (switch) (reinsert)

OK, this did some stuff, then ended with a red LED indicator on the Bashbunny, which translates to “no files found” according to the script. Kind of surprising. Do I really have no xls/xlsx files in my Documents directory? Let’s see… Hmmm, yep, I do. Why did it fail? At first I thought maybe it was spaces in the filename and a poorly-written script, but I renamed the file to a single word, tried again, and it continued to fail.

So I dug deeper. Here’s the command that macinfograbber uses to grab those files:

cp ~/Documents/{*.xlsx,*.xls,*.pdf}  /Volumes/BashBunny/loot/MacLoot/xlsx/

And here’s the problem. I’m assuming these scripts were written back in 2017 when the Bashbunny was fresh. In 2019, Apple switched the default shell on Macs from bash to zsh. And zsh, unlike bash, aborts this entire command if any one of those globs matches nothing. So that line needs to be rewritten, or just broken out into individual commands.
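Assuming the command ends up running under the target’s default shell (zsh on recent macOS), one shell-proof rewrite hands the matching to find instead of the shell. Temp directories stand in for ~/Documents and the BashBunny loot path here so the sketch is runnable anywhere:

```shell
# find sidesteps shell globbing entirely, so it behaves the same
# under bash and zsh and doesn't care if a pattern matches nothing.
# SRC/DST are stand-ins for ~/Documents and
# /Volumes/BashBunny/loot/MacLoot/xlsx/ from the payload.
SRC=$(mktemp -d); DST=$(mktemp -d)
touch "$SRC/budget.xlsx" "$SRC/report.pdf"    # note: no .xls files at all
find "$SRC" -maxdepth 1 \( -name '*.xlsx' -o -name '*.xls' -o -name '*.pdf' \) \
     -exec cp {} "$DST/" \;
ls "$DST"
```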

More on the Bashbunny later. I plan to dig deep through the whole payload library for a 2021 refresh, because it’s still useful. Although you might want to remember to take your USB-C adapter with you for modern MacBooks. 🙂

CPanel’s “Plus Addressing” feature is specifically weird and problematic.

I got off on a tangent recently, wanting Gitlab’s “Service Desk” functionality to work. This feature allows remote users to open issues via a crafted “Plus addressing” email address, i.e. gitlabaddress+gitlab-project-identifier@yourdomain.com. I did everything I was told to, and was struggling with why it wasn’t working. It just wasn’t detecting new emails at all.

So I logged into webmail for the domain on which I had set up the email account; it’s a cPanel site hosted by one of the big commodity shared hosting providers. Sure enough, nothing is in the inbox. Hmm. Maybe plus addressing isn’t as ubiquitous as I thought. I mean, it’s been a thing with Gmail for a while now, but maybe…

Nope, research showed that cPanel has indeed adopted it.

But wait. Ooooooh, cPanel, you think you’re crafty, don’t you? Rather than just allowing the email in and relying on the user to filter it, cPanel immediately routes the email to a folder named for whatever you throw in after the plus sign. If the folder doesn’t exist, it creates one. That’s sure convenient! Except for one thing — the user has no way of knowing that folder exists, at least via webmail, because the user is not “subscribed” to new folders by default. The only way I was able to find the emails was to go into “Manage Folders”, where they show up in the folder list with “subscribed” unchecked. So I subscribed, then viewed the emails, then dragged them into the inbox, where they were promptly picked up by Gitlab.

A unique problem that may require a unique solution… I’ll have to think on this one a bit. Ideally, I would want these emails to enter the inbox normally. I know cPanel thinks it’s doing the right thing for users who want plus addressing as a spam-filtering mechanism, but by behaving differently from the other vendors that support the plus-sign extension, they have created a dilemma for vendors who choose to build features on top of it.

I also tested from my Thunderbird client. Sending to a nonexistent folder hides the message in a newly-created, unsubscribed folder, with no hint to the user that it exists. Sending to an existing folder adds a new unread message to that folder.

So you can put things into a recipient’s mailbox without anyone knowing they are there…
You can take up SPACE in a recipient’s mailbox, causing a denial of service, without them knowing why unless they go out of their way to look for unsubscribed folders. What if I sent 10,000 emails to that newly-created unsubscribed folder? Or, even more annoying, to 10,000 randomly-named folders?

Fascinating.

Fixing a broken Defcoin mining pool; a saga

Follow along in my journey of fixing a broken NOMP/MPOS Defcoin mining pool. It wasn’t a public pool; it was my own personal solo mining pool. The idea was that it would eventually become public, but you know how it is, sometimes it takes time to get around to doing things. Doing something for myself is easy; doing it for the public requires much more careful thought and planning.

Careful thought and planning that I wasn’t executing last year, sometime between February and April, when I haphazardly ran an apt upgrade on the Ubuntu 18.04 VM that was running my pool. I didn’t think anything of it. It was a busy time. I was in Vegas for a while in February, then I came home and went to BSides Nova, then the world shut down and mining Defcoin was just not on my mind.

I noticed that my wallet wasn’t getting fatter, so I logged in to take a look and realized the disk was 100% full. The shares table was 3GB. It wasn’t important at the time, so I abandoned it in place.

Cut to this week, when bashNinja and others are talking about doing some work on Defcoin. Pools are popping up, people are getting excited again, there’s talk of forks, and I’m right there paying attention, because sure, I want my pool up and running again. But man, I’m not looking forward to figuring out this software made of black magic and rickety scaffolding and held together with government cheese. I barely got it running the first time, I clearly didn’t understand it.

So, reluctantly, I started digging. First, my Defcoin core wallet was not talking to any peers. It only had one peer address, and it couldn’t connect. Well, it has been a year. I asked about that on bashNinja’s Discord and got a quick and easy response: I was pointed to a post in /r/defcoin containing a list of peers that can be manually added via the defcoin-qt debug console. Once I did that, it started to talk to peers again, and began to wriggle its way towards 2021 on its own time.

Second, the shares table. Nothing can work with that table in that state, everything’s just running too damn slow. 17 million rows. So…

Let’s clear that table. At this point I have no idea whether that will prevent the rest of the system from working. [Keep in mind that I never gained a full understanding of how the system is strung together; I just got it working and let it go. So at this point, I’m reverse engineering something I slapped together myself.] But in case I need it, I’ll back it up first:

create table shares_manual_backup like shares;
insert into shares_manual_backup select * from shares;

Then, once I confirmed everything copied, delete every row from shares. This allowed me to navigate again, and allowed the WebGUI to respond. I needed that; there’s valuable troubleshooting info hiding in there.

Browsing around the GUI, I see that all the cron jobs have been disabled. It took me a while to remember where to find and fix that. (I don’t know why an interface wasn’t created for it.) How it works, I learned, is that if one of the cron jobs or its subtasks fails, it updates or adds a row in the monitoring table to mark the job disabled. Disabled jobs no longer run from cron, forcing the administrator to address the underlying issue before it gets worse.

I tried enabling them and running them; they just revert back to disabled. So I dug around to find where MPOS logs the results of those cron jobs, and found them: /home/(username)/mpos/logs/(jobname)/log_(date)etc. I found very strange results in those log files. Problems with scripts that I hadn’t changed. Curiouser and curiouser.

Again this took a while, but eventually I happened upon a clue: a script failing because a command had been deprecated in PHP 8. So now it’s starting to dawn on me that my apt upgrade might have caused all this. Also, it’s having trouble finding memcached, which I know is installed. I don’t quite understand, until…

OK, I’ll add a phpinfo file to the public-facing web area of MPOS and go to it. Sure enough, no memcached. But wait. This says we’re running PHP 7.3. How can this be? Back to the command line. php -v shows PHP 8.0. What is this trickery??? OK. Since the problem is clearly on the command line (the cron jobs are what’s failing), let’s try backing that version down to PHP 7.3. This can be done with update-alternatives.
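For reference, the pinning looks like this on Debian/Ubuntu; the /usr/bin/php7.3 path assumes the php7.3 package is installed alongside 8.0:

```shell
# Point the php CLI symlink at 7.3 while the web server keeps
# whatever per-site PHP it was already using.
sudo update-alternatives --set php /usr/bin/php7.3
php -v    # should now report PHP 7.3.x
```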

That worked. Now we’re getting different errors.

2021-04-14 18:53:49 - ERROR --> Failed to update share ID in database for block 1273177: SQL Query failed: 2006
2021-04-14 18:53:49 - ERROR --> Failed to update worker ID in database for block 1273177: SQL Query failed: 2006
2021-04-14 18:53:49 - ERROR --> Failed to update share count in database for block 1273177: SQL Query failed: 2006
2021-04-14 18:53:49 - CRIT --> E0005: Unable to fetch blocks upstream share, aborted:Unable to find valid upstream share for block: 1273178
2021-04-14 18:53:49 - INFO --> |    23103 |    1273178 |           24.75 |            |                           | []              |                 |          any_share |
2021-04-14 18:53:49 - ERROR --> Failed to update share ID in database for block 1273178: SQL Query failed: 2006
2021-04-14 18:53:49 - ERROR --> Failed to update worker ID in database for block 1273178: SQL Query failed: 2006
2021-04-14 18:53:49 - ERROR --> Failed to update share count in database for block 1273178: SQL Query failed: 2006

OK, this makes sense. Of course it can’t associate share IDs with blocks; I’ve wiped out the shares table! So let’s look closer at the shares table, because I’m really hesitant to dump 17 million records back in. Looking closer at the data, the way it associates a share with a block is the “solution” field in the shares table, which maps to the “blockhash” field in the blocks table. A couple of quick count queries reveal that of the 17 million relocated records, fewer than 7,000 contain a populated solution field. Those are the shares that resulted in a blockhash. So, on a hunch, I select just those rows back into shares and run the findblocks command again. Lo and behold, it’s not failing. It’s taking its time, though: about two seconds for every three records. So this “fix,” assuming it works, will take a while.
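For the curious, the restore step is one query. Here sqlite3 stands in for the pool’s MySQL, and the two-column schema is a stripped-down stand-in for the real shares table, but the WHERE clause is the whole trick:

```shell
# Pull only the solved shares (populated solution field) back from
# the backup table. sqlite3 is a stand-in for MySQL; same SQL idea.
db=$(mktemp).db
sqlite3 "$db" <<'SQL'
CREATE TABLE shares_manual_backup (id INTEGER, solution TEXT);
INSERT INTO shares_manual_backup VALUES
  (1, ''), (2, 'fakehash-aaa'), (3, ''), (4, 'fakehash-bbb');
CREATE TABLE shares AS
  SELECT * FROM shares_manual_backup
  WHERE solution IS NOT NULL AND solution != '';
SQL
sqlite3 "$db" 'SELECT COUNT(*) FROM shares;'
```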

I let it run for a while, and then I tentatively give the pps-payout script a poke, since that’s another one that was failing instantly because it wasn’t finding any shares that matched its criteria. Sure enough, it’s able to chew on the data that findblocks is now fixing. Good.

So the way the scripts are re-enabled is: you fix the underlying problem, then run the script with the -f argument. If it succeeds, it re-enables its cron job. It’s important to check that, because any one problem can cascade into further problems that eventually kill the system.

I probably won’t know until midnight tonight whether I’m finished with my NOMP/MPOS deep dive, but I will sleep well knowing that I’ve taken it far from the broken state it was in, and I’ve learned a lot along the way. Oh, and I documented everything I found in my personal Gitlab issues and Wiki for the project, so even if I unlearn it, it’ll be less painful next time.

Illuminated Latching Switches on a budget

When I first saw this DIY Raspberry Pi Cyberdeck, I knew I wanted to build it. I love the aesthetic, and I already have most of the parts. Element14 was kind enough to present most of what I need via direct links.

And then I saw the price. Those beautiful rectangular switches to the right of the screen? Illuminated latching switches, $20 each! I just can’t stomach blowing $100 on switches for a DIY case.

So I started shopping. And damned if I didn’t find them ALL quite pricey. I must have shopped for over a week in my spare time, and really couldn’t catch a break.

Until I found these:

Amazon. $10.89 for five, around half the cost of ONE everywhere else. Ordered March 25, shipped from China, arrived today. Not bad for China.

Wish me luck.