Two Gitlab books briefly reviewed

If you’re following along like a good little do-bee, you’re already aware that I’ve been evaluating Gitlab as a functional equivalent to (much of) my Atlassian infrastructure, due to unforeseen events I will no longer vent about.

This required me to actually LEARN Gitlab in the process.

In my usual fashion, the very first thing I did to start learning it was to install it. In my infrastructure. No planning, no strategy, just follow the install doc and get it up and running so that I can start playing with it.

That alone was so easy that I got cocky. Again, with no planning and just the barest hint of strategy, I integrated it with my FreeIPA ecosystem. No problem.

Then, following the simplest of breadcrumbs, I was able to migrate both my existing Bitbucket infrastructure AND my existing Jira dataset. Some of those subsets of data referenced the same internal projects, so it was fun and informative to sort through that.

So here I am with 92 projects, many with open and closed issues, some with git repositories. Seems good. I’ve already started working through issues and generating new ones.

But now here I am with a mostly unfamiliar new interface. I’ve been around, I’ve used many interfaces and I’m reasonably competent with git, but I have yet to figure out what else Gitlab can do for me to improve my life.

So I picked up The Gitlab Cookbook and Gitlab Repository Management to see if they would expand my knowledge.

They did, to an extent. But neither of them was perfectly suited to my needs. This is my gripe with most of the computer books out there. The widest audience for a book is going to be people who are new to the product, the technology or the paradigm. There are very few books out there that are capable of taking you into the stratosphere — the deep tracks of a product, where hearts and minds are conquered, lives are changed forever, destinies altered…

So yeah. These books covered installation, user management, creating projects and issues, etc. I was able to skim through most of that. The CI/CD sections will probably prove useful at some point, but that’s not exactly where I’m going right now. I guess what I want is all the cool little timesavers that improve our lives and the quality of the data retained and created by these products. Neither of these books really got into that.

As an example, I wonder why neither of these books chose to explore “Quick actions.” This is the kind of deep knowledge I need. When I can open an issue, and type “/spend 1h” in the description box to document the fact that I spent an hour on something, that means a lot to me. When I can type “/shrug” to append ¯\_(ツ)_/¯ to a comment, these are the important things I need to know.
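To make that concrete: quick actions are just slash commands typed on their own lines in an issue or merge request description or comment. A handful of the documented ones (all real, though availability varies a bit by Gitlab version and tier):

```
/assign @me              Assign the issue to yourself
/label ~bug              Apply the ~bug label
/due in 2 days           Set a due date
/estimate 4h             Set a time estimate
/spend 1h                Log an hour of time spent
/shrug Could be worse.   Appends ¯\_(ツ)_/¯ to your comment
```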

So now I know. I don’t need a Gitlab book. I need a Gitlab Quick Actions Cheat Sheet.

And so do you.

And here it is. https://docs.gitlab.com/ee/user/project/quick_actions.html. You’re welcome.

The rest of Gitlab is mostly pretty intuitive, or else completely dependent on your knowledge and understanding of git itself.

HELP! I’m surrounding myself with plastic!

At a certain point, 3D printing shapes just for fun and novelty takes a back seat to utility.

I’ve been battling the accumulation of AA and AAA batteries for the various remotes, conference badges and other random gadgets in my life. For a while I kept them in an old empty checkbook box, then in a plastic box that once housed resistors, but finally I decided to do something useful and solve the problem.

I printed this lovely thing I found on Thingiverse. It’s much more stable than cardboard, and holds more than the plastic parts bin. No more random batteries rolling off the desk and onto the hardwood floor. Now I feel like I should print another one, in another color maybe — one for fresh alkaline batteries, and a separate dual container for to-be-charged rechargeables.

Total cost of printing, I’m going to guess around $2 worth of PLA plus electricity.

I Refuse to Admit Failure… YET.

I finally picked up one of those 8x8x8 LED cube matrix kits. I’m a sucker for blinkyshit, all the DC540 regulars know that. I’m doing the rare thing here in documenting before the resolution of all of the issues, just because the processes deserve documentation, I think.

I am by no means a hardware expert. I stand on the shoulders of the entire internet when it comes to mucking about with programming microcontrollers. I’ve gotten better, but it’s still not innate to me the way other aspects of technology are. There are just too many microcontrollers, and too many ways of poking at them. I2C, SPI, JTAG, sometimes it seems almost overwhelming.

But here we are, with this STC12C5A60S2 microcontroller, already installed on the PCB. I went through all the steps over the weekend of soldering all 512 LEDs and the other chips and small parts. I don’t know about you, but when I get close to the end of a project like this, the anticipation starts to really kick in. If I’m not careful, it’s easy to get sloppy and make a stupid mistake. But I didn’t, this time. I did find myself short on LEDs. The kit came with extras of most of the small parts, but inexplicably, only the exact number of LEDs, and two of them were DOA. So I had to order replacements from another supplier, and I didn’t think to order long-leg LEDs for the replacements, so I really had to work a bit to fit them in.

So here we are, it’s all assembled, looks great from a distance, but up close you can see my sloppy skills. This is how the Captcha protections should work, they should evaluate us on our assembly skills. Clearly I am not a robot.

From the instructions I found, the STC12 is supposed to be pre-programmed, and I should just be able to apply power and see the animations. No such luck. It illuminates a block of LEDs, but no animation. To be thorough, I double-checked all the chip orientations, and double-checked all LED paths by using my bench power supply and applying 3V to each power vertical and grounding each ground horizontal to confirm that every LED is “addressable.” I suspect from Internet research that they lapsed and sent me an un-programmed STC12, because it’s documented that this happens. Not a problem, I’m up for the challenge, I’ll figure this out.

Let’s see. It wants a UART USB TTL serial device. Four-pin header. VCC, GND, P30 (RX) and P31 (TX). Well, I don’t have the Adafruit programmer they recommend, but I do have an FTDI FT232R. Let’s give that a shot… Nope, it doesn’t seem to recognize the power cycle, it stays on “Waiting for MCU…” even though I cycled power. NOTE: during this process, the device is powered, 5V, by the USB programmer. Interestingly, and the Internet backs me up on this, the power light remains dimly lit even with the power button off. Several sources report that parasitic power leaking from the TX line can interfere with the power cycle reset process, preventing this from working. It’s possible this is only an issue on these FTDI programmers, and maybe the problem will go away when I use the recommended Adafruit programmer, which arrives today.
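For reference, the flashing attempt itself is just stcgal pointed at the serial device. The device path and firmware filename here are placeholders for mine, not gospel:

```shell
# stcgal autodetects most STC chips; -P stc12 forces the STC12 protocol.
# /dev/tty.usbserial-XXXX and cube.hex are placeholders for your own
# serial device and firmware image.
stcgal -P stc12 -p /dev/tty.usbserial-XXXX cube.hex
# stcgal then prints "Waiting for MCU, please cycle power" and you
# toggle the board's power switch to trigger the bootloader handshake.
```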

But I’m impatient, I WANT IT NOW! So I started scouring the lab to see if I have any other options available to me. Hmm, I have a Bus Pirate, the Swiss army knife of microcontroller programmers. I spent about an hour last night learning it and futzing with it. The Bus Pirate is interesting but cumbersome. You plug it in, then you serial directly to it (I use screen on the MacBook) and configure it for the purpose intended using a menu system. Then I exit screen and do what I would normally do with a dedicated programmer.
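For the curious, that dance looks roughly like this. The device path is what macOS gives me, and the menu numbers and prompts vary between Bus Pirate firmware versions, so treat this as a sketch, not a transcript of my exact session:

```
$ screen /dev/tty.usbmodem-BusPirate 115200
HiZ> m            <- open the mode menu
HiZ> 3            <- pick UART (the number varies by firmware)
  ...answer the baud/databits/parity prompts (e.g. 115200, 8, N, 1)...
UART> W           <- enable the Bus Pirate's on-board power supplies
UART> (1)         <- start the transparent UART bridge macro
  (now kill screen with ctrl-a k and run the programmer tool
   against the same /dev/tty device)
```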

The Bus Pirate doesn’t seem to handle the power situation correctly either, but in a different way. It doesn’t seem to know how to power cycle correctly in UART mode. Even if I set power on before running the stcgal command, it shuts power off when I initiate the sequence and never turns it back on again. What if I disconnect power and ground from the programmer to the board and use the cube’s external power supply? I’ll try that after this post, but I don’t have a lot of hope. I tried this tactic with the FTDI and didn’t see any difference. I wonder if part of the process is the programmer detecting voltage via the same pins it provides voltage on. UPDATE: Tried that on the Bus Pirate, no luck. Also tried another suggestion, putting a 10K resistor inline with TX to keep that parasitic power at bay. No luck. Hopefully the Adafruit programmer will work.

Another option is that I have one of those ZIF-socket chip programmers. That’ll be a last resort. I prefer not to pull chips off the board, even though they’re socketed, because of the potential for excessive bending and possible breakage of the pins.

Oh well, one way or another I’ll update this already-too-long shitpost later today. I’ve got at least two paths left to explore today.

Is Gitlab a viable Atlassian alternative? Spoiler: maybe?

Maybe you’re one of those stubborn people like me who insists on self-hosting everything. Maybe it’s a requirement due to sensitivity of data, or maybe it’s just pride. In any case, that’s what I was doing. I was proud of my Atlassian setup. I happily paid my $10 each for 10-user licenses of various Atlassian products. Jira, Confluence, Bitbucket.

Everything was fine, and everyone was living happily ever after.

UNTIL.

And this is where I sacrifice my personality for professionalism. In my humble opinion, Atlassian made a huge error in judgement. They decided to end support for their “Server” line of products in favor of “Cloud” and “Data Center.” No more $10 10-user licenses for self-hosted apps. 10-user licenses are FREE now — in the cloud. You want to host it yourself? Fuck you, go get the Data Center version. How much is it? Well, if you have to ask…

And yes, I was holding back. I’m a little bitter.

So here I am, exploring ways I can take my business elsewhere. I’m a simple man with simple needs. I don’t need all the workflow bells and whistles that Jira offers. Hell, we don’t even use most of that at my job. At the core, I need projects and issues. Gitlab has that. And of course Gitlab can do everything that Bitbucket does. What’s left? Hmm, Confluence. Well, I’ll explore that part later. I do know that there’s a “Markdown Exporter” plugin for Confluence that will export “markdown” documents in a way that can be imported into Gitlab, Github and other apps. I just don’t know what the paradigm equivalent is for it just yet.

So let’s start with eradicating Bitbucket.

OK, I built a VM. CentOS 8. Gitlab’s installation instructions are crystal clear. A few prerequisites, an update, and a repo install, then a package installer. Nice, that’s how I like it. OK, they include a LetsEncrypt cert deployment by default. We’ll have to get rid of that, I have my own CA internally, and I issue certs from that. Done, not so hard. Next, SSO. I have FreeIPA in my infrastructure and had integrated the Atlassian products with that. Can I do that with Gitlab? Shit yeah. Easy as chocolate pie. A little bit of finagling with the .rb file and I’m in.
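For anyone attempting the same, the finagling lives in /etc/gitlab/gitlab.rb, followed by a gitlab-ctl reconfigure. A sketch of the LDAP stanza — the hostname, bind DN and base here are invented placeholders, not my actual realm:

```ruby
# /etc/gitlab/gitlab.rb -- LDAP auth against FreeIPA (sketch; hostnames,
# bind DN and base are placeholders for your own realm)
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = {
  'main' => {
    'label' => 'FreeIPA',
    'host' => 'ipa.example.lan',
    'port' => 636,
    'encryption' => 'simple_tls',
    'uid' => 'uid',
    'bind_dn' => 'uid=gitlab-bind,cn=users,cn=accounts,dc=example,dc=lan',
    'password' => 'changeme',
    'base' => 'cn=users,cn=accounts,dc=example,dc=lan'
  }
}
```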

So now on to Bitbucket. Well, they just went and built in the integration/import functionality, just like that. I can give it my Bitbucket login and password and import ALL of my Bitbucket projects in one session. Lovely. I’m in tears over here. Literally ten minutes after getting Gitlab up and running in my environment, I’ve got all my git repos imported.

How about Jira? Well, it used to be a pain in the ass; when I first looked into it, it sounded intimidating. “Well, you’ll need to do REST API queries to both services to translate everything blah blah blah”. Nope. Not anymore. The latest Gitlab has an importer built in. It’s a little weird and roundabout, but it farging works. Go to, or create, a project. Go to the Issues page within that project. Click the “Import from Jira” button. Here’s where it gets weird. You have to re-enter the Jira integration details for each project before you can import that project’s issues. It would be nice if you could enter it once, map the Jira projects to existing projects, choose to ignore or create the rest, and kick it all off in one shot. But no problem. It brings them in, correctly lists some of them as closed, etc. It’s just going to take some time, thought and planning.

Confluence integration is going to require its own post, because getting all the confluence data over, including attached files, is going to be important to me. I use it as a home for a whole lot of documentation that I refer to frequently, and I can’t afford to lose it. So stay tuned for more on that.

I’d love to hear what other people are doing. I can’t be the only one dealing with the loss of the nearly-free Server products.

Adding a Wyze Cam-Pan to Octopi

I’ve been using Octopi with my 3D printers for almost as long as I’ve been printing. The whole concept of printing from SD cards just seems alien to me, when Octopi/Octoprint jumps through all the hoops for you. I mean ALL the hoops. Upload your gcode to a web interface, set the print, watch the print, manage temperatures… there is even a spaghetti detection plugin!

One of the biggest benefits is camera integration. Why? To monitor progress, to create stunning time-lapses. The technology has advanced so much that the Octolapse plug-in can detect when the Z-layer changes, move the extruder to an out-of-the-way corner, and take a snapshot of the current state of your print, then continue printing as if nothing happened. This results in beautiful yet creepy time-lapses where the object simply appears to grow out of thin air.

The typical thing to do is integrate directly with the Pi camera. It is perfectly utilitarian, and does the job. I haven’t been super happy with the Pi camera, however. Until recently, I had cobbled together systems to hold a camera in place using helping hands, or whatever other makeshift device I had on hand. Then I 3D-printed a mount to do the same thing. But the quality’s just not there. The resolution is inferior, it doesn’t handle low-light well, and it can’t pan, tilt or zoom, so you’re stuck with manual adjustments.

Then I saw that someone had created a frame mount for a Wyze Cam-Pan for the Ender 5. The Cam-Pan can be found for under $30, and has full HD and PTZ capabilities. Also records and speaks audio, not that I’d need that here. So I printed one, and ordered one, before researching how to integrate it.

Well, by default, the Cam-Pan wants to work like a Ring camera and send its output to the cloud. SaaS is king, apparently. But wait. Wyze offers RTSP firmware for it. That makes it simple, right? Well, not so fast. It makes a decent stream, but it doesn’t seem that Wyze’s RTSP firmware offers the still-image function which is required by the Octolapse plug-in.

Another option Wyze offers is USB Webcam firmware. But that requires a clunky additional wired connection, a USB-A to USB-A cable, from the Pi to the Wyze camera. HATE IT.

Started talking with Kevin about reverse-engineering the Wyze firmware to see if there was hidden functionality, but then I remembered that everything has already been done. So I googled Wyze camera reverse-engineering. First I found some very confusing custom firmware made for the Wyze V1 and V2 cameras. This was getting closer, but I’m specifically looking for the Cam-Pan version. I started reading the “issues” section of the GitHub repo for that release, which hadn’t been updated in three years. People there were wondering why it exists at all, since at its core it’s a no-further-benefit fork of the Xiaomi Dafang firmware for Wyze, which is better documented, more thorough, specifically known to support the Cam-Pan, and was updated just four months ago.

Fast forward a couple hours, and I did it:

  • I flashed the custom bootloader. It’s smart: if it detects the SD card with the custom software on it, it runs that. Otherwise it runs the built-in Wyze version. BRILLIANT.
  • I created the custom SD card, editing wpa_supplicant.conf to connect to my wifi.
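The wpa_supplicant.conf edit is the standard one; SSID and passphrase here are obviously placeholders:

```
ctrl_interface=/var/run/wpa_supplicant
network={
    ssid="MyLabWifi"
    psk="correct horse battery staple"
}
```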

I booted it up, and was dismayed. It clearly “works,” in that ssh, web and rtsp ports are open by default, but this was clearly written before the great clampdown on TLS. It serves a self-signed cert chained to an untrusted root, with a common name that will never match the hostname. I spent at least an hour going down that rabbit hole and trying to bypass it, but here’s the thing. Not only does it have to work in my browser, but it also has to work from the Octoprint installation on the Pi. Since I now have SSH access to the Wyze camera via its new firmware, I logged in to see if it would be easier to just “replace the certs.”

Sure enough, it all hinges on just a cacert and a lighttpd cert. I figured I had little to lose at this point, so I generated a new cert for it, signed by my infrastructure (what, doesn’t everyone run FreeIPA in their basement in 2021?), and dropped these new and authenticated certs into place. I power-cycled the camera. IT CAME BACK UP. And now, at least it ALLOWS me to bypass SSL/TLS errors (apparently my FreeIPA server isn’t too smart about daylight savings time, so I still have twenty minutes before that cert is valid). (This was incorrect, as I discovered today. My infrastructure was using an ntp server that had been removed from my network, and has been WRONG for some time now. I fixed that today!)
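For anyone who wants to pull the same stunt, the whole thing boils down to standard openssl motions. Here’s a minimal sketch using a throwaway self-signed CA as a stand-in for my FreeIPA CA; all filenames and the CN are placeholders, and I’m not promising the Dafang firmware uses these exact paths:

```shell
# 1. Make a "CA" (in real life this is my FreeIPA CA, not a throwaway).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=Example Lab CA"
# 2. Generate a key and CSR for the camera (CN is a placeholder).
openssl req -newkey rsa:2048 -nodes \
  -keyout wyzecam.key -out wyzecam.csr -subj "/CN=wyzecam.example.lan"
# 3. Sign the camera cert with the CA.
openssl x509 -req -in wyzecam.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out wyzecam.crt
# lighttpd wants the key and cert concatenated into one PEM file.
cat wyzecam.key wyzecam.crt > lighttpd.pem
```

From there it’s just dropping the new PEM and the CA cert where the old ones lived and power-cycling the camera.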

The point of the whole TLS exercise is so that I can add the camera to Octolapse, which actually does a verification check to ensure it can read both the stream and the “capture current pic” snapshot link. I suppose I also could have figured out how to turn off https entirely on lighttpd, but I was already nearly done when I thought of that option. Also, it’s not a full linux deployment on that firmware. It’s a busybox/mips stripped down linux with all the configs living on the SD card.

I ended up disabling SSL support anyway, because why not?
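On a stock lighttpd, disabling TLS is just a matter of the ssl.engine line in the config; I can’t swear the Dafang firmware lays its config out exactly this way, so consider this a sketch:

```
# lighttpd.conf sketch: turn off TLS on the HTTPS socket
# (the Dafang firmware keeps its configs on the SD card)
$SERVER["socket"] == ":443" {
    ssl.engine = "disable"
}
```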

The next problem — it looks like Octoprint/Octolapse, between the two of them, need the feed in lots of different ways. The main “control” screen in Octoprint needs it in http/mjpeg — no other feed type will work here and show the current moving image. There are -three- settings within Octolapse — base address, stream and snapshot. And I’m pretty sure the stream option here would take an rtsp stream. The snapshot, however, is ghoulish, in that it needs an actual snapshot function. Why not just build in a function that takes a current snapshot from the stream? Oh well, I don’t know the capabilities that well; there must have been a good reason for that choice.

Meanwhile, the camera won’t do http mjpeg with any of the firmware I’ve tested so far. The workaround for this seems to be back to the beginning thoughts — use ffmpeg and ffserver on the Pi itself to suck in the rtsp stream and serve it on a different port locally as mjpeg. Again, it seems like an awful lot of load to do something so seemingly simple. But packetwise, they’re staying local, it would be sucking that feed into Octopi anyway. BTW, ffserver requires a specific older version of ffmpeg and is a 2-4 hour compile on the Pi.
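The restream plumbing, conceptually, is just two processes: ffserver listening on a local port, and ffmpeg pumping the camera’s RTSP feed into it. The URLs, port and stream names below are placeholders (and your ffserver.conf is its own adventure — it needs a Feed block and an mpjpeg-format Stream block):

```shell
# Sketch: rtsp in, mjpeg out, all on the Pi itself.
# ffserver.conf is assumed to define <Feed feed1.ffm> and a
# <Stream cam.mjpg> with Format mpjpeg on port 8090.
ffserver -f /etc/ffserver.conf &
ffmpeg -i rtsp://wyzecam.example.lan:8554/unicast \
       -an -f ffm http://localhost:8090/feed1.ffm
# Octoprint's control page would then point at
# http://octopi.local:8090/cam.mjpg
```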

After managing to get it “working” with the mjpg custom firmware, I wasn’t happy with the result. The network overhead of the Pi pulling the video stream from the camera and then re-streaming it internally was too much, and the image ended up being glitchy and problematic. I also think running ffmpeg and ffserver was adding a notable load to the Pi.

So I broke down and bought the USB-A to USB-A cable and flashed the Wyze camera with the Wyze USB-Cam version of the software. Then I ran raspi-config to turn off Pi Camera support. After setting Octoprint back to the Webcam profile and rebooting it, it just picked it up naturally. I did install the uvc software so that I could tweak the settings, but I’m not 100% sure that was necessary for the camera to work. In any case, I have great video quality and can now tweak some of the exposure and white balance settings on the camera for more dramatic timelapses.
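The tweaking itself is the usual v4l2-ctl routine. Control names and ranges vary per camera, and I haven’t confirmed exactly which ones the Wyze exposes in USB mode, so list first, then set:

```shell
# List what the camera actually exposes before touching anything.
v4l2-ctl --device /dev/video0 --list-ctrls
# Typical UVC controls (names/values are examples, not confirmed
# for the Wyze; check the list output above first):
v4l2-ctl --device /dev/video0 --set-ctrl white_balance_temperature_auto=0
v4l2-ctl --device /dev/video0 --set-ctrl white_balance_temperature=4600
```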

I guess that’s a wrap. Check out the DC540 youtube for future timelapse videos.

3D-printed lithophane “art”

When I first saw examples of 3D-printed lithophanes, I thought it was a great development of the technology for uses that may not have been originally intended. Lithophanes have a long history, predating all of our high-tech methods.

https://itslitho.com/itslitho-blog/one-of-the-most-unusual-artworks-from-the-early-19th-century-the-history-of-lithophane/

But some of the articles I read at first said that doing them with PLA via FDM was impractical, and you’d really need SLA for the increased accuracy and complexity available. But then I recently started finding articles that showed examples with quality white PLA, so I tried it. Sure enough, it’s fantastic.

And it’s ridiculously easy to get quality results. Just upload your image to https://3dp.rocks/lithophane/ and it will make an STL for you. Here’s the tutorial I used:

https://sovol3d.com/blogs/news/tutorial-how-to-print-lithophane-on-your-3d-printer

Lithophane as printed, still on print bed.
Same lithophane held up to a light source.

Latest utility print: Phone mount for Jeep JKU

I’ve tried vent mounts, seat bolt shaft mounts, and others. Everything sucked. But some kind soul designed a phone mount specifically for the area of the dash above the radio. It clips into the dashboard tray and the gap just above the radio, without disrupting buttons if you aim it just right. And it has a cutout for a wireless charger; it seems to be designed specifically for the Vinsic extra-slim wireless charger. Got one on the way to try out.

One comment says that a hot day caused enough meltage for his to fall and warp. He was going to try it in another material (PETG) to see if that helped. I haven’t printed in anything besides PLA yet, but it’s about time for my experimentation to move to the next level, so I’ll keep an eye on that.

It printed in three parts — the main front panel and the two side mounts. Fitting the mounts to the front piece was a VERY tight fit, I actually had to shave the mounts a bit with a razor, and still had to tap them in. Nice solid piece overall, and fits my Pixel 3 snugly.

Source: https://www.thingiverse.com/thing:2769593

Cornstarch solves more household problems

Ever since adding the two LACK shelves to my music lab, using the computer in there has been annoying. Actually, it was annoying before. The table is too low to use it while standing, so I either had to bend a little bit or sit. And since adding the monitor, sitting is a non-starter, because I’m staring up into the abyss.

I tried placing the keyboard up on the shelf, but the shelf is fairly narrow, and the monitor base got in the way. So off to Thingiverse I go, yet again, in search of Apple keyboard risers. I found this one, which looked promising. Initially I was looking for something I could adapt to have a slight overhang at the front end, so that it would be anchored to the shelf. But this one, bless the designer’s heart, has three insets for 5mm rubber beading. Two on the bottom, to keep it from sliding on a surface, and one on the top, to help keep the keyboard itself stable, although the channel for the rear undercarriage should keep it well in place.

Now the keyboard is right in front of the monitor, perfect for the occasional standing Google search or setting it up to monitor Octoprint. Rubber beading is on the way, but it’s reasonably stable without it!

Source: Thingiverse: https://www.thingiverse.com/thing:2811956

Filling my life with specialty cornstarch…

Yesterday I decided I was tired of using random nearby objects as a riser to elevate my airpod case to where it can actually reach the wireless charger.

I started looking on Thingiverse for random airpod cases to see if one could be adapted for this purpose. Lo and behold, I found one. This one was offered as a Tesla-branded holder, but I chose the unbranded alternative that was also offered.

It sits nicely in the cradle, and elevates the case to just the right level to receive a charge.

Did you know that PLA is a thermoplastic derived from cornstarch? This was a recent revelation for me. It’s even compostable!

I find myself paying more attention to household annoyances and organizational challenges that can be solved with 3D-printed objects. I often look at them with the intent to design something myself, but every time so far, I have found something on Thingiverse that I can use without designing from scratch. I am grateful to Thingiverse for providing this platform, and to all of the makers out there who put their designs out there for us to use and remix.

“Making” useful things

I printed a bunch of these today. Dual purpose, really. The intended purpose is to solve the “tangled filament” issue, where loose filament on the spool backs up when not under tension, and crosses under another row, and when being fed, it sometimes catches and stops feeding. If you don’t catch it, it will fuck up a print for sure.

Dragon clips

They’re called dragon clips, and they clip to the side of the spool, and have a smaller clip into which the filament clips snugly, keeping it taut when not feeding the printer.

Dragon clip in use

But when I printed this, I had an additional use in mind. My UV LED strips around my music lab tend to fall away from the wall mirrors they’re attached to as the adhesive backing fails over time. The strain of cables pulling on them tends to amplify the problem.

I was really hoping these clips would fit behind the mirror and hold the LED strips in place, and it looks like they’re going to work fine for that purpose.