#homelab

99 posts · 72 participants · 15 posts today

Just finished testing that Wi-Fi is working in the house after I moved the coaxial Internet connection to the basement. Now for the real fun part: running Ethernet so that the TVs and Xbox are on a wired connection and my office is connected to my server rack. I'm not sure if I'll have an easier time going through this drop ceiling or from the attic and down the walls for my connections. #Homelab

Continued thread

I'm also just generally wondering what it is with Raspberry Pis and active cooling over the last two generations. For both the 4 and the 5, the Raspberry Pi Foundation was saying "definitely do active cooling", and I think for both, the official case had an active cooler.

But I've been running both without active cooling without any issue at all. Sure, it takes a nice chunk of metal to achieve this, but it's not like it's a grotesque amount. The cooler doesn't even change the footprint.

After confirming that the NVMe performs exceptionally well, next I had to have a look at cooling. I started out a bit worried, because I was seeing about 50 C at complete idle.

I've now run "stress -c 4 -t 600" and confirmed that all four cores stayed at their max 2.4 GHz the entire time. At the end of the run, the temp was at 78 C, which I find very acceptable. If any of these machines actually has to run at all-core full tilt for 10 minutes, something else is wrong anyway.
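
As a rough sketch of how to reproduce that check on Raspberry Pi OS (the 600-second run comes from the post above; the watch interval is arbitrary):

# Load all four cores for 10 minutes.
stress -c 4 -t 600

# In a second terminal: watch temperature and the ARM core clock
# (vcgencmd ships with Raspberry Pi OS).
watch -n 5 'vcgencmd measure_temp; vcgencmd measure_clock arm'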

Second attempt at playing with #kubernetes: six #k3s nodes running as VMs on my #proxmox, 3 masters with #kubevip and 3 workers with #longhorn (Kubernetes cluster storage).

So far, so good, but I haven't deployed any services yet. Proxmox snapshots make it easy to revert after mistakes.

On my to-do list: backups on my Synology, Nginx,…

My plan is to test the migration of my services to k3s in this virtual environment. After that, I will gradually move to dedicated nodes.
#homelab
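
For context, bootstrapping a three-master k3s cluster with embedded etcd and a kube-vip virtual IP roughly follows this pattern (the VIP 192.168.1.100 and the token are placeholders, not values from the post; kube-vip itself is deployed separately as a manifest on the first master):

# First master: initialise embedded etcd and put the VIP into the API server cert.
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server \
    --cluster-init --tls-san 192.168.1.100

# Masters 2 and 3: join the existing cluster through the VIP.
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - server \
    --server https://192.168.1.100:6443

# Workers: join as agents.
curl -sfL https://get.k3s.io | K3S_TOKEN=<shared-secret> sh -s - agent \
    --server https://192.168.1.100:6443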

Helm charts are great for managing applications but a massive pain to prep for initial deployment. Am I doing this wrong?

With a new chart, I spend time reading through the chart values and reading over the templates to see whether certain features are supported by the chart or not. Reading Go templates isn't exactly how I enjoy spending my time.

When you create a chart, the default template supports a lot of standard features (overriding labels, setting TLS details for the ingress, securityContext, etc.), which is fantastic... When I find a ready-made chart on GitHub, though, people have often renamed some of these or stripped them out. I'm using NFS storage, which means I think about process UID/GID a lot and typically create PVCs manually to be mounted in specific places, which sometimes works great and sometimes not so much.

I feel like I missed the boat with k8s-at-home, which had an incredible collection of ready-made charts; it's a shame it's gone dark since :sadness:
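
As a rough illustration of that prep work, a first pass over an unfamiliar chart might look like this (repo and chart names are placeholders):

# Pull the default values for review.
helm repo add example https://example.github.io/charts
helm show values example/some-app > values.yaml

# Render the templates locally to see exactly what would be applied,
# without touching the cluster.
helm template my-release example/some-app -f values.yaml | less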

Huh, so currently I have DHCP updating DNS records for each lease it hands out, but I've been reading up on SLAAC and stateless DHCPv6.

So, a question for the IPv6-inclined: is there a common way to update DNS based on the IPv6 addresses in such a setup, or would I need to use stateful DHCPv6?
#Homelab #Networking #IPv6
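
One possible approach, if dnsmasq happens to handle both DHCP and DNS, is its ra-names mode, which tries to derive the SLAAC address of dual-stack clients from their DHCPv4 lease and MAC and adds it to DNS alongside the existing A record. A minimal sketch, assuming eth0 is the LAN interface:

# /etc/dnsmasq.conf - announce the on-link prefix, answer stateless
# DHCPv6 information requests, and register DNS names for SLAAC
# addresses derived from the DHCPv4 leases.
dhcp-range=::,constructor:eth0,ra-stateless,ra-names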

Okay, the first measurements are in. And, ehm, if this holds up, I'm pretty sure etcd will find all the IO it needs.

The headline figures, comparing an NVMe attached to a Pi 5 with (officially unsupported) PCIe gen 3 enabled vs a Pi 4 with a SATA SSD attached via USB. FIO executed with four jobs, randrw with 4k blocks, direct IO using libaio, and an iodepth of 32, on a 20MB file:

IOPS Pi 5: 151k
IOPS Pi 4: 10k

Speed Pi 5: 618 MB/s
Speed Pi 4: 41 MB/s
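
The fio invocation behind those numbers presumably looked something like this (job name and file path are placeholders; the other flags mirror the parameters listed above):

fio --name=randrw-test --filename=/mnt/nvme/fio-test.bin --size=20M \
    --rw=randrw --bs=4k --ioengine=libaio --iodepth=32 --direct=1 \
    --numjobs=4 --group_reporting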

This is a typical FreshPorts daily database backup being rsync'd from AWS to the #homelab :

It just so happens that the output of pg_dump for the #PostgreSQL database more-or-less keeps the data in the same order each time. So the actual daily transfer SEEMS to be only the new data.

On disk, I rely upon #ZFS compression to do the disk savings for me. That compression also speeds up disk throughput - the CPU can decompress faster than the disk can provide the data.

In today's case, the amount copied down seems to be 360MB.

This speed-up by rsync, which recognizes what has and has not been transferred, is also why I do not dump in compressed mode.

dumping freshports.org
receiving incremental file list
postgresql/
postgresql/freshports.org.dump
3,621,784,708 100% 77.73MB/s 0:00:44 (xfr#1, to-chk=1/4)
postgresql/globals.sql
3,963 100% 58.64kB/s 0:00:00 (xfr#2, to-chk=0/4)

Number of files: 4 (reg: 2, dir: 2)
Number of created files: 0
Number of deleted files: 0
Number of regular files transferred: 2
Total file size: 3,621,788,671 bytes
Total transferred file size: 3,621,788,671 bytes
Literal data: 355,867,046 bytes
Matched data: 3,265,921,625 bytes
File list size: 142
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 481,559
Total bytes received: 356,171,297

sent 481,559 bytes received 356,171,297 bytes 6,925,298.17 bytes/sec
total size is 3,621,788,671 speedup is 10.15
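
A rough sketch of that dump-then-sync pattern (hosts, paths, and database name are placeholders, not the actual FreshPorts layout): the dump is written without compression so that successive dumps stay block-similar and rsync's delta algorithm can skip the matched data, while ZFS handles compression on disk.

# On the AWS side: custom-format dump with compression disabled.
pg_dump -Fc --compress=0 -f /backups/postgresql/freshports.org.dump freshports

# On the homelab side: pull only the blocks that changed since yesterday.
rsync -av --stats aws-host:/backups/postgresql/ /tank/backups/postgresql/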

So I'm a bit of a drive hoarder and thus don't throw away hard drives that still work fine, even when they're replaced with larger/newer drives. As such, I have quite a lot of storage in the form of old-but-working-for-now drives. I'm kinda tempted to set them up in a separate storage array, run them in like, RAID1c3 or something, and just treat them as disposable "I don't care if this dies" storage.

It's kind of like dreadnoughts in Warhammer. "Even in death, I serve"
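
If that experiment happens, a scratch btrfs pool with the raid1c3 profile (three copies of every block spread across the drives) could be created along these lines (device names and mount point are placeholders):

# Three-copy data and metadata across a pile of old drives.
mkfs.btrfs -L scratch -d raid1c3 -m raid1c3 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mount /dev/sdb /mnt/scratch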

Success: I got frigate (home security camera NVR software) running in kubernetes, with GPU acceleration and a Coral TPU passed through to the pod. One less thing to manually manage ❤️

(thanks Intel for the intel-device-plugins stuff, it all JustWorked(tm))

The little display I bought for my Pi work already paid off. I had forgotten to put my SSH key on my new USB stick I use for the initial boot, and now I could just attach a keyboard and the screen and quickly fetch the key.

First Pi 5 built successfully. If anything brings me to switching to Tiny/Mini/Micro machines for my Homelab, it's going to be the fiddliness of the Pi hardware. That PCIe ribbon cable was no fun at all to get into both slots. Not a job for a guy with my rather lackluster motor skills.

I also found that my worries about screw lengths were warranted. The screws are long enough for this setup, but are not long enough to also accommodate the base plate of my 19" Pi rack mount.