Backup Network Version Number I Forget

I’ve been writing a lot about my tech setup lately because I’ve done quite a bit of work on it. I’ve been meaning to share my current private cloud backup setup for a while now.

The backbone of my private cloud network is still Resilio Sync. While I rely on it a bit less these days, it remains a core part of my strategy.

Right now, I’m using Resilio to replicate a full set of data from my Synology DiskStation to a Raspberry Pi 4. I also replicate a subset of this data—everything except the media center—to an SSD on my laptop. Soon, I plan to set up another Pi 4 as a backup for the same subset of data I have on my laptop.

At this point, I no longer keep any replica data at friends’ houses. I probably should, but when my last setup failed, my friend had to bring the device back to me when he visited from the States, and ultimately it wasn’t worth buying new gear just to ship it back to him. Instead, I signed up for Amazon S3 Glacier Deep Archive. It’s a cheap, long-term storage tier where data carries a six-month minimum storage commitment, so it isn’t meant to be modified or deleted casually. My Synology DiskStation has a built-in client that made it easy to set up a backup of my personal data to Glacier. I still need to test a restore, but for now I see Glacier as my remote storage solution. At about $1 per terabyte per month, nothing else comes close on price. Setting up another Pi with a friend would cost around $150–$200, which makes Glacier far more cost-effective over a three-year period.
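The back-of-the-envelope math, assuming roughly 2 TB backed up (a round guess, not my measured dataset size):

```bash
# Rough three-year cost comparison, assuming ~2 TB in Deep Archive:
echo "$(( 2 * 1 * 36 )) USD over 3 years"   # 2 TB x $1/TB-month x 36 months = $72
# versus $150-200 up front for another remote Pi, before power and upkeep
```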

Because I’m still a bit unsure about restoring from Glacier, I’ve also started using Proton Drive for critical data, including my entire family photo and video library. Once I’ve uploaded the photos, that dataset stays pretty static, so Proton Drive makes sense. With our 3TB plan, I can gradually copy large, mostly unchanging files that I want securely backed up. Since there’s no automated way to sync this, it’s not my primary backup, but it adds another layer of protection.

Recently, with T in high school (or middle school if we were in the States), she’s been using the computer more often. It made sense to subscribe to the family plan of Office 365, which gives each of us 1TB of storage on OneDrive. I’m experimenting with Cryptomator encryption to securely store a subset of our backups on OneDrive. I still need to fully implement this, but it’s something I plan to sort out soon.

In addition to these replica copies, I take monthly rsync snapshots to a separate directory on my DiskStation. I have two scripts—one for odd months and one for even months—so I always have two recent copies. I also keep an annual copy of everything. It’s a bit less automated, but it works.
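A minimal sketch of one of those scripts; the paths and share names are placeholders rather than my actual layout:

```bash
#!/bin/bash
# Odd-month snapshot; a twin script writes to snapshot-even, so there are
# always two recent point-in-time copies alongside the live replica.
SRC="/volume1/data/"
DEST="/volume1/backups/snapshot-odd/"
rsync -a --delete "$SRC" "$DEST"
```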

I’m also considering setting up another Pi as a remote Resilio node. Another option is to get a storage VPS again. The previous deal I had expired, so I canceled it last year. That’s partly why I’ve been relying less on remote Resilio replicas. When I got rid of my last remote Pi, I switched to a VPS running Resilio. Now, I’m debating whether it’s worth setting up another VPS instead of piecing together backups the way I have been. At around $80 per year for 2TB, it’s an option I’m keeping open.

Overall, the system works. When I had a catastrophic failure on my DiskStation before upgrading to my current one, I was able to verify that all my data was backed up somewhere. In the end, I didn’t need to restore because I managed to salvage the array on the DiskStation, but it was a valuable exercise to go through.

UPDATE: I wrote this before Christmas. Since then I have built a new Pi with a 2TB SSD that I need to deploy somewhere other than our house as a backup. I have also found a new, cheap(ish) VPS storage provider: I now have a 2TB VPS in Germany that I am replicating my main Resilio shares to. I have stopped using Glacier since I haven’t been able to properly test it. It is still by far the cheapest backup option out there, but without being able to verify that I could easily do a full recovery, I was a bit concerned. The new VPS is a few pounds more per month, but not outrageously expensive.

Building My Own VPN

I started writing the background for this blog entry by looking at my own archive, and realised I had stopped using remote access software sometime in 2016. I think I got spooked by the changes LogMeIn made to their free plan, or maybe it was that they got bought by someone; I forget. As an alternative, I started with remote SSH to manage my growing network of Raspberry Pis. As my setup evolved, I eventually upgraded to OpenVPN for my home network. That way, when I was out with my iPad or laptop, I could connect to my home network and manage my media center.

When WireGuard came along, I switched to that because it was so easy to set up. I’ve been using it ever since for those rare occasions when I need remote access to my house.

Recently, I started experimenting with Tailscale, which is a mesh VPN built on top of WireGuard. The concept sounded great, and their free plan supports up to 100 devices across three users, which is more than enough for me. I set up Tailscale on my workstation and most of my Raspberry Pis. Now, instead of using WireGuard to connect to my home network when I want to access the media center, I just log my iPad onto the Tailscale mesh network, giving me seamless access to all my services. To make things easier, I use CNAME records with one of my domain names, so I don’t have to remember the cryptic Tailscale-provided domain names. It’s all been working smoothly.
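Getting a device onto the tailnet is only a couple of commands. A sketch of what that looks like on a Pi, with placeholder names standing in for my real domains:

```bash
# Join a Pi to the tailnet, using Tailscale's standard install script.
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
# Then a friendlier name via my own DNS provider, e.g.:
#   media.example.com  CNAME  media-pi.tailnet-1234.ts.net
```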

With M and the girls away this week, I’ve had time to play around with Tailscale’s exit nodes. This feature allows me to route all my internet traffic through any Tailscale client I set up as an exit node. I found this intriguing because it lets me browse the internet as if I were at home, even when I’m out. I also experimented with setting up an exit node on my VPS in Texas, so I could route my traffic through there.

I recently noticed Tailscale offers Mullvad VPN exit nodes as an add-on. Mullvad is a solid VPN provider; if I didn’t already have Proton for other services, I’d probably use them. This add-on is essentially a full Mullvad VPN plan for five devices, allowing me to configure Mullvad exit nodes. I’ve been testing it over the past few days, both at home and on the go with my phone and iPad. Like any VPN, there’s a bit of overhead in terms of latency and bandwidth, but I’ve been using the London exit node and haven’t noticed any performance issues.

What’s great about this setup versus a traditional VPN is that I don’t have to toggle anything off to access my home network—my connections just work. This setup is letting me keep a VPN on all the time when I’m out, which I prefer. The Mullvad add-on costs an extra $5 per month on top of the Proton services I already use, but it’s been worth it so far. With a single click, I can switch the exit node to any other Mullvad location or one of my own, like my home network or VPS.
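The switch is scriptable too. A hedged sketch of the CLI equivalent (the Mullvad node name below is made up; `tailscale exit-node list` shows the real ones):

```bash
tailscale exit-node list                       # my own nodes plus Mullvad locations
sudo tailscale set --exit-node=gb-lon-wg-001.mullvad.ts.net   # hypothetical London node
sudo tailscale set --exit-node=                # clear it, back to direct routing
```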

I’m actually so happy with this setup that I’m considering configuring the girls’ iPads to have always-on VPN through Tailscale.

Since I had some extra free time this week, I bought an additional Raspberry Pi 4 specifically to serve as the VPN exit node for the house. I’d been experimenting with an existing Pi 4 as the exit node while it was handling other tasks, but I ran into some routing issues and didn’t want to troubleshoot on a device already in use. So I spent about £50 on a new Pi and case. I do have a couple of Pi 3s lying around, but I didn’t want to use them because of their 100 Mbps network limitation. A Pi 5 seemed like overkill for this purpose, though I did pick one up for another project (which I might write about later).
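Per Tailscale’s documentation, turning a Linux box into an exit node is just IP forwarding plus one flag, roughly:

```bash
# Enable forwarding so the Pi can route traffic for other tailnet devices.
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Advertise the node, then approve it in the Tailscale admin console.
sudo tailscale up --advertise-exit-node
```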

So far, I’m very pleased with my new mesh VPN setup!

Let The Waiting For The Raspberry Pi 5 Begin…

So of course I pre-ordered the Raspberry Pi 5 (8GB) after I saw it announced. Sadly, I missed the announcement by a day or two, and by the time I pre-ordered, the guidance was that I’d have to wait until sometime in early 2024 to receive it.

I also saw that even after the launch, if you subscribed to either of two Pi magazines you could get yours straight away. Yes, I almost subscribed, but I am proud of myself for not doing it and waiting patiently. Or am I?

The Story of My Upgraded, Partially Pi-Powered Backup Network

I have written a few posts on using Resilio Sync to replicate my personal data as a backup network. Currently I have several nodes running at home on various devices, and one remote node. It is on a VPS that I may write about in more detail separately. I had another remote Pi at a friend’s house for years, but with the VPS being so cheap and easier to manage remotely, I gave up on the extra node with my friend.

Instead, I have two Pis running Resilio at home, in addition to an ODROID HC2 and instances on my laptop and NAS. Not every device has all the data: some of the shares are so big I had to shard them out, and only the NAS holds everything. However, all of my data is replicated at least twice in the house, and everything but my videos is replicated to the VPS.

I also started using Amazon Glacier Deep Archive (approximately $1 per TB per month) to back up some shares. Deep Archive is so cheap that my intention is to add bigger shares to the backup; I just have not gotten around to it yet.

The Raspberry Pis photographed are the Pi 2s (white cases) and Pi 3s (grey cases). The current generation of Pis I am running are two Pi 4s, one with 4 GB of RAM and the other with 8 GB. I have a third Pi 4 with 4 GB of RAM that I am playing around with alternative configurations on. I still have the second- and third-generation Pis, and I use the third-generation ones periodically. Most recently one was a dedicated Pi-hole, though I recently stopped using it.

Full disclosure: pictured are the older Pi 2s and Pi 3s.

The State of My Private Cloud in 2019

I have been maintaining my private cloud network, powered by Resilio Sync, for a few years now. I have talked about it before; see this search for all those posts. When I built the original version of my private cloud, the intention was for it to provide a 3-2-1 backup solution for my stuff. The effort involved in maintaining the system turned out to be more than I would like, but overall, even with more work than I expected, it has largely been a success for me.

When I built the network, my intention was to use Raspberry Pis as my remote nodes. As my use of the system evolved, that stopped being a viable solution. One of my first Raspberry Pi remote nodes had to be replaced because the drive I deployed just wasn’t big enough; that wasn’t a Pi-specific issue. The next thing that happened was that I ran into significant challenges with the amount of memory available on a Pi 2. Resilio would crash the Raspberry Pi: the app would consume all of the available memory until the OS froze. I had the same challenge on my Synology DiskStation at one point, but that was fixable with a $15 4 GB memory upgrade. I was not able to do anything like that with the Raspberry Pi 2.

My workaround for the limitations of the Raspberry Pi 2 was to buy more powerful, and thus more expensive, computers. The two remote machines I had running were fanless Zotac ZBoxes. They were great; the only downside was that they cost significantly more than a Pi. I bought a low-end Celeron version of the Zotac for around $150 plus memory and drives, about four times as much as a similar Pi 2 setup. At the time I had no good alternatives.

Then someone at work put me onto the Hardkernel ODROID-HC1, which was designed as a personal cloud type of machine and came with a case that holds an internal hard drive. The beauty of these machines was that they had 2 GB of memory and were not much more expensive than a Pi 2, at around $50. I think I spent maybe $70 including the memory card and so on, not counting the hard drive. And since the hard drive was an internal one, it was cheaper than using an external drive with a Pi.

I purchased two ODROIDs within a year. One was at a friend’s house; the other was replicating data at home. I had problems with what I think was corruption of the OS on the SD card on both machines. The remote host had to be rebuilt twice, and by the third time it had a problem I gave up; I just didn’t want to spend the time troubleshooting it. I’m not sure why they kept getting corrupted. I still have one of them at home, and it has been pretty stable this year. I gave the remote one to the friend who hosted it for me, who was going to see if he could use it for something. The ODROID was a good idea, but it did not turn into a long-term solution for me.

When I first started this private cloud project, the public consumer file storage services did not really offer zero-knowledge encryption. The only service at the time that was financially viable for me was MEGA. I tried it out, it wasn’t seamless for me, and so I abandoned the public cloud approach and went with my private cloud. Today there are a few service providers that cater to people looking for zero-knowledge encrypted remote storage. There still aren’t a lot of them, but I was glad to see the landscape had evolved since I started this project.

I’m not sure what triggered my research into public clouds again, but over last summer I started looking at the cost-benefit of going with a zero-knowledge encryption public cloud provider instead of continuing to build my own network. I found a provider I liked, Tresorit, which ticked all the boxes for what I was looking for. The challenge was that 2 TB cost over £20 a month, and their only cheaper plan did not have enough space for my needs.

When calculating the lifecycle of the hardware I buy for my own private cloud network versus the service costs of a provider, it’s probably cheaper to keep doing it myself. Originally that was not true. Between when I started investigating a move to a service provider and today, the available kit changed: the Raspberry Pi 4 came out. Needing to replace the ODROID and possibly one Zotac at minimum in the next three years would have cost several hundred pounds. The 4 GB Pi 4 clocks in at around £60 for the computer and all the accessories I needed, minus a hard drive; I am recycling a hard drive, so there is no additional cost there. When they announced the Pi 4, I immediately put in an order for one of the 4 GB models, hoping it would perform well enough to use in my private cloud network. On paper it solves the memory usage issue of the Pi 2 and 3.

At the time of writing I have had my first Pi 4 running in “production” for almost three months. The software has been pretty stable. I am running Resilio within a Docker container on the Pi 4, and so far the system is consuming well under 50% of memory, usually somewhere between 1 and 1.5 GB. One of the other cleanup things I did was consolidate my many shares into five total shares, of which the Pi replicates four.
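For reference, a minimal sketch of how that container might be launched using Resilio’s official image; the paths are illustrative, not my exact command:

```bash
# Web UI on 8888, sync traffic on 55555, data on the recycled hard drive.
docker run -d --name resilio-sync \
  --restart unless-stopped \
  -p 8888:8888 \
  -p 55555:55555 \
  -v /mnt/usbdrive/sync:/mnt/sync \
  resilio/sync
```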

With the extra space I have on the remote node, I can also take local copies of the replicated data on that machine, which should complete my 3-2-1 backup strategy. Since I want extra resiliency in my plan, I will continue to take annual point-in-time offline copies of most of my data.

Since I am reusing hard drives right now (I overbought on size in the last upgrade, and the drives are great), I can get another Pi 4 for £60 and have a refreshed pair of remote nodes. I continue to use my Synology, my laptop, and a Linux server for the other nodes at home.

My costs this year are on target to be £60–£120, half the price of one year of the cloud storage service. The new machines should give me two to three years of service easily, especially since I’m deploying them with 5 TB drives and only using about 1.3 TB for what I’m backing up today.

I am pleased that the build-my-own approach is cheaper and continues to work out versus the public cloud option. As long as maintaining the system is not a lot of trouble, I picked the right option.

Pi Net Expands

My new Pi 3 B+ arrived today.  All my other Raspberry Pis are 2s, so this one should be significantly more powerful.  I didn’t really “need” it, but I wanted to play around with it.  I haven’t written longer posts in a while, but I now use one Pi as a RetroPie game console, one for OSMC (a Kodi-based open source media center), and two others that I am trying to set up a Docker swarm with (a sketch of that setup is below).  I hope to write more about my projects later.  Now, off to install Raspbian.
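The swarm bootstrap itself is short. With placeholder IPs, it is roughly:

```bash
# On the first Pi: make it the swarm manager.
docker swarm init --advertise-addr 192.168.1.50
# The init command prints a join token; run the printed command on the second Pi:
docker swarm join --token <token-from-init> 192.168.1.50:2377
```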

Inbound Network Lockdown With an SSH Proxy

Ever since I started building my backup network using Raspberry Pis and BitTorrent Sync, I’ve kept a list of other home technology projects I want to do. One of the things that’s been in my head, though never high on the list, was creating a VPN endpoint on my home router so I could VPN in while remote. I tried playing around with OpenVPN and ran into some hiccups; I didn’t have the time I really needed to sit down and figure it out, so I gave up on the project. Even while I was trying to set up an inbound VPN, friends of mine at work were saying it was probably overkill anyway.

At least one person, if not more, recommended that I set up an SSH proxy on one machine and use that to connect to all the other resources. I liked the idea but never gave it much focus until recently. I have a Zotac ZBox C Series mini computer that I have been running Ubuntu Linux on for a while, breaking it in as a next-generation BitTorrent Sync machine for my network. I hadn’t deployed it yet, so I figured I would try using it as my SSH proxy.

The proxy itself was trivial to get running on the box. Deciding how to configure my computer took some thought, though: I wanted to be connected to the proxy in one web browser without affecting all my other internet traffic. I opted to try FoxyProxy in Firefox. I do not normally use Firefox day to day, so dedicating that browser to proxied connections to my home network seemed reasonable.
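The core of the whole thing is a single SSH command establishing a dynamic (SOCKS) tunnel; the hostname and port below are placeholders:

```bash
# Open a SOCKS proxy on localhost:1080, tunnelled through the box at home.
# -D: dynamic port forwarding, -N: no remote command, -q: quiet.
ssh -D 1080 -N -q user@home.example.com
# FoxyProxy in Firefox then points at SOCKS5 localhost:1080,
# so only that browser's traffic goes through the tunnel.
```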

The setup worked with less than 30 minutes of configuration. Once I had proven to myself that I could do this and maintain it, I needed to figure out what my permanent solution would look like. The Zotac likely won’t stay at my house, and I’m using it for other things; if I’m going to have a proxy I use often, I want it isolated, doing one thing only. So I opted to set up one Raspberry Pi as a dedicated SSH box. At the moment I have enough spare Pis to dedicate one. I initially had concerns about the 100 Mbps limit on the network card, but I doubt I’ll be doing anything high-traffic enough to worry about it.

My setup for now is simple enough: a plain-vanilla Raspbian install on a Raspberry Pi 2 with a 16 GB SD card, plugged into an Ethernet jack on my router. Besides SSH, I installed Fail2ban to protect myself from potential attacks from the internet, and I used a password of significant complexity for the login. I have a dynamic DNS entry set up so it’s easy to connect from anywhere.
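A sketch of that hardening step; the ban values are illustrative, and the jail name varies between Fail2ban versions:

```bash
sudo apt-get install -y fail2ban
# Ban an IP for an hour after five failed SSH logins.
sudo tee /etc/fail2ban/jail.local >/dev/null <<'EOF'
[sshd]
enabled  = true
maxretry = 5
bantime  = 3600
EOF
sudo service fail2ban restart
```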

This setup works well on my laptop, but I am not sure I could get it working on my iPad. That’s one trade-off with this configuration; however, most if not all of the services I previously exposed to the internet should be fine with this limitation. If anything, I can use remote desktop software from the iPad to connect to a local machine and bring up those services there.

The next things I want to do involve making it easier to access my home network remotely from my laptop. That mainly means configuring Royal TSX sessions to use my proxy details, and setting up the proxy connection within Royal TSX itself. I also need to finish creating localhost entries for my home network services, as well as bookmarks within Firefox, to make accessing everything easier. As much as I want to do all of that up front, it’s a bit of effort that I will probably just take care of as I need it.

An additional enhancement I would like to make is to go beyond Fail2ban and a strong password to enabling two-factor authentication. That will require a bit more skill for me to learn and at least one hardware USB token. For now I consider that a reach goal.

I still want to find some time to play with an inbound VPN configuration, even if it’s just to show myself I can do it. For now, though, the SSH proxy more than meets my needs and is working today. There are other projects on the “Technical Maker Board” I set up that I’d like to get to next.

BitTorrent Sync Network Phase II

Since around the new year I’ve been trying to figure out what the next phase of my private cloud backup network will look like. The design originally leveraged several Raspberry Pis, but in practice only one remains at a remote location; at the other remote locations I’m using old Mac Minis. Even the one Pi I do have deployed is inoperable and needs to be rebuilt. I’m not sure if it’s specific to the Raspbian build of BitTorrent Sync, but I’ve had several problems with the Raspbian installs losing their license identity. When that happens I have to re-add the BitTorrent Sync Pro license and reindex all my shares on that node. It’s annoying, and I’m concerned it will keep happening.

At the same time, I’ve been looking at ways to better secure the remote data. All of my systems are at friends’ and family’s houses, so endpoint security has been less of a concern than network security; still, when BitTorrent Sync announced encrypted folders, I was extremely curious. After playing around with the feature for a while I have opted not to use encrypted folders, but it’s something I’m still considering for the future.

On a side note, I’ve been contemplating a Linux desktop to complement my Mac. I shopped around and found a nice, inexpensive Zotac, and put a 120 GB SSD and 8 GB of RAM in it. I wasn’t sure whether I would use it as a desktop or to replace my BitTorrent Sync Pi. Right now it’s keeping a replica copy of my data at home so I can test it out. I’m currently running Ubuntu 14.04. It’s been running pretty well, though I did have one or two anomalies with sync folders, so I am not yet ready to deploy it in the wild. My long-term goal is to replace all the Minis with something like the Zotac. The new build also increases that node to 4 TB of storage, versus the Raspberry Pi I have deployed with a 1 TB hard drive.

As part of my incremental upgrades, I have put a 4 TB drive in one of the remote Minis; the other has a 2 TB drive. That gives me some headroom, since a fully seeded backup is around a terabyte.

I’m holding off on any additional drive upgrades until I can confirm that the Ubuntu-based Zotac is working well. If it is, I hope to pick up another one or two.

The Raspberry Pis are not going to go to waste. I’m using one of them as an extra replica copy at home for BitTorrent Sync; I have a higher tolerance for failure there since it’s an extra copy of data I already have at home. I’m installing some applications on another one. Future projects for the remaining ones include a possible reverse proxy, Wi-Fi hotspot, and/or webcam. I just need to find the time for all these projects. For now I just want to finish my backup solution upgrade.

And if you’re reading all this thinking to yourself, “Do I really need a four-node private cloud network?”, the answer is of course not. The other answer is that it’s really six nodes if you count the three I’m running at home. In the end I didn’t save money using a private cloud: even though a lot of the equipment was lying around, there were some upfront costs that I won’t recoup unless I use the system for two to three years. The reason to do it was more because I can, and because I wanted control over my information. I’m glad I’m continuing to tinker with this, since I’m learning a lot and having a lot of fun.

Pi Net Upgraded

I have been challenged to figure out an easy way to upgrade the BitTorrent Sync software I use to replicate my files on my Raspberry Pis.  The upgrade process on my Mac is super easy.  The upgrade process on Raspbian should be easy too, but the person who had been maintaining the easy-to-upgrade package hasn’t updated it in nine months.

The internet is a small world.  Every single how-to I found about installing BitTorrent Sync on a Pi (I found at least half a dozen) points to that one repo.  I found instructions for setting everything up manually, but it was not a simple process.  Right as I was reviewing the steps for the manual install, I stumbled across someone who had forked the original package and is maintaining the latest update of the sync software.

After testing the install process on a clean Pi, I ran an upgrade on one of the two Pis I currently have at home.  The upgrade worked great.  I then ran the upgrade on the other local Pi and on the remote units with no major issues.  The new repo says they stay in sync with BitTorrent Sync’s release schedule to within about 24 hours, so hopefully I will be able to stay up to date from now on.
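For what it’s worth, the upgrade itself boiled down to a couple of apt commands; the package name here is an assumption, since it is whatever the fork’s README specifies:

```bash
sudo apt-get update
# 'btsync' is a placeholder for the fork's actual package name.
sudo apt-get install --only-upgrade btsync
```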

I am hoping my upgrade (from 2.0.94 to 2.2.7) solves some of the minor bugs I have been experiencing at one location.

This was the first major upgrade of all the nodes in my private cloud.  I was a bit cautious, but in the end nothing got corrupted and the system functioned as it should.

Next up I am thinking about another remote node location.