
Backing up to Koofr using Restic/Rclone

28-Nov-20

Koofr is a pretty good deal: it’s 1TB of storage for a lifetime through Stack Social, for around $170. If you get a discount from Stack Social (like when I wrote this, during their Black Friday special), you can get that 1TB lifetime for $102—the prevailing discount of 40% for software during Black Friday/Cyber Monday.*

The big upside is that, unlike other providers, they support standard protocols: WebDAV, for example. They also have some genuinely useful features: guest upload links (where anyone can upload to your space) and download links (which most places have).

Most interestingly, rclone supports Koofr natively. And restic (the UNIX backup program) can use rclone as a back-end. So, you can back up your Linux/UNIX systems to Koofr.

Now, the downside is that the speeds are pretty low. I tried an upload of 100 MB using rclone and it went at around 200 kB/s. This slowness could have been my network, but I don’t imagine it will get much faster. That said, since my ISP has a data cap, this slow speed sort of automatically limits how much I upload.

The first step was to set up (under root) an rclone configuration for Koofr, following the steps here, which just amounts to the usual rclone command-line configuration wizard, selecting Koofr as the storage type. I called the rclone configuration koofr.
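The wizard itself is launched with:

rclone config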

Then, I created a very long password and initialized a restic back-up repo with:

restic -r rclone:koofr:restic/server init

This initializes the repo in a /restic/server directory within my Koofr storage. I used this indirection in case I want to back up a different host under the /restic directory in my Koofr storage. Then, backing up was fairly easy:
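# sketch: /home stands in for whatever gets backed up; the repo and
# password come from environment variables (set up below)
restic backup /home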

I put this in a cron job that calls a script called run-koofr.sh:
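A minimal sketch of what such a script might look like (the path to koofr.sh and the backed-up directory are illustrative):

#!/bin/sh
# run-koofr.sh -- pull in RESTIC_REPOSITORY and RESTIC_PASSWORD
. /root/koofr.sh
# back up to the Koofr repo defined in the environment
restic backup /home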

The sourced file koofr.sh just contains:
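# sketch: the repo matches the init command above; the password is a placeholder
export RESTIC_REPOSITORY=rclone:koofr:restic/server
export RESTIC_PASSWORD='your-very-long-password-here'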

I am storing the repo location as an environment variable. This was unnecessary, but I have a similar setup for B2 (Backblaze’s object storage). That setup requires me to set the Backblaze credentials, which I did not want to place in a cron file; or maybe restic requires them to be in environment variables (they cannot be passed on the command line). It was a while ago, so I don’t remember.

This worked fairly well, except that 1TB wasn’t enough, so I had to pare back and only upload photos and a subset of documents.

The main downside to Koofr is the fixed storage. With B2 (which I will still keep as my primary back-up), you pay for what you use, and the storage grows if you need more. So, your backup won’t fail simply because you didn’t have enough storage. And the risk of getting overcharged is minimal, since B2 is so cheap.

Although, not as cheap as Koofr, if this lifetime deal pans out.

* At the time of writing, I do not have an affiliate relationship with Koofr or with Stack Social.


Splitting up a Zoom Video File

18-Nov-20

I recently helped out with my wife’s writing conference. During this conference, they recorded Vanessa Brantley Newton doing 1:1 critiques with authors and illustrators.

The conference organizers (my wife and her co-leads) offered the attendees individual recordings of their critiques. So, I had to take the overall Zoom download and split it up.

There were a number of files available from Zoom, but the main ones I was interested in seemed to be GMT<datetime>_<username>-_1920x1040.mp4 and GMT<datetime>_<username>-_gallery_1920x1040.mp4. The first had a “speaker” view and the second had a “gallery” view. (In both views, screen sharing took the whole screen, and the speaker was reduced to a window.)

The hardest part was finding the time points at which to split the files up: a little bit because it was a manual process, but more importantly, because it was difficult for me not to get engrossed in the discussion. While illustration isn’t my cup of tea, it is intriguing watching someone deliver constructive feedback.

Anyway, once I had these time segments recorded, I had a few ways to split them up:

  1. MKVToolNix: this worked well, but only outputs Matroska (MKV) files. I wasn’t sure if these files would be accessible to the people downloading them.
  2. FFmpeg: it took a while to figure out the syntax, but this could spit out mp4 files, and it can copy streams directly from input to output, so it executes very fast.

For #1, I used the MKVToolNix GUI. It seemed to work OK, but I did not pursue it very far.

For #2, I used the following (Unix) command line:
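# sketch: the input filename is illustrative; the cut points (4:48 to 21:40)
# are the ones discussed below
ffmpeg -y -i input.mp4 -c copy -map 0 -ss 4:48 -to 21:40 gallery-1.mp4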

This worked OK, except I found that the timings were off. I said to cut the 1st segment at 4:48 (4 minutes 48 seconds), but it ended up being a bit later (at 4:52). This wasn’t a big deal, except the timing at 21:40 ended up also being later, and the 1st file included a tiny bit of discussion about the 2nd person.

The reason for this timing uncertainty is that ffmpeg can’t really break up a file at an arbitrary location. Since the video codecs are incremental (that is, each frame depends on the previous frame), you can’t just break the stream at any arbitrary point. You can only break it on what are called key frames: points in the stream where the entire image is sent. Unfortunately, the placement of these key frames does not line up with where I want to cut.

I even tried trimming some of the post-critique banter by following the above command with:

ffmpeg -y -i gallery-1.mp4 -c copy -map 0 -t 16:32 "Person 1 – Gallery View.mp4"

This forces a truncation of the 1st segment after 16:32 (16 minutes 32 seconds). While I can’t pick where this segment starts, I can truncate it anywhere I want.

Unfortunately, the only way to split at arbitrary points is to decode and re-encode the whole video stream. (The audio can still be copied.) I used the following command to do so:
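# sketch: libx264 is an assumption (the software H.264 encoder); the audio
# stream is copied rather than re-encoded
ffmpeg -y -i input.mp4 -ss 4:48 -to 21:40 -map 0 -c:v libx264 -c:a copy "Person 1 – Gallery View.mp4"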

This works well, but is much slower. While it runs at 9x the frame rate on my Unix server, the copy version is much faster still.

I also tried the Nvidia-accelerated ffmpeg on my Windows PC (with a GTX 1050 Ti):
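# sketch: same trims as above, with the NVENC hardware encoder standing in
# for the software one
ffmpeg -y -i input.mp4 -ss 4:48 -to 21:40 -map 0 -c:v h264_nvenc -c:a copy "Person 1 – Gallery View.mp4"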

With GPU acceleration, this 2nd version runs at 20x the frame rate. So, for what is otherwise a pointless re-encode, it’s not so bad.

Edit: one more quirk. It seems like the GPU-accelerated (Windows) version of ffmpeg creates output that is always a multiple of 10 seconds in length. I’m not sure if this is a limitation of key-frame placement in the output stream of the Nvidia CUDA-based encoder.


Ansible Playbook to Create Windows Ansible User (bootstrap)

16-Jun-20

In my last post, I looked at setting up WinRM to give Ansible remote access into a Windows computer. Presumably, one would use a regular administrative account to do so. However, there are some reasons a regular user account would be less desirable.

Namely, the password has to sit in the Ansible hosts file. Also, if one is using a Microsoft account (as Windows 10 nags about doing) or a Domain account, the password also provides access to any Microsoft service or (respectively) to computers on the domain (not to mention domain services such as email, file access, etc.)—not just the hosts Ansible is trying to connect to.

So, it would be better to have a dedicated account for Ansible’s use only. Ideally, one could push this account to Windows computers: that is, have Ansible create the Windows account (logging in once using existing credentials). Further Ansible processes could use the newly-created account.

I wrote an Ansible playbook to create this account, add it to the Administrators group, and then hide it (using registry settings) from the login screen:
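A minimal sketch of such a playbook (the account name ansible and the password value are placeholders; the registry edit is the standard SpecialAccounts\UserList hide):

---
- hosts: windows
  tasks:
    - name: Create a dedicated ansible user in the Administrators group
      ansible.windows.win_user:
        name: ansible
        password: ReallyLongRandomPasswordGoesHere
        password_never_expires: yes
        groups:
          - Administrators

    - name: Hide the account from the login screen
      ansible.windows.win_regedit:
        path: HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList
        name: ansible
        data: 0
        type: dword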

Of course, the password above should be changed (but still set to a really long random character sequence).

You can then edit the host variables in your Ansible hosts file to use the above-created account.
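For example (NTLM over WinRM, with certificate validation off for the self-signed certificate; the group name and password are illustrative):

[windows:vars]
ansible_user=ansible
ansible_password=ReallyLongRandomPasswordGoesHere
ansible_connection=winrm
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore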


Ansible setup for Windows using WinRM

12-Jun-20

The Ansible docs are a bit difficult to follow on setting up Windows using WinRM as the connection method. They state that there’s an easy-to-use script.

Details about each component can be read below, but the script ConfigureRemotingForAnsible.ps1 can be used to set up the basics. This script sets up both HTTP and HTTPS listeners with a self-signed certificate and enables the Basic authentication option on the service.

https://docs.ansible.com/ansible/latest/user_guide/windows_setup.html#winrm-setup

However, they later warn:

The ConfigureRemotingForAnsible.ps1 script is intended for training and development purposes only and should not be used in a production environment, since it enables settings (like Basic authentication) that can be inherently insecure.

This warning is quite discouraging. However, looking at the script, it is true that Basic authentication is enabled by default. But it can be disabled by adding a -DisableBasicAuth option. You can modify their listed procedure of downloading and running the setup script to read as follows:
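# sketch, modeled on the docs' listed procedure, with -DisableBasicAuth appended
$url = "https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1"
$file = "$env:temp\ConfigureRemotingForAnsible.ps1"
(New-Object -TypeName System.Net.WebClient).DownloadFile($url, $file)
powershell.exe -ExecutionPolicy ByPass -File $file -DisableBasicAuth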

This script then sets up both HTTP and HTTPS listeners for WinRM. NTLM authentication for the WinRM service is enabled by default.

Now, if you wanted certificate-based authentication, the above script wouldn’t do that. But for my purposes, that really isn’t worth the hassle. Since I am using NTLM authentication, my Ansible inventory does have a username/password listed in it. But, that inventory sits on a trusted host to begin with. (If it didn’t, anyone could start modifying my Ansible playbooks.)

You should review the firewall rules after running this script. In my case (caveat: I did a lot of experimenting before I ran the script, which may have also changed the firewall rules), it showed 3 rules. The 1st allowed an HTTP listener for Domain & Private networks; it was enabled. The 2nd allowed an HTTP listener for Public networks (local subnet only), but it was disabled. The 3rd allowed an HTTPS listener for all networks and was enabled. I modified that last one to be Domain & Private only. So, if I take a laptop out into the field, the firewall should block the traffic (based on the Public network profile).


Folding-at-Home on FreeBSD in an iocage jail

29-Mar-20

Given the urgency of research on the novel coronavirus (COVID-19), I have been running Rosetta@Home and Folding@Home on my Windows machine.

BOINC (which powers Rosetta@Home) runs reasonably well on FreeBSD with some special instructions.

I also wanted to run Folding@Home. I currently have a Linux VM in bhyve to do so. But, that is rather inefficient. How bad would it be to run it in a Linux compat jail? Turns out, it’s not too bad.

Since BOINC also runs under Linux compat, I basically followed the BOINC instructions (translating the instructions from FreeNAS to stock FreeBSD iocage).

I then installed (within the jail) the rather newly-refreshed biology/linux-foldingathome port.

Here are step-by-step instructions. I am using VNET jails and assign jails to the 192.168.2.0/24 network, attached to a vnet1.X interface (X picked by iocage) that iocage creates on the fly.
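Jail creation went something like this (the jail name, release, and addressing are illustrative; the interface/bridge wiring will depend on your host):

iocage create -r 12.1-RELEASE -n folding vnet="on" boot="on" ip4_addr="vnet0|192.168.2.20/24" defaultrouter="192.168.2.1"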

Get a console to the jail with:
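iocage console folding   # jail name from the creation step above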

Within the jail (root console from previous command), I ran

Then, it’s time to build the new biology/linux-foldingathome, which basically encapsulates the recipe here.

To save building dependencies from source, I used the same process as the BOINC instructions: use pkg to get as many as possible. In this case, linux_base-c7 is the only one:
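# inside the jail
pkg install -y linux_base-c7

Then build and install the port itself (assuming the ports tree is present in the jail):

cd /usr/ports/biology/linux-foldingathome
make install clean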

Now, I edited /usr/local/etc/fahclient/config.xml and updated my username and passkey. I deferred joining Team FreeBSD (team #11743) until I knew things were working correctly.

I also added the following lines to the config file to allow access within my trusted network (192.168.1.0/24):

<allow>127.0.0.1 192.168.1.0/24</allow>
<web-allow>127.0.0.1 192.168.1.0/24</web-allow>
<password>p@33w0r6</password>

Enable the fahclient service:
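# still inside the jail; service name per the port's rc script
sysrc fahclient_enable="YES"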

Now, exit the jail’s console. For some reason I did not have linsysfs loaded (yet), so I had to load it. (I am copying all the things that were previously loaded in the BOINC instructions here):
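# on the host; module list mirrors the BOINC instructions (some may
# already be loaded)
kldload linux
kldload linux64
kldload linprocfs
kldload linsysfs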

To make it permanent, add linsysfs_load="YES" to /boot/loader.conf, linux_enable="YES" to /etc/rc.conf, and security.bsd.unprivileged_idprio=1 to /etc/sysctl.conf. (I already had these added.)

Set up the correct entries in the Jail’s fstab:
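iocage fstab -e folding   # opens the jail's fstab in your $EDITOR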

Update to look as follows (change the absolute path to correspond to where your iocage jail sits in the host environment):
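# sketch: paths assume a zroot/iocage layout and the jail name folding
linprocfs  /zroot/iocage/jails/folding/root/compat/linux/proc      linprocfs  rw  0  0
linsysfs   /zroot/iocage/jails/folding/root/compat/linux/sys       linsysfs   rw  0  0
tmpfs      /zroot/iocage/jails/folding/root/compat/linux/dev/shm   tmpfs      rw,mode=1777  0  0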

Now, it’s time to try it out:
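iocage restart folding
iocage exec folding tail -f /var/db/fahclient/log.txt   # log path is a guess; check the rc script if it differs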

Hopefully, you should see something informative, and your CPU usage should start going up with assigned work.

See my stats here.


Jailed SyncThing using iocage on FreeBSD 12

07-Dec-19

So, we’ve been hitting the 1TB bandwidth limit in our household. The majority of our use is video streaming. However, while keeping tabs on our usage, I did identify some room for improvement in the use of OneDrive for file synchronization.

The way things work right now is that I upload all my photos (mostly from my camera) to OneDrive. OneDrive is great for this purpose. The app will even notify me when it finds pictures in new folders and ask whether I want to upload those photos, too.

However, because I don’t want to rely solely on my OneDrive account—that is, I am preparing for the scenario where someone hacks my account and deletes all my photos—I do want a local copy as well. I do this local download with rclone, pulling all the OneDrive folders back to my NAS server.

Things get worse when I then run ON1 Raw for photo editing and it starts downloading every picture I have ever taken on OneDrive to index them. Since I recently ran ON1 when we were at 880 GB (out of 1024 GB) of usage, I shut this down.

For this purpose (having direct access to all my pictures), I wanted to go back to the days where my Android phone would just directly connect to the NAS. That is, I will continue to upload to OneDrive and download a backup copy. But, additionally, I want a direct copy (from my Android phones) to my NAS.

I have an app called Sync.Me that works great for this synchronization. The problem is that it does no encryption (uses Samba directly) and I don’t want plaintext Samba exposed to my Android devices.

So, that is where Syncthing comes in. Syncthing is pretty secure, providing encryption in transit. In addition, devices are authenticated by a device ID (derived from each device’s certificate).

However, I want to set up such a service for each person in the house. So I need 5 syncthing services running. I decided to make an iocage template jail and stamp out 5 running jails for this purpose.

Overall Procedure

I followed @vermaden’s instructions with a few changes:

  • Vermaden notes that syncthing is not possible in a jail. For me, this worked, but I had to use a VIMAGE/VNET jail.
  • I made fewer edits to /etc/rc.conf. I only enabled the syncthing service in /etc/rc.conf. iocage already disabled sendmail. I did not change anything else.
  • In addition to the syncthing package, I installed the ca_root_nss package. Syncthing was having a hard time with HTTPS relay servers without this package. Which makes sense, since ca_root_nss includes the root Certificate Authority (CA) certificates needed to validate HTTPS connections.
  • I followed all of vermaden’s advice on creating /var/log/syncthing.log, including newsyslog and permissions.
  • I did not create a default /syncthing/ directory, since each user will need only a Photos directory that I would mount and define separately.
  • I installed the vim-tiny package in the template jail.

Since I was going to be doing this multiple times, I created a template jail called syncthing-template. I also created a script to stamp out jails for each user’s syncthing service:

create_st_jail.sh
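A sketch of that script (the template name is from above; the addresses, release, gateway, and the mod script’s arguments are assumptions, though the prinet/pubnet parameterization matches the description below):

#!/bin/sh
# create_st_jail.sh <user> <host-number> -- stamp out a syncthing jail from the template
user=$1
num=$2
prinet="192.168.1."   # private, trusted network
pubnet="192.168.3."   # public, untrusted WiFi network

iocage create -t syncthing-template -n "syncthing-${user}" \
  vnet="on" boot="on" \
  ip4_addr="vnet0|${prinet}${num}/24,vnet1|${pubnet}${num}/24" \
  defaultrouter="${pubnet}1"

./mod_st_jails.sh "syncthing-${user}" "${user}" "${prinet}${num}"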

The main thing this script does is set up networking. It does so by defining two VNET interfaces: vnet0 & vnet1. The first (vnet0) is a private, trusted network (designated by the IP prefix 192.168.1. and parameterized in the variable prinet). The second (vnet1) is a public, untrusted network (designated by 192.168.3. and parameterized in the variable pubnet). This second network is where my WiFi devices live.

The idea here is that, since we are running syncthing in a jail, it’s going to be more convoluted and more difficult to log in to the admin page. We would need to either forward ports on the jail’s localhost interface or run a web browser within the jail. I don’t even understand (yet) how localhost works in a FreeBSD jail, and I definitely didn’t want to install a web browser (and therefore an X Windows GUI) in each jail.

This convenience (at some cost of security) was worth it to me. However, I can see others installing X Windows and Firefox in the jail to administer it. This alternative is especially enticing if you don’t have your networks segmented the way I do.

mod_st_jails.sh

The last line of the above script calls mod_st_jails.sh. This script:

  • Runs the jail and therefore the syncthing daemon once to create a config.xml, certificates, etc.
  • Stops syncthing & the jail
  • Sets up fstab entries using nullfs to point to the ZFS file systems that store the actual data
  • Edits the config.xml file to use the private IP address for GUI (administration) & turns on TLS for the GUI

Here it is:
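#!/bin/sh
# mod_st_jails.sh <jail> <user> <private-ip> -- sketch; the syncthing home
# path, dataset locations, in-jail mount point, and GUI port are assumptions
jail=$1
user=$2
gui_ip=$3

# 1. Run the jail (and thus syncthing) once so it generates config.xml, certs, etc.
iocage start "${jail}"
sleep 30

# 2. Stop syncthing and the jail
iocage stop "${jail}"

# 3. nullfs-mount the ZFS dataset holding the user's photos into the jail
iocage fstab -a "${jail}" "/tank/users/${user}/Photos /Photos nullfs rw 0 0"

# 4. Point the GUI at the private address and turn on TLS
config="/zroot/iocage/jails/${jail}/root/usr/local/etc/syncthing/config.xml"
sed -i '' \
  -e "s|<address>127.0.0.1:8384</address>|<address>${gui_ip}:8384</address>|" \
  -e 's|tls="false"|tls="true"|' \
  "${config}"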

Users & Permissions

The syncthing FreeBSD package creates a user named syncthing within the jail. However, this user does not exist in the host system. To make things look nice, I created this user with the same user ID as the jail user:
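# sketch; flags beyond -u/-g are illustrative
root@server:~ # pw groupadd syncthing -g 983
root@server:~ # pw useradd syncthing -u 983 -g syncthing -d /nonexistent -s /usr/sbin/nologin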

You see that I ran these commands in the host system (called server).

You’ll see that I used user ID 983 for this syncthing user. This was the user ID within the jail template, so all subsequent jails will have this user ID. I would expect that the syncthing package always chooses this user ID.

Next, I gave syncthing exactly the same access as the owner to the libraries I wanted to share/synchronize. In the host system (not in the jail), for example in the directory to sync my LG G8 phone:
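# sketch: the dataset path is illustrative; full_set:fd:allow approximates
# "same access as the owner", inherited by new files and directories
setfacl -m u:syncthing:full_set:fd:allow /tank/photos/lg-g8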

If you don’t do this permission-granting step, syncthing will complain that it cannot create the .stfolder hidden directory.

The nice thing about the above is that permissions can be assigned on a case-by-case basis to different folders or files in the target directory. And all of this permission management is outside the jail.


IPv4 VPN pfSense tests

17-Oct-16

Looking at Windscribe VPN and wondering how much crypto capability impacts VPN connection speed. One thing I noticed while doing this is that Windscribe seems to load-balance heavily in the Texas area. My IP address would change pretty much with each connection.

First, here’s Windscribe connecting through the IP 75.126.39.93 (SoftLayer). This is a really fast test.

And here’s CPU usage during this test:

procs memory page disks faults cpu
r b w avm fre flt re pi po fr sr md0 md1 in sy cs us sy id
0 0 0 1296M 3473M 0 0 0 0 0 14 0 1 1457 178 5129 0 2 98
0 0 0 1296M 3473M 0 0 0 1 0 14 0 0 11709 262 26320 0 22 78
0 0 0 1296M 3473M 0 0 0 0 0 14 0 0 6260 139 14965 0 10 90
0 0 0 1296M 3473M 1 0 0 0 0 14 0 0 1219 239 4640 0 2 98
0 0 0 1296M 3473M 0 0 0 0 0 14 0 0 1636 266 5506 0 3 97
0 0 0 1296M 3473M 1454 0 0 0 1567 14 1 0 1150 1029 4528 6 3 92
0 0 0 1296M 3473M 1 0 0 0 0 14 0 0 1598 238 5377 0 1 98
0 0 0 1296M 3473M 0 0 0 0 0 14 0 0 1393 144 4923 0 2 98
0 0 0 1296M 3473M 6 0 0 0 0 14 0 0 2944 7754 10633 3 8 88
1 0 0 1296M 3473M 2 0 0 0 0 14 1 0 3340 10100 12767 4 10 87
0 0 0 1317M 3473M 5437 0 0 14 7392 14 2 1 2951 18436 11436 10 20 70
1 0 0 1296M 3473M 8205 0 0 6 9723 14 0 0 2675 8229 10085 5 13 82
0 0 0 1296M 3473M 11 0 0 0 0 14 0 0 2084 4658 8562 2 4 95

Now, with hardware encryption via the cryptodev driver. Note that this time, I’m connected via 173.208.68.218 (Nobis Technology Group).

procs memory page disks faults cpu
r b w avm fre flt re pi po fr sr md0 md1 in sy cs us sy id
0 0 0 1291M 3472M 2 0 0 0 0 14 0 0 326 276 2849 0 0 99
0 0 0 1291M 3472M 1 0 0 0 0 14 0 0 7242 275 17068 0 15 85
0 0 0 1291M 3472M 9 0 0 0 0 14 0 0 7491 352 17579 0 11 89
0 0 0 1291M 3472M 0 0 0 0 0 14 0 0 2766 360 7814 0 3 97
0 0 0 1291M 3472M 1 0 0 0 0 14 0 0 2829 139 7868 0 4 96
0 0 0 1291M 3472M 4 0 0 0 0 14 0 0 2018 294 6275 0 2 98
2 0 0 1292M 3472M 2810 0 0 12 3783 14 0 0 987 3296 4316 2 5 94
0 0 0 1291M 3472M 10787 0 0 9 13251 14 1 2 1576 12830 5953 9 15 76
0 0 0 1291M 3472M 2 0 0 1 0 14 0 0 1386 245 4937 0 1 99
0 0 0 1291M 3472M 2 0 0 0 0 14 0 0 1698 171 5529 0 2 97
0 0 0 1291M 3472M 3 0 0 0 0 14 0 0 1599 285 5395 0 2 98
0 0 0 1291M 3472M 0 0 0 0 0 14 0 0 1486 213 5148 0 1 99
0 0 0 1291M 3472M 4 0 0 0 0 14 0 0 513 758 3392 1 1 99
0 0 0 1291M 3472M 10 0 0 0 0 14 0 0 567 1576 3806 1 2 97
0 0 0 1291M 3472M 4 0 0 0 0 14 0 0 187 627 2705 0 1 98
0 0 0 1291M 3472M 1447 0 0 0 1566 15 1 0 63 1098 2376 5 2 92
0 0 0 1291M 3472M 2 0 0 0 0 14 1 0 53 219 2277 0 0 100

There’s a modest decrease in the CPU usage (particularly user and system).

Finally, here’s a Comcast XFinity speed test. Note that this test runs twice: once for IPv4 and once for IPv6.


procs memory page disks faults cpu
r b w avm fre flt re pi po fr sr md0 md1 in sy cs us sy id
0 0 0 1274M 3476M 452 0 0 1 550 5 0 0 244 766 2625 1 1 98
0 0 0 1274M 3476M 2 0 0 0 0 13 0 0 19 94 2130 0 0 100
0 0 0 1274M 3476M 7 0 0 0 0 13 0 0 40 211 2177 0 0 100
0 0 0 1274M 3476M 2 0 0 0 0 13 0 0 58 94 2210 0 0 99
0 0 0 1274M 3476M 0 0 0 0 0 13 0 0 10553 86 23681 0 17 82
0 0 0 1274M 3476M 464 0 0 0 620 14 1 9 14639 735 32217 0 23 77
0 0 0 1274M 3476M 0 0 0 0 0 13 0 5 14529 87 31823 0 21 79
0 0 0 1274M 3476M 0 0 0 0 0 13 0 0 14634 142 32066 0 22 78
0 0 0 1274M 3476M 1 0 0 0 0 13 0 0 14556 89 31951 0 23 77
0 0 0 1274M 3476M 1 0 0 0 0 13 0 0 9084 87 20645 0 13 87
0 0 0 1274M 3476M 1 0 0 0 0 13 0 0 52 93 2191 0 1 99
0 0 0 1274M 3476M 0 0 0 0 0 14 0 0 26 86 2132 0 1 99
0 0 0 1274M 3476M 0 0 0 0 0 13 0 0 1109 138 4317 0 1 99
0 0 0 1274M 3476M 0 0 0 0 0 13 0 0 2115 85 6321 0 3 97
0 0 0 1274M 3476M 1 0 0 0 0 13 0 0 2030 97 6164 0 2 98
0 0 0 1274M 3476M 0 0 0 1 0 13 0 1 2144 84 6385 0 3 97
0 0 0 1274M 3476M 0 0 0 0 0 13 0 0 2322 84 6738 0 3 97
0 0 0 1274M 3476M 3 0 0 0 0 13 0 1 1353 161 4815 0 2 98
0 0 0 1274M 3476M 1442 0 0 0 1568 13 1 0 18 964 2204 6 2 92
0 0 0 1274M 3476M 0 0 0 0 1 13 1 0 5 86 2097 0 0 100
0 0 0 1274M 3476M 4 0 0 0 0 13 0 0 4052 95 10416 0 8 92
0 0 0 1295M 3476M 4952 0 0 13 6745 13 1 0 14211 10939 32191 7 36 57
0 0 0 1274M 3476M 8208 0 0 6 9728 13 1 0 14144 4136 32017 3 32 64
0 0 0 1274M 3476M 0 0 0 0 0 13 0 0 14288 84 31540 0 21 79
0 0 0 1274M 3476M 4 0 0 0 0 13 0 0 14125 95 31417 0 25 75
0 0 0 1274M 3476M 0 0 0 0 0 13 0 0 13866 86 30744 0 24 76
0 0 0 1274M 3476M 2 0 0 0 0 13 0 0 465 145 3040 0 0 100
0 0 0 1274M 3476M 1 0 0 0 0 13 0 0 18 88 2121 0 0 100
0 0 0 1274M 3476M 2 0 0 0 0 13 0 0 203 89 2492 0 1 99
0 0 0 1274M 3476M 1 0 0 0 0 13 0 1 2275 102 6647 0 2 98
0 0 0 1274M 3476M 2 0 0 1 0 13 0 0 2244 88 6588 0 2 98
0 0 0 1274M 3476M 1 0 0 0 0 13 0 0 2270 141 6646 0 2 98
0 0 0 1274M 3476M 2 0 0 0 0 13 0 0 2266 92 6634 0 3 97
0 0 0 1274M 3476M 0 0 0 0 0 13 0 1 2192 92 6487 0 3 97
0 0 0 1274M 3476M 8 0 0 0 0 13 0 0 158 124 2410 0 0 100
0 0 0 1274M 3476M 6 0 0 0 1 13 1 0 58 116 2205 0 0 100
0 0 0 1274M 3476M 2 0 0 0 0 13 1 9 18 146 2138 0 0 100

These results weren’t as conclusive as I’d like. For example, I got wildly varying results using the VPN when I retested. In some cases, the CPU usage was close to 40% (even with hardware crypto). I also think that the result above with hardware crypto isn’t apples-to-apples, since the resulting data rates were lower (likely congestion outside of the VPN), and that’s likely limiting the taxation on the crypto—the crypto never gets exercised to the extent of the first test.


Making PlexPass Work

20-Dec-15

I’ve been using Plex for quite a while. You have to jump through some hoops (it only supports MKV files, not DVD or Blu-Ray directory structures), but it does in the end work quite well with my Fire TV stick and with the Google Nexus Player (Android TV).

I recently got a PlexPass subscription. This should let me create user profiles for people in the house. (These are called Managed Users or Home Users depending on the documentation.) I should also be able to sync content (offline copies) to portable devices.

Except, for whatever reason, the system was horribly broken in my house. The server would be unavailable for most of the UIs, especially the one where you designate users and define what server libraries they have access to.

After a good day of debugging (total), I found that there were two reasons this didn’t work.

Security Features

One laudable thing Plex does is try to maintain a secure connection to your server. This is detailed here. Essentially, they own the plex.direct domain and can assign any number of hostnames under that domain, all of which direct to your personal server. This is necessary because Plex has to create a security certificate that matches the hostname of your server. This hostname additionally needs to resolve to an IP address that works (a LAN subnet address while at home, and an Internet IP address when you’re outside your home).

The problem is that devices inside my house need to (for example) query the hostname 192-168-0-10.long_hash.plex.direct, and what is supposed to happen is that the DNS is supposed to return 192.168.0.10 (the local IP address of the server, within the LAN subnet).

Unfortunately, in my case, pfSense blocks this from happening because it doesn’t want a fully-qualified domain resolving to something within the house. The fix is to let pfSense know that plex.direct is allowed to resolve locally. This information is detailed here.

But, that didn’t fix the problem. The next thing that happened is OpenDNS (the DNS service I use) then also blocked the IP address lookup. The only way to fix that problem was to disable this option at OpenDNS:

Security setting in OpenDNS to help Plex resolve local addresses.

NAT Reflection

Curiously, even after I got the above DNS resolution working, my Plex server still didn’t work right. I would get a message saying secure connections aren’t possible and that I need to fall back to insecure connections. This happened even when I was accessing the player web interface on the plex server. How can it not create a secure connection to itself?

I did a tcpdump to investigate. I saw that the Plex server was trying to contact my WAN address. (I had to do port-forwarding to get the server accessible outside my home network.) I assumed that while I was on my subnet, Plex clients (including the web client) would use the LAN subnet address. For whatever reason (bad coding, bad configuration), this local-addressing isn’t the case.

The Plex web client was trying to contact the Plex server through the WAN (routable Internet) address. Most NAT systems can’t do this. Luckily, pfSense can handle it well. I just had to create a NAT reflection rule (with proxy) to accept those connections and redirect them as necessary.

Curve-Ball: Disk Space

The long version of the story is that things didn’t stop there. I still couldn’t access the server. I got farther than I did before, but the Plex Android app wasn’t syncing content. It wouldn’t transcode; it wouldn’t do anything. In fact, it wouldn’t even play a video. (Although songs were fine.)

What I found was that my Plex server was out of disk space. I basically had a 32GB boot/OS drive in there, and it was full. I did some cleaning and that helped. Then I also noticed that there’s a transcoding directory in the Plex server settings. I presume that this defaults to /tmp or to the Plex installation path, but in my case, both sit on a single small drive. So, I pointed it to my ZFS system, where there is plenty of space.

This has seemed to clear everything up. Huzzah!

Suspicious Quirk

I also had a long battle with Managed Users. Adding one for my wife (for example) did not show any selectable libraries I could share with her. I ended up blowing away my install, installing the PlexPass version, and then re-adding users. It’s probably coincidence, but it seemed that when I created libraries in a different order (adding SD quality before HD), things worked. But, it’s very unscientific, and perhaps it was related to the other issues already listed.


Setting up an FTP-only user on FreeBSD

31-Oct-15

I recently bought an IP camera. (To be honest, I went on a bit of a shopping spree for IP cameras.)

These cameras support FTP as a storage mechanism for video and snapshots (motion-detecting for example).

As a result, I wanted to set up an FTP user on my FreeBSD machine.

Initially, I tried creating a user with a shell of /usr/sbin/nologin, but that doesn’t work for FTP. FTP users need to have a shell listed in /etc/shells.

I saw this post which talks about FTP requiring a shell in /etc/shells, and that adding /sbin/nologin is a bad idea. Instead, it recommends making a copy in /usr/local/bin/ and adding that copy to /etc/shells.

Instead, I made a link—in case (for some reason) there’s an update to /sbin/nologin, I want the FTP user to get the update, too.

ln -s /sbin/nologin /usr/local/bin/nologin-ftp-only

I then added /usr/local/bin/nologin-ftp-only to /etc/shells.

To be even more secure, I made the FTP user’s account chrooted by creating /etc/ftpchroot.
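That file just lists the users to chroot into their home directories, one per line (the user name here is illustrative):

echo "ipcamera" >> /etc/ftpchroot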


Inateck USB3 2.5″ enclosure

16-Aug-15

I got an Inateck USB 3.0 2.5″ SATA III disk enclosure. I placed my OCZ SSD in there, and got the following CrystalDiskMark results:


-----------------------------------------------------------------------
CrystalDiskMark 5.0.2 x64 (C) 2007-2015 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 205.100 MB/s
Sequential Write (Q= 32,T= 1) : 116.950 MB/s
Random Read 4KiB (Q= 32,T= 1) : 63.336 MB/s [ 15462.9 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 88.529 MB/s [ 21613.5 IOPS]
Sequential Read (T= 1) : 188.123 MB/s
Sequential Write (T= 1) : 105.710 MB/s
Random Read 4KiB (Q= 1,T= 1) : 13.939 MB/s [ 3403.1 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 40.063 MB/s [ 9781.0 IOPS]

Test : 1024 MiB [H: 89.4% (99.8/111.7 GiB)] (x5) [Interval=5 sec]
Date : 2015/08/15 20:09:11
OS : Windows 8.1 Pro [6.3 Build 9600] (x64)
