What are we doing here?

This blog includes a series of videos and references to help new users and enthusiasts better understand how to use open source and free technology tools. The quick links include more information for accessing many of the tools covered, along with other references for learning more and taking advantage of these tools.

Click HERE to see the full list of topics covered!

Using Alpine as a Docker host

 

Who knew Alpine was a really cool Docker host? 

Alpine Linux is a really interesting, lightweight Linux distribution with a minimal footprint that belies the power packed into the distro. Not only is Docker easy to install on the platform, there are also a lot of nice touches and quality-of-life features that make what is normally the distro for building containers a great distro for hosting those containers as well. 

The latest version of Alpine ships Linux kernel 6.12, which brings a ton of improvements around the scheduler among other features. While a Docker host may not use most of these directly, hosted containers share the underlying host OS kernel (a big difference between containers and VMs), so application performance could potentially benefit as well. 
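As a rough sketch (assuming a fresh Alpine install, root access, and the community repository enabled in /etc/apk/repositories), getting Docker running only takes a few commands:

```shell
# Install Docker from Alpine's community repository
apk add docker

# Alpine uses OpenRC rather than systemd: enable Docker at boot and start it now
rc-update add docker boot
service docker start

# Optionally allow a non-root user to talk to the Docker daemon
# ('myuser' is a placeholder for your actual account)
addgroup myuser docker

# Quick smoke test
docker run --rm hello-world
```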

Some more references are below:

https://www.phoronix.com/review/linux-612-features

https://9to5linux.com/linux-kernel-6-12-officially-released-this-is-whats-new

Creating an NFS share in TrueNAS SCALE

 

 
NFS shares have been around for decades, but they continue to be relevant and used on a daily basis. Whether providing a storage backend for one or more web servers, or feeding data to the latest AI model, NFS remains a go-to file sharing protocol for Linux servers and clusters alike. 

This video explains how to quickly build a share with NFS on TrueNAS SCALE. It covers where to click, what considerations to take into account, and how the POSIX permissions get set and applied - both on the TrueNAS host and the Linux client. 

Hopefully the 10-minute video is self-explanatory; feel free to leave comments if there are any questions.
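As a companion to the video, mounting a finished share from a Debian/Ubuntu client looks roughly like this (the server address and export path below are placeholders for whatever your TrueNAS system actually exports):

```shell
# Install the NFS client tools
sudo apt install nfs-common

# List what the TrueNAS host is exporting
showmount -e 192.168.1.50

# Mount the export onto a local directory
sudo mkdir -p /mnt/nfsshare
sudo mount -t nfs 192.168.1.50:/mnt/tank/share /mnt/nfsshare

# The POSIX permissions set on the TrueNAS side apply here as well
ls -l /mnt/nfsshare
```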

Nextcloud and Ansible

 

This tutorial is really more of a proof of concept: using Ansible and other scripts to more easily deploy Nextcloud from scratch. 

Ansible is an interesting tool for deploying and maintaining server systems. Much like Docker Compose, it uses YAML configuration files - called Playbooks - and these Playbooks can be used to automate actions on the managed nodes in the cluster. 
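As a hedged sketch of what that looks like in practice - the inventory contents and playbook name here are hypothetical, not the ones from the video:

```shell
# Define the managed nodes in a minimal INI-style inventory
cat > inventory.ini <<'EOF'
[webservers]
192.168.1.101
192.168.1.102
EOF

# Check connectivity to every host, then run the Playbook against them
ansible -i inventory.ini webservers -m ping
ansible-playbook -i inventory.ini nextcloud.yml
```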

In the deployment, as mentioned in the video, I do not deploy everything that's needed for Nextcloud. The Ansible Playbook doesn't even download and install Nextcloud. This is deliberate: web nodes may be added, removed, or replaced, so the Nextcloud config and data are better kept on more permanent storage. For this reason I include a reference for setting up NFS on Debian 12 and then mounting it on the web server. 
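A rough sketch of that NFS wiring on Debian 12 - the addresses and paths here are placeholders, and the actual reference linked in the video should be preferred:

```shell
# On the storage host: install the NFS server and export a data directory
sudo apt install nfs-kernel-server
sudo mkdir -p /srv/nextcloud-data
echo '/srv/nextcloud-data 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# On each web node: mount the export where Nextcloud expects its data
sudo apt install nfs-common
sudo mount -t nfs 192.168.1.50:/srv/nextcloud-data /var/www/nextcloud/data
```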

After the initial web server and Nextcloud install are complete, and the config.php file is modified as needed for Redis, Memcached, etc., subsequent updates and changes to the web server nodes can be made. I even noticed while recording that I had missed the memcached package in the YAML, but a quick change and re-run of the Playbook brought it in. 

The source for the Ansible Playbook, as well as single-node scripts for installing Nextcloud from scratch, is available on GitHub. 

https://github.com/JoeMrCoffee/AutomateManualNextcloudInstall/

Printers and Linux


For years I've relied on public printing options for anything that needs printing. Almost all of my data can stay digital, and I've become rather fastidious about managing and maintaining digital data. Living in Taiwan, paper gathers dust, gets dirty, yellows, and can easily become moldy. However, there are times when you just need to print something.

In the past (up until yesterday), if I needed to print something I would simply copy the file onto a USB drive, plug that into whatever kiosk or computer the convenience store or print shop had, and print. For years I read the waiver saying something to the effect of "we may keep an image of what is being printed....." for whatever reason, tapped "Ok", and paid the few cents per page. I've never liked doing it, but I print so rarely that I thought it was the best middle ground.

However, with all this AI mumbo jumbo becoming ubiquitous, I think the margin for error on the part of these "tech companies" is only growing. Microsoft's latest announcement introduces "Windows Recall", and that was the straw that broke my camel's back. The layers of fat that had put up with so much over the years could protect the poor camel NO MORE. This is a ridiculous, unneeded, and potentially dangerous feature that no one is asking for, aside maybe from investors asking Microsoft why it needs to spend billions of dollars on the letters 'A' and 'I'. 

Now, having been almost exclusively a Linux user for 6 years, I'm not too worried about this Windows feature on my own machines. But where would I expose my data to a Windows system? When I print something, because every kiosk and print shop runs Windows. Microsoft claims the data is "private and on device", but that means little when it isn't my device - if a Windows PC is anyone's personal device in the first place. 

More info: https://arstechnica.com/gadgets/2024/05/microsofts-new-recall-feature-will-record-everything-you-do-on-your-pc/

So, with all that said and ranted about, I decided it was time to purchase an actual printer. After exploring some options, I settled on the HP M141W, an entry-level monochrome LaserJet with a scanner. It has network access, but I decided to just connect via USB. The tinfoil hat remains in place.

The packaging listed all the steps, apps, drivers, etc. for every platform and claimed Internet access was required, but on Linux the printer definitely works without ever connecting to the Internet. Simply ensure the packages 'hp-ppd' and 'hpijs-ppds' are installed (at least that is what I tested on PopOS / Ubuntu / Debian). The scanner worked immediately, while the printer needed to be power cycled after the driver packages were installed. 
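For reference, the whole Linux-side setup I ran boils down to a couple of commands (package names as tested on PopOS / Ubuntu / Debian; verify with apt search on other distros):

```shell
# Install the HP PPD files that CUPS uses to drive the printer
sudo apt install hp-ppd hpijs-ppds

# After power cycling the printer, confirm CUPS can see it
lpstat -p -d
```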

Based on my needs, I am hopeful this will last a good while. Printer support in Linux is known these days to be quite robust; thanks to the CUPS driver stack most printers will be supported, but always check before buying. I confirmed that HP, as a policy, provides Linux driver support, and just wanted to share the steps to get the printer up and running.


File Storage and Sharing for Creators in 2024

This article explores the many ways content creators, photographers, videographers, and production companies can leverage open source tools to store, share, and collaborate on their media assets. The right method can depend on the size of the team - from a single individual to a multi-national organization - so we will examine the pluses and minuses of each approach.

A single creator

Creators making and storing large files have a plethora of options available. The simplest is to copy files onto one's workstation or laptop, edit, and publish. This often quickly becomes a problem, particularly for individuals working with 4K or even 8K content. For these editors, the next simplest approach is to use one or more external hard drives. This approach is perfectly viable, but can become an issue as users fill up more and more drives.

Another issue related to scale is performance. A direct-attached drive over USB or USB-C can in theory handle upwards of 10 Gb/s, but larger drives that are still traditional spinning rust (a normal hard drive that spins) will have a max throughput of around 250 MB/s - only a fraction of what the interface can deliver. NVMe SSDs are becoming ever more cost competitive at the 500 GB, 1 TB, and 2 TB sizes, but remain noticeably more expensive than traditional hard drives at higher capacities. And even though external SSDs are performant, they are not redundant - manual backups will be needed, and the SSDs will eventually wear out. Always make sure data is backed up in some form. A growing collection of external drives also becomes unwieldy over time, since the storage per drive cannot be expanded; content creators end up with a pile of carefully labeled drives, with individual projects potentially spread across several of them. Essentially, sprawl.

For production houses and creators, the introduction of 8K footage is another major issue. 8K footage is truly massive, creating upwards of 120 GB of content per minute.* An external hard drive or SSD can get filled within a single shoot. Creators need more storage, and in a different form, to keep up.
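Some quick back-of-the-envelope shell arithmetic, using the roughly 120 GB per minute figure above, shows how little 8K footage common drive sizes can hold:

```shell
# Minutes of 8K footage per drive, assuming ~120 GB of footage per minute
for capacity_gb in 1000 2000 8000; do
    echo "${capacity_gb} GB drive: ~$((capacity_gb / 120)) minutes of 8K footage"
done
```

A 2 TB external drive holds only around 16 minutes - easily a single shoot.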

Upgrading to a NAS

Network Attached Storage (NAS) is, as the name suggests, storage that is accessible over a network. In practice it means users on a subnet (IP range) can access and share files located on one or more servers. In Windows land with Active Directory, this is just the file sharing feature in Windows Explorer. When talking about 'a NAS', IT administrators usually mean a specific server designed with storage in mind that provides drive management, RAID, a file system, and the ability to share files over a file sharing protocol. The most common protocols are SMB (with Samba as its open source implementation), NFS, and AFP. For most in the creator or video production space, SMB will be the primary protocol because it is well supported on Windows, Linux, and macOS. Even iPadOS for iPad devices has some support for SMB in the Files app.

Moving from one or more external drives to a NAS has several benefits. First, most NAS appliances or software projects can create a RAID group that spans multiple drives into a single storage pool. Multiple hard drives or SSDs can thus be grouped into a larger total capacity than any single drive would offer, allowing projects to live together under a single master folder. RAID also improves read and write performance, since the load is spread across more drives and total bandwidth, and it offers some level of redundancy to help keep data available. Another advantage of a NAS is that data can be shared across groups: no direct cables need to be plugged in, and everyone on the network can work off a joint project or folder. A NAS is usually one of the first steps once creators move from a one-man operation to a larger group.
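For example, a Linux workstation joins in with the CIFS/SMB client tools - the server address, share name, and user below are all placeholders:

```shell
# Install SMB/CIFS client support (Debian/Ubuntu package name)
sudo apt install cifs-utils

# Mount the shared project folder from the NAS
sudo mkdir -p /mnt/projects
sudo mount -t cifs //192.168.1.50/projects /mnt/projects \
    -o username=editor,uid=$(id -u),gid=$(id -g)
```

On Windows and macOS the same share simply appears under its network path; no extra tooling is needed.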

CAUTION: RAID is not a backup. A second pool - perhaps larger in capacity but slower in performance - is always recommended as a target to back the data up to.

Choosing or building a NAS

The likes of QNAP, Synology, and Asustor offer entry-level NAS appliances which are good first steps. Typically, though, the entry-level boxes are rather underpowered and very limited in how many drives they can take. For non-technical users an entry-level NAS may make sense, but building one on old or leftover hardware with new drives can often deliver more performance - plus reduce e-waste!

Users interested in building a NAS can look at a variety of open source projects, such as TrueNAS, Open Media Vault, Unraid, and more. Personally, I recommend TrueNAS as it is well supported, has a corporation maintaining the project with Enterprise options for larger organizations, and offers an attractive web GUI for setting up and managing drives, users, and shares. TrueNAS also has a native implementation of the ZFS file system, which is extremely robust with built-in RAID support, copy-on-write operations, unlimited snapshots, and almost unlimited scalability - on the order of 256 quadrillion zettabytes. For perspective, that is similar to buying every hard drive sold in a year and connecting them all together. A ZFS pool can be expanded by adding more RAID groups (called VDEVs), so storage can always grow. ZFS also has a replication function, 'zfs send', which can quickly send snapshots of data to a separate pool on the same or a different host. The second, backup pool can have completely different hardware, a different RAID layout, etc., yet the ZFS file structure can still operate and be recovered, usually in seconds, should the need arise. TrueNAS supports all the major NAS protocols, as well as WebDAV for HTTP/HTTPS transfers, and can expand functionality with 3rd party projects, VMs, Jails (TrueNAS CORE), or containers (TrueNAS SCALE), making the project quite versatile.
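The 'zfs send' replication mentioned above looks roughly like this - pool, dataset, and host names are placeholders:

```shell
# Take a point-in-time snapshot of the working dataset
zfs snapshot tank/projects@nightly

# Replicate it to a backup pool on the same host...
zfs send tank/projects@nightly | zfs recv backup/projects

# ...or to a different host over SSH; later runs can send
# only the changes as an incremental (-i) stream
zfs send -i tank/projects@lastnight tank/projects@nightly | \
    ssh backup-host zfs recv backup/projects
```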

For users interested, more information about getting started with TrueNAS is here.

Cross-site and International Collaboration

In 2020, the world was introduced to lockdowns and disease, and working from home gained unheard-of traction and interest. The old adage "necessity is the mother of invention" was never truer. Knowledge workers, including those in the creative space, were among the first to move to working from home, leading to a major shift in the office paradigm and a boom in laptop sales. File access was suddenly something that needed to be reinvented.

For users connecting remotely, a NAS will often not be the correct choice, or at least not the total solution, for a few reasons. First, remote workers are on a different network, and NAS protocols - the aforementioned SMB, NFS, AFP - are not built for file access over the Internet. Most NAS protocols expect a constant connection to the files and will create file locks for open documents. HTTP/HTTPS, by contrast, was designed to handle gaps and multiple hops - routing between different servers and routers - and is thus the preferred protocol for nearly all Internet-based traffic.

Another important reason not to expose a NAS to the Internet is security. Virtually no NAS provider recommends exposing the system to the Internet, as the appliances are built for back-end storage work over a LAN. Especially with some proprietary systems, there is little to no auditing of the system's firmware and base OS code. Examples abound.**

For multi-site, international collaboration, the most secure and reliable way to access files is via the same medium that gave birth to the Internet - a website. Nextcloud is a total collaboration platform for storing, sharing, and creating documents and files. It includes powerful tooling and apps to track notes, create user and team tasks, manage groups and access, create survey forms, and much more. For creators looking to collaborate with other team members, Nextcloud can even mount a local NAS into the platform, so that users editing video, sound, or image files from the NAS can share their results over secure HTTPS without copying the collateral onto the platform. The platform has robust file versioning, and with customizable logos and an app-based model for enabling different functionality, Nextcloud can be tailored to almost any workflow desired.

Nextcloud is installed as a website and can run on either the Apache or NGINX web server. The project offers several ways to get started - raw source, bespoke VM images, or Docker images. Since the platform is built around web servers, it can adhere to well-established TLS/SSL encryption standards, and additional security can be layered on with load balancers and firewalls.
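As a minimal sketch of the Docker route (port and volume choices are illustrative; for production you would still want HTTPS via a reverse proxy in front):

```shell
# Run the official Nextcloud image, persisting its files to a named volume
docker run -d \
  --name nextcloud \
  -p 8080:80 \
  -v nextcloud_data:/var/www/html \
  nextcloud

# Then finish the setup wizard in a browser at http://localhost:8080
```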

More information on getting started with Nextcloud is here.

Putting it all together

For industries dealing with or creating large files, there are a multitude of ways to store, share, and protect data. For individual users, local storage may be enough, but it will quickly fill up and become hard to manage. Networked file storage in the form of a NAS makes storage and file management easier, and also allows teams of editors to work together more easily. For larger teams, or teams spread across different locations, Nextcloud is a total platform that is both secure and capable, not only for file sharing and storage but also for group collaboration.

Ref:
*8K file sizes https://www.signiant.com/resources/tech-article/file-size-growth-bandwidth-conundrum/
** Synology and WD vulnerabilities: https://www.securityweek.com/western-digital-synology-nas-vulnerabilities-exposed-millions-of-users-files/
** Asustor vulnerabilities: https://www.theverge.com/2022/2/22/22945962/asustor-nas-deadbolt-ransomware-attack
Get TrueNAS: https://www.truenas.com/
Get Nextcloud: https://www.nextcloud.com/

Using Virt-Manager to Create Base VM Images

This topic is something I've known about in theory, but I never realized until a couple of weeks ago how easy Virt-Manager (the KVM VM management GUI in Linux) makes this process. For anyone who needs to test a lot of things in a Linux environment - or any environment really - and doesn't want to use containers, this will hopefully make life a little bit happier. 

I've been creating some self-install scripts for Nextcloud and other software - essentially ways to build up a basic Nextcloud instance from scratch with an easy-to-follow guided flow. While building the scripts, if I hit a snag during testing I would need to tear down the VM and start over. Thankfully, Virt-Manager can create VM clones. Even better, a clone can share the base OS image! 

 

If you uncheck the storage option, the clone will boot from the same base OS image, with a separate diff (overlay) partition created for its changes. This is great for testing. There is a notice that running this way can be dangerous - essentially, if the base OS image ever changes, the clone could get corrupted, I imagine - but for testing installs and different package combos, this is really nice. 

Essentially, this feature lets one install a base OS once: create the user account, add sudo access, etc., a single time. Then shut down the base OS, clone it with shared storage, and start playing. No need to reinstall the OS, no need to worry if something goes sideways. If a test install has issues, just nuke it, make another clone of the good base and start over, or test something else. Really nice!
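The same cloning workflow is also scriptable from the terminal with virt-clone - the VM names here are placeholders, and note that unlike the unchecked-storage GUI option described above, --auto-clone makes a full copy of the disk:

```shell
# Clone a shut-down base VM, copying its disk image
virt-clone --original debian12-base --name debian12-test --auto-clone

# Start the throwaway clone
virsh start debian12-test

# When the experiment is done, nuke it, disk and all
virsh destroy debian12-test
virsh undefine debian12-test --remove-all-storage
```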

Why not just use containers?

Docker, Podman, Kubernetes, etc. are all fantastic tools. However, occasionally having base features like text editors, systemd, cron, etc. is really helpful. Especially if the instance will run multiple services with different dependencies, using a VM makes a lot of sense. 

How to install it? 

I have a longer video from back in 2020 that goes through the concepts, but if you're using a Debian variety of Linux (Debian, Ubuntu, Linux Mint, PopOS, ZorinOS, MX Linux, etc.), just run the command below. For Red Hat distros, swap 'apt' for 'dnf'.

sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager

Then enable and start the libvirtd service:

sudo systemctl enable libvirtd

sudo systemctl start libvirtd

That's all. Virt-Manager will be installed - search for Virtual Machine Manager - and start making VMs!

Using Nextcloud and WebDAV as a backup target

 


This video explores the built-in WebDAV feature of Nextcloud for backing up and syncing files from a local client to a Nextcloud instance. There are a couple of reasons why users or organizations may want to use this feature.

- Nextcloud's own user management makes it very straightforward to separate out different user profiles, authentication, and data quotas. This makes it simple to deploy Nextcloud and automate employee backups on the network, with quotas of, say, 30 GB or 100 GB per user.

- Using the WebDAV protocol can be more efficient than uploading lots of files to the web interface. 

- The Nextcloud desktop client performs the same task and also uses the WebDAV protocol. More information about the client tool is here: https://docs.nextcloud.com/desktop/latest/
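For a concrete example, a single file can be pushed from a shell with curl - the server address, user, and app password are placeholders, while /remote.php/dav/files/USERNAME/ is Nextcloud's documented WebDAV endpoint:

```shell
# Upload a local file into the user's Nextcloud storage over WebDAV
curl -u backupuser:app-password \
    -T ./backup-2024.tar.gz \
    https://cloud.example.com/remote.php/dav/files/backupuser/backup-2024.tar.gz
```

Wrapping a call like this in cron (or pointing a backup tool at the same endpoint) automates the whole flow.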


Object Storage on TrueNAS

 

I wanted to cover the many ways TrueNAS can be used to create and manage object storage. TrueNAS includes Minio S3 object storage out of the box, and the two work very well together. TrueNAS is a scalable, easy way to run and manage ZFS, and Minio is the de facto way to self-host S3 objects. 

Using the standard S3 service in TrueNAS is the quickest way to get set up and running with S3 objects. However, user management is a bit more difficult if one wants to open up remote access to the Minio web console. 

The video proposes a couple of ways to silo off and break down the S3 service, either with a web front end, or simply by using jails to host multiple, separate instances of the S3 object storage. 

Here, we'll look more at the details for each option. 

Create a jail to run Minio:

Go to the Jails section on TrueNAS. Create a new jail, give it a name, and set the network as desired - the video showed with DHCP, but static IP addresses are available as well. 

Once created, start the jail and enter its shell. In the shell, type 

pkg update

Answer yes when prompted.

pkg search minio

pkg install minio-0420....(whatever the current version pkg search returns)

Once installed, make a data directory. It could be anywhere; I chose the /mnt directory of the jail.

mkdir /mnt/miniodata

Start the Minio server with the following

minio server /mnt/miniodata

That will start the service, but if you close the console/terminal the service will also terminate. To make this a bit more robust, we can start the service with cron. 

Type crontab -e

Insert the line below (the default editor is vi: type 'i' to insert, Esc to stop inserting, :w to write, :q to exit)

@reboot minio server /mnt/miniodata --console-address=":9090"

Now the Minio service will start on each boot of the jail, with the console dedicated to port 9090 on the jail's unique IP address.

Create a web server to run a frontend in a jail: 

To build out a LAMP stack (technically FAMP - FreeBSD, Apache, MySQL, PHP) I followed this excellent guide on Digital Ocean. I did not need the database portion, so that was skipped, though I did install the php-mysqli package in case I want it in the future. 

Digital Ocean Guide 

Install steps in the jail terminal/shell. 

pkg update -y

pkg install apache24

sysrc apache24_enable="YES"

service apache24 start

Navigate to the jail IP address and check to see if "It works!" appears.

pkg install php81 php81-mysqli mod-php81 php81-simplexml

The php81-simplexml package doesn't come down with the php81 meta package, and this threw me for a couple of hours because it is needed by the AWS PHP S3 plugin we install. 

Initiate PHP with specific settings:

cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini 

Initiate PHP in Apache (you can use either vi or ee as the text editor):

ee /usr/local/etc/apache24/modules.d/001_mod-php.conf

Copy this in the file:

<IfModule dir_module>
    DirectoryIndex index.php index.html
    <FilesMatch "\.php$">
        SetHandler application/x-httpd-php
    </FilesMatch>
    <FilesMatch "\.phps$">
        SetHandler application/x-httpd-php-source
    </FilesMatch>
</IfModule>

Install PHP Composer for the S3 support.

pkg search composer

pkg install php81-composer 

composer require --working-dir=/usr/local/www/apache24 aws/aws-sdk-php

All of this was to support the POC file I have over on GitHub. If you want to use it, copy it to the jail, or install git in the jail and run a git clone. Put everything from the /work/src/ folder into the /usr/local/www/apache24/data directory in the jail. Also be sure to modify the S3 endpoint, credentials, bucket, and host address to match the jail's IP and relevant credentials. 

I hope this gives a better overview of using object storage on TrueNAS. It is a really flexible feature that can be plugged into a lot of other environments, or even self-hosted on TrueNAS itself using jails.

TrueNAS in 2023


 

I haven't had time to look at TrueNAS in a while. This video is a raw install on a VM to explore the installation and setup of a basic SMB share. I had to scratch my head a bit to remember some of the options for taking shortcuts. 

Some of the shortcuts that I took are well highlighted in the interface. 

  • First of all, I installed on a VM, which is not recommended for production. ZFS is known not to like virtual media, but it is fine for testing and learning the system.
  • I created a stripe pool - I only added one virtual disk, since it doesn't matter on virtual media - but it was nice that the UI forces the user to confirm about 3 times before moving forward. 
    • On a standard hardware installation, at least 3 drives are recommended - a boot drive and 2 data drives - and more drives can be added to help scale capacity and performance.
  • On the SMB share I didn't add a specific user, but rather just opened it up for anyone to modify. Setting all the ACL flags and allowing guest access is obviously not great for security.
    More information about creating a user to password protect the share is here:
    https://youtu.be/UEiwMIG0W9Q?feature=shared
I am planning to create more videos about some of the lesser-used features of TrueNAS, as well as to explore some of the newer features that have continued to improve over the couple of years since I was using TrueNAS regularly.

Choosing a Laptop for Linux

Just sharing some experience I've had using various laptops with the latest Linux builds over the past 5 years.

In 2018 I started using Linux 100% of the time - literally no Windows or macOS (never used the latter in the first place). I had a desktop, which I still have and which still runs Linux, but I needed a laptop for work. 

I bought a then really cheap Acer S1, which had a Pentium or Celeron processor and 4GB of RAM, all soldered on. I thought it was fine since I could just use it for a customer presentation or typing up some things in a coffee shop. After a year my wife wanted a PC, so I gave it to her and bought something new. Eventually she stopped using it - preferring her iPad - so it got sold. It was okay up to about 8 browser tabs, but if I hadn't had a desktop, I would have gone insane using it. 

As much as I love AMD, I've had issues. I had a Lenovo Ideapad for over a year and it was normally great. I started with Linux Mint XFCE and really liked it. Then a kernel update broke it - the screen was rendered completely unusable going from kernel 5.0 to 5.3. After reporting the issue and booting to the old kernel each time, I got frustrated and abandoned ship for Fedora. That was great too, but after about a year it started freezing randomly. After 6 weeks of this, I was writing an important email to a customer complaining about something, and it froze. The laptop and the laptop stand suffered severe dents (punching laptops is not good for their health), and eventually I realized the second RAM slot (the only removable, upgradeable RAM slot) was rendered unusable - leaving only the 4GB of onboard RAM. 

That was donated to a school later....

I tried the Microsoft Surface Laptop Go for a while and it worked pretty well. The battery life is poor and the heat management is bad on Linux - likely worse on Windows. I got a good deal so I moved to it, but performance was always hampered by heat issues. It has since turned into a Windows 11 PC for my wife, who claims she needs it, though I've never noticed her using it.

A few months ago I bought a used Thinkpad X280 with an 8th gen Core i7, 16GB of RAM, and a removable M.2 SSD. The screen is where I messed up - it's a poor 1366x768 panel - but I normally connect to a larger 1080P monitor. This is the best laptop I've had: build quality, keyboard, and just plain reliability. It runs great with PopOS, and Pop will even surface firmware updates for the system and install them with a reboot from the UI <-- something I never thought was possible in Linux. It cost less than USD300, and while it's maybe not the latest and greatest, it makes up for that with reliability, good build quality, and strong Linux support.

Lessons learned: older hardware is where Linux support is most mature, and it is the better choice. Intel seems to be the better option long term - though this is not an absolute truth. Focus more on build quality than specs - unless you are building kernels, build quality will be more worthwhile than the exact Geekbench score for most use cases. Also consider heat management - make sure the model has vents, etc. Finally, remember that Lenovo Ideapads are not Thinkpads <-- build quality, and general use by the Linux community (IBM invented Thinkpads and owns RedHat.....).