What are we doing here?

This blog includes a series of videos and references to help new users and enthusiasts better understand how to use free and open source technology tools. The quick links include more information for accessing many of the tools covered, along with further references for getting the most out of them.

Click HERE to see the full list of topics covered!

Open Source Economic Impact

This is a bit of an op-ed, but given recent events, I thought it would be good to think through the pluses and minuses of open source - particularly software, though much of it applies to other disciplines as well.

I am largely convinced - both from commercial experience and personally - that open source is a net benefit at large, but I wanted to take a more quasi-academic approach and research the issue. As someone with History and Economics degrees, I still value providing my sources, so I have put them in the endnotes.

With a lot of the world economies on edge due to recent uncertainty, I thought it would be interesting to think through the economic impact of open source tools and services.

Open Source Advantages

Open source software and its proliferation is one of the greatest drivers of technical innovation and value, and it has helped any number of industries and individuals. The delivery model has numerous advantages, from the obvious - freely available, auditable, and modifiable by developers - to less obvious ones, such as helping bring along newer and younger developers.

Where Is Open Source Employed?

Open source software has for years been the unknown, unseen hero behind any number of device manufacturers. Sony, Nintendo, NetApp, EMC, and any number of router vendors rely on open source projects, such as FreeBSD, to help develop their bespoke appliances. The Top500 list of the world's largest and fastest supercomputers has in recent years run entirely on open source software, primarily Linux along with its networking drivers and lower-level kernel components.

The advantage of open source for all of these companies is immense, particularly at the low level. First, these organizations do not have to "reinvent the wheel": the code is available and already works to enable, say, the WiFi or CPU technology. Second, much of the lower-level technology stack, while absolutely critical, is often very unprofitable to maintain. Imagine the kind of work that goes into tracking down memory leaks or other low-level issues, versus what marketers actually sell: "cloud, cloud, automation, AI, AI". If hardware vendors push and maintain code in an open source community, the code not only becomes more available, it also gets tested by a wider community that essentially contributes additional free man-hours. Intel's Joe Curley summed it up: "It’s not a charity — this work is in everybody’s best interest. If we build innovative hardware, such as adding additional cores to a CPU, it delivers more value and causes people to consume more of our products." ^1

The Snowball Effect

In 2024, researchers at Harvard Business School released a working paper attempting to estimate the value of open source projects. The paper relies on a number of assumptions and mathematical experiments to arrive at a figure.

"The thought experiment is that we live in a world where OSS does not exist and has to be recreated at each firm that uses a given piece of OSS. Using the labor market approach, we calculate the labor replacement cost of each OSS package. To estimate the value for each package, we use COCOMO II (Boehm, 1984; Boehm et al., 2009) at the individual package level and then sum across all package values to obtain a supply-side labor market replacement value. Then, we scale the supply-side value by the number of times firms are using each package while removing multi-usage within each firm to obtain a demand-side value." ^2

What is interesting about the outcome is the snowballing of value relative to the level of effort. Ignoring the specific numbers, which again rest on complex calculations and specific assumptions, the total estimated work to recreate all the commits and contributions in the GitHub projects reviewed was in the billions of USD. The value based on where the software was employed, however, was in the trillions - roughly a 1000x explosion from the work required to the value the software provides.

The snowball effect in value, while impressive, is by no means hard to comprehend. Exposing code and knowledge freely for other community members to then take and build upon helps propagate knowledge, and that promotes further innovation.

Open Source and Business

Given the immense, inherent value of open source, it is in some ways surprising how difficult it can be to sell or profit from open source software. Practically, however, the dilemma is easy to understand: "Why pay for something that is free?" The most common answer is the promise of support, but the real answer is sustainability. Any enterprise relying on a fundamental technology to run its business will want the project to continue, and supporting the project helps sustain it. As cybersecurity becomes ever more critical, it is in everyone's best interest to support and sustain projects so that the latest patches, best practices, and tooling continue to develop.

Organizations support open source projects in roughly three major ways: purchasing commercial licenses, contributing developer resources, and making donations. Donations are almost always welcome if the project is set up to receive them, though they are hard to rely on. Contributed developer resources are also vital, but they are often made in pursuit of the contributing organization's own agenda - for example, Intel contributing its compilers and drivers to the Linux kernel. The commercial license model is understandably preferred by the developers of open source software, since it gives the vendor more control over budget and the terms of the agreement. It also grants the purchasing customer a stake in the project, with guaranteed support from the vendor. For purveyors of open source software, unless the vendor is selling hardware that runs the software, having a business-level engagement via a license makes a lot of sense.

Having worked in companies that sell open source software, I can say the reality is not so straightforward. When trying to actually sell, it can often seem that "we're our own number 1 competitor", and certain customers simply cannot be convinced of a sale. However, having also worked in companies that are not open source, I have seen real benefits. Open source vendors, if their project is popular and well respected, are almost guaranteed sales leads and interest. This organic interest and trust can propel smaller organizations to international exposure at a level that closed-source providers of similar size simply cannot match without significant investment in marketing, outreach, and other areas.

Knowledge Drives Economic Growth

In economic theory, one of the most prominent growth theories is the Solow-Swan model, which holds that, beyond capital and labor, the key factor in long-term growth is knowledge or technology. The technology could be actual tech, like the adoption of computers and the Internet, or better legal policies, or just general manufacturing and production know-how. The model has a lot of empirical support, particularly when looking at developing countries.^3
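
As a quick sketch of where knowledge enters, the standard textbook form of the model's production function (with labor-augmenting technology) is:

Y = K^α · (A·L)^(1-α)

Here Y is output, K is capital, L is labor, and A is the level of technology or knowledge; with α between 0 and 1, sustained growth in output per worker ultimately has to come from growth in A.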

No paradigm better enables the spread of technology and knowledge than open source. By definition it removes barriers to the dissemination of knowledge, making information, including source code, freely available for adoption, and has become a bedrock for growth and innovation.

TLDR

Open source technology injects a huge amount of value and economic benefit at both the micro and macro levels. Businesses large and small can benefit, whether it's a multi-trillion dollar multi-national leveraging open source tooling, community hosting, and testing, or a small- to mid-size company looking for more exposure; open source drives down the cost of both tooling and reach. From an even broader perspective, given the value of community contribution - a 1000x multiplier on the effort - countries and regions should be well incentivized to facilitate and support open source projects as a driver of broader economic growth.


----------------

1. Arun Gupta and Joe Curley, "How Intel Supports Open Source from the Inside Out", https://www.intel.com/content/www/us/en/developer/articles/community/how-intel-supports-open-source-from-the-inside-out.html

2. Manuel Hoffmann, Frank Nagle, and Yanuo Zhou, "The Value of Open Source Software", Harvard Business School Working Paper 24-038 (2024), https://www.hbs.edu/ris/Publication%20Files/24-038_51f8444f-502c-4139-8bf2-56eb4b65c58a.pdf

3. Abesalom Webb, "Revisiting the Solow Growth Model: A Theoretical Examination of Technological Progress in Developing Economies", SvedbergOpen, https://www.svedbergopen.com/files/1720763035_(2)_IJMRE11042024PR37_(p_18-24).pdf


---------------

Additional References:

- https://en.wikipedia.org/wiki/Open-source_economics

- http://www.congo-education.net/wealth-of-networks/ch-03.htm#3-1

- https://opensource.com/article/18/9/awesome-economics-open-source

- https://www.technologyreview.com/2022/04/21/1050788/the-changing-economics-of-open-source/

- https://www.library.hbs.edu/working-knowledge/the-simple-economics-of-open-source

- https://www.sciencedirect.com/science/article/pii/S0164121221002442

- https://interoperable-europe.ec.europa.eu/collection/open-source-observatory-osor/news/first-results-study-impact-open-source

- https://en.wikipedia.org/wiki/PlayStation_4_system_software

- https://www.linuxfoundation.org/research/open-source-funding-2024?hsLang=en

- https://en.wikipedia.org/wiki/Solow%E2%80%93Swan_model

Using Alpine as a Docker host

Who knew Alpine was a really cool Docker host? 

Alpine Linux is a really interesting, lightweight distribution of Linux whose minimal footprint belies the power packed into the distro. Not only is Docker easy to install on the platform, there are a lot of nice animations and quality-of-life features that make what is normally the distro for building containers a great distro for hosting those containers as well.
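
For anyone curious, getting Docker onto a fresh Alpine install is only a few commands. A minimal sketch, assuming the community repository is enabled in /etc/apk/repositories and the commands are run as root (or via doas):

apk add docker

rc-update add docker default    # start the Docker daemon on boot (OpenRC)

service docker start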

The latest version of Alpine ships Linux kernel 6.12, which has a ton of nice improvements around the scheduler and elsewhere. While most of those features may not be used directly by a Docker host, hosted containers share the underlying host OS kernel (a big difference between containers and VMs), so application performance could potentially benefit as well.
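
A quick way to see that kernel sharing in action: running uname inside any container reports the host's kernel rather than a separate guest kernel.

docker run --rm alpine uname -r    # prints the host's kernel version, e.g. 6.12.x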

Some more references are below:

https://www.phoronix.com/review/linux-612-features

https://9to5linux.com/linux-kernel-6-12-officially-released-this-is-whats-new

Creating an NFS share in TrueNAS SCALE

NFS shares have been around for decades, but they remain relevant and in daily use. Whether providing the storage backend for one or more web servers, or building the latest AI model, NFS continues to be a go-to file sharing protocol for Linux servers and clusters alike.

This video explains how to quickly build an NFS share on TrueNAS SCALE. It covers where to click, what considerations to take into account, and how the POSIX permissions get set and applied - both on the TrueNAS host and on the Linux client.
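
As a rough sketch of the client side - the server name and dataset path here are only illustrative - mounting the finished share from a Debian or Ubuntu machine looks something like:

sudo apt install nfs-common

sudo mkdir -p /mnt/share

sudo mount -t nfs truenas.local:/mnt/tank/share /mnt/share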

Hopefully the 10-minute video is self-explanatory; feel free to leave comments if there are any questions.

Nextcloud and Ansible

This tutorial is really more of a proof of concept of using Ansible and other scripts to more easily deploy Nextcloud from scratch. 

Ansible is an interesting tool for deploying and maintaining server systems. Much like Docker Compose, it uses YAML configuration files, called Playbooks, and these Playbooks can be used to automate actions on the managed nodes in a cluster.
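
As a minimal sketch of the workflow (the inventory and Playbook file names here are hypothetical), running a Playbook against a group of managed nodes looks like:

ansible webservers -i inventory.ini -m ping          # verify the managed nodes respond

ansible-playbook -i inventory.ini nextcloud.yml      # apply the Playbook to them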

In the deployment, as mentioned in the video, I do not deploy everything that's needed for Nextcloud. The Ansible Playbook doesn't even download and install Nextcloud. This is deliberate: nodes may be added, taken away, or replaced, so the Nextcloud config should probably reside in more permanent storage. For this reason I offer a reference for setting up NFS on Debian 12 and then mounting it on the web server.

After the initial web server and Nextcloud install are complete, and the config.php file is modified as needed for Redis, Memcache, etc., subsequent updates and changes to web server nodes can be made. I even noticed, while recording, that I had missed the memcached package in the YAML, but a quick change and a re-run of the configuration brought it in.

The source for the Ansible Playbook, as well as single-node scripts for installing Nextcloud from scratch, is available on GitHub.

https://github.com/JoeMrCoffee/AutomateManualNextcloudInstall/

Printers and Linux

For years I've used public printing options to print anything that needs printing. Almost all of my data can stay digital, and I've become rather fastidious about managing and maintaining digital data. Living in Taiwan, paper gathers dust, gets dirty, yellows, and can easily become moldy. However, there are times that you just need to print something.

In the past (up until yesterday), if I needed to print something I would simply copy the file onto my USB drive, plug it into whatever kiosk or computer the convenience store or print shop had, and print. For years I read the waiver saying something to the effect of "we may keep an image of what is being printed....." for whatever reason, tapped "Ok", and paid the few cents per page. I've never liked doing it, but I print so rarely that it seemed the best middle ground.

However, with all this AI mumbo jumbo becoming ubiquitous, I think the margin for error on the part of these "tech companies" is only growing. Microsoft's latest announcement introduces "Windows Recall", and that was the straw that broke my camel's back. The layers of fat that had put up with so much over the years could protect the poor camel NO MORE. This is a ridiculous, unneeded, and potentially dangerous feature that no one asked for, aside maybe from investors asking Microsoft why it needs to spend billions of dollars on the letters 'A' and 'I'.

Now, having been an almost exclusively Linux user for 6 years, I'm not too worried about this Windows feature. However, where would I expose my data on a Windows system? When I print something, because everyone uses Windows at the kiosk, at the print shop, etc. Microsoft claims the data is "private and on device", but not if it isn't my device - if a Windows PC is anyone's personal device in the first place.

More info: https://arstechnica.com/gadgets/2024/05/microsofts-new-recall-feature-will-record-everything-you-do-on-your-pc/

So, with all that said and ranted about, I decided it was time to purchase an actual printer. After exploring some options, I settled on the HP M141w, an entry-level black-and-white LaserJet with a scanner. It has network access, but I decided to just connect via USB. The tinfoil hat remains in place.

The packaging listed all the steps, apps, drivers, etc. for every platform and claimed to require Internet access, but on Linux the printer definitely works without ever connecting to the Internet. Simply ensure the packages 'hp-ppd' and 'hpijs-ppds' are installed (at least that is what I tested on PopOS / Ubuntu / Debian). The scanner worked immediately, while the printer needed to be power cycled after the driver packages were installed.
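
In other words, on a Debian-based system the whole "driver install" boils down to one line:

sudo apt install hp-ppd hpijs-ppds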

Based on my needs, I am hopeful this will last a good while. Printer support in Linux is known these days to be quite robust; thanks to the CUPS printing stack most printers will be supported, but always check. I confirmed that HP, as a policy, provides Linux driver support for its printers, and just wanted to share the steps to get one up and running.


File Storage and Sharing for Creators in 2024

This article explores the many ways content creators, photographers, videographers, and production companies can leverage open source tools to store, share, and collaborate on their media assets. The right method depends on the size of the team - from a single individual to multi-national teams - and the article examines the pluses and minuses of each approach.

A single creator

Creators making and storing large files have a plethora of options available. The simplest is just to copy files onto one's workstation or laptop, edit, and publish. This often becomes a problem quickly, particularly for individuals working with 4K or even 8K content. For these editors, the next simplest approach is to use one or more external hard drives. This approach is perfectly viable, but can become an issue as users fill up more and more drives.

Another issue related to scale is performance. A direct-attached drive over USB or USB-C can in theory handle upwards of 10 Gb/s, but larger drives that are still traditional spinning rust (a normal hard drive that spins) top out at around 250 MB/s - roughly 2 Gb/s, or a fifth of what the interface can carry. NVMe SSDs are available and becoming ever more cost competitive at the 500 GB, 1 TB, and 2 TB sizes, but are noticeably more expensive than traditional hard drives at higher capacities. And even though external SSDs are performant, they are not redundant - manual backups will still be needed, and the SSDs will eventually wear out. Always make sure data is backed up in some form. External drives also become unwieldy over time, since the storage per drive cannot be expanded; content creators often end up with a pile of carefully labeled drives, with individual projects potentially spread across several of them. Essentially, sprawl.

The introduction of 8K footage is another major issue for production houses and creators. 8K footage is truly massive, creating upwards of 120 GB of content per minute.* An external hard drive or SSD can fill up within a single shoot. Creators need more storage, and in a different format, to keep up.

Upgrading to a NAS

Network Attached Storage (NAS) is, as the name suggests, storage that is accessible over a network. In practice it means users on a subnet (IP range) can access and share files located on one or more servers. In Windows land with Active Directory, this is just the file sharing feature in Windows Explorer. When IT administrators talk about 'a NAS', they usually mean a server designed specifically for storage, with drive management, RAID, a file system, and the ability to share files over a file sharing protocol. The most common protocols are SMB (with Samba as the open source implementation), NFS, and AFP. For most in the creator or video production space, SMB will be the primary protocol because it is well supported on Windows, Linux, and macOS. Even iPadOS has some SMB support in the Files app.
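
As an example, mounting an SMB share from a Linux client takes a couple of commands - the server, share, and user names here are illustrative:

sudo apt install cifs-utils

sudo mount -t cifs //nas.local/projects /mnt/projects -o username=editor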

Moving from one or more external drives to a NAS has several benefits. First, most NAS appliances or software projects can create a RAID group spanning multiple drives in a single storage pool. This lets multiple hard drives or SSDs be grouped into a larger total capacity than any single drive offers, allowing projects to be kept together under a single master folder. RAID also allows greater read and write performance, since more drives means more total bandwidth, and it offers some level of redundancy to help keep data available. Another advantage of a NAS is that data can be shared across groups: no direct cables need to be plugged in, and everyone on the network can work off a joint project or folder. A NAS is usually one of the first steps once creators move from a one-person operation to a larger group.
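
As a taste of what pooling looks like with ZFS (covered more below), four drives can be grouped into a single redundant pool with one command - the device names are illustrative:

zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd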

CAUTION: RAID is not a backup. A second pool - perhaps larger in capacity but slower in performance - is always recommended as a target to back the data up to.

Choosing or building a NAS

The likes of QNAP, Synology, and Asustor offer entry-level NAS appliances, which are good first steps. Typically, though, the entry-level boxes are rather underpowered and very limited in how many drives they can take. For non-technical users an entry-level NAS may make sense, but building one on old or leftover hardware with new drives can often deliver more performance - plus reduce e-waste!

Users interested in building a NAS can look at a variety of dedicated NAS operating systems, such as TrueNAS, OpenMediaVault, Unraid, and more. Personally, I recommend TrueNAS: it is well supported, has a corporation maintaining the project with Enterprise options for larger organizations, and offers an attractive web GUI for setting up and managing the drives and creating users and shares.

TrueNAS also has a native implementation of the ZFS file system, which is extremely robust, with built-in RAID support, copy-on-write operations, effectively unlimited snapshots, and almost unlimited scalability - 256 quadrillion zettabytes. For perspective, that is akin to buying every hard drive sold in a year and connecting them all together. ZFS pools can always be expanded by adding more RAID groups (called VDEVs). ZFS also has a replication function, 'zfs send', which can quickly send snapshots of data to a separate pool on the same or a different host. The second, backup pool can have completely different hardware, a different RAID layout, etc., but the ZFS file structure still operates and can usually be recovered in seconds should the need arise.

TrueNAS supports all the major NAS protocols, as well as WebDAV for HTTP/HTTPS transfers, and can extend its functionality with 3rd party projects, VMs, Jails (TrueNAS CORE), or containers (TrueNAS SCALE), making the project quite versatile.
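
The 'zfs send' workflow is pleasantly terse. A minimal sketch, with pool, dataset, and host names that are purely illustrative:

zfs snapshot tank/projects@nightly

zfs send tank/projects@nightly | ssh backup-host zfs receive backup/projects

zfs send -i tank/projects@nightly tank/projects@nightly2 | ssh backup-host zfs receive backup/projects    # later runs send only the delta between snapshots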

For users interested, more information about getting started with TrueNAS is here.

Cross-site and International Collaboration

In 2020, the world was introduced to lockdowns and disease, and working from home gained unheard-of traction and interest. The old adage "necessity is the mother of invention" was never truer. Knowledge workers, including in the creative space, were among the first to move to working from home, leading to a major shift in the office paradigm and a boom in laptop sales. File access was suddenly something that needed to be reinvented.

For users connecting remotely, a NAS will often not be the correct choice, or at least not the total solution, for a few reasons. First, remote workers are remote, on a different network. NAS protocols - the aforementioned SMB, NFS, AFP - are not built for file access over the Internet. Most NAS protocols expect a constant connection to the files and will create file locks for open documents. HTTP/HTTPS traffic, by contrast, was designed to handle gaps and multiple hops - routing between different servers and routers - when accessing files, and is thus the preferred protocol for nearly all Internet-based traffic.

Another important reason not to expose a NAS to the Internet is security. Virtually no NAS provider recommends exposing the system to the Internet, as the appliances are built for back-end storage work over a LAN. Especially with some proprietary systems, there is little to no auditing of the system's firmware and base OS code. Examples abound.**

For multi-site, international collaboration, the most secure and reliable way to access files is via the same medium that gave birth to the Internet - a website. Nextcloud is a total collaboration platform for storing, sharing, and creating documents and files. It includes powerful tooling and apps to track notes, create user and team tasks, manage groups and access, create survey forms, and much more. For creators looking to collaborate with other team members, Nextcloud can even mount a local NAS into the platform, so that users editing video, sound, or image files on the NAS can share their results via Nextcloud over secure HTTPS without copying the collateral onto the platform. The platform has robust file versioning, and with customizable logos and an app-based model for enabling different functionality, Nextcloud can be tailored to almost any workflow.

Nextcloud is installed as a website and can run on either the Apache or NGINX web servers. The project offers several ways to get started - raw source, bespoke VM images, or Docker images. Since the platform is built around web servers, it can adhere to well-established TLS/SSL encryption standards, and additional security can be layered on with load balancers and firewalls.
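
As an example of the Docker route, a bare-bones test instance is a single command with the official image (the port mapping and volume name are just for illustration, and a real deployment needs HTTPS in front):

docker run -d --name nextcloud -p 8080:80 -v nextcloud_data:/var/www/html nextcloud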

More information on getting started with Nextcloud is here.

Putting it all together

For industries dealing with or creating large files, there are a multitude of ways to store, share, and protect data. For individual users, local storage may be enough, but it will quickly fill up and become hard to manage. Networked file storage in the form of a NAS makes storage and file management easier and also allows teams of editors to work together more easily. Growing larger still, or for teams spread across different locations, Nextcloud is a total platform that is both secure and capable, not only for file sharing and storage but also for group collaboration.

Ref:
*8K file sizes https://www.signiant.com/resources/tech-article/file-size-growth-bandwidth-conundrum/
** Synology and WD vulnerabilities: https://www.securityweek.com/western-digital-synology-nas-vulnerabilities-exposed-millions-of-users-files/
** Asustor vulnerabilities: https://www.theverge.com/2022/2/22/22945962/asustor-nas-deadbolt-ransomware-attack
Get TrueNAS: https://www.truenas.com/
Get Nextcloud: https://www.nextcloud.com/

Using Virt-Manager to Create Base VM Images

This topic is something I've known about in theory, but never realized until a couple of weeks ago how easy Virt-Manager (the KVM VM management GUI in Linux) makes the process. For anyone needing to test a lot of things in a Linux environment - or any environment really - who doesn't want to use containers, this will hopefully make your life a little happier.

So I've been creating some self-install scripts for Nextcloud and other software - essentially ways to build up a basic Nextcloud instance from scratch with an easy-to-follow guided flow. While building the scripts, if I hit a snag during testing I would need to tear down the VM and start over. Thankfully, Virt-Manager can create VM clones. Even better, a clone can share the base OS image!

If you uncheck the storage option when cloning, the new VM will boot from the base OS image and a diff partition will be created for its changes. This feature is great for testing. There is a notice that running this way can be dangerous - essentially, if the base OS image ever changes, the clone could get corrupted, I imagine - but for testing installs and different package combinations, it is really nice.
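
Under the hood this is the same idea as a qcow2 overlay image: a small diff file that records only the changes against a read-only base. A rough hand-rolled equivalent with qemu-img (file names are illustrative):

qemu-img create -f qcow2 -b debian12-base.qcow2 -F qcow2 test-clone.qcow2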

Essentially, this feature lets one install a base OS once: create the user account, add sudo access, etc., just once. Then shut down the base OS, clone it with shared storage, and start playing. No need to reinstall the OS, and no need to worry if something goes sideways. If a test install has issues, just nuke it, make another clone from the good base, and start over or test something else. Really nice!

Why not just use containers?

Docker, Podman, Kubernetes, etc. are all fantastic tools. However, occasionally having base feature sets like text editors, systemd, cron, etc. is really helpful. Especially if the instance will be running multiple services with different dependencies, using a VM makes a lot of sense.

How to install it? 

I have a longer video from back in 2020 that goes through the concepts, but if using a Debian variety of Linux (Debian, Ubuntu, LinuxMint, PopOS, ZorinOS, MX Linux, etc.), just run the command below. For Red Hat distros, swap 'apt' for 'dnf' (package names may differ slightly).

sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager

Then enable and start the libvirtd service:

sudo systemctl enable libvirtd

sudo systemctl start libvirtd

That's all. Virt-Manager will be installed - search for Virtual Machine Manager - and start making VMs!

Using Nextcloud and WebDAV as a backup target

This video explores the built-in WebDAV feature of Nextcloud for backing up and syncing files from a local client to a Nextcloud instance. There are a couple of reasons why users or organizations may want to use this feature.

- Nextcloud's own user management makes it very straightforward to separate out different user profiles, authentication, and data quotas. This means it's very simple to deploy Nextcloud and automate employee data backups on the network with quotas of, say, 30 GB or 100 GB per user.

- Using the WebDAV protocol can be more efficient than uploading lots of files to the web interface. 

- The Nextcloud desktop client can also perform the same task, and it likewise uses the WebDAV protocol. More information about the client tool is here: https://docs.nextcloud.com/desktop/latest/
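
As a minimal sketch of the raw WebDAV route - the server, user, and paths are illustrative, and an app password is recommended - a single file can be pushed to a Nextcloud instance with nothing but curl:

curl -u alice:app-password -T backup.tar.gz https://cloud.example.com/remote.php/dav/files/alice/backups/backup.tar.gz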


Object Storage on TrueNAS

I wanted to cover the many ways TrueNAS can be used to create and manage object storage. TrueNAS includes Minio S3 object storage out of the box, and the two work very well together. TrueNAS is a scalable, easy way to run and manage ZFS, and Minio is the de facto way to self-host S3 objects.

Using the standard S3 service in TrueNAS is the quickest way to get set up and running with S3 objects. However, user management - if one wanted to open up remote access to the Minio web console - is a bit more difficult.

What the video proposes is a couple of ways to silo off and break down the S3 service, either with a web front end or simply by using jails to host multiple, separate instances of the S3 object storage.

Here, we'll look more at the details for each option. 

Create a jail to run Minio:

Go to the Jails section in TrueNAS. Create a new jail, give it a name, and set the network as desired - the video used DHCP, but static IP addresses are available as well.

Once created, start the jail and enter its shell. In the shell, type:

pkg update

Accept yes to install.

pkg search minio

pkg install minio-0420....(whatever current version is provided)

Once installed, make a directory. It could be anywhere; I chose the /mnt directory of the jail.

mkdir /mnt/miniodata

Start the Minio server with the following:

minio server /mnt/miniodata

That will start the service, but if you close the console/terminal screen the service will also terminate. To make this a bit more robust we can run the service with cron. 

Type crontab -e

Insert the following (the default editor is vi: type 'i' to insert, Esc to stop inserting, :w to write, :q to exit)

@reboot minio server /mnt/miniodata --console-address=":9090"

Now the Minio service will start on each boot of the jail, with the console running on port 9090 of the jail's unique IP address.
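
From a client machine with the MinIO client ('mc') installed, you can then point at the jail and create a bucket - the IP address, port, and credentials below are illustrative (the S3 API defaults to port 9000, separate from the 9090 console):

mc alias set jailminio http://192.168.1.50:9000 ACCESS_KEY SECRET_KEY

mc mb jailminio/media

mc cp video.mp4 jailminio/media/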

Create a web server to run a frontend in a jail: 

To build out a LAMP stack (technically a FAMP - FreeBSD, Apache, MySQL, PHP) I followed an excellent guide on DigitalOcean. I did not need the database portion, so that was skipped, though I did install the php-mysqli packages just in case I want them in the future.

Digital Ocean Guide 

Install steps in the jail terminal/shell. 

pkg update -y

pkg install apache24

sysrc apache24_enable="YES"

service apache24 start

Navigate to the jail IP address and check to see if "It works!" appears.

pkg install php81 php81-mysqli mod-php81 php81-simplexml

The php81-simplexml package doesn't come down with the php81 meta package, and this threw me for a couple of hours because it is needed by the AWS PHP S3 SDK we install below.

Initialize PHP with the production settings:

cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini 

Enable PHP in Apache (you can use either vi or ee as the text editor):

ee /usr/local/etc/apache24/modules.d/001_mod-php.conf

Copy this in the file:

<IfModule dir_module>
    DirectoryIndex index.php index.html
    <FilesMatch "\.php$">
        SetHandler application/x-httpd-php
    </FilesMatch>
    <FilesMatch "\.phps$">
        SetHandler application/x-httpd-php-source
    </FilesMatch>
</IfModule>

Install PHP Composer for the S3 support.

pkg search composer

pkg install php81-composer 

composer require --working-dir=/usr/local/www/apache24 aws/aws-sdk-php

Now, all of this was to support the POC file I have over on GitHub. If you want to use it, copy it to the jail, or install git in the jail and run a git clone. Put everything from the /work/src/ folder into the /usr/local/www/apache24/data directory in the jail. Also be sure to modify the S3 endpoint, credentials, bucket, and host address to match the jail's IP and the relevant credentials.

I hope this gives a better overview of using object storage on TrueNAS. It is a really flexible feature that can be plugged into a lot of other environments, or even self-hosted on TrueNAS itself using jails.

TrueNAS in 2023

I haven't had time to look at TrueNAS in a while. This video is a raw install on a VM to explore the installation and the setup of a basic SMB share. I had to rack my brain a bit to remember some of the options for taking shortcuts.

Some of the shortcuts that I took are well highlighted in the interface. 

  • First of all, I installed on a VM, which is not recommended for production. ZFS famously doesn't like virtual media, but it is fine for testing and learning the system.
  • I created a stripe pool with only one virtual disk, since redundancy doesn't matter on virtual media - and it was nice that the UI forces the user to confirm about 3 times before moving forward.
    • On a standard hardware installation, plan for at least 3 drives - one boot device and 2 data drives - and more drives can be added to help scale capacity and performance.
  • On the SMB share I didn't add a specific user, but rather just opened it up for anyone to modify. Opening all the ACL flags and allowing guest access is obviously not great for security.
    More information about creating a user to password protect the share is here:
    https://youtu.be/UEiwMIG0W9Q?feature=shared
I am planning to create more videos about some of the lesser-used features of TrueNAS, as well as explore some of the newer features that have continued to improve over the couple of years since I was using TrueNAS regularly.