What are we doing here?

This blog includes a series of videos and references to help new users and enthusiasts better understand how to use open source and free technology tools. The quick links include more information for accessing many of the tools covered, along with other references to help you learn more and take advantage of these tools.

Click HERE to see the full list of topics covered!

Nextcloud in Docker!

So this is something I've been debating for a while - stick with a pure SMB share or try setting up Nextcloud.

Nextcloud is essentially a local, open source, on-prem Dropbox with an ever-growing feature set. It trumps SMB (a local Windows or Linux Samba file share) in many regards, with things like apps for iOS / Android, calendar sync, email sync, and more.

It's far more complicated to set up than SMB or a generic NAS share in many respects, but that honestly makes it easier for the end user - simply go to the local site or IP on the LAN, enter credentials, and begin using it.

I set this up using Docker and I wanted to share the experience. 

The official Docker Hub information is a fine starting point - essentially you can test something basic with a single command.

$ docker run -d -p 8080:80 nextcloud

After running that (you will likely need sudo to grant permissions when running in Linux), you can just log in using your browser at <server / computer IP>:8080.

That works, but isn't ideal. Nextcloud will immediately tell you to change the database from the default SQLite to another - better, or more comprehensive - database.

For regular use there is the docker-compose suggestion on the Docker Hub Nextcloud page. I followed and tweaked that. It works really well, but there are some things to watch for.

Modified docker-compose

    *Update* This is also now on GitHub: NextCloudDockerCompose

My version of the docker-compose file changes the following: 

  • Updates the docker-compose version to 3,
  • Adjusts the way persistent volumes are defined, pointing to a local file folder instead of the generic persistent volume in the reference,
  • Changes the restart preference to 'unless-stopped' so that if you do need to stop a container it won't keep restarting (it will still restart on a device/VM reboot),
  • Adds names to both the MariaDB and Nextcloud containers. That last bit makes it easier to manage things when you want to restart or interact with a container.
Save the file in a folder as docker-compose.yml, adjust the passwords and directories/folders to your needs, then run docker-compose up -d and it should come up. A rough sketch of the file is shown below.
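
Since the exact file is in the GitHub link above, here is only a rough sketch of what mine looks like - the passwords and the folder paths in <> brackets are placeholders, and the app container name is just an example (the database container name matches the nxtclouddb name used in the commands further down):

version: '3'

services:
  db:
    image: mariadb
    container_name: nxtclouddb
    restart: unless-stopped
    volumes:
      - <path to your db folder>:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=<your root password>
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=<your db password>

  app:
    image: nextcloud
    container_name: nxtcloud
    restart: unless-stopped
    ports:
      - 8080:80
    depends_on:
      - db
    volumes:
      - <path to your data folder>:/var/www/html

When the Nextcloud setup page asks for a database host, the service name db works, since both containers share the network docker-compose creates.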

When I initially started this it worked well, but on first login I ran into an issue with the way MariaDB was handling the input from Nextcloud. This might just be my luck, or the age of the Nextcloud Docker post. I found this workaround, which solved the issue:

Wayne's garden - essentially, running the command below inside the MariaDB (MySQL-compatible) database solves the issue.

Enter command line in the MariaDB container:

sudo docker exec -it nxtclouddb bash

Then enter the MySQL instance (MariaDB still uses MySQL commands)

mysql -u root -p

Enter the MySQL root password set in the docker-compose file.

Then run this in the MariaDB server:

SET GLOBAL innodb_read_only_compressed=OFF;

This is also linked in this Reddit post (Wayne links to this as well).

That essentially worked for me, so my docker-compose file was set, and with that additional work on the MariaDB container I was up and running. So far, my wife and her iPad haven't complained (that I know of) when backing up files.

The other nice thing about defining a specific data store (volume) for the container is that it makes backing it up much easier. What I did on the system was run a simple cp -u command to back up the data defined in the volume parameters (the parts in the <> brackets under volumes).
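
For example, something along these lines works (the paths here are placeholders, and -r is needed since the volume is a directory):

cp -ru <path to your data folder> <path to your backup folder>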

Feel free to add your own passwords and local volumes. The docker-compose file should bring up a Nextcloud instance without issues beyond the MariaDB one I just highlighted. I'll try to do a more detailed video on the bring-up and advantages of Nextcloud in the near future after some more use.

Thanks for reading, hope it helps, stay safe, and take care.

*Update and additional notes*

Since creating the video I tried to reproduce these steps on Alpine, Linux Mint, and Fedora Silverblue (the last using podman-compose, not docker-compose - see the update below), and all failed, at least per the above steps. The straight-up docker run -d -p 8080:80 nextcloud should work on all of them, but docker-compose seems a bit more finicky, so Ubuntu worked better out of the box.

Also to note, I did run into some issues with Docker being installed as a snap in Ubuntu (this happened because I selected Docker during the installation of the Ubuntu Server 20.04 VM, while docker-compose is not a snap). That caused me problems after removing the containers manually using docker rm <container name> instead of first running docker-compose down. In my experience it is best to install both docker and docker-compose using the recommended setup steps from Docker themselves.

All quite technical, but if you are following these steps and want the simplest way to get up and running with Nextcloud, just use Ubuntu 20.04 + Docker + docker-compose. With the above scripts tweaked to your environment, all should work fine.

*Update on Fedora and Podman-compose*  

I persisted a bit longer with Fedora Silverblue as I really like Podman - it is easy to use out of the box and can run without root access. Over a weekend I poked and prodded a bit more, which made me realize I had made the same mistake twice now: I forgot about SELinux. Fedora, unlike Ubuntu, has SELinux enabled by default. In order to mount a volume in either Podman or Docker, one must have the volume relabeled for container access if SELinux is running on the system. It's a simple :z in the volume definition, but I had totally overlooked that point.
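
Concretely, that just means tacking :z onto the container-side path of each volume line in the compose file, for example (same placeholder path as before):

    volumes:
      - <path to your data folder>:/var/www/html:z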

Once I added that it worked fine - with the same workaround for MariaDB as above (I picked the Docker repositories for the mariadb and nextcloud container images) - but otherwise it was essentially the same in my limited testing.

I updated the config file for users who may run into SELinux or want SELinux-equipped systems.

Modified docker-compose for Nextcloud on SELinux systems

Once that is saved as a docker-compose.yml file in your directory of choice, and the appropriate data and datadb directories are passed in, just run podman-compose up -d, similar to Docker.

Podman-compose works slightly differently in the sense that it creates a pod (a group of containers, as used in Kubernetes (k8s)) rather than simply a new network for the containers. However, it works fine in this instance and presumably many others - here is an example with WordPress.

For stability and ease of use, I'd probably still recommend an LTS (long-term support) version of Ubuntu, but I wanted to highlight that podman-compose is just as capable, and honestly the promises of Silverblue also make it an interesting platform for container workloads like Nextcloud.

Thanks again.

Revisiting Alpine Linux - rc-update

I had a quick look back at my Alpine VM and installed Docker on it just to check how difficult / easy it would be. It is very easy. 

In Alpine as root run:

apk add docker docker-cli docker-compose

Alpine will get the necessary packages and install Docker. Alternatively, running apk search docker will also bring up a list of different packages in case you need a specific version of any of those packages. 

In Alpine, I had forgotten how to enable the Docker daemon - the services in Unix-like systems (Unix/BSD/Linux) that run all manner of functions in the background - and Genesys Engage (what came up in Google) had a good article here. What I found interesting was the output of the rc-update command.

The output instantly shows all the services set to start and whether they start at boot or in the default runlevel. Running rc-status can also instantly display all the running services.

To start the docker daemon as root run: 

service docker start

To ensure it starts on boot run:

rc-update add docker boot

And that's it. It's pretty straightforward to get started with, and being so light, an Alpine VM with Docker might be a good way to run containers without being heavy on system resources and without running them directly on your Linux, Windows, or macOS system. Containers are pretty secure, but don't offer 100% isolation to the extent of a VM. Setting up a light VM on something like Alpine, which you can back up and blow away should there be a problem, might come in handy for certain deployments.

A side note: to get something similar to the rc-update output on Ubuntu and Fedora, there are the 'service' and 'systemctl' commands.

service --status-all 

The [+] or [-] represents whether the service is running. OR:

systemctl list-units --type=service

The latter is likely the more relevant, as it is querying systemd, which both Ubuntu and Fedora use. Systemd is the way many modern Linux operating systems manage processes and services. Alpine's rc-update uses OpenRC as its init system, hence the difference.

This gets complex quickly, but to boil it down: Linux is open source software, so many things, including the way services are called and managed, can differ between distributions. This is why each Linux distribution can have its own set of design decisions and quirks. The modularity of Linux is also why it is so popular with leading companies, academics, and even NASA.

For more detailed reading, here are some relevant links. I just found the functionality of the rc-update command interesting, along with the ease with which Docker can be installed on Alpine.

https://en.wikipedia.org/wiki/Systemd

Fedora services and daemons

Alpine Linux Init System

*Update: Reinstalling this as another test showed that the docker packages are not in the main repository, but rather in the community repository. This can be enabled in Alpine by editing the /etc/apk/repositories file and then running apk update. More info on the repositories can be found here.
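
For example, the repositories file ends up looking something like this (the mirror and version number here are just placeholders - match them to your install):

# /etc/apk/repositories
http://dl-cdn.alpinelinux.org/alpine/v3.13/main
http://dl-cdn.alpinelinux.org/alpine/v3.13/community

Then as root:

apk update
apk add docker docker-cli docker-compose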

Fixing Linux display issues

So recently I made what I thought was a small upgrade to my computer - I added a second SSD to my desktop. Fairly simple: power off, remove the power cord, open the case, find a spot, an open SATA port, and a power connector for the SATA SSD, and voila.

Problem was, it booted in low graphics mode. That was odd. Run an update, reboot, and now it doesn't want to boot at all.

What I now know - or think - happened was a dead CMOS battery like this one. Since the board is quite old at this point, the battery had died, meaning that upon shutting down and pulling the power all the BIOS settings were lost and the system reverted to its defaults.

CR2032 Battery | Lithium CR2032 Coin Cell Battery

This isn't a problem for the most part, but it did warrant a new coin cell battery. Because I was getting boot issues, the quick fix was to remove the graphics card and use onboard graphics. However, the graphics card was the most expensive part of the system, and I wanted to get it working. After replacing the coin cell battery and using the computer for a few days without the card, I found some time and willpower to open it up again and poke around.

Steps: 

  • Card goes in, DVI-D connector (old computer) goes to the card
  • Does it boot? 'Yes' skip the next step, 'No' do this next step
  • A couple of things to try if there isn't a POST screen -
    • If there is no POST (the initial vendor logo that should say something like "Press F2 or DEL to enter setup"), power off and unplug. Popping out the coin battery will reset the BIOS settings if those were a culprit/factor.
    • Re-seat the memory, which in my case just meant swapping the placement of my 2 DIMMs.
  • If it boots and you can get to the Linux login screen, try to log in.
  • I was using an NVIDIA card on Ubuntu 20.04 GNOME for this, and the NVIDIA settings weren't coming up. (NVIDIA X Server)
  • Open the terminal and run lspci <- make sure you see the card listed
    • You can also run lspci | grep NVIDIA to shorten the list, if you have an NVIDIA card
  • At this point either the driver is missing or corrupted or, I think more likely, the display manager got out of whack.
    • Background: Linux, like any other OS, has a kernel component that interacts and talks with the graphics drivers. If that setup gets messed up for any reason, it needs to be reset.
  • For good measure I reinstalled the NVIDIA graphics driver, which in Ubuntu, if you have already set up the proprietary driver, is (the last number is the version, so please ensure you are installing the version you want)
    sudo apt reinstall nvidia-driver-460
  • In my case, running sudo dpkg-reconfigure lightdm, then selecting gdm3 as the display manager and rebooting fixed the issue.
  • Do check which display manager is best suited for the distro / desktop variant you are using. This computer was first installed with Ubuntu 16.04 (2016.04 - April 2016) and has been upgraded all the way to today. That means it has both the Unity and GNOME desktops, which caused a few issues when I accidentally set lightdm as the default rather than gdm3 <-- cool thing about Linux is it still worked! Though it wasn't ideal, it worked relatively well aside from a couple of hot-keys. To verify whether to use lightdm or gdm3, view here
    • Alternatively, you can run sudo service lightdm status or sudo service gdm3 status to check which one is running to find the default.
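
Condensed down, the check-and-fix sequence on my system was roughly this (the driver version is just the one that applied to my card):

lspci | grep -i nvidia
sudo apt reinstall nvidia-driver-460
sudo dpkg-reconfigure lightdm    <- pick gdm3 in the menu that comes up
cat /etc/X11/default-display-manager    <- on Ubuntu, shows which display manager is currently set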

I hope that is helpful. Since I came across the issue I figured I'd share it. 

Docker install and basic commands

This blog has been a long time coming. I've been quite busy work-wise and hiding from Covid, so it has been hard to find quiet time to add these videos.

Apologies. 

This video is about the setup and basic use of Docker in Linux. Running Docker on Linux is honestly the ideal case, regardless of whether it is a production workload or just for testing / development, like on a workstation.

Key commands we go over:

- run - create and start a new container
- start - start an existing container
- stop  - stop an existing container
- exec - interact with a running container
- rm  - remove an existing and undesired container
- ps - view all running containers
- ps -a - view all existing containers

There are a ton of other commands, but based on what I do - initializing packaged solutions or an existing LAMP stack - these are enough for managing Docker.
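
For example, a quick end-to-end session with those commands might look like this (nginx and the container name webtest are just placeholders):

$ sudo docker run -d --name webtest -p 8080:80 nginx
$ sudo docker ps
$ sudo docker exec -it webtest bash
$ sudo docker stop webtest
$ sudo docker start webtest
$ sudo docker stop webtest
$ sudo docker rm webtest
$ sudo docker ps -a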

To get more information about how to use Docker / Podman commands to create a LAMP stack without docker-compose (previous video), please check out this blog.

I hope this is useful, and feel free to comment. 

*Note* please excuse the typo in bionic at around minute 4:30 or so. Binonic is obviously not right :). 

BASH! (Bourne Again SHell)

BASH and the Linux terminal are essential tools in Linux, and BASH can also be a very powerful programming language. As shown, it allows a user to move through the operating system; create, edit, and delete files and directories; and have full control over permissions.

In regards to scripting, probably the most powerful thing about using BASH is the ability to interact with and leverage existing Linux commands to grab the appropriate input and output. The timeconverter.sh script demonstrates this capability by calling the default 'date' command, using it to get a point of reference for where the user is (or rather, where the system is set to), and then using that to help calculate other times around the globe.

What I like about this script - some self-pride showing through, sorry - is that it's very fast to run. So if one needs to schedule a meeting for 4 pm local time and wants to see what that time would be in other parts of the world, you can run through a slew of possible times in no time at all!
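
The core trick is just pairing the date command with the TZ environment variable. A stripped-down sketch of the idea (the full script is linked below and handles more, like the daylight savings piece; the time zone list here is arbitrary):

#!/bin/bash
# Convert a local time of day into a few other time zones using GNU date.
local_time="${1:-16:00}"            # time to convert, defaults to 4 pm
ts=$(date -d "$local_time" +%s)     # interpret it in the system's local time zone
echo "Local:             $(date -d "@$ts")"
for zone in UTC America/New_York Europe/London Asia/Tokyo; do
    printf '%-18s %s\n' "$zone:" "$(TZ=$zone date -d "@$ts")"
done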

A few corrections:
Please excuse some typos and little goofs in the video, like having the extra '/' at the end of #!/bin/bash, and finding the /usr/bin/ directory. Hopefully showing the fixes was equally helpful :). The script also doesn't necessarily need the .sh at the end. That is just a habit since many shell scripts have it, but one could simply name the file timeconverter or magictimes or whatever. I should add that it is more common for users to put scripts in /usr/local/bin rather than /usr/bin, though as shown either works.
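
For reference, making the script executable and putting it on the PATH usually looks something like this (using /usr/local/bin as mentioned above, and the file name from the video):

chmod +x timeconverter.sh
sudo cp timeconverter.sh /usr/local/bin/timeconverter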

Tip: cat /etc/environment to see what the PATH points to. PATH in Unix/BSD/Linux defines the directories where all the commands a user can run are located, so that they can be used without having to know where each command file lives.

If anyone notices any bugs - the trickiest part is the daylight savings / standard time section - just please leave a comment and I can update.

------------------------------------------

timeconverter:

Source code of the script

--------------------------------------------

Additional reference:

BASH shell

Linux Permissions

The Linux Command Line

Scale-out file systems - GlusterFS

Happened to be working on this and thought I'd create a write up. 

Scale-out file systems are essentially storage areas that are synchronized between different computers (nodes) but still retain the same data. There are many kinds of scale-out file systems - both open source and closed source - and for this tutorial we'll focus on one called GlusterFS.

GlusterFS is an open source project maintained by Red Hat. What it does for storage is allow files to be written and read from multiple devices, with all files written to a GlusterFS share being identical from client to client.

What does that mean? Well, it means, for example, that if I want my data to remain available even if a single computer or server in my group fails, with GlusterFS that's possible. It also helps to improve performance in certain areas, because if several servers are used as storage you are serving files and data from multiple hosts (PCs/servers) rather than a single one.

The steps below explain how to create a GlusterFS volume between 2 hosts (servers) and then mount the file systems using the GlusterFS client so that data between the hosts is synchronized. This has numerous applications for IT deployments, either in single or multiple sites, but GlusterFS can also be deployed in what are known as N+1 or N+N dispersed arrangements, whereby the storage doesn't just replicate or mirror data but also expands the total capacity. For example, N+1 with 4 hosts would give 3x the capacity of any single host. So if you had 10 TB on each host (server), with GlusterFS your usable capacity would be ~30 TB minus the file system overhead, and you could have one host crash at any time without losing data.

This might be a little advanced, but I figured I'd share my progress with this as it could be handy.

Getting Gluster running on 2 hosts - tested on Fedora 33 Server


#1

First, you need to either disable the firewall or allow it to pass the Gluster traffic.

Simple: $sudo service firewalld stop

Complex: $sudo firewall-cmd --zone=FedoraServer --permanent --add-port=24007/tcp --add-port=24008/tcp --add-port=24009/tcp --add-port=49152/tcp --add-port=49153/tcp
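
Note that rules added with --permanent only take effect after a reload:

$sudo firewall-cmd --reload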


#2

IP addresses don't play nice with Gluster - you need to add the hosts to each node's hosts file or have them set up in your DNS.

/etc/hosts:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.38 fedoraS1
192.168.122.93 fedoraS2


#3

Install glusterfs, glusterfs-fuse, and glusterfs-server.

Gluster geo-replication may also be required, depending on the use case. Geo-replication is beyond the scope of this tutorial.
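
On Fedora that typically boils down to something like this (package names may differ slightly between releases):

$sudo dnf install glusterfs glusterfs-fuse glusterfs-server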


#4

Start glusterd 

$sudo service glusterd start

Note: to enable it, run $sudo systemctl enable glusterd <-- this allows it to run on boot.

Another thing to pay attention to: there is another service called glusterfsd, which is the client service. Glusterd is the server portion that runs the volume.


#5

As root or with sudo, you need to peer probe from one node to the other

$sudo gluster peer probe fedoraS2 <-- this has to be the host name, not the IP, as that can screw things up
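
To confirm the probe took, you can check from either node:

$sudo gluster peer status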


#6

As root or sudo, create a volume

$sudo gluster volume create voltest replica 2 fedoraS1:/home/joe/glusterdata fedoraS2:/home/joe/glusterdata force 

Note: force seems needed with running as root


#7

Need to start the volume. 

$sudo gluster volume start voltest
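
A quick sanity check that the volume was created and started:

$sudo gluster volume info voltest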


#8

That's all fine and dandy, but files still won't be replicated if you write directly into that data directory - it needs to be mounted as a GlusterFS-aware file system.

On both hosts run 

[joe@fedoraS1 glustermnt]$ sudo mount -t glusterfs fedoraS1:/voltest glustermnt

[joe@fedoraS2 glustermnt]$ sudo mount -t glusterfs fedoraS2:/voltest glustermnt

Note that both hosts mount from themselves, which is the opposite of what I would have thought, but I guess it makes sense from a latency perspective.
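
If you want the mount to survive a reboot, an /etc/fstab entry along these lines should do it (shown for fedoraS1 with the same mount point as above - adjust per host):

fedoraS1:/voltest  /home/joe/glustermnt  glusterfs  defaults,_netdev  0 0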


#9

At this point it should all work - whatever shows up in glustermnt on one host is replicated to the other.

[joe@fedoraS1 glustermnt]$ df

Filesystem                     1K-blocks    Used Available Use% Mounted on
devtmpfs                          477484       0    477484   0% /dev
tmpfs                             498348       0    498348   0% /dev/shm
tmpfs                             199340     996    198344   1% /run
/dev/mapper/fedora_fedora-root   9422848 2452612   6970236  27% /
tmpfs                             498352       0    498352   0% /tmp
/dev/vda1                        1038336  253940    784396  25% /boot
tmpfs                              99668       0     99668   0% /run/user/1000
fedoraS1:/voltest               18845696 4909272  13936424  27% /home/joe/glustermnt


Additional reference