What are we doing here?

This blog is a series of videos and references to help new users and enthusiasts better understand how to use free and open source technology tools. The quick links include pointers for accessing many of the tools covered, along with further references for getting the most out of them.

Click HERE to see the full list of topics covered!

Getting started with PHP and MongoDB

 

This post and video delve into the world of schemaless, NoSQL databases. NoSQL databases are a powerful and interesting way of storing data at scale. Because they do not enforce a traditional table schema that dictates what kind of data can be written, NoSQL databases like MongoDB can accept data much more easily. There is also the benefit that, at scale, a cluster of multiple nodes can host a common database and collection, whereas a traditional relational database cannot scale out horizontally.

In cloud computing, NoSQL databases are what drive the likes of Amazon, Google, Facebook and more. 

In order to learn more about them, I wanted to take what I know about PHP and MySQL (a relational, schema-based database) and apply it to PHP with MongoDB. I struggled a bit because of the way I was building the container and my lack of familiarity with how Composer works. As such, I thought I'd share my progress and provide the docker-compose file along with a run-through of how to get it set up and running.

The Dockerfile and Docker Compose script are located on my GitHub with no license; anyone is free to use the repository to quickly bring up and test a LAMP stack that uses MongoDB instead of MySQL or MariaDB. Just git clone the repository and run docker-compose up from its directory, and it should build everything.
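For orientation, a stack like this generally takes the shape below in docker-compose terms. This is only a sketch with made-up service names, ports, and passwords - the actual file lives in the MongoLAMP repo, and its web image is built from a Dockerfile that adds the MongoDB PHP extension (plain php:apache alone would not be enough):

```yaml
version: '3'

services:
  web:
    # the real repo builds a custom Apache+PHP image that adds the
    # MongoDB PHP driver; "build: ." points at that Dockerfile
    build: .
    ports:
      - 80:80
    volumes:
      - ./www:/var/www/html

  mongo:
    image: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=example

  mongo-express:
    image: mongo-express
    ports:
      - 8081:8081
```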

Also included in the repo is a very simple HTML form that writes a Name and Age to the MongoDB instance. These two web pages are offered as a reference for how to connect and get started.

Docker Compose source: https://github.com/JoeMrCoffee/MongoLAMP

Additional references that I found helpful:

Official documentation on using PHP with MongoDB:
https://docs.mongodb.com/drivers/php/

Official Docker image for MongoDB and Mongo-Express:
https://hub.docker.com/_/mongo
(This was the base for the docker compose minus the Apache PHP image)

Another great tutorial that is worth a look as well:
https://www.javatpoint.com/php-mongodb

Recover a broken NextCloud

This is not intended to be a definitive guide. I am just sharing my experience after crashing my NextCloud container. 

I performed an upgrade on my "NAS" (Ubuntu server on an old laptop) by adding a second, larger external USB drive. This is not best practice, but works for me as a file backup. 

In adding the drive, I was trying to be careful, but obviously not careful enough. I changed the fstab file (this is how Linux knows which drives and filesystems to mount) to include the new drive. That generally worked fine, but I was mounting the drives by their /dev/sda and /dev/sdb names. When Linux boots, it detects the attached storage hardware and assigns each device a /dev name based on its type (NVMe drives show up as /dev/nvme####, SATA drives usually as /dev/sdX#). Because Linux assigns these names at boot, when I rebooted with the new external drive attached, the /dev assignments changed. SURPRISE!!! That meant 2 out of 3 of my fstab entries were wrong - they tried to mount the wrong drives.

So that wasn't good. My root drive was mounted by UUID, which is better and should be used. For the other 2 external drives I switched to using the drive label, which can be set when initially formatting the drive. Something like the below:

LABEL=usbbackup /media ntfs defaults 0 0
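To find the labels and UUIDs to put into fstab in the first place, util-linux ships a couple of handy commands (the columns shown are just the ones relevant here):

```shell
# List every block device with its filesystem type, label, and UUID
lsblk -o NAME,FSTYPE,LABEL,UUID

# blkid prints the same identifiers and usually needs root
sudo blkid
```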

This was fine for getting everything straightened out. However, in the process of the drives mounting in the wrong order, my NextCloud and MariaDB containers got out of whack. I couldn't log in and just had an "Internal server error" message. 

The help pages mentioned that one can use maintenance mode and the occ tools to fix something similar. Unfortunately, the Docker install doesn't ship many tools, like 'sudo', to help test. Fortunately, all the data for both containers was 100% intact, as it was mounted using the volume parameter in the docker-compose file. This means, in theory, you can just nuke the containers, rebuild new ones with the same volume mounts, and be back in business.

At this point, I want to stop and say that while this is possible and worked for me, be very very very careful that the data is still intact for both the database and NextCloud instances. If anything beyond the defaults in the docker-compose file was changed - i.e. additional packages or other things were added in the container - the next steps will wipe all of that. 

Steps I used:

Stop the containers:

  sudo docker stop nxtcloud nxtclouddb 

Note: Container names are defined as such in my version of the docker-compose file. Running sudo docker ps -a will show the current names of all the containers.

Backup the container data directories to somewhere safe. Leave the data intact in the original location. 

Note: If you can't get the containers restarted and need to truly nuke and pave, all the files in NextCloud are stored under the NextCloud data directory. On the container this would be /var/www/html/data/, on the local volume it would be under the <NextCloud install directory>/data/data/ on the host system using my docker-compose file.

Delete the containers:
  sudo docker rm nxtcloud nxtclouddb

Download the latest version of the NextCloud image:
 sudo docker image pull nextcloud:latest

Update the docker-compose file to specify the MariaDB version as 10.5:
 image: mariadb:10.5

This step is needed because on my first go, NextCloud had issues with the latest version of MariaDB (10.6, I believe); the docker-compose file pulls the latest image version by default.

Recreate and run the containers. From the directory of the docker-compose and data directories run:
 sudo docker-compose up -d

Once the container names appear, verify they are running correctly:
 sudo docker ps

Now open the NextCloud page in a browser. In my case it asked to update the NextCloud instance. This triggered everything to get sorted out in NextCloud, and I could log in and everything was fine. Users could log in, data was intact, and things were copacetic.

The updated version of my NextCloud docker-compose file to reflect the change to use MariaDB version 10.5 is HERE.

Moral of the story: always stop running services on a server, especially the Docker containers, BEFORE making a hardware change to the system. No matter how simple it may seem, put the host system in maintenance mode first.

More information on /etc/fstab configs is available here:
  https://linuxhint.com/write-edit-etc-fstab/
  https://help.ubuntu.com/community/Fstab

Hopefully this helps demonstrate how volumes can be used to keep container data persistent, along with the flexibility of working with containers. I was lucky that the upgrade was able to trigger a re-sync of the NextCloud instance and the database. Some more information about doing this manually is in the NextCloud Doc page.
  https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html



Practical Programming using Python with Files

 

This tutorial demonstrates how to use Python to open and read files on a local computer. Python is a powerful language, and the relative ease with which it opens and reads text files makes it ideal for everything from simple tasks (like calculating an average, as shown) to more advanced ones (say, updating thousands of files in a single batch, or comparing different outputs from a research survey). There is an endless number of useful things Python can help automate, and it doesn't take a full-time developer or software engineer to use it.

The source code that was created is below. 

AverageNumberOfLines
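For anyone who just wants the gist without opening the source, the counting-and-averaging idea looks roughly like this in shell (the test_*.txt names are placeholders for whatever files you point it at):

```shell
# Average the line counts of a batch of text files
total=0
count=0
for f in test_*.txt; do
  lines=$(wc -l < "$f")        # line count of one file
  total=$(( total + lines ))
  count=$(( count + 1 ))
done
echo "Files: $count, average lines: $(( total / count ))"
```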

For those also interested in how to write to files, the script I used to create the 100 files with random endings is provided as well. It works by reading a sample file (Lorem Ipsum text in the example) and then randomly chopping between 10 and 100 lines of text into 100 different test files.

CreatingTextFiles
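The generator script itself is linked above; its core trick can be sketched like this (lorem.txt and the output names are stand-ins for the real ones, and this assumes bash for $RANDOM):

```shell
# Cut a random 10-100-line slice of a sample file into 100 test files
for i in $(seq 1 100); do
  n=$(( (RANDOM % 91) + 10 ))      # random length between 10 and 100 lines
  head -n "$n" lorem.txt > "test_$i.txt"
done
```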

I hope it can be helpful for those interested in programming and eager to see some simple, but useful, use cases.


Creating Animations in GIMP

 

This article and video are a bit more artistic and creative compared to recent ones, which have mostly explored server-side services and scripting. This article tackles creating GIF animations using GIMP (GNU Image Manipulation Program).

The video above plays pretty quickly because I created a very simple scene for demo purposes. The possibilities are really infinite. You could use these same steps to make a banner ad for your company, or create a web animation that plays when a website loads. It could also be used for animations in apps on iOS or Android (though GIFs on Android need a WebView to run). Even something as simple as quick UI or text tutorials can be done - that's how I made the simple animations in my BASH script blogs.

Hope this can be helpful for some of you out there. I think GIMP is a wonderful tool - unfortunate name, but wonderful tool - that really can do some outstanding photo and image manipulation. It's also open source and free to use on Windows, Linux, and macOS, so if you learn it once you can use it anywhere.

Final image from the tutorial - playing with the number of frames and the delay between frames (currently 180 ms) can create a smoother playback.

SpaceAnimation


More BASH - Auto Sorting Files

BASH SMASH REHASH

I'm sharing a recent script I wrote that might be helpful. 

I had a picture folder on my NAS that was simply way too large to use over a relatively slow Wi-Fi connection - hundreds of picture and video files from our family photos.

What would happen on our computers is that opening a single file took forever, as the file viewer tried to cache all the images in the folder to make flipping through them faster. That's a nice feature, but not when the folder / directory is dozens to hundreds of gigabytes in size.

This script prompts for which folder to sort (must be an absolute path), then prompts for how many files the user wants per folder. The example below sorts a directory of around 200 files into directories of 20 files apiece.


Link to the code: autosort.sh

I included a lot of comments, so hopefully the script makes sense. Again, just a useful tool that might be helpful for those interested in shell scripting.

Run Chrome in Toolbox on Silverblue

 

 

This is something I've been wondering how to do for a while and finally decided to try: running desktop applications, on the desktop, from a container.

In Linux, the best and easiest ways to do this are Flatpaks (community-driven, generally used on non-Ubuntu-based distros) and Snaps (maintained by Canonical and the default for many apps in Ubuntu). However, while both are good tools, what if the software you want to run isn't in the Flatpak or Snap repositories? What if the software you want to run in a container is Google Chrome?

Some hardcore Linux types may not like having Chrome installed at all. "What's it doing? How much information is it taking from me? I don't trust anything from big G.", etc. That said, there are lots of pages and particular web tools - like MS Teams in the browser - which don't work in Firefox or Chromium because those lack the DRM packages and other compatibility layers of Google Chrome. So regardless, Chrome is a useful tool, and while it does its own sandboxing, we can sandbox it in a container with Toolbox.

Steps to get this working:

Download the .rpm version of Google Chrome from the website.
 
From the terminal, run toolbox create. If you want to name the container just add a name at the end. If you want to pull from a specific image use --image <image name>. In Silverblue the default image is Fedora 34.
 
Run toolbox enter to access the container. If you have multiple containers again you can specify the name after 'enter'.
 
In the folder you downloaded Chrome to (e.g. Downloads), run:
sudo dnf install google-chrome-stable_current_x86_64.rpm
 
To launch from the container just run:
google-chrome-stable
 
That should work from the container but what about from the Silverblue desktop?
 
Search Google (or DuckDuckGo or whatever) for a Chrome icon, and pick whichever best suits your needs.
In ~/.local/share/applications create a .desktop file (e.g. chrome.desktop) with the below contents:

[Desktop Entry]
Version=1.0
Terminal=false
GenericName=Web Browser
Type=Application
Name=Google Chrome
Categories=GNOME;GTK;Network;WebBrowser;
Exec=toolbox run google-chrome-stable
Icon=/var/home/joe/Pictures/ChromeIcon.png
MimeType=text/html;text/xml;application/xhtml+xml;application/xml;application/rss+xml;application/rdf+xml;image/gif;image/jpeg;image/png;x-scheme-handler/http;x-scheme-handler/https;x-scheme-handler/ftp;x-scheme-handler/chrome;video/webm;application/x-xpinstall;
StartupNotify=true
You can likely omit much of that, like MimeType and Categories, but they're good to keep in. The [Desktop Entry] header, Name, Exec, Type, and Icon are all necessary.
 
In the terminal run:
update-desktop-database ~/.local/share/applications
 
The entry should show up at this point and can be launched from the GNOME launcher.
 
Note that if you have multiple containers you can specify which container should run the Chrome browser in the .desktop file.  
Exec=toolbox run -c <container name> google-chrome-stable

That is pretty much it. 

It took me a while to get working thanks to the reason it always takes a while - a typo. The .desktop file must be labeled [Desktop Entry]; [Desktop entry] doesn't work.

The only thing that is a bit problematic is that certain fonts, like Chinese characters, don't display correctly. This should be a dependency issue, since the container doesn't have the language packs native to Silverblue - it does prove the container is contained, though! It's fixable by installing the language packs in the container:

sudo dnf install google-noto-cjk-fonts-common-20201206-2.fc34.noarch google-noto-sans-tc-fonts-20201206-2.fc34.noarch

Hope it's helpful.

Some more information:

What is Silverblue?

https://docs.fedoraproject.org/en-US/fedora-silverblue/toolbox/

https://www.publish0x.com/the-linux-monitor/how-to-install-brave-browser-on-fedora-silverblue-xpjlzdd
        -- this tutorial is largely the same with the Brave browser and was a good resource for me when creating this blog. They also go into a couple of other ways to install Chrome on Silverblue, so it's worth a look.

 

Nextcloud in Docker!




So this is something I've been debating for a while - stick with a pure SMB share or try setting up Nextcloud.

Nextcloud is essentially a local, open source, on-prem Dropbox with an ever-growing feature set. It trumps SMB (a local Windows or Linux SAMBA file share) in many regards, with things like apps for iOS / Android, calendar sync, email sync, and more.

It's far more complicated to set up than SMB or a generic NAS share in many respects, but that honestly makes it easier for the end user - simply go to the local site or IP on the LAN, enter credentials, and begin using it.

I set this up using Docker and I wanted to share the experience. 

Using the official Docker Hub instructions was fine - you can test something easily with a single command.

$ docker run -d -p 8080:80 nextcloud

After running that in Docker (you will likely need sudo on Linux), you can just log in from your browser at <server / computer IP>:8080.

That works, but isn't ideal. Nextcloud will immediately tell you to change the database from the default SQLite to another, more comprehensive database.

For better use, there is the docker-compose suggestion on the Nextcloud page on Docker Hub. I followed and tweaked that. It works really well, but there are some things to watch for.

Modified docker-compose

    *Update* This is also now on GitHub: NextCloudDockerCompose

My version of the docker-compose file changes the following:

  • Updates the compose file format to version 3
  • Adjusts the way persistent volumes are defined, using a local file folder instead of the generic named volume in the reference
  • Changes the restart preference to 'unless-stopped', so if you do need to stop a container it won't keep restarting (it will still restart on a device/VM reboot)
  • Adds names to both the MariaDB and Nextcloud containers, which makes them easier to manage when you want to restart or interact with them

Save the file in a folder as docker-compose.yml, adjust the passwords and directories/folders to your needs, then run docker-compose up -d and it should run.
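Put together, those tweaks give a file shaped roughly like the below. This is a sketch only, with placeholder passwords and paths - the real file is in the GitHub repo linked above:

```yaml
version: '3'

services:
  nxtclouddb:
    # pinning a version (e.g. mariadb:10.5) can help if the latest image misbehaves
    image: mariadb
    container_name: nxtclouddb
    restart: unless-stopped
    volumes:
      - ./datadb:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=changeme
      - MYSQL_PASSWORD=changeme
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  nxtcloud:
    image: nextcloud
    container_name: nxtcloud
    restart: unless-stopped
    ports:
      - 8080:80
    volumes:
      - ./data:/var/www/html
```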

When I initially started, it worked well, but on first login I ran into an issue with the way MariaDB was handling input from Nextcloud. This might be my luck, or just the age of the Nextcloud Docker image at the time. I found this workaround, which solved the issue:

Wayne's garden - essentially, running the command below in the MariaDB (MySQL-compatible) database solves the issue.

Enter the command line in the MariaDB container:

sudo docker exec -it nxtclouddb bash

Then enter the MySQL instance (MariaDB still uses MySQL commands):

mysql -u root -p

Enter the MySQL root password set in the docker-compose file.

Then run this in the MariaDB server:

SET GLOBAL innodb_read_only_compressed=OFF;

This is also linked in this Reddit post (Wayne links to this as well).

That essentially worked for me, so my docker-compose file was set, and with that additional tweak to the MariaDB container I was up and running. So far, my wife and her iPad haven't complained (that I know of) when backing up files.

The other nice thing about defining a specific data store (volume) for the container is that it makes other kinds of backups easier. What I did on the host system was run a simple cp -u command to back up the data defined in the volume parameters (the parts in the <> brackets under volumes).
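For example, something along these lines - the source and destination paths are placeholders for wherever your volumes and backup drive actually live:

```shell
# Recursively copy the bind-mounted volume directories to the backup
# drive; -u only copies files newer than the destination copy
sudo cp -ru /srv/nextcloud/data /mnt/usbbackup/
sudo cp -ru /srv/nextcloud/datadb /mnt/usbbackup/
```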

Feel free to add your own passwords and local volumes. The docker-compose file should bring up a Nextcloud instance without issues beyond the MariaDB one I just highlighted. I'll try to do a more detailed video on the bring-up and advantages of Nextcloud in the near future after some more use.

Thanks for reading, hope it helps, stay safe, and take care.

*Update and additional notes*

Since creating the video I have tried to reproduce these steps on Alpine, Linux Mint, and Fedora Silverblue (the last using podman-compose, not docker-compose - see the update below), and all failed, at least per the above steps. The straight-up docker run -d -p 8080:80 nextcloud should work on all of them, but docker-compose seems a bit more finicky, so Ubuntu worked better out of the box.

Also of note: I ran into some issues with Docker installed as a Snap in Ubuntu (this happened because I selected Docker during the installation of the Ubuntu Server 20.04 VM; docker-compose is not a Snap). That caused problems after I removed the containers manually with docker rm <container name> instead of first running docker-compose down. In my experience it is best to install both docker and docker-compose using the recommended setup steps from Docker themselves.

All quite technical, but if you are following these steps and want the simplest way to get up and running with Nextcloud, just use Ubuntu 20.04 + Docker + docker-compose. With the above scripts tweaked to your environment, all should work fine.

*Update on Fedora and Podman-compose*  

I persisted a bit longer with Fedora Silverblue, as I really like Podman - it is easy to use out of the box and can run without root access. Over a weekend I poked and prodded a bit more, which made me realize I had made the same mistake twice now: I forgot about SELinux. Fedora, unlike Ubuntu, has SELinux enabled by default. To mount a volume with either Podman or Docker on a system where SELinux is running, you must relabel the volume for container access. It's a simple :z on the volume entry, but I had totally overlooked that point.
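In the compose file that looks like the below - a sketch with placeholder host paths:

```yaml
services:
  nxtcloud:
    image: nextcloud
    volumes:
      # :z asks Podman/Docker to relabel the content so SELinux
      # allows the container to read and write it
      - ./data:/var/www/html:z
```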

Once I added that, it worked fine - the same workaround for MariaDB as above applied (I picked the Docker repositories for the mariadb and nextcloud container images) - but otherwise essentially the same in my limited testing.

I updated the config file for users who may run into SELinux or want SELinux-equipped systems.

Modified docker-compose for Nextcloud on SELinux systems

Once that is saved as a docker-compose.yml file in your directory of choice, with the appropriate data and datadb directories present, just run podman-compose up -d, similar to Docker.

Podman-compose works slightly differently in the sense that it creates a pod (a group of containers, as used in Kubernetes (k8s)) rather than simply a new network for the containers. However, it works fine in this instance and presumably many others - here is an example with WordPress.

For stability and ease of use, I'd probably still recommend an LTS (long-term support) version of Ubuntu, but I wanted to highlight that podman-compose is just as capable, and honestly the promise of Silverblue also makes it an interesting platform for container workloads like Nextcloud.

Thanks again.

Revisiting Alpine Linux - rc-update

I had a quick look back at my Alpine VM and installed Docker on it, just to check how difficult or easy it would be. It is very easy.

In Alpine as root run:

apk add docker docker-cli docker-compose

Alpine will fetch the necessary packages and install Docker. Alternatively, running apk search docker will bring up a list of related packages in case you need a specific version of any of them.

In Alpine, I had forgotten how to enable the Docker daemon - daemons being the background services in Unix-like systems (Unix/BSD/Linux) that handle all manner of functions - and Genesys Engage (what came up in Google) had a good article here. What I found interesting was the output of the rc-update command.

The output instantly shows all the services set to start and whether they start at boot or by default. Running rc-status also instantly displays all the running services.

To start the docker daemon as root run: 

service docker start

To ensure it starts on boot run:

rc-update add docker boot

And that's it. Pretty straightforward to get started with, and being so light, an Alpine VM running Docker might be a good way to host containers without taxing system resources or running the containers directly on your Linux, Windows, or macOS system. Containers are pretty secure, but don't offer 100% isolation to the extent of a VM. Setting up a light VM on something like Alpine, which you can back up and blow away should there be a problem, might come in handy for certain deployments.

As a side note, to get something similar to the rc-update output on Ubuntu and Fedora, there are the 'service' and 'systemctl' commands.

service --status-all 

The [+] or [-] indicates whether the service is running. Or:

systemctl list-units --type=service

The latter is likely the more relevant, as it queries systemd, which both Ubuntu and Fedora use. Systemd is how many modern Linux operating systems manage processes. Alpine's rc-update uses OpenRC as its init system, hence the difference.

This gets complex quickly, but to boil it down: Linux is open source software, so many things, including the way services are started and managed, can differ between distributions. This is why each Linux distribution can have its own set of design decisions and quirks. The modularity of Linux is also why it is so popular with leading companies, academics, and even NASA.

For more detailed reading, here are some relevant links. I just found the functionality of the rc-update command interesting, along with the ease with which Docker can be installed on Alpine.

https://en.wikipedia.org/wiki/Systemd

Fedora services and daemons

Alpine Linux Init System

*Update: Reinstalling this as another test showed that the Docker packages are not in the main repository, but rather in the community repository. It can be added to Alpine by editing the /etc/apk/repositories file and then running apk update. More info on the repositories can be found here.
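Enabling it amounts to appending one line and refreshing the index - the mirror URL and v3.14 here are examples, so match your own Alpine release:

```shell
# As root: add the community repository, then refresh and install
echo "http://dl-cdn.alpinelinux.org/alpine/v3.14/community" >> /etc/apk/repositories
apk update
apk add docker docker-cli docker-compose
```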

Fixing Linux display issues

So recently I made what I thought was a small upgrade to my computer - I added a second SSD to my desktop. Fairly simple: power off, remove the power cord, open the case, find a spot, an open SATA port, and a power connector for the SATA SSD, and voila.

The problem was that it booted in low-graphics mode. That was odd. I ran an update and rebooted, and then it didn't want to boot at all.

What I now know - or think - happened was a dead CMOS battery, like this one. Since the board is quite old at this point, the battery had died, meaning that upon shutting down and pulling the power, all the BIOS settings were lost and the system reverted to its defaults.


This isn't a problem for the most part, but it did warrant a new coin cell battery. Because I was getting boot issues, the quick fix was to remove the graphics card and use onboard graphics. However, the graphics card was the most expensive part of the system, and I wanted to get it working. After replacing the coin cell battery and using the computer for a few days without the card, I found some time and willpower to open it up again and poke around.

Steps: 

  • Card goes in, DVI-D connector (old computer) goes to the card
  • Does it boot? 'Yes' - skip the next step; 'No' - do the next step
  • A couple of things to try if there isn't a POST screen - 
    • If there is no POST (the initial vendor logo that should say something like "Press F2 or DEL to enter setup"), power off and unplug. Popping out the coin battery will reset the BIOS settings if those were a culprit/factor.
    • Re-seat the memory, which in my case just meant reversing the placement of my 2 DIMMs.
  • If it boots and you can get to the Linux start screen, try to log in.
  • I was using an NVIDIA card on Ubuntu 20.04 GNOME for this, and the NVIDIA settings weren't coming up. (NVIDIA X Server)
  • Open the terminal and run lspci <- make sure you see the card listed
    • You can also run lspci | grep NVIDIA to shorten the list, if you have an NVIDIA card
  • At this point either the driver is missing or corrupted or, and I think more likely, the display manager got out of whack.
    • Background: Linux, like any other OS, has a kernel component that interacts with and talks to the graphics drivers. If the configuration gets messed up for any reason, it needs to be reset.
  • For good measure I reinstalled the NVIDIA graphics driver, which on Ubuntu, if you already set up the proprietary driver, is the below (the last number is the version, so please ensure you install the version you want):
    sudo apt reinstall nvidia-driver-460
  • In my case, running dpkg-reconfigure lightdm, selecting gdm3 as the display manager, and rebooting fixed the issue. 
  • Do check which display manager is best suited for the distro / desktop variant you are using. This computer was first installed with Ubuntu 16.04 (April 2016) and upgraded all the way to today, meaning it has both the Unity and GNOME desktops, which caused a few issues when I accidentally set lightdm as the default rather than gdm3 <-- the cool thing about Linux is it still worked! Though it wasn't ideal, it worked relatively well aside from a couple of hot-keys. To verify whether to use lightdm or gdm3, view here
    • Alternatively, you can run sudo service lightdm status or sudo service gdm3 status to check which one is running and find the default.

 

 

I hope that is helpful. Since I came across the issue I figured I'd share it.