What are we doing here?

This blog includes a series of videos and references to help new users and enthusiasts better understand how to use open source and free technology tools. The quick links include more information for accessing many of the tools covered, along with other references for learning more and taking advantage of these tools.

Click HERE to see the full list of topics covered!

Firewalld and Podman - Protecting Your DB

 

The output above is an interesting discovery I made about the relationship between Podman and Firewalld (aka firewall-cmd).

So I was tinkering with a semi-production environment: a web server with Apache and PHP on it, and a MySQL database running as a container. This was originally thrown together quickly, but it is becoming more and more important. Honestly, long term, I may want to move the database to an actual VM with all the bells and whistles - like cron for scheduled backups, etc. I digress.

The point is, I was reading an article about how a lot of sites have MySQL databases running without SSL. If you run your database in a container, there is a good chance it also doesn't have SSL, since most Docker images do not include it out of the box. I personally don't think that is a problem if the service is on a truly internal network, but I ran a Telnet check from a remote system and it responded. Holy ^$%%@#$!

Now, the port shouldn't have been exposed - the firewall-cmd rules for the external interface don't allow that port. However, with further testing I was able to connect phpMyAdmin to that host from a completely remote system. Holy #$%^&*&^%$!

What happens is that on Red Hat / CentOS / Fedora, Podman essentially knows about firewall-cmd and drops itself into the trusted zone. The container network gets added as a source there, and everything from that source is allowed. This also lets the outside world reach that port - which is NOT what we want.
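You can see this for yourself with a couple of firewall-cmd queries on the container host (the Podman/CNI subnet typically shows up as a source in the trusted zone):

# Show everything allowed in the trusted zone
sudo firewall-cmd --zone=trusted --list-all

# List the active zones along with their interfaces and sources
sudo firewall-cmd --get-active-zones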

Turns out though, that it is relatively straightforward to lock this down.

Back up all the databases using mysqldump - or, in this case, just by exporting with phpMyAdmin.
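If you would rather dump from the command line, a one-liner through the container works too (the container name, password, and output file are placeholders):

podman exec <DB container name> mysqldump -u root -p'<password>' --all-databases > all-databases-backup.sql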

Then destroy the current container. Run the following:

podman stop <DB container name>

podman rm <DB container name>

Assuming the container had its /var/lib/mysql directory mounted to the host with -v or --volume at creation, the backups likely won't even need to be restored. Just re-run the container creation command (example below).

podman run -d --name mySQLdb -v <host directory>:/var/lib/mysql -p 127.0.0.1:3306:3306 --restart=unless-stopped -e MYSQL_ROOT_PASSWORD=<password> -e lower_case_table_names=1 mysql:5.7

The difference in the command above is that the localhost IP - 127.0.0.1 - is specified along with the port mapping. As a result, the container is now only reachable by services or other containers running on the local system.

Now a telnet test against that host on port 3306 shows it as unreachable, while the local web server can still talk to the database.
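The check from a remote machine (the server IP is a placeholder) is simply:

telnet <server IP> 3306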

telnet: Unable to connect to remote host: No route to host

Most tutorials and explanations about Docker, Podman, or containers in general cover the "-p" flag as a simple network port mapping. However, it can also bind to a specific IP:port, which is very important for protecting a system that is exposed to the Internet.

I hope this can help others keep themselves safer when using containers in production.

Migrating Nextcloud from a Backup

Recently I was forced to retire my "server" because the performance was poor and the file system was throwing errors. That warranted an upgrade from a 14-year-old laptop to an Intel Celeron NUC. Maybe one day I'll actually get a "real" server.....

I thought I'd share the steps I took, and also point out some gotchas that I came across. Let everyone learn from my mistakes :) . 

Some background:

The server was essentially an Ubuntu Server instance with SMB (Samba) running locally on the system, and my Nextcloud with a separate MariaDB running in containers - the setup from this post. What I had done was back up all the files from the SMB share, and all the files from the mounted volumes of both the Nextcloud container and the MariaDB container.

Backup:

Important: I also took a manual mysqldump of the 'nextcloud' database in the MariaDB container.

Enter the MariaDB container by running:
sudo docker exec -it nxtclouddb bash

In the container, run 

mysqldump -u root -p nextcloud > nextclouddumpDATE.sql

Enter the database password and the dump file will be written. I had a volume mounted to /var/lib/mysql, so the data for the MariaDB database was mapped to a directory on the host. Copying the backup file (nextclouddumpDATE.sql) into the /var/lib/mysql directory inside the container therefore makes it accessible on the host, where it can be copied off to a separate backup.
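Alternatively, docker cp can pull the dump straight out of the container without going through the mounted volume (the in-container path is wherever the dump was written; the destination is a placeholder):

sudo docker cp nxtclouddb:/nextclouddumpDATE.sql <backup destination>/nextclouddumpDATE.sql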

Backup complete and verified.

Migration:

Now we can look at the reinstall. Since it was a huge jump in architecture, I couldn't just pop the hard drive from the old laptop into the new computer - it needed a reinstall. I went with Ubuntu Server 20.04, ran updates, set up the firewall, installed and configured SMB, and verified SSH. At this point I could add the external hard drives with the backups.

Ubuntu Server is nice compared to Ubuntu Desktop in that it doesn't auto-mount USB drives under a /media/<user name> directory. In my opinion this is a feature, because with the server image the admin stays in control.

Update the /etc/fstab with drive labels (example):

LABEL=FourTBbackup /media2 ext4 defaults 0 0

Running sudo mount -a will mount the partition(s).

Here we can start copying data back. For the SMB share, the data lives on another external drive, so no copying was needed there. The Nextcloud instance was running off the main drive, and that needed to be copied over (note the trailing slash instead of /* in the rsync below, so hidden files such as .htaccess come along):

rsync -av --progress /media2/nxtcloud/ /home/user/nxtcloud

Once complete, I tried to bring up the Nextcloud containers. This was a bit reckless in a way, but given all the data was backed up, it wasn't too bad. It was here I needed to dive down a few rabbit holes. 

Troubleshooting:

As soon as I visited the Nextcloud site, I was greeted by an "Internal Server Error" message. At the time I assumed the database was corrupt, because I just ran off the restored database files - I didn't first import the database from the dump like a good boy. This was not the problem.
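For reference, if a fresh import had been needed, restoring the dump into the running DB container would have looked roughly like this (container and file names from the backup section above; the password is a placeholder):

sudo docker exec -i nxtclouddb mysql -u root -p'<DB root password>' nextcloud < nextclouddumpDATE.sql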

Turns out there were 3 problems:

  • The new install warranted an IP change - the allowed (trusted) domains in the Nextcloud config.php file need to be updated
  • Permissions of the restored files and directories all need to be owned by www-data:www-data - I believe this was the main reason for the 'Internal Server Error' (see the commands sketched after this list)
  • The Docker image pulled a newer version of Nextcloud, and my version couldn't be migrated straight to it. You can check the installed version in the config/config.php file.
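On my setup, fixing the first two came down to something like the following on the host (the volume path is the one from the rsync step above - adjust it for your environment):

# Give the web server user ownership of the restored Nextcloud volume
sudo chown -R www-data:www-data /home/user/nxtcloud

# Then add the new IP to the 'trusted_domains' array in the Nextcloud config
sudo nano /home/user/nxtcloud/config/config.php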

With Nextcloud in Docker, even if you mount a volume from the host system, the image will add anything it needs that is missing in the web server volume. In fact, all that was really needed were the /config, /data, /apps, /3rdparty directories. The /data directory is where all the files and other info for the various users actually live. All other files and directories can be removed from the volume mount on the host. More info here in this post.

So what was needed was to specify the older version of Nextcloud in my docker-compose.yml file (image: nextcloud:22.2), copy only the /config, /data, /apps, /3rdparty directories to the mapped volume, and then rebuild the containers.

sudo docker-compose up -d

At that point, there was still a minor update on Nextcloud (from 22.2.02 to 22.2.10) but that was fine. 

Nextcloud was restored and everything was well.

I will say Docker is pretty handy for this - one can just docker-compose up and down to rebuild containers fast and test different things.

With Nextcloud, the platform is built fairly robustly, but the mapping of files to the database and all the complex apps can make backup and restore tricky - particularly when migrating to a new system. Having all the database files, an actual DB dump, and all the Nextcloud files available is the single most important step. Backups = options.

I'm also pretty glad I made the move when I did - when I had a poor performing, but still usable system so I could verify the backups and be careful.

Hope this is helpful. I definitely learn more and more about Nextcloud every time I need to rebuild it, and hopefully this post can save others some valuable time.

Changing the Partition Layout in Linux with Encryption

 


This is another important community information piece designed to help save other people hours of time doing Google searches. 

So on my current laptop I have two partitions - the root partition /, where the operating system lives, and the /home partition. The /home partition is encrypted with LUKS, the disk-encryption option most Linux distros offer during install. However, I needed to increase the size of the root partition /. This is tedious but doable.

Steps:
Back up the whole /home directory. In Linux there are many hidden files and settings that need to come along, so I recommend using the command line with the archive flag to make sure you get it all - including the dotfiles.
sudo cp -a /home/. <back up destination>

Once that completes, shut down the computer and boot into a Live instance - typically from the same USB drive most commonly used to install Linux on bare-metal systems these days.

Once booted into the Live instance from the USB, you need to delete the existing /home partition in order to increase the size of the root partition /. This is because partitions in Linux (and most operating systems) are laid out sequentially on the underlying volume (in this case a single SSD, but it could be a RAID group, a LUN, mapped storage, etc.). Since the root partition needed to grow, the /home partition had to move out of the way.

I used the Gparted tool to do this; it can handle all of the deleting and resizing of partitions. Alternatively, the Gnome Disks utility (image at top of article) can handle this fairly basic task as well.


In Gparted select the correct drive in the upper right-hand corner, then delete the already backed up /home directory partition. 

Click on the root directory / partition, and click 'Resize / Move'. I added another 30000 MiB or around 30 GB to the partition. 

Click on the remaining space and create a new partition using all of it. This will become the new /home partition.

Open the Disks utility which should be in most mainstream distros - Ubuntu, Fedora, Red Hat, CentOS, Linux Mint, etc. 

The Disks utility has a nice option to apply the LUKS encryption I wanted, so I re-formatted the newly claimed space there. It probably would have been faster to do all of this in the Disks utility, but I felt more comfortable with Gparted for the resizing.

Disks also has an option to set the partition's mount point, which we need to do. I personally edited /etc/fstab - the table Linux uses to know what to mount where at start-up - but the Disks utility can control this as well. Changes to the mount options made in the Disks utility will also show up in /etc/fstab.
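For reference, once the new encrypted partition is set up, the /home line in /etc/fstab ends up looking something like this (the mapper name/UUID is a placeholder - Disks may write a /dev/disk/by-uuid path instead):

/dev/mapper/luks-<partition UUID> /home ext4 defaults 0 0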


Once that is done, it should be possible to mount the /home partition from the Linux Live instance running off the bootable USB and copy back all of the data from the backed-up /home directory. Maybe it's saved somewhere else on a computer, or on an external hard drive, but again I recommend running the reverse of the cp command - with the archive flag - to ensure EVERYTHING, hidden files included, is placed back on the new partition.
sudo cp -a <back up destination>/. /home
Once all of that is done there is one more step that I missed. It won't cause data loss, but I found the computer kept taking a long time to boot. In Linux if the boot sequence is taking a long time, you can press 'Esc' to see what's happening - another reason I really like Linux. Something like this:

dev-disk-by\x98756984\x984128b\x3d4657f2\x2d865f\x2d8025cb8d2d01.device: Job dev-disk-by\x98756984\x984128b\x3d4657f2\x2d865f\x2d8025cb8d2d01.device/start timed out.

That seemed weird to me, because I had already changed the mount point and removed the old /home entry from /etc/fstab. I had actually anticipated something like this - a moment of forethought that doesn't usually happen for me - and thought I was prepared.

Turns out that /etc/fstab isn't the only file that needs adjusting: if the partition was encrypted from the beginning, there is an additional file that tracks the devices that need decrypting at boot - /etc/crypttab.

Comment out the old partition that no longer exists and all is grand! 
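For example, a leftover entry for the now-deleted partition just needs a '#' in front of it (the UUID shown is a placeholder):

# luks-<old partition UUID> UUID=<old partition UUID> none luks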


So in short the steps are:
  • Back up the whole /home directory
  • Boot into a Live environment
  • Delete the BACKED UP /home directory partition
  • Resize the root directory '/' partition
  • Build a new partition for /home
  • Ensure the mount point is set, and double check /etc/fstab to ensure the old partition isn't listed. (Also ensure that you check the /etc/fstab of the SSD, not the USB environment.)
  • If the old partition was encrypted, comment or delete the line referring to the old partition in /etc/crypttab
  • Re-copy all the data from the backup to the new /home partition
  • Reboot into the OS on the SSD - everything should still work just as before but the partition sizes will have changed
I hope this can serve as a reference for anyone who needs to rearrange the partitions on a Linux device. Linux is really flexible for these tasks, but always, always, always have a separate backup, if not two.

Getting Started with Ansible


Ansible is a very popular management tool for all sorts of systems. Linux servers, Windows systems, networking equipment, and more can all be managed and provisioned en masse using Ansible.

I had to investigate some more with Ansible for work, and I wanted to share some of those findings. 

The video covers the basic install, host setup, ping, and running a sample playbook. The playbooks are really the most important tool: using a simple YAML file, administrators can perform routine provisioning or updates across dozens or even thousands of servers with a single command.

The sample below shows how to create a file on all of the managed devices, run a package manager update, and even install an application (openjdk is used as an example, but any valid package would work).
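The exact playbook is in the reference file linked below; a rough sketch along the same lines, written and run from the control node, might look like this (the marker-file path and OpenJDK package name are just illustrative):

# Quick connectivity check against everything in the local 'hosts' inventory
ansible all -i hosts -m ping

# Write out a minimal playbook and run it against all managed hosts
cat > sample-playbook.yml <<'EOF'
---
- hosts: all
  become: yes
  tasks:
    - name: Create a marker file on every managed host
      file:
        path: /tmp/ansible_marker
        state: touch

    - name: Update the apt cache and upgrade installed packages
      apt:
        update_cache: yes
        upgrade: dist

    - name: Install OpenJDK as an example application
      apt:
        name: default-jdk
        state: present
EOF

ansible-playbook -i hosts sample-playbook.yml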


The reference file is located here.

Troubleshooting:

In the making of the video, because I installed Ansible on a somewhat older host for demo purposes, version 2.5.1 wanted to use Python 2 while the clients default to Python 3. To work around this, add the line below to the group variables section of the hosts/inventory file (e.g. under [all:vars]). I haven't seen this with newer versions like 2.9, but it's something to watch for.

ansible_python_interpreter=/usr/bin/python3

Also important: the SSH host keys of the clients need to be known to the host running Ansible. That is most easily set up by connecting to each client over SSH once. If there are a lot of hosts, good key management can also allow the keys to be collected and added to the SSH known_hosts file in bulk.
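For a larger fleet, ssh-keyscan (and ssh-copy-id for the login keys) can do this in bulk - the hostnames here are placeholders:

# Collect and hash the host keys of the managed machines in one go
ssh-keyscan -H server1 server2 server3 >> ~/.ssh/known_hosts

# Push the control node's public key out for password-less logins, if needed
ssh-copy-id user@server1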

More information:

Getting started

Run a playbook command

Python3

Adding Comments to Posts - MongoDB and PHP

This article explores adding a comments section to blog or forum posts. 

Using my previous project 'OurNoteOrganizer' - which still needs a better name, but all the good ones I could think of were taken - I thought about the design and felt it needed a comment section.

Similar to, say, Reddit - though not public - teams or groups of people might want to leave comments on the initial post. The other thing I didn't like about the initial design was that anyone in the associated group could edit and change the content, with no good way in the UI to track what was changed.

Now the current version only allows the original post author to edit the post, and other group members with access to the post can leave comments.

How did I get here?

In terms of how I visualized the code structure, I was previously hung up on the idea that comments should be fields in the post itself - comment 1, comment 2, and so on. While it might be possible to do that, it gets complex if you want an unlimited number of comments per post. Adjust the concept a bit and you can instead think of comments as documents that link back to the post but are stored in a different table (in MongoDB, a different collection) called 'comments'.


Again, with MongoDB there is no need to 'prime' or 'set up' the table. From the code, just tell the DB what fields you want and it will accept the values as a BSON document.

Once you have an HTML form filled in you really only need this code to create a new collection called 'comments'. 

$col = $db->comments; // Point the query at the new collection 'comments' (this will create the collection)

// Collect data from an HTML form
$username = $_POST['author'];
$comment = $_POST['commentcontent'];
$postname = $_POST['postname'];

// Create a PHP array to insert into MongoDB
$insertcomment = ['author' => $username, 'comment' => $comment, 'postname' => $postname];

// Make the insert query / connection - the if / else just catches an error
if ($postcomment = $col->insertOne($insertcomment)) {
    echo "<h3>Comment added to $postname.</h3>";
}
else {
    echo "<h3>Error occurred while adding that comment. Please contact your IT administrator.</h3>";
}

I also wanted to create a hidden form that only appears when a user clicks 'ADD COMMENT'. Initially I thought using JavaScript to inject the form via an innerHTML call would work. It does for the action itself, but because the comment form needs to collect some hidden data (i.e. the post title, the logged-in user, etc.) from PHP variables, those variables need to exist in the PHP-rendered page before the JavaScript action takes place.

Explanation: PHP is a server-side language, meaning everything it does happens as requests to the server at page load. If the PHP variables are not rendered into the page at load time, the script won't know about them. JavaScript is a run-time in the web browser itself, which makes it good for on-page actions - like <onclick='dothisfunction()'>. However, HTML injected after the page has loaded won't contain the PHP variables we need.

It turns out that JavaScript can manipulate styling as well, applying CSS changes to specific elements identified by their 'id' attribute. So rather than injecting new HTML into the page - which causes issues with the PHP variables needed - we can simply flip the form from 'hidden' to 'visible'. This way the PHP variables are already known to the script, but the same pop-up-like visual result is achieved.

In code - HTML with PHP variables - called using echo in PHP

<button onclick='addcomment()'>ADD COMMENT</button><br><br>

<div id='AddComment' style='visibility: hidden;'>
<form method='post' action='submitcomment.php' id='insertComment'>
<textarea class='giantinput' name='commentcontent' placeholder='Add a comment'></textarea>
<input type='hidden' name='author' value='$loginuser'>
<input type='hidden' name='postname' value='$postname'>
<br><br><input type='submit' form='insertComment' value='Submit'></form>
</div>

In code - JS

function addcomment() {
    var commentform = document.getElementById('AddComment');
    
     if (commentform.style.visibility === 'hidden') {
        commentform.style.visibility = 'visible';
     }
    
     else { commentform.style.visibility = 'hidden'; }
 }

The above is a fairly basic example implementation. For better accuracy, the comment form may need to collect more than just the post title or name - for example the post author, the date, or better yet the post's object ID - to ensure that, if multiple posts share the same name, comments don't get pulled into the wrong post.

Just wanted to share a bit of code and discovery using PHP and Javascript hand-in-hand to create a useful layout. Hope it helps. 

Full source code and Docker Compose to install is located on GitHub.

 

Intro to Minio S3 Object Storage

 


Minio S3 object storage is powerful and easy to run in a Docker, LXD, Podman, or other container environment. Super easy to set up, super easy to understand, and the latest console has a ton of features that improve the intuitiveness of the platform.

The command run in the video is the same as in the previous blog:

sudo docker run -it --name=miniotest -p 9000:9000 -p 9001:9001 minio/minio server /data --console-address ":9001"

To map a volume to the container, create a directory on the local machine and add -v <localvolume>:/data to the above command. That should help keep the buckets and related files available if the container is ever removed.
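For example, with a hypothetical host directory of /home/user/miniodata created beforehand, the full command becomes:

sudo docker run -it --name=miniotest -p 9000:9000 -p 9001:9001 -v /home/user/miniodata:/data minio/minio server /data --console-address ":9001"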

Example below from the container created in this test.



More information is below:

Happy to field any questions, just let me know. 



Integrate Firefox .tar into Linux native

Ubuntu 22.04 is here, and with it comes more polish and more features. One feature/change is that Firefox is now a Snap package.

Snaps? What? 

Snaps are the way a lot of third-party software gets bundled and packaged in Ubuntu. Developed by Canonical, they are a good way to run applications in a sandboxed environment on the PC. Many, many applications whose developers are either not focused on the Linux community, or otherwise not devoting resources to building native packages for Ubuntu, Fedora, Red Hat, Debian, Arch, etc., can be installed and run on the Linux desktop by way of Snaps. Full list here: https://snapcraft.io/store

The major competitor to Snaps is Flatpak which is another great project. I may do a blog or video comparing the two in the future, but essentially both help package hundreds of applications for Linux that run in an isolated environment on the host system. Unlike Docker or Podman, these contained applications function pretty much like a standard desktop application with icons, launchers, etc. 

Now Firefox is a member of the Snap family. Great. Why the blog post?

Well, Snaps have a reputation for occasionally being somewhat slow to launch. Once running they're fine, but that initial click can take a while even with an SSD. I first noticed this with Ubuntu 18.04 on an admittedly 'netbook-like' device (a low-power laptop), when opening the most basic calculator seemed to take several seconds. It wasn't until I learned about Snap packages vs native apt/.deb packages that I found the cause. The calculator installed with 'apt install gnome-calculator' pops right up, while the Snap version takes a few seconds longer than it should.

This is true with the Firefox Snap as well. Being honest, I was kind of looking for it, because when I upgraded my laptop from Ubuntu Mate 21.10 to Ubuntu Mate 22.04, one of the things highlighted was that Firefox would be removed and replaced with a Snap. Indeed, when first using it I had lost all my settings (not too many), and it seemed slower to start. With no apt package available, I set out to install from the Firefox download.

If you download Firefox it gives you a compressed archive - technically a .tar.bz2 tarball, not a zip - that you can extract and run straight from the resulting 'firefox' directory by double-clicking the firefox-bin file. To make this better integrated into the desktop experience, do the following.

Download and unzip Firefox from the official Mozilla website: https://www.mozilla.org/en-US/firefox/new/?redirect_source=firefox-com
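As an example, assuming the archive was saved to ~/Downloads and extracting it to /opt (any location works - just keep the paths consistent with the .desktop file below):

cd ~/Downloads
tar -xjf firefox-*.tar.bz2
sudo mv firefox /opt/firefox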

Based on the location of the files create a firefox.desktop file under /usr/share/applications. *This will require 'sudo' or 'root' permissions.

Make the file:

  • Open a text editor - Text Editor, Pluma, Kwrite, Nano, VI, VIM - any will do.
  • Enter the following:
    [Desktop Entry]
    Version=1.0
    Name=Firefox
    GenericName=Internet Browser
    Type=Application
    Categories=Network;
    Exec=/<extract location>/firefox/firefox-bin
    Icon=/<extract location>/firefox/browser/chrome/icons/default/default32.png
  • Save as firefox.desktop in the /usr/share/applications directory on Ubuntu
  • Log out and log back in to confirm. 

That's it. 

Firefox will show up as a searchable application. In my case it even reappeared at the top of my favorites, just as it had been before I upgraded.

Snaps are great for ease of use and for running proprietary applications, such as games or the odd communication platform. However, for something as commonly used as a browser, I think having it run natively on the operating system leads to a speedier experience.

Hope it helps!

 

 

Fixing filesystem permissions in Flatpak


I had a problem with my Flatpak installation of Slack - I couldn't save files. 

I don't recall this being an issue before, but in my new computer with Ubuntu Mate 21.10, it was. 

Turns out the solution is fairly straightforward. 

Flatpak has the option to grant an application access to a specific file path, which it can then write to. If that path is then set as the default download folder, everything is copacetic.

Steps:

Close Slack and make sure it is not running in the background. 

Run flatpak ps to ensure com.slack.Slack is not listed.

Create a directory for storing the download files. This can be in your file manager (Nautilus, Caja, Dolphin, Thunar, etc.), or with the mkdir command.

Once created you can run the following:

flatpak override com.slack.Slack --filesystem=<your full directory path>

*This command needs to be run as root in my testing (use sudo). 
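To double-check what the application has been granted, the current overrides can be listed afterwards:

sudo flatpak override --show com.slack.Slack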

Then start the Slack application again and go to Preferences -> Advanced -> Download location. Set it the same as the directory path just mapped. 


That's it. Files should now be able to be saved to that specific directory. 

Hope it helps anyone who needs to do something similar in Slack or another Flatpak application.


Nextcloud External Storage and Apps

 

This post and video go through how to add external storage in Nextcloud and introduces the wide number of applications that can be used to tailor the functionality of Nextcloud. 

For the external storage, the example is S3 object storage from a Minio container. Minio is a fantastic project that provides locally hosted, S3-API-compatible storage. It's also nice because it is quite easy to get up and running, particularly on Docker.

The command I used to make the Minio test container is below:
sudo docker run -it --name minios3 -p 9000:9000 -p 9001:9001 minio/minio server /data

*Update: Minio's latest image (tested 4/2022) needs the console port defined explicitly, or it tries to auto-select a port, which was a bit hit or miss for me.
Revised:

sudo docker run -it --name=miniotest -p 9000:9000 -p 9001:9001 minio/minio server /data --console-address ":9001"

I did need to make some cuts in the video, which is why the mouse jumps in a couple of places. Most notably, I initially had the wrong IP address for the Minio instance. This was essentially because, in the test environment, both containers were running on the same system and couldn't connect using the host IP; external systems wouldn't have had that issue. Around minute 7:25 you can see the change from the host IP to the actual IP of the container.

In more detail:

Docker, Podman, and other container management tools assign IP addresses to each container service. When running as a group, say if using a pod and Kubernetes, or running the containers together with Docker Compose, the containers are part of a single network and can identify each other by the service name. 

In this example, the containers were created separately and, being on the same host, were unable to reach each other using the address I provided. Changing to the Minio container's specific IP was all that was needed, but this was out of scope for the video - and honestly not something that would normally come up - so I chose to omit the debugging.
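If you ever need to look up a container's address the same way, docker inspect will report it (using the container name from the command above):

sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' minios3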

Some more information and resources about both Nextcloud and Minio are below.

Nextcloud Docker

Minio

Minio Quickstart

Minio Docker

Creating a web application with a MongoDB backend and Docker


This is a bit of a POC that turned into a project.

I was playing with MongoDB to create a front end that would store data in a user-friendly way. I didn't want to take on something like WordPress, and had an idea for a note-taking application that could be on-prem and put teams or catalogs first with grouping.

I think there is quite a bit more that could be done, like adding reminders/due dates to the posts/notes and adding more management to groups so new authors can add more people to a project/group. For now this is being used to demonstrate the capabilities of PHP with MongoDB.

A killer feature of MongoDB is that it doesn't need to be primed in any way. This is what makes the Docker-based install of this app possible. In a traditional/relational database, a user needs to set up the database and table structure before data can be inserted. With NoSQL/MongoDB you can just point an app at the DB and, with the correct credentials, start adding to and editing a whole database from the service itself. This is what I do on the initial login screen: I check whether the DB and admin user exist, and if not, they get populated automatically, so a user on a completely fresh install can just start using the application.

Currently this application is far from perfect, but I hope it helps users get an idea of what can be done using PHP, HTML, CSS, and MongoDB together. I also hope it can be useful, and if anyone requires additional assistance, please feel free to leave comments on the GitHub page. I am not at the caliber of a professional open source developer, and have limited time, but I am interested in the feedback and willing to assist as I can to further promote usage of the project.

I hope this can be a useful project either for internal team collaboration or simply as reference code for other users. The code is published with the permissible Apache 2.0 license (same as Apache and MongoDB that it builds off).

Notes:

Source code is here: https://github.com/JoeMrCoffee/OurNoteOrganizer

A word on TinyMCE

TinyMCE is an external rich-text editor used to make writing and editing posts easier. It requires connectivity to external networks in order to function. For environments that do not have external Internet connectivity, a standard 'textarea' without rich formatting is used instead.

As this project is made to be hosted on any on-prem environment, there is no TinyMCE API key provided or registered. Users can create their own keys based on their domains and needs. To remove the notification about getting started, users can follow the quick steps, create their own API key, and add it to line 8 of the newpost.php file.

More information on TinyMCE and getting started:

https://www.tiny.cloud/

https://www.tiny.cloud/docs/quick-start/