What are we doing here?

This blog includes a series of videos and references to help new users and enthusiasts better understand how to use open source and free technology tools. The quick links include more information for accessing many of the tools covered, along with other references for learning more and taking advantage of these tools.

Click HERE to see the full list of topics covered!

Using Nextcloud and WebDAV as a backup target

 


This video explores the built-in WebDAV feature of Nextcloud for backing up and syncing files from a local client to a Nextcloud instance. There are a couple of reasons why users or organizations may want to use this feature.

- Nextcloud's own user management makes it very straightforward to separate different user profiles, authentication, and data quotas. This means it's simple to deploy Nextcloud and automate employee data backups on the network, with quotas of, say, 30GB or 100GB per user.

- Using the WebDAV protocol can be more efficient than uploading lots of files through the web interface (a quick example with curl follows this list).

- The Nextcloud desktop client can also perform the same task, and it uses the WebDAV protocol as well. More information about the client tool is here: https://docs.nextcloud.com/desktop/latest/
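
As a rough sketch of what the WebDAV side looks like, a single file can be uploaded to a user's Nextcloud storage with curl. The server address, user name, and file names below are just placeholders; the endpoint format (remote.php/dav/files/USERNAME/) comes from the Nextcloud WebDAV documentation.

# Create a target folder in the user's storage (WebDAV MKCOL request)
curl -u backupuser:app-password -X MKCOL "https://nextcloud.example.com/remote.php/dav/files/backupuser/Backups"

# Upload a local file into that folder (WebDAV PUT via curl -T)
curl -u backupuser:app-password -T ./important-files.tar.gz "https://nextcloud.example.com/remote.php/dav/files/backupuser/Backups/important-files.tar.gz"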

 Additional resources:

Upgrading Nextcloud in Docker

 

This is a quick post on updating and maintaining a Docker install of Nextcloud. I have had my install pretty much untouched since I first put it up about a year or two ago. The whole time it's been fine, but I heard about performance improvements in later versions and have begun the process of upgrading the service. 

Here are the steps to check for and follow.

Based on my previous restore experience, I know that going from one version of Nextcloud to the next can sometimes be version dependent. What has worked for me in managing the upgrades is to upgrade one version at a time.

To do this, first check what Nextcloud reports as the newest version. Log in to the system as the 'admin' user and go to 'Settings' -> 'Overview'. A screen like the one below should show.


Nextcloud will give a few suggestions on areas to make your instance more secure and highlight any issues (this is a private instance without HTTPS, as noted, and without a lot of the Calendar and other apps), and it will display what the next version is.

That next version will be the upgrade target for the Docker instance.

In your host, you can first pull the new Docker image of Nextcloud by running:

sudo docker image pull nextcloud:<the target version from the UI>

This will save some time in the upgrade.

Next, make sure everything is saved and no one is using the containers. Stop the containers with:

sudo docker stop <nextcloud container name> <nextcloud DB container name>

In my sample docker-compose reference the command would look like:

sudo docker stop nxtcloud nxtclouddb

Then modify the docker-compose.yml file to specify the new image just downloaded, for example 25.0.3 as shown below.

app:
  image: nextcloud:25.0.3
  container_name: nxtcloud
  restart: unless-stopped

  ports:

  .....

Once changed, bring up the containers again.

sudo docker-compose up

Here I omit the -d flag for running in the background. This offers a terminal view of the upgrade, similar to below.


Give that a few minutes, and Nextcloud should be upgraded to the new version.

To ensure the containers run in the background without an active terminal, stop the containers again and re-run sudo docker-compose up -d with the -d flag, as shown below.
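
Using the same container names from my compose file, that sequence would be:

sudo docker stop nxtcloud nxtclouddb
sudo docker-compose up -d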

Some more information about Nextcloud is available throughout the blog - just click on the Overview page or search by the Nextcloud tags on the right-hand side.

For more information about the reference docker-compose file for getting started with Nextcloud and a MariaDB instance, check out the source on GitHub.

Nextcloud External Storage and Apps

 

This post and video go through how to add external storage in Nextcloud and introduces the wide number of applications that can be used to tailor the functionality of Nextcloud. 

For the external storage, the example is S3 object storage from a Minio container. Minio is a fantastic project that allows for locally hosted, S3 API equipped storage. It's also nice because it is quite easy to get up and running, particularly on Docker.

The command I used to make the Minio test container is below:
sudo docker run -it --name minios3 -p 9000:9000 -p 9001:9001 minio/minio server /data

*Update: Minio's latest image (tested 4/2022) needs the console port defined, or it tries to auto-select a port, which was a bit hit or miss for me.
Revised:

sudo docker run -it --name=miniotest -p 9000:9000 -p 9001:9001 minio/minio server /data --console-address ":9001"

I did need to make some cuts in the video, which is why the mouse jumps in a couple of places. Most notably, I initially had the wrong IP address for the Minio instance. This was essentially because in the test environment both containers were running on the same system and couldn't connect using the host IP; external systems wouldn't have had that issue. Around minute 7:25 the change shows the move from the host IP to the actual IP of the container.

In more detail:

Docker, Podman, and other container management tools assign IP addresses to each container service. When running as a group, say if using a pod and Kubernetes, or running the containers together with Docker Compose, the containers are part of a single network and can identify each other by the service name. 

In this example, the containers were created separately and, being on the same host, were unable to reach each other using the address I provided. Changing to the Minio-specific IP was all that was needed, but this was out of scope for the video - and honestly not something that would normally come up - so I chose to omit the debugging.
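
If you do run into something similar, one quick way to find a container's own IP address (using the minios3 container from the command above) is:

sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' minios3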

Some more information and resources about both Nextcloud and Minio are below.

Nextcloud Docker

Minio

Minio Quickstart

Minio Docker

Recover a broken NextCloud

This is not intended to be a definitive guide. I am just sharing my experience after crashing my NextCloud container. 

I performed an upgrade on my "NAS" (Ubuntu server on an old laptop) by adding a second, larger external USB drive. This is not best practice, but works for me as a file backup. 

In adding the drive, I was trying to be careful, but obviously I wasn't careful enough. I changed the fstab file (this is how Linux knows which drives and filesystems to mount) to include the new drive. That generally worked fine, but I was mounting using the /dev/sda and /dev/sdb names for the drives. When Linux boots, it checks the storage driver for what hardware is present and assigns each device a /dev/ name based on its type (NVMe drives show up as /dev/nvme####, SATA drives usually show up as /dev/sdX#). Because Linux assigns those /dev names at boot, when I rebooted with the new external drive attached the /dev assignments changed. SURPRISE!!! That meant 2 out of 3 of my fstab entries were wrong - they tried to mount backwards.

So that wasn't good. My root drive was mounted using its UUID, which is better and should be used. For the other 2 external drives, I changed to using the drive label, which can be set when initially formatting the drives. Something like the below:

LABEL=usbbackup /media ntfs defaults 0 0
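
To find the UUIDs and labels for the drives when writing the fstab entries, either of these commands will list them:

  sudo blkid
  lsblk -o NAME,FSTYPE,LABEL,UUID,MOUNTPOINT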

This was fine for getting everything straightened out. However, in the process of the drives mounting in the wrong order, my NextCloud and MariaDB containers got out of whack. I couldn't log in and just had an "Internal server error" message. 

The help pages mention that one can use maintenance mode and the occ tools to help fix something similar. Unfortunately, the Docker install doesn't have a lot of tools, like 'sudo', to help test. Fortunately, all the data for both containers was still 100% intact, as it was mounted using the volume parameters from the docker-compose file. That means, in theory, you can just nuke the containers, rebuild new containers with the same volume points, and get back in business.
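
As an aside, occ can still be run in the official NextCloud image by exec'ing in as the web server user. A quick sketch, using the container name from my compose file, for toggling maintenance mode:

  sudo docker exec -u www-data nxtcloud php occ maintenance:mode --on
  sudo docker exec -u www-data nxtcloud php occ maintenance:mode --off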

At this point, I want to stop and say that while this is possible and worked for me, be very very very careful that the data is still intact for both the database and NextCloud instances. If anything beyond the defaults in the docker-compose file was changed - i.e. additional packages or other things were added in the container - the next steps will wipe all of that. 

Steps I used:

Stop the containers:

  sudo docker stop nxtcloud nxtclouddb 

Note: Container names are defined as such in my version of the docker-compose file. Running sudo docker ps -a will show the current names of all the containers.

Back up the container data directories to somewhere safe, and leave the data intact in the original location (a rough example follows the note below).

Note: If you can't get the containers restarted and need to truly nuke and pave, all the files in NextCloud are stored under the NextCloud data directory. In the container this is /var/www/html/data/; on the local volume it is under <NextCloud install directory>/data/data/ on the host system when using my docker-compose file.
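
For the backup itself, something like the below works, run from the directory holding the docker-compose file. The directory names follow my compose file (data and datadb), and the destination path is just a placeholder for wherever your safe location is:

  sudo cp -a ./data ./datadb /path/to/safe/backup/location/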

Delete the containers:
  sudo docker rm nxtcloud nxtclouddb

Download the latest version of the NextCloud image:
 sudo docker image pull nextcloud:latest

Update the docker-compose file to specify the MariaDB version as 10.5:
 image: mariadb:10.5

This step was done because on my first go, NextCloud had issues with the latest version of MariaDB (version 10.6 I believe); the docker-compose file pulled the latest image version by default.

Recreate and run the containers. From the directory of the docker-compose and data directories run:
 sudo docker-compose up -d

Once the container names appear, verify they are running correctly:
 sudo docker ps

Now open the NextCloud page in a browser. In my case it asked to update the NextCloud instance. This triggered everything to get sorted out in NextCloud, and I could log in and everything was fine. Users could log in, data was intact, and things were copacetic.

The updated version of my NextCloud docker-compose file, reflecting the change to MariaDB version 10.5, is HERE.

Moral of the story: always stop running services on a server, especially the Docker containers, BEFORE making a hardware change to the system. No matter how simple it may seem, put the host system in maintenance mode first.

More information on /etc/fstab configs is available here:
  https://linuxhint.com/write-edit-etc-fstab/
  https://help.ubuntu.com/community/Fstab

Hopefully this helps demonstrate how volumes can be used to keep container data persistent, along with the flexibility of working with containers. I was lucky that the upgrade was able to trigger a re-sync of the NextCloud instance and the database. Some more information about doing this manually is in the NextCloud Doc page.
  https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/occ_command.html



Nextcloud in Docker!




So this is something I've been debating for a while - stick with a pure SMB share or try setting up Nextcloud.

Nextcloud is essentially a local, open source, on-prem Dropbox with an ever growing feature set. It trumps SMB (a local Windows or Linux Samba file share) in many regards, with things like apps for iOS / Android, calendar sync, email sync, and more.

It's far more complicated to set up than SMB or a generic NAS share in many respects, but it is honestly easier for the end user - simply go to the local site or IP on the LAN, enter credentials, and begin using it.

I set this up using Docker and I wanted to share the experience. 

Using the official Docker Hub information was fine - it essentially lets you test something easily with a single command.

$ docker run -d -p 8080:80 nextcloud

After running that in Docker (you will likely need sudo for permissions when running on Linux), you can just log in using your browser at <server / computer IP>:8080.

That works, but isn't ideal. Nextcloud will immediately tell you to change the database from the default SQLite to another, better or more comprehensive, database.

For better use there is the docker-compose suggestion on the Docker Hub Nextcloud page. I followed and tweaked that. It works really well, but there are some things to watch for.

Modified docker-compose

    *Update* This is also now on GitHub: NextCloudDockerCompose

My version of the docker-compose file changes the following: 

  • Updates the docker-compose file version to 3,
  • Adjusts the way persistent volumes are defined, using a local file folder instead of the generic persistent volume in the reference,
  • Changes the restart preference to 'unless-stopped', so that if you do need to stop a container it won't keep restarting (it should still restart on a device/VM reboot),
  • Adds names to both the MariaDB and Nextcloud containers. That last bit makes it a bit easier to manage when you want to restart or interact with a container.
Save the file in a folder as docker-compose.yml, adjust the passwords and directories/folders to your needs, then run docker-compose up -d and it should come up. A rough sketch of the file is below.
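
This is roughly what the tweaked file looks like; the passwords and local folders here are placeholders, so check the GitHub link above for the actual file and adjust to your environment.

version: '3'

services:

  db:
    image: mariadb
    container_name: nxtclouddb
    restart: unless-stopped
    volumes:
      - ./datadb:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=changeme
      - MYSQL_PASSWORD=changeme
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  app:
    image: nextcloud
    container_name: nxtcloud
    restart: unless-stopped
    ports:
      - 8080:80
    volumes:
      - ./data:/var/www/html
    environment:
      - MYSQL_PASSWORD=changeme
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
    depends_on:
      - db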

When I initially started this it worked well, but on first login I ran into an issue due to the way MariaDB was handling the input from Nextcloud. This might just be my luck or the age of the Nextcloud Docker container post. I found this workaround which solved the issue:

Wayne's garden - essentially running the below command in MariaDB (a MySQL-compatible DB) solves the issue.

Enter command line in the MariaDB container:

sudo docker exec -it nxtclouddb bash

Then enter the MySQL instance (MariaDB still uses MySQL commands)

mysql -u root -p

Enter the MySQL root password set in the docker-compose file.

Then run this in the MariaDB server:

SET GLOBAL innodb_read_only_compressed=OFF;

This is also linked in this Reddit post (Wayne links to this as well).

That essentially worked for me, so my docker-compose file was set, and with that additional work on the MariaDB container I was able to get to work. So far, my wife and her iPad haven't complained (that I know of) when backing up files.

The other nice thing about defining a specific data store (volume) for the container is that you can back it up with more ease. What I did on the system was run a simple cp -u command to back up the data defined in the volume parameters (the parts in the <> brackets under volumes).
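
As a rough example, something like the below; the destination path is just a placeholder for wherever a backup drive is mounted:

sudo cp -ru ./data /path/to/backup/drive/nextcloud-data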

Feel free to add your own passwords and local volumes. The docker-compose file should bring up a Nextcloud instance without issues beyond the MariaDB issue I just highlighted. I'll try to do a more detailed video on the bring-up and advantages of Nextcloud in the near future after some more use.

Thanks for reading, hope it helps, stay safe, and take care.

*Update and additional notes*

Since creating the video I tried to reproduce these steps on Alpine, Linux Mint, and Fedora Silverblue (the latter using podman-compose, not docker-compose - see the update below), and all failed, at least per the above steps. If you are using the straight-up docker run -d -p 8080:80 nextcloud, that should work on all of them, but docker-compose seems a bit more finicky, so Ubuntu worked better out of the box.

Also of note, I did run into some issues with Docker getting installed as a Snap on Ubuntu (this happened because I selected the Docker option during the installation of the Ubuntu Server 20.04 VM, while docker-compose is not a Snap). That caused me issues after removing the containers manually using docker rm <container name> instead of first running docker-compose down. In my experience it's best to install both docker and docker-compose using the recommended setup steps from Docker themselves.

All quite technical, but if you are following these steps and want the simplest way to get up and running with Nextcloud, just use Ubuntu 20.04 + Docker + docker-compose. With the above scripts tweaked to your environment, all should work fine.

*Update on Fedora and Podman-compose*  

I persisted a bit longer with Fedora Silverblue as I really like Podman - it is easy to use out of the box and can run without root access. Over a weekend I poked and prodded a bit more, which made me realize I had made the same mistake twice now: I forgot about SELinux. Fedora, unlike Ubuntu, has SELinux enabled by default. In order to mount a volume in either Podman or Docker, one must relabel the volume for SELinux if SELinux is running on the system. It's a simple :z on the volume definition, but I had totally overlooked that point.
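
In the compose file this just means appending :z to the volume mapping, for example:

  volumes:
    - ./data:/var/www/html:z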

Once I added that, it worked fine - the same workaround for MariaDB as above was needed (I picked the Docker repositories for the mariadb and nextcloud container images) - but otherwise it was essentially the same in my limited testing.

I updated the config file for users who may run into SELinux or want SELinux-equipped systems.

Modified docker-compose for Nextcloud on SELinux systems

Once that is saved as a docker-compose.yml file in your directory of choice, and the appropriate data and datadb directories are in place, just run podman-compose up -d, similar to Docker.

Podman-compose works slightly differently in the sense that it creates a pod (a group of containers, as used in Kubernetes (k8s)) rather than simply a new network for the containers. However, it works fine in this instance and presumably many others - here is an example with WordPress.

For stability and ease of use, I'd probably still recommend an LTS (long-term support) version of Ubuntu, but I wanted to highlight that podman-compose is just as capable, and honestly the promise of Silverblue also makes it an interesting platform for container workloads like Nextcloud.

Thanks again.