What are we doing here?

This blog includes a series of videos and references to help new users and enthusiasts better understand how to use free and open source technology tools. The quick links section includes more information for accessing many of the tools covered, along with other references for learning more and taking fuller advantage of them.

Click HERE to see the full list of topics covered!

BASH! (Bourne Again SHell)

 

BASH and the Linux terminal are essential tools in Linux, and BASH can also be a very powerful programming language. As shown, it lets a user move through the operating system; create, edit, and delete files and directories; and take full control over permissions.

When it comes to scripting, probably the most powerful thing about BASH is the ability to interact with and leverage existing Linux commands, grabbing the appropriate input and output. The timeconverter.sh script demonstrates this by calling the default 'date' command to establish where the user is (or rather, what time zone the system is set to) and then using that as the reference point to calculate other times around the globe.
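This isn't the full script, just a minimal sketch of the underlying idea: setting the TZ environment variable lets 'date' print the same moment in another time zone, so a short loop covers several zones at once (the zone list here is only an example).

#!/bin/bash
# Example only - print the current time in a few zones using the system's date command.
for zone in UTC America/New_York Europe/Berlin Asia/Tokyo; do
    printf '%-20s %s\n' "$zone" "$(TZ=$zone date '+%a %d %b %Y %H:%M')"
done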

What I like about this script - some self-pride showing through, sorry - is that it's very fast to run. If you need to schedule a meeting for 4 pm your local time and want to see what that would be in other parts of the world, you can run through a slew of possible times in no time at all!

A few corrections:
Please excuse some typos and little goofs in the video, like having the extra '/' at the end of #!/bin/bash, and hunting for the /usr/bin/ directory. Hopefully showing the fixes was equally helpful :). The script also doesn't necessarily need the .sh at the end. That is just a habit since many shell scripts have it, but one could simply name the file timeconverter or magictimes or whatever. I should add that it is more common for users to put scripts in /usr/local/bin rather than /usr/bin, though as shown either works.
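For reference, a typical way to install the script as a command looks something like the below (file names match this example; adjust to taste):

chmod +x timeconverter.sh
sudo cp timeconverter.sh /usr/local/bin/timeconverter
timeconverter    # now runs from anywhere, since /usr/local/bin is on the PATH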

Tip: run cat /etc/environment to see what PATH points to. PATH in Unix/BSD/Linux defines the directories where commands live, so they can be run without having to know the full path to each command file.

If anyone notices any bugs - the trickiest part is the daylight saving / standard time section - please just leave a comment and I can update.

------------------------------------------

timeconverter:

Source code of the script

--------------------------------------------

Additional reference:

BASH shell

Linux Permissions

The Linux Command Line

Scale-out file systems - GlusterFS

Happened to be working on this and thought I'd create a write up. 

Scale-out file systems are essentially storage areas that are synchronized between different computers (nodes) so that each retains the same data. There are many kinds of scale-out file systems - both open source and closed source - and for this tutorial we'll focus on one called GlusterFS.

GlusterFS is an open source project maintained by Red Hat. For storage, it allows files to be written and read via multiple devices, and all files written to a GlusterFS share are identical from client to client.

What does that mean? It means, for example, that if I want my data to remain available even when a single computer or server in my group fails, GlusterFS makes that possible. It can also improve performance in certain areas, because files and data are served from multiple hosts (PCs/servers) rather than a single one.

The steps below explain how to create a GlusterFS volume between 2 hosts (servers) and then mount the file systems using the GlusterFS client so that data between the hosts stays synchronized. This has numerous applications for IT deployments across single or multiple sites, but GlusterFS can also be deployed in what are known as N+1 or N+N dispersed arrangements, whereby the storage doesn't just replicate or mirror data but also expands the total capacity. For example, N+1 with 4 hosts would have 3x the total capacity of any single host. So if you had 10 TB on each host (server), with GlusterFS your usable capacity would be ~30 TB minus the file system overhead, and one host could crash at any time without losing data.

This might be a little advanced, but I figured I'd share my progress with this as it could be handy.

Getting Gluster running on 2 hosts - tested in Fedora 33 Server


#1

First, either disable the firewall or allow it to pass the Gluster traffic.

Simple: $sudo service firewalld stop

Complex: $sudo firewall-cmd --zone=FedoraServer --permanent --add-port=24007/tcp --add-port=24008/tcp --add-port=24009/tcp --add-port=49152/tcp --add-port=49153/tcp
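If you go the permanent-rule route, keep in mind that --permanent only writes the configuration; reload the firewall (or re-run the command without --permanent) for the rules to take effect immediately:

$sudo firewall-cmd --reload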


#2

IP addresses don't play nice with Gluster - add host names to each node's hosts file or have them set up in your DNS.

/etc/hosts:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.38 fedoraS1
192.168.122.93 fedoraS2


#3

Install glusterfs, glusterfs-fuse, and glusterfs-server.

Gluster geo-replication may also be required depending on the use case, but geo-replication is beyond the scope of this tutorial.
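On Fedora this is typically a single dnf command along these lines (package names are my best recollection of the Fedora naming, so double-check with dnf search if they don't resolve):

$sudo dnf install glusterfs glusterfs-fuse glusterfs-server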


#4

Start glusterd 

$sudo service glusterd start

Note: to enable it run $sudo systemctl enable glusterd <-- this allows for it to run on boot.

Another thing to pay attention to is that there is also a service called glusterfsd, which handles the brick processes; glusterd is the management service that runs the volume.


#5

As root or with sudo, peer probe from one host to the other.

$sudo gluster peer probe fedoraS2 <-- has to be the host name not the IP as that can screw up


#6

As root or sudo, create a volume

$sudo gluster volume create voltest replica 2 fedoraS1:/home/joe/glusterdata fedoraS2:/home/joe/glusterdata force 

Note: force seems to be needed when running as root.


#7

Need to start the volume. 

$sudo gluster volume start voltest
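To double-check that the peers and the volume look healthy before moving on, these status commands are handy:

$sudo gluster peer status
$sudo gluster volume info voltest
$sudo gluster volume status voltest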


#8

That's all fine and dandy, but files still won't transfer if you write directly into that data directory; the volume needs to be mounted as a GlusterFS-aware file system.

On both hosts run 

[joe@fedoraS1 glustermnt]$ sudo mount -t glusterfs fedoraS1:/voltest glustermnt

[joe@fedoraS2 glustermnt]$ sudo mount -t glusterfs fedoraS2:/voltest glustermnt

Note that both mount from themselves, the opposite of what I would have thought, but I guess it makes sense from a latency perspective.
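If you want the mount to survive a reboot, an /etc/fstab entry along these lines should do it (paths match this example; _netdev tells the system to wait for networking before mounting):

fedoraS1:/voltest  /home/joe/glustermnt  glusterfs  defaults,_netdev  0 0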


#9

At this point it should all work - what shows in glustermnt on one host is replicated to the other.

[joe@fedoraS1 glustermnt]$ df
Filesystem                     1K-blocks    Used Available Use% Mounted on
devtmpfs                          477484       0    477484   0% /dev
tmpfs                             498348       0    498348   0% /dev/shm
tmpfs                             199340     996    198344   1% /run
/dev/mapper/fedora_fedora-root   9422848 2452612   6970236  27% /
tmpfs                             498352       0    498352   0% /tmp
/dev/vda1                        1038336  253940    784396  25% /boot
tmpfs                              99668       0     99668   0% /run/user/1000
fedoraS1:/voltest               18845696 4909272  13936424  27% /home/joe/glustermnt


Additional reference


User login and logout using PHP - Part 3 of 3 (still maybe 4)

 

This post and video run through how to create some logic for searching through a user table in a database, validating the given login credentials, and displaying different results based on the outcome.

Key concepts covered are PHP's session_start() function, which essentially allows the webpage to store data (cookies) about a user, using a counter to help check the login, and then the session_destroy() function to log out.

This bit of code was actually something I thought up, but previous references helped walk me through it. The Joy of PHP by Alan Forbes again briefly talks about how, conceptually, to handle some of this, and there are slews of examples on sites like Stack Overflow and W3Schools for dealing with sessions and variables.

The bit of JavaScript code to automatically refresh the page on first load came from this Stack Overflow post. The site is a wealth of knowledge, with contributors from all over the world helping to solve each other's coding roadblocks. It's something anyone who even just wants to start learning will find useful, and Google results often reference it.

Note: as mentioned in the first post, this tutorial doesn't include a security portion, but if you are considering a live website, SSL/TLS will be critical before doing anything like a user login involving sensitive information. At the very least purchase a valid certificate, create your own if it's an internal site/system, or run Let's Encrypt on the site.

I hope all of this will be helpful, and provide a good starting place for future development and learning.

Yet another LAMP stack build - Podman!

 

No video this time, just steps. 

After getting tired of LinuxMint (the update from 19.3 to 20.04 didn't work right), I switched back to Fedora. Originally I had switched from Fedora to LinuxMint to try it out, thinking it would be more stable. Thus far... nope, but it's my fault for buying a laptop with an AMD CPU. Ubuntu would likely be the better, more stable desktop choice, but I prefer flatpak to snapd, and my desktop already runs Ubuntu. Ubuntu is solid; it's just, why stick to only one distro?

I digress. 

This blog post is focused on running a LAMP stack with Podman. I mentioned it briefly in the Intro to Virtualization and Containers video, but essentially Podman is the more open source version of Docker-CE and works almost - key word there - like Docker. However, it doesn't officially have a feature like Docker Compose to build a whole set of containers from a YAML file - they need to be built and connected manually...

The nice thing about Podman is that it supports the latest cgroups (v2) and is preinstalled on Fedora (some more background). Fedora 32, by default, doesn't actually support Docker because Docker does not yet support the new cgroups. So Podman is essentially more up to date, potentially more secure, and much more of a headache to set up.

Let's get into it.

First we need to create a network for the Podman containers - the default network doesn't seem to have DNS, as I understand it.

sudo podman network create <-- Likely better to give a name at the end; just type the name after the command, as I understand it.

sudo podman network ls <-- Find the name; in the examples below it's cni-podman1, which is the default name when none is given.
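For example, creating and then checking a named network could look like this (lampnet is just a placeholder name; the commands later in this post use the default cni-podman1 instead):

sudo podman network create lampnet
sudo podman network ls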

MySQL container

Run sudo podman run --name=mysqltest -p=3306:3306 -d --network=cni-podman1 -e MYSQL_ROOT_PASSWORD=mysqltest -e lower_case_table_names=1 mysql:5.7

Note: The run command is very similar in Docker and Podman. We give the container a name, map ports <host>:<container>, use -d to detach so the container doesn't eat your terminal, define networking, and then pass -e environment variables like those defined in a YAML file. At the end we name the image to pull from; if not already installed this will automatically pull from docker.io, though Podman searches the Red Hat and CentOS image hubs first.

PHPMYADMIN container

Run sudo podman run --name=PHPMAtest -p=8081:80 --network=cni-podman1 -e MYSQL_ROOT_PASSWORD=mysqltest -e PMA_HOST=mysqltest -e PMA_PORT=3306 -d phpmyadmin/phpmyadmin:latest

Open the browser and go to localhost:8081. When phpMyAdmin appears, use the username root and the password as defined, in this case mysqltest. This should work if your Podman networking is set up properly and there are no awkward firewall rules.

PHP / Apache image

Create a directory Podman-LAMP and a sub-directory src under it (any name for either directory will do so long as your commands are consistent). Under Podman-LAMP create a Dockerfile with the below information.

FROM php:7.0-apache
RUN docker-php-ext-install mysqli
EXPOSE 80

Like Docker, Podman recognizes a Dockerfile and will build the container image based on the supplied parameters. We build our own image rather than just running the stock one because we want to install the mysqli extension and expose port 80 (port 443 may also be needed if you require HTTPS).

Run chmod -R 755 src to allow the Apache server access to files in the src directory.

To build the new image run sudo podman build ./Podman-LAMP --tag apachephppod. The --tag option gives the image a name.

Run the PHP / Apache container

Run sudo podman run --name=APpod -v ./Podman-LAMP/src:/var/www/html:z -p 80:80 --network=cni-podman1 -d apachephppod  

**The :z in the mount command is important for SELinux issues. Docker hasn't given me those issues in my experience, which could also be related to the cgroups versions I've used. That said, other users have mentioned needing :z with Docker and SELinux too, so it could be that my previous Docker setups on Fedora simply hadn't enabled the latest SELinux functionality.

Once all of that is set up, you can connect to the database using mysqli by setting the MySQL host name to the name of the MySQL container - mysqltest in this example. The container name is used in place of the IP of the local host or Podman network, though an IP might work as well.

Example: $mysqli = new mysqli('mysqltest', 'root', 'password', 'joescoffee');

Where: (<host address/host name>, <user>, <password>, <database>)


With everything created, to stop/start the containers just follow the below commands. Information will be retained because the containers have been named, and the Apache/PHP container has a mapped volume. If you run the run command again it will throw an error since the container with that name exists, and if you keep running run with new names it just keeps adding containers.

sudo podman container start mysqltest APpod PHPMAtest

sudo podman container stop mysqltest APpod PHPMAtest


This was a good exercise for me. Learning how to connect separate containers together helped reinforce a lot of my understanding of the technology, both Docker and Podman. The SELinux requirement for :z threw me the most, but thankfully poking around Google and forum threads pointed me in the right direction. I hope this blog post can help save you the hours I spent getting it working.

A quick JS interlude

 


This is a quick - very quick by my standards - video on a short script that interacts with a webpage to provide real-time updates to users. The script essentially just pulls in variables and outputs the product (multiplying price * amount) so that a user's total order price changes as they select a different amount.

Most of what I know of JavaScript - which is honestly not a lot - is from the excellent tutorials at W3schools. I've not spent as much time with other elements of how JavaScript interacts with databases and helps create more dynamic websites, but I am still learning as I go. 

A couple more sites and references which would be good for future reading and understanding are below. Several are beyond what I've spent time with having focused mostly on PHP and Linux the past couple of years, but I know that they are powerful tools and hope the references can give those interested some more direction for future areas to study to improve their web development skills.

Angular - a TypeScript- and JavaScript-based (AngularJS) web development platform, led by Google's Angular team.

jQuery - a JavaScript library offering more advanced functionality; it also includes some potential for DB interaction, though that's less of a focus than PHP/mysqli.

w3schools Web Development Roadmap - a great overview for all the various building blocks that go into web development. 

There are a ton of things that go into building a strong website. This is exactly why so many businesses choose to pay to have things hosted on Amazon or set up a site with Shopify or Wix. I personally don't like default templates and structures for any site I want to create, as the templates are often extremely complex or only partially editable at the HTML/JavaScript/PHP level. That said, things like security - SSL/TLS encryption and secure payment methods - plus servers for hosting and a good CDN for copying and delivering content the world round, are costly. Most platforms designed to host a site for you take care of a lot of that back end, and any creator would need to weigh those costs against the flexibility of developing a site completely from scratch.

Hopefully these tutorials can help those looking to just gain a basic understanding of what goes into web design.


PHP + CSS + HTML overview - Post 2 of 3 (maybe 4)



This post goes over how to take the 'mysqli' functionality we covered in the previous PizzaOrder site and create a more usable flow in a nicer format. The new site adds an order confirmation page and website management pages for internal users to add products and review new orders.

User login and control is coming in a separate video.

While this particular code is largely my own, I also wanted to link back again to The Joy of PHP, and to call out the W3Schools website, which has a ton of info on creating and managing full-featured websites. Their site is linked in the right-hand column of the blog as well; it is a great source of know-how for getting started with website creation and optimization.

One other note: W3Schools' explanation of the enctype attribute for uploading a file to PHP is worth a read - https://www.w3schools.com/tags/att_form_enctype.asp

I hope this is helpful, thanks again.

Intro to Web design using HTML, CSS and PHP - 1 of 3



This tutorial covers the basics of using HTML and PHP to interact with a user, collect data into an HTML form, capture the data, then submit to an existing database. 

Primarily we cover HTML <form></form>, PHP mysqli, PHP echo, and the basic formatting of an HTML table. 

I designed this to be very simple on purpose, to act as a base. In the next two or three videos I hope to explore more about formatting and creating a more useful flow of user data, plus a shopping cart for open orders. In many ways those pieces just build on what is already shown in this video, but they help demonstrate how powerful mysqli integration with HTML, CSS, and PHP is for creating dynamic websites.

Of course there are a multitude of other features and optimizations that can be done and included. I hope this can be a good foundation for new users or those interested to take the first steps - like I am - into web and software design.

I also wanted to reference The Joy of PHP by Alan Forbes again as his book taught me what little I know of PHP. The examples I'm showing are just pieces of code and know-how that his more in-depth book details, and I found it really helpful and easy to apply.

Finally, I can't emphasize enough that the current site is NOT secure. We built our LAMP stacks without an SSL (Secure Sockets Layer) certificate, so that would need to be added to the Apache server/service on any server facing the internet. More information can be found on various sites, and currently one of the best open SSL implementations is Let's Encrypt. It does require some access to the Apache server to set up, but it automates the SSL certificate re-registration/authentication, so it is quite nice for most sites. As an admin it will be up to your team to pick the best solution, but I wanted to ensure that while the previous LAMP stacks are both viable test environments, a production environment (hosting a site on the Internet) should have both SSL certificates and strong firewall protection.

LAMP stack in Docker



This video goes over setting up a LAMP (Linux + Apache + MySQL + PHP) stack in Docker using Docker Compose. 

The video doesn't cover the installation of Docker, but here are the related links. 

Docker-Compose Linux install - Once Docker is set up on Ubuntu, just type 'sudo apt install docker-compose'

Generally it's easier to set up the LAMP stack - or most stacks - for development using Docker, since pre-configured images can be pulled from Docker Hub quickly without having to know and then configure all the packages. That said, our Alpine VM did end up taking less space, and took roughly the same amount of time to set up - partly my being too talkative, partly just download speed.

Overall, Docker is a great tool. Its ability to leverage and re-purpose existing images to create new containers, and its relatively easy install on Windows and macOS, make it a go-to choice for developers. The bind mount shown in the video - ./LAMPcontained:/var/www/html - is amazing: a VM would need the user to upload files using FTP or stand up an SMB service just to push changes into the VM, whereas Docker lets changes on the host appear in the container through that one option. For these reasons, and the relative ease versus setting up a VM environment, Docker is a great choice for developers, and it is also highly robust in a production / server environment.
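As a quick usage reminder, once the docker-compose.yml from the video is in place, the whole stack comes up and down from the directory containing the file with:

sudo docker-compose up -d
sudo docker-compose down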

I also want to highlight that the original docker-compose.yml file is a modified version of one I found in the book The Joy of PHP by Alan Forbes, which is a great set of example code for getting started using PHP and HTML to interact with databases and create interactive websites. I used it to learn PHP and found it very helpful.
Dockerfile template <- note that the link displays Dockerfile.txt to better show the contents, but Dockerfile should not have an extension.

One other note: running LinuxMint, I've had to manually add the additional Docker repository in the /etc/apt/sources.list.d/additional-repositories.list file, so please be mindful of that if you are a LinuxMint user. Poking around forums it seems this can be finicky, so if you can't find the Docker packages after running apt-add-repository and then apt update, check that file to make sure the entry is there and correct for your build of the system. It seems this could happen on any Linux distro, but I ran into it on LinuxMint and did not on Ubuntu.
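For reference, the entry in that file should look roughly like the line below. LinuxMint uses the codename of the Ubuntu release it is based on (focal for Mint 20.x is my assumption here), so adjust to your base:

deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable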

Hope this was helpful, more to come. Thank you as always.

LAMP stack setup on Alpine Linux



This video covers setting up a LAMP stack (Linux+Apache+MySQL(Maria)+PHP) in Alpine Linux. Alpine is a super lightweight distro popular with developers using Docker, but also a valid choice for VM work or standalone implementation. 

This video walks through the process of setting up the various components of LAMP, though it omits the Alpine install (it takes minutes if you know what you want and what you're doing - select sys). Overall I'm quite impressed with Alpine. There would certainly be conveniences to just running Ubuntu Server, but Alpine does the job with a fraction of the CPU/memory/capacity/cruft, so if you can figure out how it can work for you, I think it's worth a look.

Here are the steps shown in the video:
Alpine LAMP deployment

apk add apache2

apk add mariadb mariadb-client

rc-service apache2 start

Check the homepage at the machine's IP, confirm you see 'It works'

Source files are located in /var/www/localhost/htdocs

Ensure repositories are added - /etc/apk/repositories  <- uncomment community and edge

apk add php7 php7-mysqli phpmyadmin php7-apache2

service apache2 restart

Create php test doc - nano phpinfo.php
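The test doc only needs a single phpinfo() call; for example, writing it straight into the web root (path as shown earlier in this post):

echo '<?php phpinfo(); ?>' > /var/www/localhost/htdocs/phpinfo.php

Browsing to the server's IP /phpinfo.php should then show the PHP info page.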

service mariadb start

Setup Maria

An error message appears; you first need to run /etc/init.d/mariadb setup

Re-run service mariadb start

Then we change the password for mariadb root user (this gets created on the system by the mariadb-client).
Run mysql_secure_installation

Walk through the prompts to set a root user password

Setup phpmyadmin

chmod -R 777 /usr/share/webapps/phpmyadmin
chmod 755 /etc/phpmyadmin/config.inc.php
ln -s /usr/share/webapps/phpmyadmin/ /var/www/localhost/htdocs/phpmyadmin


Setup phpmyadmin user in MariaDB
mysql -u root -p
 - enter password

>CREATE USER 'pmauser'@'%' IDENTIFIED BY '<password of choice>';
>GRANT ALL PRIVILEGES ON *.* TO 'pmauser'@'%' WITH GRANT OPTION;
>FLUSH PRIVILEGES;

Open the page at <server IP>/phpmyadmin.

Enter the pmauser and given password.

Now phpmyadmin can be used to manage MariaDB and create new databases.

<missed this in the video>
To ensure services start on boot need to run 
rc-update add <service name>

That last step ensures apache2 and mariadb are started if the system reboots - an 'auto-restart' or 'start on boot' behavior.
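For this stack, that means:

rc-update add apache2
rc-update add mariadb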

References I found helpful for setting this up are below:

I do understand that this stack isn't necessarily the best in terms of performance or security; it's simply the 'bare minimum' to start developing. For optimization I'd suggest reading more on general Apache or NGINX settings, as well as looking at additional third-party sources about the various ways to integrate PHP with a web server.

Setting up a Samba (SMB / CIFS) share in Ubuntu



This video runs through the steps for setting up an SMB share in Ubuntu. All steps shown are based on the Ubuntu tutorial below, but I wanted to explain some of them - like adding a user who can act as the dedicated SMB login account. A condensed version of the commands follows the link.
https://ubuntu.com/tutorials/install-and-configure-samba#1-overview
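Condensed, the process looks something like this (the user name smbuser and the edited paths are just examples; the Ubuntu tutorial above has the full details, including the [share] section to add to smb.conf):

sudo apt install samba
sudo adduser smbuser                  # dedicated account for SMB logins
sudo smbpasswd -a smbuser             # set that account's SMB password
sudo nano /etc/samba/smb.conf         # add a share section pointing at the directory to share
sudo systemctl restart smbd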

The process of using the command line to interact with a remote system is very important for more advanced Linux usage. We spend a lot of time in the command line over SSH, which can be set up when first installing Ubuntu Server. Of course, with Linux everything can be done, undone, and redone from the command line, so it's important to get your feet wet if you really want to delve into Linux.

Previously, I ran through similar steps using FreeNAS, which has a useful GUI for setting up the share. FreeNAS with its ZFS file system is definitely useful for robust data protection, which may or may not apply depending on how the underlying Ubuntu install is set up. FreeNAS is also very useful for advanced shares like NFS, iSCSI, S3, AFP, and SMB; essentially it saves users from having to install services and config files for each type of share.

Where this is useful is in creating file shares on systems not designed for FreeNAS - for example in a virtual machine, on smaller systems like the Raspberry Pi, or on an old laptop with only one hard drive, whereas FreeNAS typically needs at least 2~3 drives (one for boot and another two for mirrored data).

Beyond setting up the SMB share itself, the end of the video mentions additional steps to help protect data. Personally I have a simple SMB share on an old notebook sharing a directory on the boot drive; to protect that, I back up the files to an external drive over a USB cable. In Ubuntu, a USB drive gets mounted under the system folder /media, and users can schedule a nightly backup using a cron job.

In the terminal, run crontab -e.

The file will open; scroll to the bottom. The first five fields represent:
  • m - minute
  • h - hour
  • dom - day of month
  • mon - month
  • dow - day of the week (0-7, where both 0 and 7 are Sunday)

The picture is an example that runs the copy command, cp -uR, at midnight and 12 noon every day, with the * symbol representing 'any'.
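In text form, that crontab entry looks something like the line below (the source and destination paths are placeholders; use your own share and backup locations):

0 0,12 * * * cp -uR /home/joe/share /media/backupdrive/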

Note there are many other commands and ways to back up using cron jobs; cp -uR is just one of the easiest and is quite performant.

Use other distros?
Recently, while reading up on Docker, I also stumbled upon a very lightweight distro called Alpine Linux. It can be set up to host an SMB share and a host of other things just like Ubuntu - the steps are largely the same.

TIP: In Alpine Linux, ensure that when running 'apk add samba' you also add samba-common-tools, which is needed to run smbpasswd.

What Alpine is useful for is just being small. To compare, the Ubuntu desktop .iso is around 2.1~2.5 GB in size now; Ubuntu Server - which doesn't have the desktop runtime and graphical elements - is around 800 MB; Alpine is only around 120 MB. Of course, Ubuntu has things like the ufw firewall and other features baked in out of the box to make it quick and easy to get up and running, but I like Alpine for its portability, and it makes a good test environment for running in a VM, on a Raspberry Pi, or elsewhere.

Finally, I wanted to say THANK YOU to the Ubuntu Podcast, who were kind enough to mention this blog in a recent episode. I hope these tutorials are useful, and while I am having trouble keeping a regular cadence, I hope the blog itself helps to consolidate the material so it's available anytime for anyone.

Thank you for reading this far. Hope to have some more on creating a web server and basic PHP tips soon.

Introduction to virtualization & containers





This is an introduction to virtualization - hypervisors, virtual machines (VMs), and also containers. Virtualization has become a transformative technology for several reasons highlighted in the video, both in terms of cost savings and in flexibility of deployment and application usage. Containers - essentially Linux namespaces - build upon the same idea of virtualization with a mix of benefits (low resource usage per deployment, fast to deploy) and limitations (they're primarily Linux-based and confined). I'm including several links for reference that I've found helpful and hope you will as well.


I think virtualization is pretty straightforward beyond the inner workings of the hypervisor itself: install a hypervisor, figure out how to create and access VMs, and away you go. Containers - though I don't touch on them much in the video - are interesting, but complex to deploy, manage, and use. I attribute a lot of this to the maturity difference between the two technologies, and to the underlying complexity of running sandboxed services on a single kernel versus simply creating an environment in which to install wholly independent systems.

Ideally when deploying an IT backend, both technologies would and do get used where needed. Having an understanding of both is important for those looking to get into IT or infrastructure workloads for the foreseeable future. 

Hope it helps, leave a comment if you have any suggestions, questions, or comments.



"How wealth has changed" - Motivation for More Open Source

Listening to the news this past week, one particularly interesting podcast episode was "How wealth has changed" on The Indicator from Planet Money, from NPR.

It struck me, though it was not really pointed out directly in the 9-minute show, that it was essentially a call for open source. In the episode, the host and guest discussed how, for much of human history, the economic prosperity of a country was tied to land - its size, its resources, its ability to be used for farmland and crops, its precious metals, and so on. Countries needing or desiring more resources would then start wars, found colonies, and generally squabble over the finite amount of resources available. Come the Enlightenment and then the Industrial Revolution, there was a new shift toward manufacturing, and skilled or knowledge-based workers became valuable. That shift has continued to this day, with technology now being synonymous with computers and IT, though it really permeates all facets of life and work. What is even more intriguing is that now that ideas and knowledge help drive economic growth, and in many ways are more valuable than raw resources, value growth has accelerated like never before in human history. While land and natural resources are finite, knowledge is not, and sharing ideas essentially increases growth potential.

The show also touched upon the trade war and patent infringement, and largely argued for better sharing of ideas without ever mentioning open source. Open source, in the computer science world, is essentially the real-world manifestation of this idea. The code, the "how to do something", is put out for the world to see and use. Open source enables technology and know-how to be shared on the Internet, making it available for billions to use, millions to implement, and thousands to help maintain and improve, so that ideas are not only shared, they can be refined at scale.

Outside the scope of the discussion, but underlying the idea, is the Solow growth model, which essentially states that technology (whether tools, computers, processes, legal systems, governance, etc.) is the true driver of growth in modern society.

Open source technology - the free and open sharing of ideas and know-how - can be linked to global economic growth and development. Knowing how to use these tools, and taking an interest in them, is absolutely critical, and I was happy to see that idea recognized by mainstream media.

Intro for Google Forms and Scripts




This is an introduction and walk-through of using Google Forms, reacting to data as it comes in, and also setting up a custom script to forward the contents of each submission to a specific email address. This feature is very useful for things like surveys and basic initial user/customer engagement.

For internal use it should be more than fine. When embedding forms in a website there are user privacy issues that need to be considered and managed. As of this writing, though Google itself complies with the laws of the land (GDPR in the EU, FCC & FTC rules in the US, and other regulations in other countries), things like the right to be forgotten should be practiced - i.e. a company should be able to delete, either via a script or manually, the entry of any user who requests it.

Here are some more references I found helpful when setting up this script.

I am not a privacy lawyer, but I want to make clear how useful the Forms tool is and can be. The alternative to something like Forms requires a larger amount of code, a database of some sort, proper SSL certificates on the server, the server itself and its storage, another server or piece of software to send and receive emails, and someone to run all of that infrastructure and backend. For anything that might not warrant all of that overhead, Google Forms is a great tool for collecting, managing, and reacting to feedback.


What is a server?


This blog and video aim to introduce and explain servers.

From 1 GB of RAM to 3 TB of RAM, there is a slew of servers for a wide variety of use cases. I try to break it down by the services running on any given box, explain a little bit of the hardware differences, and then go into more depth using the open source operating system FreeNAS as an example of a home server.

I know a lot of the concepts covered in the video haven't come up before, and depending on your interests they could be critical or just details. I'm linking more information about them below.

Off the bat: the FreeNAS instance I'm running is not production/persistent. If you want to install and set up FreeNAS, be sure to follow the recommended hardware specifications from iXsystems.
https://www.freenas.org/hardware-requirements/

More information about the technologies discussed are below.

  -  I talk more about this in the below video as well
  -  Very general here; I'll try to cover the topic more in depth in a future video

Remember, while there is always more to know, most of the basics just build upon one another. Day by day, week by week, you will gradually learn the importance of all these details and build competence with them. It takes time.

Essentially there are a multitude of ways to split up the tasks and data running on a server, and also to share said data between devices. FreeNAS offers a nice UI for setting up and interacting with common services that have immediate value for home / small office users. It can help with data backup, file sharing, redundancy, and cloud sync, and, with plugins and jails, more services can be added as needed.

One more video I hope to share is a short one. In the first video, I used the most open way to create a share on FreeNAS 11.3. However, a better practice is to ensure a specific user is set as the owner of that share, and then have users log in using that account name and password.


Networking introduction


This video is a quick introduction to networking on computers. It covers some high-level concepts, such as packets, TCP/IP, IP addresses, DNS, and DHCP, as well as some useful commands. Again, since this is a Linux-based video, a lot of the commands are shown on Linux, though the Windows equivalents (ipconfig, arp -a, and ping) are also covered.
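For quick reference, typical Linux counterparts to those Windows commands look like this (the ping target is just an example address):

ip addr show          # interface and IP address information (ipconfig on Windows)
ip neigh show         # ARP / neighbor table (arp -a on Windows)
ping -c 4 8.8.8.8     # basic connectivity test
cat /etc/resolv.conf  # DNS servers currently in use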

This is truly just an introduction to key concepts, and it omits more complex topics such as setting static IPs, bringing up a network on a Linux server, complex routing, and detailed firewall permissions. These are all important for IT administration work, and I am adding some useful links below.

References:
Networking is an important part of using computers and tech, and having a basic understanding is really critical to starting on more advanced projects. Like with the other concepts covered in this blog, I hope this is a good starting point.