
uberspace and let's encrypt

Nothing special, I just took the knowledge from the wiki and the blog and packed it into a shell script.

The result is the following cronjob, which runs once every 60 days.

#!/bin/bash -l
####
# @see:
#   https://blog.uberspace.de/lets-encrypt-rollt-an/
#   https://wiki.uberspace.de/webserver:https?s[]=lets&s[]=encrypt
#
# @author: stev leibelt <[email protected]>
# @since: 2015-12-28
####

#begin of local parameters
LOCALROOTPATH='/home/<user name>'
LOCALLOGPATH=$LOCALROOTPATH'/<path to your log files>'
LOCAL_ACCOUNT='<your.domain.tld>'
#end of local parameters

#begin of parameters for letsencrypt-renewer
LOCALCONFIGURATIONPATH=$LOCALROOTPATH'/.config/letsencrypt'
LOCALLOGGINGPATH=$LOCALROOTPATH'/.config/letsencrypt/logs'
LOCALWORKINGPATH=$LOCALROOTPATH'/tmp/'
#end of parameters for letsencrypt-renewer

#begin of parameters for uberspace-prepare-certificate
LOCALKEYPATH=$LOCALROOTPATH'/.config/letsencrypt/live/'$LOCAL_ACCOUNT'/privkey.pem'
LOCALCERTIFICATEPATH=$LOCALROOTPATH'/.config/letsencrypt/live/'$LOCAL_ACCOUNT'/cert.pem'
#end of parameters for uberspace-prepare-certificate

letsencrypt-renewer --config-dir $LOCALCONFIGURATIONPATH --logs-dir $LOCALLOGGINGPATH --work-dir $LOCALWORKINGPATH &>$LOCALLOGPATH
uberspace-prepare-certificate -k $LOCALKEYPATH -c $LOCALCERTIFICATEPATH &>>$LOCALLOGPATH

A nicely formatted version of the script is also available here.
Big thanks to uberspace and let's encrypt.

For the script to work, you of course have to set up let's encrypt first:


uberspace-letsencrypt 
letsencrypt certonly
I am pretty lazy. For that reason I let the certificates be regenerated once a month. To avoid putting too much load on the infrastructure, I picked a day other than the first of the month. The same goes for the time of day.
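For reference, a matching crontab entry could look like the following minimal sketch; the script path, the day and the time are assumptions, so pick your own values:

#run once a month, on the 17th at 04:23, assuming the script above is stored as
#/home/<user name>/bin/renew_letsencrypt.sh
23 4 17 * * /home/<user name>/bin/renew_letsencrypt.sh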


determine if an apache process is still running via bash to prevent multiple instances running

Given is the fact that you have some processes (like cronjobs) executed via a webserver like apache. Furthermore, you have installed and enabled the apache server status. To gain some reusability, we divide and conquer the problem into shell scripts and shell functions. Side note: whenever I write about the shell, I mean the bash environment. What are the problems we want to tackle?

  • find the correct environment
  • check all available webservers if a process is not running
  • specify which process should not run and start it if possible

We can put the first two problems into shell functions like the following ones. I am referencing some self-written shell functions; the reference is indicated by the "net_bazzline_" prefix.

#!/bin/bash
#find the correct environment

if net_bazzline_string_contains $HOSTNAME 'production'; then
    NET_BAZZLINE_IS_PRODUCTION_ENVIRONMENT=1
else
    NET_BAZZLINE_IS_PRODUCTION_ENVIRONMENT=0
fi
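If you do not use the referenced function collection, a minimal sketch of such a string helper could look like the following (this is an assumption, not the original net_bazzline implementation):

#!/bin/bash
#returns 0 if the first argument contains the second argument
function net_bazzline_string_contains()
{
    local LOCAL_HAYSTACK="$1"
    local LOCAL_NEEDLE="$2"

    if [[ "$LOCAL_HAYSTACK" == *"$LOCAL_NEEDLE"* ]]; then
        return 0
    else
        return 1
    fi
}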

And the mighty check.

#!/bin/bash
#check all available webservers if a process is not running
####
# @param string <process name>
# @return int (0 if at least one process was found)
####
function local_is_there_at_least_one_apache_process_running()
{
    if [[ $# -lt 1 ]]; then
       echo 'invalid number of arguments'
       echo '    local_is_there_at_least_one_apache_process_running <process name>'

       return 1
    fi

    if [[ $NET_BAZZLINE_IS_PRODUCTION_ENVIRONMENT -eq 1 ]]; then
        LOCAL_ENVIRONMENT='production'
    else
        LOCAL_ENVIRONMENT='staging'
    fi

    #variables are prefixed with LOCAL_ to prevent overwriting system variables
    LOCAL_PROCESS_NAME="$1"

    #declare the array with all available host names
    declare -a LOCAL_HOSTNAMES=("webserver01" "webserver02" "webserver03");

    for LOCAL_HOSTNAME in ${LOCAL_HOSTNAMES[@]}; do
        APACHE_STATUS_URL="http://$LOCAL_HOSTNAME.my.domain/server-status"

        OUTPUT=$(curl -s $APACHE_STATUS_URL | grep -i $LOCAL_PROCESS_NAME)
        EXIT_CODE_OF_LAST_PROCESS="$?"

        if [[ $EXIT_CODE_OF_LAST_PROCESS == "0" ]]; then
            echo "$LOCAL_PROCESS_NAME found on $LOCAL_HOSTNAME"
            return 0
        fi
    done;

    return 1
}

And here is an example of how to use it.

#!/bin/bash
#specify which process should not run and start it if possible

source /path/to/your/bash/functions

LOCAL_PROCESS_NAME="my_process"

local_is_there_at_least_one_apache_process_running $LOCAL_PROCESS_NAME

EXIT_CODE_OF_LAST_PROCESS="$?"

if [[ $EXIT_CODE_OF_LAST_PROCESS == "0" ]]; then
    echo "$LOCAL_PROCESS_NAME still running"
    exit 0;
else
    #execute your process
    echo 'started at: '$(date +'%Y-%m-%d %H:%M:%S');
    curl "my.domain/$LOCAL_PROCESS_NAME"
    echo 'finished at: '$(date +'%Y-%m-%d %H:%M:%S');
fi

You can run this periodically via a cronjob, or use watch if you only need it from time to time:

watch -n 60 'bash /path/to/your/shell/script'
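A matching cronjob could look like this minimal sketch (the interval, script path and logfile path are assumptions):

#run the check every five minutes and append the output to a logfile
*/5 * * * * /bin/bash /path/to/your/shell/script >> /path/to/your/logfile 2>&1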

Enjoy your day :-).


bash - compare big xml files - get differences

The following script provides a solution to compare two big xml files. I tried to compare a lot of xml files with a size of more than 500 megabytes with different tools. Each tool ate up my memory and swap and finally crashed. All I want is: "show me what is in file one and not in file two, and vice versa". I reached this goal by using a property my xml files have: each file has nodes, and each node contains a unique identifier. I cut out the unique identifier tag and put it, line by line, into a file. After that, I sort these unique identifiers. Finally, I use diff. To create a more useful output, I separate the "what is only in file one" part into its own file (and do the same for file two).

Happy using, and if you find errors, I'm ready to fix them :-).

#!/bin/bash
####
# script to compare two xml files by (unique) tag
####
# @author stev leibelt
# @since 2013-03-13
####

if [[ $# -eq 3 ]]; then
  XML_FILE_ONE="$1"
  XML_FILE_TWO="$2"
  XML_TAG="$3"

  if [[ -f "$XML_FILE_ONE"
        && -f "$XML_FILE_TWO"
        && ! -z "$XML_TAG" ]]; then
    #retrieving xml_tags per file
    #reduce xmls by lines containing the tag
    sed -n -e 's/.*<'$XML_TAG'>\(.*\)<\/'$XML_TAG'>.*/\1/p' $XML_FILE_ONE > $XML_FILE_ONE'.sed'
    sed -n -e 's/.*<'$XML_TAG'>\(.*\)<\/'$XML_TAG'>.*/\1/p' $XML_FILE_TWO > $XML_FILE_TWO'.sed'

    #sort and uniq the sed'ed files
    sort $XML_FILE_ONE'.sed' | uniq > $XML_FILE_ONE'.sort'
    sort $XML_FILE_TWO'.sed' | uniq > $XML_FILE_TWO'.sort'

    #output the differences
    diff $XML_FILE_ONE'.sort' $XML_FILE_TWO'.sort' > 'xml_diff_by_tag.diff'
    #diff --side-by-side $XML_FILE_ONE'.sort' $XML_FILE_TWO'.sort' > 'xml_diff_by_tag.diff'
    #comm -3 $XML_FILE_ONE'.sort' $XML_FILE_TWO'.sort' > 'xml_diff_by_tag.comm'

    #show only differences per file
    sed -n -e 's/^<\ \(.*\)/\1/p' 'xml_diff_by_tag.diff' > $XML_FILE_ONE'.diff.uniq'
    sed -n -e 's/^>\ \(.*\)/\1/p' 'xml_diff_by_tag.diff' > $XML_FILE_TWO'.diff.uniq'

    #sed -n -e 's/^<\(.*\)/<\1/p' 'xml_diff_by_tag.comm' > $XML_FILE_ONE'.comm.uniq'
    #sed -n -e 's/\t<\(.*\)/<\1/p' 'xml_diff_by_tag.comm' > $XML_FILE_TWO'.comm.uniq'

    #removing unused files
    rm -fr $XML_FILE_ONE'.sed' $XML_FILE_TWO'.sed' $XML_FILE_ONE'.sort' $XML_FILE_TWO'.sort'
  else
    echo 'Invalid arguments provided'
    echo 'try '$0' $xmlFileOne $xmlFileTwo $comparingTag'
  fi
else
  echo 'Invalid number of arguments provided'
  echo 'try '$0' $xmlFileOne $xmlFileTwo $comparingTag'
fi

Available on github.com.


bash - enhanced burn

Since we are dealing more and more with usb devices, good old optical burning is not used that often anymore. I created a simple function with an alias of "burn" to write an iso file to my optical storage device as quickly as possible.

####
# burns given iso file
#
# @author stev leibelt
# @since 2013-02-12
####
function net_bazzline_burn ()
{
  if [[ $# -lt 1 ]]; then
    echo 'No valid argument supplied.'
    echo 'Try net_bazzline_burn $isoFile [$opticalDevice]'

    return 1
  fi

  if [[ $# -eq 1 ]]; then
    sudo wodim -v dev=/dev/cdrom "$1"
  else
    sudo wodim -v dev=/dev/"$2" "$1"
  fi
}

This is available in my shell function file on github.com.


bash - enhanced compress and decompress

Again, two functions you can add aliases for. The first function provides a wrapper for the task of creating a tar.gz file.

If you provide only one argument (like a filename or a directory), the function creates a $name.tar.gz file from the first argument. If you provide more than one argument, the function uses the first argument as the name of the *.tar.gz file and all others as files/directories to compress.

####
# compress given directories into tar.gz
#
# @author stev leibelt
# @since 2013-02-02
####
function net_bazzline_compress ()
{
  if [[ $# -lt 1 ]]; then
    echo 'No valid arguments supplied.'

    return 1
  fi

  FILENAME_TAR="$1".tar.gz

  if [[ $# -gt 1 ]]; then
    shift
  fi

  tar -zcf "$FILENAME_TAR" "$@"
}
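Assuming you add an alias like "compress" for the function, usage could look like this (the names are just examples):

#creates my_archive.tar.gz containing the two given directories
compress my_archive /path/to/directory/one /path/to/directory/two

#creates my_directory.tar.gz containing my_directory
compress my_directory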

The second function provides a wrapper to extract the files from a *.tar.gz file.

If you provide one argument, the function uses it as the *.tar.gz filename. If you provide two arguments, the second argument is used as the output directory.

####
# decompress given tar.gz file
#
# @author stev leibelt
# @since 2013-02-02
####
function net_bazzline_decompress ()
{
  if [[ $# -lt 1 ]]; then
    echo 'No valid arguments supplied.'
    echo 'Try net_bazzline_decompress $nameOfCompressedFile [$pathToDecompress]'

    return 1
  fi

  if [[ $# -eq 1 ]]; then
    tar -zxf "$1"
  else
    tar -zxf "$1" -C "$2"
  fi
}
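Again assuming an alias like "decompress" pointing to the function:

#extracts my_archive.tar.gz into the current directory
decompress my_archive.tar.gz

#extracts my_archive.tar.gz into /tmp/restore (the directory has to exist)
decompress my_archive.tar.gz /tmp/restore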

As usual, they are also available in my shell function file on github.com.


bash - enhanced cd

This time, I want to share a small enhancement for the "cd" bash command. The code I will paste below does the following: if you supply a number like 3, the function tries to go up three directory levels. If you supply a string, it behaves like the normal cd.

function net_bazzline_cd()
{
  #numeric value given?
  if [ `expr $1 + 1 2> /dev/null` ]; then
    for (( i=1; i <= $1; i++ ))
    do
      cd ..
    done
  else
    cd "$1"
  fi
}
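Usage looks like this (call it directly or via an alias of your choice):

#go up three directory levels, same as cd ../../..
net_bazzline_cd 3

#behaves like the normal cd
net_bazzline_cd /etc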

Source code available on github.com.


bash - enhanced mkdir

Since I am fully addicted to the shell, the customisation has started.

This time, I want to share a small enhancement for the "mkdir" bash command. The code I will paste below does the following: if you supply one argument, it will create the directory you want with "mkdir -p" and change into it. If you supply multiple arguments, it will behave like the normal "mkdir".

How to use it? Open your .bashrc file and add the code provided below. Then define an alias like "alias mkdir=net_bazzline_mkdir". When you open your next shell, your mkdir is enhanced :-).

function net_bazzline_mkdir ()
{
  #check if at least one argument is supplied
  if [ $# -eq 0 ]
  then
    echo "No arguments supplied"
    return 1
  fi

  #if one argument is supplied, create dir and
  # change to it
  if [ $# -eq 1 ]
  then
    mkdir -p "$1"
    cd "$1"
    return 0
  fi

  #if more than one argument is supplied
  # execute default mkdir
  if [ $# -gt 1 ]
  then
    mkdir "$@"
    return 0
  fi
}
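With the alias in place, usage looks like the familiar mkdir (the directory names are just examples):

#creates nested/path/to/dir and changes into it
mkdir nested/path/to/dir

#creates several directories and behaves like the normal mkdir
mkdir dir_one dir_two dir_three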

Source code available on github.com.


howto - remove CrLF from file

tr -d '\r' < myFileWithCrLF > myFileWithoutCrLF

'tr' is a cli tool to translate or delete characters. Option '-d' is used to delete the evil carriage return '\r'. For more information take a look into the man page.
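To check whether a file still contains carriage returns, a quick test could look like this (the file names are just examples):

#prints the number of lines that contain a carriage return
grep -c $'\r' myFileWithCrLF
grep -c $'\r' myFileWithoutCrLF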


howto - sed - work with xml files - get content inside one tag

Assuming you have a large xml file (say 400 megabytes) and you want to grep the content inside one tag, which tool would solve this better than sed?

sed -n -e 's/.*<my_magicTag>\(.*\)<\/my_magicTag>.*/\1/p' myInputFile.xml > myInputFileFilteredByMyMagicTag.xml

So what are we doing? We are telling sed to search for any text before "<my_magicTag>", remember everything between "<my_magicTag>" and "</my_magicTag>", and drop any text after the closing tag. With "\1", we are using the first remembered pattern (since we only use one "\(...\)" group, we only have one in this command). With "p", we are telling sed to print it out. After that, as usual, we are using ">" to redirect the standard output into a file.
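A tiny example of what the command does to a single line:

echo '<entry><my_magicTag>12345</my_magicTag></entry>' \
  | sed -n -e 's/.*<my_magicTag>\(.*\)<\/my_magicTag>.*/\1/p'
#prints: 12345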


howto - simple backupscript for your linux home

The backup scripts I provide in this blog entry don't contain any voodoo. But when you start with linux, I hope they will take away a lot of personal insecurity. Ideally, you will become more confident working with the shell by using these scripts. Since you have to adapt the scripts, you have to read them. I tried to add a lot of comments, but I do not repeat a comment.

The heart of my backup is a script that calls further scripts. This lets you easily extend your backup process without changing your backup command. Furthermore, you just have to set one alias in your bashrc (like "alias backupHome='sh ~/code/sh/local/backup.sh';") that you have to call.

For me, the backup script first calls a script that tar's all my settings. After this script is finished, a second script rsync's all desired files to a backup path. Rsync should be your first choice for syncing folders. Read the manual for rsync or some howtos on the web if you want to know more about how it works or why I used the settings I did.

The script that tar's all my settings is quite simple. It first renames/moves the existing tar to an old name (if it exists), then tar's all the given settings into an archive and deletes the backup if the new tar was created. As you can see, it is more or less just a tar call with a list of files or directories you want to have in the tar.

The second script executes the rsync call with some parameters and with the "from" and the "to" directory path. Pretty straightforward, isn't it?

When you download these scripts, you only have to edit some areas. In backup.sh, you have to edit the path where your other scripts are stored. In tarSettings.sh, you have to edit the settings you want to tar. I don't think you are using all the tools I use ;-). In rsyncDirectorys.sh, you have to edit the two paths and also the directories you want to rsync.

I created a tar with all three scripts for you as download backup.tar.gz and also added the scripts for copy and paste below. Have fun with it and if you find errors or something you want to add, feel free.

tarSettings.sh

#!/bin/bash
####
# This script tars all the files into a tar.gz archive.
# It backs up an old copy before creating the new one and
# deletes the backup on success.
#
# @author artodeto
# @param string TARBALL - the name of the tar.gz you want to create
# @param string TARBALLOLD - the name for the backup
# @param string DIRECTORY - the path where you want to store the TARBALL
# @since 2012-07-25
####

TARBALL='settings.tar.gz'
TARBALLOLD="$TARBALL.old"
DIRECTORY='/home/artodeto/backup/'

if [ -f "$DIRECTORY$TARBALL" ]; then
  echo 'renaming old settings'
  mv "$DIRECTORY$TARBALL" "$DIRECTORY$TARBALLOLD"
fi

tar -czf "$DIRECTORY$TARBALL" ~/.viminfo ~/.filezilla ~/.wireshark ~/.umlet ~/.vim ~/.netbeans ~/.config ~/.ssh ~/.bashrc ~/.notion ~/.irssi ~/.VirtualBox ~/.conkyrc ~/.mozilla ~/.gnome2_private ~/.pentadactyl

if [ -f "$DIRECTORY$TARBALL" ]; then
  echo 'removing old settings'
  rm "$DIRECTORY$TARBALLOLD"
fi

rsyncDirectorys.sh

#!/bin/bash
####
# This script rsyncs all given directories from one path to another.
#
# @author artodeto
# @param string DIRECTORIES - all the directories you want to rsync
# @param string SOURCE - from where you want to rsync
# @param string DESTINATION - to where you want to rsync
# @since 2012-07-25
####

DIRECTORIES='coding data backup log scripts bin config doc tool'
SOURCE='/home/artodeto/'
DESTINATION='/home/artodeto/share/in/backup'

for DIRECTORY in $DIRECTORIES; do
  rsync -vrptgoLDu --delete $SOURCE$DIRECTORY/ $DESTINATION/$DIRECTORY/
done;

backup.sh

#!/bin/bash
####
# This script executes all steps for your backup.
#
# @author artodeto
# @since 2012-07-25
####

#if file exists, execute it
if [ -f ~/code/sh/local/backup/tarSettings.sh ]; then
  echo '----'
  echo 'starting tar the settings'
  echo '----'
  sh ~/code/sh/local/backup/tarSettings.sh
fi

if [ -f ~/code/sh/local/backup/rsyncDirectorys.sh ]; then
  echo '----'
  echo 'starting rsync directorys'
  echo '----'
  sh ~/code/sh/local/backup/rsyncDirectorys.sh
fi


howto - debian / ubuntu / linux mint - list all installed packages

dpkg --get-selections

With the above command on your preferred shell you get a list of all installed packages. By piping the output into | less or | grep mypackage it is very handy to filter for criteria.
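For example, to check whether a specific package (here php, just as an example) is installed:

dpkg --get-selections | grep -i php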

Want to know more? Follow the links below.

  • aboutdebian.com/packages
  • apt-get introduction


howto - mount filesystem via ssh using sshfs

First of all, you have to install "sshfs". After that you just have to type:

sshfs $user@$host:/path/to/dir /path/to/mount

If fuse throws a permission error, you have to add your user to the group "fuse" by typing:

usermod -a -G fuse $user

After you have logged in again, you can verify by typing "groups" in your shell that everything is set up. Don't want to log off? Try

su $user
in your shell. This is a new login and so your new group is available now.

Just another hint: if you want to access symbolic links, try:

-o follow_symlinks
and/or
-o transform_symlinks

The man page will help you to understand what you are doing.

Now put everything as an alias in your bashrc (or something similar) and you are done :-).

alias mountMyShare="sshfs [email protected]:/path/to/dir /path/to/mount -o follow_symlinks";
alias umountMyShare="fusermount -u /path/to/mount";

Happy mounting and unmounting.


mplayer - play files selected with "ls | grep $name" using xargs

Easy task, you want to listen to some files in your audio directory. You can list the files via ls and also grep them by piping the output of ls. But how do you get this result to the mplayer? By using xargs.

ls | grep '$mynameschema' | xargs mplayer
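Note that this pipe breaks as soon as file names contain spaces. A more robust variant (a sketch, assuming GNU find) could be:

find . -maxdepth 1 -iname "*$mynameschema*" -print0 | xargs -0 mplayer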

Do you want to know more about using xargs? Try the following link


bash - unrar more than one file in separate directories

Long story short, here is the script.

#!/bin/sh
for f in *.rar
do
  mkdir "${f%.rar}"
  unrar e "$f" "${f%.rar}/"
done

What does it do? For every *.rar file in your current directory, a subdirectory is created using the filename without the '.rar' ending, and the archive is extracted into it.

Just use this script on your command line.

sh ~/my/path/to/the/script.sh
That's it, have fun :-).

It should not be a problem to adapt this script to other types of archives, as the sketch below shows.
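For example, a zip variant could look like this (assuming unzip is installed; this adaptation is not part of the original script):

#!/bin/sh
for f in *.zip
do
  mkdir "${f%.zip}"
  unzip "$f" -d "${f%.zip}/"
done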

Want to know more about bash scripting and string manipulation? Check the following links.

  • string manipulation
  • special parameters


Add a watermark to a bunch of images

The Problem: You have a bunch of images (like files in a directory ;-)) to which you want to add a watermark.

The Solution (an easy one):

#!/bin/sh
mkdir withLogo
for f in *.jpg
do
  composite -gravity SouthEast -watermark 25% ~/path/to/my/watermark.png "$f" withLogo/"$f"
done

As you can see, this shell script creates a directory called "withLogo" and walks over every file with the ending "jpg". The script combines every file with the "~/path/to/my/watermark.png" file and saves the result into "withLogo".

Just change into the directory with the images you want to watermark and type "sh path/to/my/combine/script.sh".

If the command "composite" is not on your system, try to install "imagemagick".


resize images via shell

The Problem: You have a bunch of images (like files in a directory ;-)) you want to resize.

The Solution (an easy one):

#!/bin/sh
mkdir 800px
for f in *.jpg
do
  convert "$f" -verbose -resize 800 -quality 90% -comment "powered by open source" 800px/"$f"
done

As you can see, this shell script creates a directory called "800px" and walks over every file with the ending "jpg". The script resizes every file to a width of 800 pixels and saves it into "800px".

Just change into the directory with the images you want to resize and type "sh path/to/my/convert/script.sh".

If the command "convert" is not on your system, try to install "imagemagick".


Change session lifetime for phpmyadmin

Be aware that this can be a security problem if you increase the session lifetime on a production machine. Just do it on your local development machine where the database does not handle any sensitive data or information. So much for the service announcement ;-).

Are you tired of logging in to phpmyadmin every 30 minutes (for the moment it does not matter that there is a mysql cli and tools like tora out there)?

Just edit your configuration file. This file should be in the path "/etc/phpmyadmin" (on debian) and is called "config.inc.php". In this file, you just have to add the following line to increase the session time to four hours.

$cfg['LoginCookieValidity'] = 14400;

You want to know more? Take a look at the phpmyadmin wiki


Easy PHPUnit Test Skeleton with a data provider

<?php
class UnitTest extends PHPUnit_Framework_TestCase
{
    public static function providerTestFirst()
    {
        return array(
            array(1, 1),
            array(2, 2),
            array(2, 1),
        );
    }

    /**
     * @dataProvider providerTestFirst
     *
     * @param mixed $expected
     * @param mixed $value
     */
    public function testFirst($expected, $value)
    {
        $this->assertEquals($expected, $value, $value . ' should be ' . $expected);
    }
}

The skeleton is really simple indeed. And it should throw an error on the last data set (2 != 1).

  • manual 3.5
  • manual current


Virtualbox could not start - error message says "run /etc/init.d/vboxdrv setup"

If your shell outputs something like the following, don't get frustrated; the solution is just one apt-get install away! The error:

WARNING: All config files need .conf: /etc/modprobe.d/blacklist, it will be ignored in a future release.
Stopping VirtualBox kernel modules:.
Uninstalling old VirtualBox DKMS kernel modules:.
Trying to register the VirtualBox kernel modules using DKMS:Error! echo
Your kernel headers for kernel 3.0.0-1-686-pae cannot be found at
/lib/modules/3.0.0-1-686-pae/build or /lib/modules/3.0.0-1-686-pae/source.
Failed, trying without DKMS ... failed!
Recompiling VirtualBox kernel modules: Look at /var/log/vbox-install.log to find out what went wrong ... failed!

The solution

apt-get install linux-headers-3.0.0-1-686-pae
/etc/init.d/vboxdrv setup

You see, your system just needs the current kernel header files and that's it.
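If you are running a different kernel version, a generic form of the same fix could look like this (a sketch, assuming a Debian based system):

apt-get install linux-headers-$(uname -r)
/etc/init.d/vboxdrv setup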

Thanks zzeroo for the hint.
