
MySQL and a known bug since 2003 about the auto_increment value

We ran into a bug that has been known since 2003.

The auto increment value of a table is reset when the table is empty and the MySQL DBMS is restarted. We ran into this issue because we use the auto increment value as a history id in a second table.

How can you work around this issue?

The easiest way is to order by id descending on the second table, or to set up a "start up" shell script that calculates and sets the auto increment value.
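
A minimal sketch of such a "start up" script (the table names "my_table" and "my_history" and the column "id" are assumptions, adapt them to your schema):

```shell
#!/bin/bash
#sketch of a "start up" script building the SQL that restores the auto increment value
#assumption: the ids of "my_table" are archived in a second table named "my_history"

build_restore_statement ()
{
    local TABLE_NAME="$1"
    local HISTORY_TABLE_NAME="$2"

    #ALTER TABLE needs a literal value, so we build the statement server side
    echo "SET @next_id = (SELECT COALESCE(MAX(id), 0) + 1 FROM ${HISTORY_TABLE_NAME});"
    echo "SET @statement = CONCAT('ALTER TABLE ${TABLE_NAME} AUTO_INCREMENT = ', @next_id);"
    echo "PREPARE restore_auto_increment FROM @statement;"
    echo "EXECUTE restore_auto_increment;"
}

#pipe the result into your mysql client on start up, e.g.:
#build_restore_statement 'my_table' 'my_history' | mysql my_database
```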


Save the open web - please Tim Berners-Lee, kick out the EME proposal

Zak Rogoff has written an important article about a decision Tim Berners-Lee has to make right now.

In the past, Tim decided not to include closed source binary blobs, running in the browser without any control, that would lock down the web to something big companies can control. Was this the right decision? Well, look at where HTML or JavaScript is used today and answer the question on your own.

Big DRM companies like Netflix, Apple, Google and Microsoft (they are just the vassals of the MPAA and RIAA - America, fuck yeah! ;-)) are working on a standard called Encrypted Media Extensions. This proposal is created to gain control over everything. It starts with videos, but the proposal is "open for change", so it is a no brainer to add images, audio, or more or less even whole HTML or JavaScript under the "cover" of DRM.

Beside the fact that big companies could decide who can see what, there is an even bigger issue rotating in my open source mind. I, as a user, would have to run closed source binary code, and all I could do is pray or blindly trust that these companies are not fooling around with me by mining all my data, and that they take security seriously. Well, call me blinded by the past, but this never happens with closed source software. ;-)

Please Tim, do it like Linus did with NVidia and give them a nice "I don't care" kick in their butts.


Zend Framework 2 - Lazy Factory - Kickstarter (for ZF 2.4)

No blabla, just how you do it with zend framework 2.4.

#add following line to your composer.json
"ocramius/proxy-manager": "1.0.*",

composer update

#add following lines to your module.config.php
'lazy_services' => [
    'class_map' => [
        \My\Class::class => \My\Class::class
    ]
],
'service_manager' => [
    'delegators' => [
        \My\Class::class => [
            \Zend\ServiceManager\Proxy\LazyServiceFactory::class
        ]
    ],
    'factories' => [
        \My\Class::class => \My\ClassFactory::class,
        \Zend\ServiceManager\Proxy\LazyServiceFactory::class => \Zend\ServiceManager\Proxy\LazyServiceFactoryFactory::class
    ]
]

That's it.

Useful links are a gist from a closed issue and a somewhat official howto (which may only work with Zend Framework greater than 2.4).


simple bash function to prefix any given command with sudo if needed

The title of this entry tells it all. I've created a simple bash function to prefix any given command expression with sudo if needed. "If needed" means: if you are not root. It is totally simple but, truth be told, it took me some time to use my bash skills to write it. Imagine you know how to draw a cloud, the sun, a tree and a house, and then you figure out how to combine all of these elements into a picture for your parents ;-).

if [[ $# -lt 1 ]];
then
    echo "Invalid number of arguments provided"
    echo "${FUNCNAME[0]} <command to execute>"
    return 1
fi

if [[ $(whoami) == "root" ]];
then
    "$@"
else
    sudo "$@"
fi

Looks like Chuck Norris is using github right now. Github was down, otherwise I would have added a link to the fitting repository commit. Update: Github is back online, here is the promised link to the commit.


serendipity command line update released

I am happy to announce the initial release, version 1.0.0, of the serendipity command line update script. It is designed to do the boring work of updating your installation to the latest release. The workflow is dead simple.

  • fetch latest version
  • check if latest version is installed
  • make a backup of the existing installation
  • update
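
The "check if latest version is installed" step from the list above can be sketched like this (the function name and the `sort --version-sort` approach are my assumptions, not necessarily what the script does internally):

```shell
#!/bin/bash
#sketch of the "check if latest version is installed" step
#version_is_newer returns 0 (success) if the first version is newer than the second

version_is_newer ()
{
    local LATEST_VERSION="$1"
    local INSTALLED_VERSION="$2"

    [[ "$LATEST_VERSION" != "$INSTALLED_VERSION" ]] \
        && [[ "$(printf '%s\n%s\n' "$LATEST_VERSION" "$INSTALLED_VERSION" | sort --version-sort | tail -n 1)" == "$LATEST_VERSION" ]]
}

if version_is_newer '2.0.4' '2.0.3'; then
    echo 'update available'
fi
```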

Enjoy using it. If you find an error, feel free to open an issue or a pull request.


Vim 8.0 released

This is the first major Vim release in ten years. There are interesting new features, many small improvements and lots of bug fixes.

Among the new features are:
- Asynchronous I/O support, channels, JSON
- Jobs
- Timers
- Partials, Lambdas and Closures
- Packages
- New style testing
- Viminfo merged by timestamp
- GTK+ 3 support
- MS-Windows DirectX support
[...]

Source


Kirigami officially released

The KDE framework Kirigami is now available in a first release, supporting Qt developers in building convergent apps for desktop and mobile devices across platforms.

Source
[...]
Kirigami currently officially supports Android, Desktop GNU/Linux (both X11 and Wayland), Windows, and the upcoming Plasma Mobile. iOS support is currently in an experimental stage, support for Ubuntu Touch is being worked on. The plan is to eventually become part of KDE Frameworks 5, but is currently released standalone in KDE Extragear. Since it is aimed to be a Tier 1 framework, it has no other dependencies apart from Qt, and therefore will not increase your application’s size any more than necessary. [...]

Source
kirigami design principles, ui patterns and styleguide
kirigami api documentation


web - a beautiful transfer of agile coding to building a house to show you how bad this approach can be

Miles English has published a beautiful text demonstrating how bad agile development can be if you are doing things totally wrong (and sadly, in the end, this is the common way of building software :-(). He illustrates the agile approach by starting to build a house without knowing what it should look like.
The sad truth is that dishonesty grows out of this approach. The development team starts doing things secretly to fix the flaws they had to build in. The consequence is that even simple tasks take more time than expected (because of the hidden refactoring), which in turn disgusts the management.
In the end, both sides are talking but no one is listening, because no one trusts the other side's words.


migration from owncloud 9 to nextcloud 9.0.50

I just migrated my installation from owncloud 9 to nextcloud 9.0.50.
Only one thing is not working: the notes application simply shows me an empty list of notes. Beside that, it is more like a new theme.

At the moment we only support manual migrations from ownCloud 8.2 and 9.0 to Nextcloud 9.

To do that please follow the usual upgrading steps:

* Delete everything from the ownCloud folder except data and config
* Download the Nextcloud 9 release from https://nextcloud.com/install/43
* Put the files into the folder where the ownCloud files were before
* Trigger the update either via OCC or via web.

source

I did the following steps.

make a backup of your database

log into your server and cd to the owncloud path

#assuming your installation is in the directory "cloud"
./occ maintenance:repair
cd ..
wget https://download.nextcloud.com/server/releases/nextcloud-9.0.50.zip
unzip nextcloud-9.0.50.zip
cp -rv cloud/config nextcloud/
cp -rv cloud/data nextcloud/
mv cloud owncloud
mv nextcloud cloud
cd cloud
./occ upgrade
./occ app:list

enable the apps you want

Update from 2016-06-26
I created a small upgrade.sh script. Here it is.

#!/bin/bash
#@author stev leibelt <artodeto@bazzline.net>
#@since 2016-06-26

#begin of runtime environment validation
if [[ $# -lt 1 ]]; then
    echo "invalid number of variables provided"
    echo "upgrade.sh <url to the next version>"
    exit 1
fi

if [[ -d backup ]]; then
    echo "backup directory still exists"
    exit 1
fi
#end of runtime environment validation

#begin of local runtime variables
LOCAL_CURRENT_DATE=$(date +'%Y-%m-%d')
LOCAL_URL_TO_THE_NEXT_VERSION="$1"
LOCAL_PUBLIC_BACKUP_PATH="public_$LOCAL_CURRENT_DATE"
#end of local runtime variables

#begin of downloading new version
wget $LOCAL_URL_TO_THE_NEXT_VERSION
unzip *.zip
#end of downloading new version

#begin of making backups
cd public
./occ maintenance:singleuser --on
cd ..
tar --ignore-failed-read -zcf "public.$LOCAL_CURRENT_DATE.tar.gz" public
mkdir backup
cp -rv public/config backup/
cp -rv public/data backup/
mv public $LOCAL_PUBLIC_BACKUP_PATH
#end of making backups

#begin of upgrade
mv nextcloud public
cp -rv backup/config public
cp -rv backup/data public
cd public
./occ upgrade
./occ maintenance:singleuser --off
echo "enable the apps you need with ./occ app:enable <app name>"
./occ app:list
#end of upgrade


Propel, the PropelOnDemandFormatter, self loaded runtime properties and your special "reload data if needed" usecase - howto

Propel's column representation offers the option to add some business logic inside. We make use of this from time to time by enriching the existing object with more data, but only when a method is called explicitly.


class MyTable extends BaseMyTable
{
    /** @var null|\My\Enriched\MyTable\Data */
    protected $enrichedData;

    /**
     * @return null|\My\Enriched\MyTable\Data
     */
    public function getEnrichedData()
    {
        //prevent reloading enriched data if this method is called more than once
        if ($this->noEnrichedDataYetLoaded()) {
            //do something heavy data lifting
            $this->enrichedData = $this->tryToLoadEnrichedData();
        }

        return $this->enrichedData;
    }

    /**
     * @return bool
     */
    private function noEnrichedDataYetLoaded()
    {
        return (is_null($this->enrichedData));
    }
}

If you use the corresponding MyTableQuery object in combination with a PropelOnDemandFormatter and iterate over a collection after calling find(), you will get the same enriched data for different MyTable objects.
Why? Because the PropelOnDemandFormatter does a smart thing by reusing the one MyTable object and "just" updating the properties.
Following is a workaround I am using to deal with this (totally right) behaviour. Anyway, be cautious if you do things like that. This should not be your regular way of doing it.


class MyTable extends BaseMyTable
{
    /** @var null|\My\Enriched\MyTable\Data */
    protected $enrichedData;

    /** @var int */
    protected $myId;

    /**
     * @return null|\My\Enriched\MyTable\Data
     */
    public function getEnrichedData()
    {
        //prevent reloading enriched data if this method is called more than once
        if ($this->noEnrichedDataYetLoaded()) {
            //do something heavy data lifting
            $this->enrichedData = $this->tryToLoadEnrichedData();
            $this->myId         = $this->getId();
        }

        return $this->enrichedData;
    }

    /**
     * @return bool
     */
    private function noEnrichedDataYetLoaded()
    {
        //reload when the recycled object carries a new id or nothing was loaded yet
        return (
            ($this->myId != $this->getId())
            || (is_null($this->enrichedData))
        );
    }
}


Reminder of the Propel Bug 734 - update() with limit() and a workaround

Just because we ran into this issue again: there is a known and serious bug in Propel whenever you use "update()" in combination with "limit()".
Our workaround right now is to replace the code.


//due to the bug, this will update ALL entries with the content "bar" in the column "foo"
MyQuery::create()
    ->filterByFoo('bar')
    ->limit(100)
    ->update(
        array(
            'Foo' => 'baz'
        )
    );

//this will only update 100 rows
$ids = (array) MyQuery::create()
    ->filterByFoo('bar')
    ->limit(100)
    ->select(
        array(
            'Id'
        )
    )
    ->find();

MyQuery::create()
    ->filterById($ids)
    ->update(
        array(
            'Foo' => 'baz'
        )
    );


roundcube 1.2.0 released - now with PGP encryption

We proudly announce the stable version 1.2.0 of Roundcube Webmail which is now available for download. It introduces new features since version 1.1 covering security and PGP encryption topics:

* PHP7 compatibility
* PGP encryption
* Drag-n-drop attachments from mail preview to compose window
* Mail messages searching with predefined date interval
* Improved security measures to protect from brute-force attacks
[...]

Source


howto - php composer - File(/etc/pki/tls/certs) is not within the allowed path(s) - on arch linux

A few days (or weeks?) ago, I discovered the following issue on one of my Arch Linux systems.
Whenever I try to use PHP's composer, I get the following error:


[ErrorException]
is_dir(): open_basedir restriction in effect. File(/etc/pki/tls/certs) is not within the allowed path(s): (/srv/http/ [...]

Well, it didn't hurt that much, since I am using (like every cool webkiddy) docker or vagrant for my development. But this time I needed to solve it, since it is a customer edge case - so I solved it.
The howto I will show you is not the perfect way. I had two things in mind: minimize the places I have to adapt in the php.ini, and keep the system as normal as possible. Until now, I cannot estimate the security holes I opened with this setting. I will let you know if this howto turns out to be a "don't try this at home" thing.

So, what have I done?
First of all, I asked curl to tell me where it is looking for certificates by executing:


curl-config --ca

output: /etc/ssl/certs/ca-certificates.crt


After that I had a look at what this path is:

ls -halt /etc/ssl/certs/ca-certificates.crt

output: [...] /etc/ssl/certs/ca-certificates.crt -> ../../ca-certificates/extracted/tls-ca-bundle.pem


So, with that knowledge, it turned out that the following steps reflect my requirements mentioned above.

sudo mkdir -p /etc/pki/tls/certs
sudo ln -s /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/certs/ca-certificates.crt
sudo vi /etc/php/php.ini

add the following paths to the "open_basedir" configuration directive

:/etc/pki/tls/certs:/etc/ssl/certs


And that is it, composer should now be back in business.


zf-rest - error "title":"Not Found","status":404,"detail":"Entity not found."

I had configured the routes as well as the other parts pretty well.
An important step towards solving the issue was adding the following configuration section to my project's "local.php".


    //this is possibly overwritten by the zf-rest module
    'view_manager' => array(
        'display_exceptions' => true,
        'display_not_found_reason' => true
    )

After that, I got back a response with the following content:
{"type":"http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html","title":"Not Found","status":404,"detail":"Entity not found."}

After adding a lot of debugging statements in the "ZF\Rest\RestController" (search for "Entity not found." ;-)), I started to understand the issue.
The answer is pretty clear after all. Whenever you listen to a GET HTTP method with your listener, you have to return an array containing the configured "route_identifier_name" (the entity identifier), otherwise neither the controller nor the HAL post processor is able to successfully build the response.


PHP UserGroup Hamburg - 2016-02-09 - Putting down the leadtime

Following are some notes about the last PHP usergroup meetup.

By Judith Andresen

  • what is the leadtime?

    • time between adding the ticket and releasing it as a feature
  • what is a cycletime?

time between someone having an idea and releasing it as a feature
  • we are currently in a time of digital transformation to the "first mover"

the first to try/test an idea is the one who owns the biggest market share
    • try to not be perfect
    • remove bottlenecks
    • try to scale vertical (microservice, duplicated data)

      • one team and service per business value/topic

        • search (including frontend, backend, customer data etc.)
        • product page
      • "community of practice" is a team (per vertical cut/team) that tries

        • to keep the big architecture picture in mind
        • to share knowledge, approach and libraries
    • try to add a decision-maker into the team (extend the team in the value chain)
    • try to bring people together, also on an emotional level (increase the "we" feeling)
    • you can always argue with a decreased leadtime / small time to market
  • typical "facts" against

    • we have never done that this way
    • my discipline is better, information silos, no talk between departments (typically between 20 and 80 people)
    • there is no "we"
  • how?

    • talk to each other
    • major goal: deliver fast
    • create room for improvement or options

      • time
      • people
      • space/room

PHP UserGroup Hamburg - 2016-02-09 - Dockerizing PHP Applications

Following are some notes about the last PHP usergroup meetup.

By Sebastian Heuer

  • docker is not one tool but a whole ecosystem
    • machine (provisioning)
    • swarm (clustering and container scheduling)
    • compose (multi container application)
    • registry (image distribution)
    • engine (the container)
    • kitematic (gui)
  • pretty small compared to virtual box/full virtual machines
  • updating means, building a new container
  • theoretically, you can use all the images from the hub
    • always ask yourself if you want to use them in production
      • are they maintained
      • how secure are they
  • docker compose
    • builds and pulls images
    • runs containers
    • enables networking between containers
    • aggregates STDOUT and STDERR output

example Dockerfile

FROM php:7.0.2-fpm

RUN docker-php-ext-install pdo pdo_mysql

COPY php/php.ini /usr/local/etc/php/
# copy the content of the source code into the image
# you can ship this code version now
COPY . /srv/meetup-service

# the data in the container is not persistent
# if you change something in it, it will be lost afterwards

CMD ["php-fpm"]

example docker-compose.yml

webserver:
  build: ./nginx    #path to the docker file and configuration etc
  links:
    - application
  ports:
    - "80:80"   #from port 80 to port 80
  volumes_from:
    - application
application:
  build: ./meetup-service   #your project
  links:
    - database
  ports:
    - "9000:9000"
  volumes:
    - ./meetup-service:/srv/meetup-service  #mounting local source code into the container
  environment:
    - MYSQL_HOST=database
    - MYSQL_DATABASE=application
    - MYSQL_USER=root
    - MYSQL_PASSWORD=parola
database:
  image: mysql:5.7  #no build path, instead an image is used
  volumes:
    - /var/lib/mysql
  ports:
    - "3306:3306"
  environment:
    - MYSQL_ROOT_PASSWORD=docker
    - MYSQL_DATABASE=app


A way to deal with Schei* encoding - deal with "Non-ISO extended-ASCII"

We had, again, some issues with encoding.
*file* returns an output like "Non-ISO extended-ASCII". This time, I created a basic step sequence here.
In the end, it really is a brute force approach. And we are heavily using a lot of open source software (thanks again, dudes!). Furthermore, the sequence steps are based on this post from superuser.com.

create a list with supported encodings

iconv --list | sed 's/\/\/$//' | sort > list_with_supported_encodings.txt

iterate over the list of known encodings and try to encode the file with each one

LOCAL_SUPPORTED_ENCODING_FILE_PATH='list_with_supported_encodings.txt'
LOCAL_RESULT_FILE_PATH='result.txt'

for LOCAL_ENCODING in $(cat $LOCAL_SUPPORTED_ENCODING_FILE_PATH); do
    printf "$LOCAL_ENCODING "
    iconv -f $LOCAL_ENCODING -t UTF-8 2016-02-08UPLOADCSV.csv.stev > /dev/null 2>&1 \
        && echo "ok: $LOCAL_ENCODING" \
        || echo "fail: $LOCAL_ENCODING"
#uncomment the line below if you want to see the result and put it into the file
#done | tee $LOCAL_RESULT_FILE_PATH
#put the output into the file
done | cat > $LOCAL_RESULT_FILE_PATH

filter only the successful tryouts

LOCAL_RESULT_FILE_PATH='result.txt'

cat $LOCAL_RESULT_FILE_PATH | grep 'ok:' > 'only_ok_'$LOCAL_RESULT_FILE_PATH

Now comes the hard work, you have to give each "ok" result in the fitting file a try.

#read the result file with the ok content and create an encoded version of your broken file
LOCAL_BROKEN_FILE_PATH='relative/or/full/qualified/file/name.txt'
LOCAL_RESULT_FILE_PATH='only_ok_result.txt'

#sed -e 's/\ ok:.*//' removes everything starting with ' ok:' on each line
#assumed a line looks like "WINDOWS-1258 ok: WINDOWS-1258", the result will look like "WINDOWS-1258"
for LOCAL_ENCODING in $(cat $LOCAL_RESULT_FILE_PATH | grep 'ok:' | sed -e 's/\ ok:.*//' | uniq); do
    LOCAL_CONVERTED_FILE_PATH=$LOCAL_ENCODING'_'$LOCAL_BROKEN_FILE_PATH
    #echo $LOCAL_CONVERTED_FILE_PATH
    iconv -f $LOCAL_ENCODING -t UTF-8 $LOCAL_BROKEN_FILE_PATH > $LOCAL_CONVERTED_FILE_PATH
done

Open each file and check if your fitting special characters are looking good. "WINDOWS-1258" and "CP850" are good blind guesses here.
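
Instead of opening every file manually, a first pass with `file` can shortlist candidates (a small sketch; the `*_name.txt` pattern assumes the converted file names produced above):

```shell
#print the detected encoding for each converted file
#the "*_name.txt" pattern is an assumption, adapt it to your converted file names
shopt -s nullglob
for LOCAL_CONVERTED_FILE_PATH in *'_name.txt'; do
    printf '%s: ' "$LOCAL_CONVERTED_FILE_PATH"
    file --brief "$LOCAL_CONVERTED_FILE_PATH"
done
```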


An unfinished review about the book "Patterns, Principles, and Practices of Domain-Driven Design" by Scott Millett

To put my current status in one sentence would end in something like "Still not finished but already learned and achieved so much".
This entry is about the book named "Patterns, Principles, and Practices of Domain-Driven Design" by Scott Millett.

First of all, thank you Scott Millett.

I started reading this book at the end of 2015 and right now I am on chapter eleven. This is not because of the complexity of the book, it is because of the essential knowledge shared in each sentence (ok, maybe only in each paragraph ;-)).
My approach right now is to read a page and practice it right away, either in the company as a whole, in the team or in the code.
Since Domain Driven Design is quite close to normal behavior and life, I always run into open arms when explaining an idea to somebody, whether from QA, the developers or the business staff.
It is also cool that Scott Millett tells you more than once, Domain Driven Design is not the silver bullet.

As written above, I am far away from having finished this book, but even now (or even a few chapters earlier) I would have signed the sentence "totally worth the money".

Last but not least, thank you Scott Millett.


web - Serendipity 2.0.3 released

Happy new Year! Serendipity 2.0.3 has just been released to address a XSS security issue found and reported by Onur Yilmaz and Robert Abela from Netsparker.com. Thanks a lot for contacting us and working with us to address the issue.
[...]

source

And what I totally missed to write: the new serendipity 2.x admin interface is incredibly cool. First I thought "ok", but after a few entries, it is amazing. It is fast, and a lot of "small" things like long mouse movements are perfectly optimized. A thousand thanks from my side.


determine if an apache process is still running via bash to prevent multiple instances running

Given is the fact that you have some processes (like cronjobs) executed via a webserver like apache. Furthermore, you have installed and enabled the apache server status module. To gain some reusability benefits, we should divide and conquer the problem into either shell scripts or shell functions. Side note: whenever I write about shell, I mean the bash environment. What are the problems we want to tackle?

  • find the correct environment
  • check all available webservers if a process is not running
  • specify which process should not run and start it if possible

We can put the first two problems into shell functions like the following ones. I am referencing some self written shell functions, indicated by the "net_bazzline_" prefix.

#!/bin/bash
#find the correct environment

if net_bazzline_string_contains $HOSTNAME 'production'; then
    NET_BAZZLINE_IS_PRODUCTION_ENVIRONMENT=1
else
    NET_BAZZLINE_IS_PRODUCTION_ENVIRONMENT=0
fi

And the mighty check.

#!/bin/bash
#check all available webservers if a process is not running
####
# @param string <process name>
# @return int (0 if at least one process was found)
####
function local_is_there_at_least_one_apache_process_running()
{
    if [[ $# -lt 1 ]]; then
       echo 'invalid number of arguments'
       echo '    local_is_there_at_least_one_apache_process_running <process name>'

       return 1
    fi

    if [[ $NET_BAZZLINE_IS_PRODUCTION_ENVIRONMENT -eq 1 ]]; then
        LOCAL_ENVIRONMENT='production'
    else
        LOCAL_ENVIRONMENT='staging'
    fi

    #variables are prefixed with LOCAL_ to prevent overwriting system variables
    LOCAL_PROCESS_NAME="$1"

    #declare the array with all available host names
    declare -a LOCAL_HOSTNAMES=("webserver01" "webserver02" "webserver03");

    for LOCAL_HOSTNAME in ${LOCAL_HOSTNAMES[@]}; do
        APACHE_STATUS_URL="http://$LOCAL_HOSTNAME.my.domain/server-status"

        OUTPUT=$(curl -s $APACHE_STATUS_URL | grep -i $LOCAL_PROCESS_NAME)
        EXIT_CODE_OF_LAST_PROCESS="$?"

        if [[ $EXIT_CODE_OF_LAST_PROCESS == "0" ]]; then
            echo "$LOCAL_PROCESS_NAME found on $LOCAL_HOSTNAME"
            return 0
        fi
    done;

    return 1
}

And here is an example how to use it.

#!/bin/bash
#specify which process should not run and start it if possible

source /path/to/your/bash/functions

LOCAL_PROCESS_NAME="my_process"

local_is_there_at_least_one_apache_process_running $LOCAL_PROCESS_NAME

EXIT_CODE_OF_LAST_PROCESS="$?"

if [[ $EXIT_CODE_OF_LAST_PROCESS == "0" ]]; then
    echo "$LOCAL_PROCESS_NAME still running"
    exit 0;
else
    #execute your process
    echo 'started at: '$(date +'%Y-%m-%d %H:%M:%S');
    curl "my.domain/$LOCAL_PROCESS_NAME"
    echo 'finished at: '$(date +'%Y-%m-%d %H:%M:%S');
fi

You can put this into a loop by calling it via the cronjob environment or use watch if you only need it from time to time:

watch -n 60 'bash /path/to/your/shell/script'

Enjoy your day :-).
