All posts by Sipke

Flask It

If you have worked with Python and web technologies, then you have heard of Flask. It is well known and there are a lot of examples for getting started with simple apps.

Simple, single-file apps are great if you are learning, but they quickly bog down once you have even a little bit of complexity.

Here are some key things to know before you get too far into a monolithic application.

Modules

Blueprints are the Flask concept for splitting an application into modules. The official documentation is clear and provides an example for rendering a page, but the same concept applies to APIs if your application has no web content.

Our recommendation: as soon as you understand the simple app, break it into modules (in your preferred pattern) before adding any more routes or pages, as in the sketch below.
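
As a minimal sketch of that pattern (module and route names here are illustrative, not from a real project), an API blueprint lives in its own file and is registered on the main app:

# items.py - a minimal API blueprint (illustrative names)
from flask import Blueprint, jsonify

items_bp = Blueprint("items", __name__, url_prefix="/api/items")

@items_bp.route("/")
def list_items():
    # return something the client can consume
    return jsonify([])

# app.py - register the blueprint on the main application
from flask import Flask
from items import items_bp

app = Flask(__name__)
app.register_blueprint(items_bp)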

Exceptions

If you leave exception handling up to Flask, especially in an API-only application, you will end up with the base errors being returned, like Method Not Allowed or Internal Server Error, and the calling client will have no idea what the cause may be.

To deal with this, you can use exception handlers. Create a MyApplicationException and, when that exception occurs, return a JSON object with a meaningful description which your client can then understand. Starting from the Handling Application Errors example in the Flask documentation, catch MyApplicationException instead of HTTPException, build the base response with response = HTTPException().get_response(), and send it onwards by returning response, e.code. Below, the exception class carries code and description attributes so the handler can use them.

from flask import Flask, json
from werkzeug.exceptions import HTTPException

app = Flask(__name__)

class MyApplicationException(Exception):
    """Application error carrying an HTTP status code and a description."""
    def __init__(self, code=500, description="Internal application error"):
        super().__init__(description)
        self.code = code
        self.description = description

@app.errorhandler(MyApplicationException)
def handle_exception(e):
    """Return JSON instead of HTML for our application errors."""
    # start with an empty http exception as that is what we wish to return
    response = HTTPException().get_response()
    # replace the body with JSON
    response.data = json.dumps({
        "code": e.code,
        "description": e.description,
        # any other key/values you want the client to have access to
    })
    response.content_type = "application/json"
    return response, e.code
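
With the handler registered, any route can raise the exception and the client receives structured JSON rather than an HTML error page. A quick illustrative usage (the route and fields are made up for the example):

@app.route("/widgets/<int:widget_id>")
def get_widget(widget_id):
    if widget_id != 1:
        # handled by handle_exception above; the client gets a 404 JSON body
        raise MyApplicationException(code=404, description="Widget not found")
    return {"id": 1, "name": "demo"}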

Own the Product

If you are a product owner, your strength will come from owning the product, not the team. Understand all parts of your product, from users' needs to its build and production, and set direction in a way that releases and empowers your team.

Zebra ZD411 PDF Direct and CUPS/lpr Setup on Linux

PDF Direct

There is a lot of marketing about PDF Direct, but it is more difficult to find how-to guides or specifics, especially if you want to print a label from a Linux distribution. This post tries to bring the information together.

We purchased a ZD411T Thermal Transfer printer for demonstration purposes and decided PDF Direct might be a good choice, as it would allow labels to be stored for record-keeping as well as printed.

In 2023 the ZD411T came with LinkOS 6.3 so the PDF Direct feature was already available, and no firmware update was required.

If you have a printer with a touch display, you can set up PDF Direct using its menu. If not, you need to send the setup command to the printer.

At 10:50 in the video below (a Zebra webinar), it is shown how to do this from the developer Toolbox, but if you don’t have that installed, you can also do it from the Zebra Setup Utilities application on a Windows PC.

(Embedded video: Zebra PDF Direct webinar)

Zebra also provides resources on setting this “alternate application” (apl) setting to “pdf” to enable PDF Direct.
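
If you would rather send the setting yourself over the network, the sketch below pushes the SGD command to the printer's raw printing port (9100 is the common default). The IP address is an assumption, and you should verify the exact command against Zebra's documentation for your model; the printer typically needs a reset or power cycle afterwards.

# send the apl setting to the printer over raw TCP port 9100
import socket

PRINTER_IP = "192.168.1.50"  # assumption: replace with your printer's address

command = b'! U1 setvar "apl.enable" "pdf"\r\n'

with socket.create_connection((PRINTER_IP, 9100), timeout=5) as s:
    s.sendall(command)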

Once you move to PDF Direct, your printer will only handle PDF Direct, so don’t chop and change. That question was raised at the end of the video above.

The video also mentions that the PDF file being sent should have the same dimensions as the label.
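
If you want to sanity-check a PDF's dimensions before sending it, something like the following works (a sketch assuming the pypdf library is installed; PDF page sizes are in points, 72 to the inch, and the filename is the example used later in this post):

# print a PDF's page size so you can compare it to the label size
from pypdf import PdfReader

page = PdfReader("2022-123456_label.pdf").pages[0]
width_in = float(page.mediabox.width) / 72
height_in = float(page.mediabox.height) / 72
print(f"{width_in:.2f} x {height_in:.2f} inches")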

After we enabled PDF Direct from Windows, printing at first did not work at all, even after confirming the setup had been sent and rebooting the printer. A reboot of the PC was required before the PDF would print from Adobe Reader via Ctrl-P.

Disable Boot Label Scroll

By default, the printer seems to be set up to do some auto-calibration or similar on power-up, and it will print out additional blank labels on each boot. This can be adjusted in the printer settings. We found the following settings work if you want to avoid scrolling out blank labels.

  • Media Type: Non Continuous / Web sensing
  • Media Handling: Tear Off
  • Media Feed Option: No feed (avoids label feed due to calibration at power on)
  • Back Feed: Before Printing

CUPS (Linux or Mac)

There is mention that you can print using CUPS on Linux and Mac, but Zebra’s documentation shies away from directly saying they support it for Linux. So, we recommend you set your printer up first with a Windows PC and its tools, and move to Linux/Mac once you have proven it works.

We were able to add the ZD411 as a ZPL printer to an Ubuntu 20.04 distribution using these instructions:

Adding-a-Zebra-Printer-in-a-CUPS-Printing-System

which basically involves walking through adding a printer via the CUPS web interface at localhost:631. The printer did not work until we had done this.

We had already set up the print settings on Windows, so we left most of the CUPS printer settings at “Printer default”.

Attempting to print a PDF via the Ubuntu default Document Viewer resulted in an empty label being scrolled out; the print head did not make any noise, so no print attempt appears to have been made.

However, sending the PDF file directly to the printer using the lpr command printed the label. The following commands list the printers (to get the printer name) and then send the file (of the same dimensions as the label) directly to the printer to print the label.

lpstat -p -d
lpr -P Zebra_Technologies_ZTC_ZD411-203dpi_ZPL  2022-123456_label.pdf

Hopefully that saves you some setup time if you choose to go Zebra on Linux.

Embedded Linux Gems

Serial Port Access for non-sudo User

Working with embedded devices often requires serial access. By default, Linux tty ports require superuser access, so when running an application like a serial communications parser, you will want to give access to the user rather than require your application to have sudo powers.

Once you know which user(s) require access, add them to the dialout and tty groups.

sudo usermod -a -G dialout <serial-user>
sudo usermod -a -G tty <serial-user>

These are group membership changes, so they take effect at the user's next login (a reboot also works).
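
To confirm the access works, log in again as that user and open a port without sudo. A quick sketch, assuming pyserial is installed (python3 -m pip install pyserial) and your device is at /dev/ttyUSB0:

# open a serial port as a regular user to confirm group access
import serial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
    print("Opened", port.name, "without sudo")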

Password Hash

Embedded devices will often need a password hash, for example to pre-seed a user account in an image.

Use mkpasswd to generate a hash. First, use the -m help option to see which hash algorithms are available.

$ mkpasswd -m help
Available methods:
sha512crypt     SHA-512
sha256crypt     SHA-256
md5crypt        MD5
descrypt        standard 56 bit DES-based crypt(3)

Then generate the hash.

$ mkpasswd -m sha512crypt
Password: 
$6$IPKp8j6a8R$2S1SZigM5x.mupPNX50y0/.mhoJ42LSEe80wszpI6L5jiq4oEUPH9A73zAyT3BqJQm1HIk1p9kI3H.eimTMxY.
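
If mkpasswd is not available on your host, the same sha512crypt hash can be generated from Python's standard library (a fallback sketch; note the crypt module is deprecated since Python 3.11 and removed in 3.13):

# generate a sha512crypt hash via the system crypt(3)
import crypt

print(crypt.crypt("my-password", crypt.mksalt(crypt.METHOD_SHA512)))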

Center Ubuntu/Gnome Background

If the image you set as your background does not fit the screen layout, it will likely be stretched or cropped. To center the image (effectively shrinking it), execute on the command line:

$ gsettings set org.gnome.desktop.background picture-options 'centered'

Explanation and details: https://askubuntu.com/questions/1111201/background-image-resize-in-ubuntu-18-04-1-lts

PySide6 6.2 to 6.3 Upgrade

If you use PySide6 and have developed against version 6.2.x, then upgrading to 6.3 or later may produce the following failures.

Module Import Error

ModuleNotFoundError: No module named 'PySide6.QtCore'

The Qt documentation hints at it: if you performed just a pip install of the new version, then this is likely your issue. Instead, uninstall and re-install.

In case you are updating your 6.2.x install, make sure the PySide6_Essentials and PySide6_Addons wheels are getting installed, otherwise, uninstall and install PySide6 to get the new structure.

From: https://www.qt.io/blog/qt-for-python-details-on-the-new-6.3-release

So, in your virtual environment or main environment:

# ensure all previous installs uninstalled
python3 -m pip uninstall PySide6
python3 -m pip uninstall PySide6-Essentials
python3 -m pip uninstall PySide6-AddOns
# and finally re-install
python3 -m pip install PySide6

Could not load the Qt platform plugin “xcb”

After upgrading PySide6 to 6.5 on an Ubuntu 20.04 system, an application that worked with PySide6 6.2.4 may fail on a missing plugin as follows.

qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb, vnc, wayland, minimalegl, vkkhrdisplay, offscreen, wayland-egl, minimal, linuxfb, eglfs.

If you enable debug by adding the environment variable …

QT_DEBUG_PLUGINS=1

… you may see that later versions of PySide6 also look for the xcb-cursor library.

libxcb-cursor.so.0: cannot open shared object file: No such file or directory

This can be installed using apt install as follows

sudo apt install libxcb-cursor0
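
As an aside, the debug flag can also be set from inside the script rather than the shell, as long as it happens before PySide6 loads Qt; a minimal sketch:

# enable Qt plugin debug output from within the application itself
import os
os.environ["QT_DEBUG_PLUGINS"] = "1"  # must be set before the Qt platform plugin loads

from PySide6.QtWidgets import QApplication

app = QApplication([])  # plugin search paths and failures are now logged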

Setting up LabJack U3 On Linux for Python

Here’s a collation of steps to set up a LabJack U3 device on Linux, in this case Ubuntu 20.04.

Download Tarball

Download the Linux 64-bit (x86_64) installer package from:

https://labjack.com/pages/support/?doc=/software-driver/installer-downloads/ljm-software-installers-t4-t7-digit/#header-three-3r1sj

Extract and Install

Extract

$ tar -xzf labjack_ljm_software_2019_07_16_x86_64.tar.gz

$ ls -al
labjack_ljm_software_2019_07_16_x86_64
labjack_ljm_software_2019_07_16_x86_64.tar.gz

Install

$ cd labjack_ljm_software_2019_07_16_x86_64
$ sudo ./labjack_ljm_installer.run 
[sudo] password for user: 
Creating directory labjack_ljm_software
Verifying archive integrity... All good.
Uncompressing LabJack software for T4s, T7s, and Digits for Linux x86_64.......................................................................................................................................................................................................................................................................
Installing libLabJackM.so.1.20.1 to /usr/local/lib... done.
Installing LabJackM.h to /usr/local/include... done.
Installing constants files to /usr/local/share... done.
Installing labjack_kipling to /opt... done.
Installing command line shortcut labjack_kipling to /usr/local/bin... done.
Registering with application launcher... done.
/usr/local/lib >> /etc/ld.so.conf
Adding LabJack device rules... done.
Restarting the device rules... done.

Install finished. Please check out the README for usage help.

If you have any LabJack devices connected, please disconnect and
reconnect them now for device rule changes to take effect.

Verify

Plug your device (LabJack U3) into a USB port and check /var/log/syslog for LabJack U3 to ensure it is found, as below.

$ tail /var/log/syslog
May  3 11:33:50 laptop systemd[5916]: tracker-extract.service: Succeeded.
May  3 11:33:51 laptop kernel: [15761.633639] usb 3-3: new full-speed USB device number 6 using xhci_hcd
May  3 11:33:51 laptop kernel: [15761.782937] usb 3-3: New USB device found, idVendor=0cd5, idProduct=0003, bcdDevice= 0.00
May  3 11:33:51 laptop kernel: [15761.782942] usb 3-3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
May  3 11:33:51 laptop kernel: [15761.782944] usb 3-3: Product: LabJack U3
May  3 11:33:51 laptop kernel: [15761.782947] usb 3-3: Manufacturer: LabJack

Install ExoDriver

The software install above does not install all requirements. The Exodriver is also required for USB control from Python, and is detailed here:

https://labjack.com/pages/support?doc=/software-driver/installer-downloads/exodriver/#section-header-two-25lef

$ sudo apt-get install build-essential
... output removed
$ sudo apt-get install libusb-1.0-0-dev
... output removed
$ sudo apt-get install git-core 
... output removed
$ git clone https://github.com/labjack/exodriver.git
Cloning into 'exodriver'...
remote: Enumerating objects: 1044, done.
remote: Counting objects: 100% (48/48), done.
remote: Compressing objects: 100% (38/38), done.
remote: Total 1044 (delta 23), reused 19 (delta 10), pack-reused 996
Receiving objects: 100% (1044/1044), 378.59 KiB | 3.08 MiB/s, done.
Resolving deltas: 100% (615/615), done.
$ cd exodriver/
$ sudo ./install.sh 
Making..
rm -f liblabjackusb.so.2.7.0 *.o *~
cc -fPIC -g -Wall  -c labjackusb.c
cc -shared -Wl,-soname,liblabjackusb.so -o liblabjackusb.so.2.7.0 labjackusb.o -lusb-1.0 -lc
Installing..
test -z /usr/local/lib || mkdir -p /usr/local/lib
install liblabjackusb.so.2.7.0 /usr/local/lib
test -z /usr/local/include || mkdir -p /usr/local/include
install labjackusb.h /usr/local/include
ldconfig
Adding 90-labjack.rules to /lib/udev/rules.d..
Restarting the rules..
Install finished. Thank you for choosing LabJack.
If you have any LabJack devices connected, please disconnect and reconnect them now for device rule changes to take effect.

If you do not build and install it, you end up with the following error when you try to open a device in Python.

~$ python3
Python 3.8.10 (default, Mar 13 2023, 10:26:41) 
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import u3
<class 'LabJackPython.LabJackException'>: Could not load the Exodriver driver. Ethernet connectivity only.

Check that the Exodriver is installed, and the permissions are set correctly.
The error message was: liblabjackusb.so: cannot open shared object file: No such file or directory
>>> quit()

Install Python Library

Install the Python library as per:

https://labjack.com/pages/support?doc=/software-driver/example-codewrappers/labjackpython-for-ud-exodriver-u12-windows-mac-linux/#header-three-qhnmb

$ python3 -m pip install LabJackPython
Collecting LabJackPython
  Downloading LabJackPython-2.1.0-py2.py3-none-any.whl (115 kB)
     |████████████████████████████████| 115 kB 2.2 MB/s 
Installing collected packages: LabJackPython
Successfully installed LabJackPython-2.1.0

If you installed without sudo, your library will be in your .local folder.

Find the u3.py file like this

$ find ~ | grep u3.py
/home/user/.local/lib/python3.8/site-packages/u3.py

Open it to have a look, or see it in the GitHub repo:

https://github.com/labjack/LabJackPython/blob/master/src/u3.py

Run the python3 interpreter and verify you can communicate with your device by instantiating a U3() class, reading an analog input (with pins in their default state, so we avoid configuring them at this point) and printing the temperature.

$ python3
Python 3.8.10 (default, Mar 13 2023, 10:26:41) 
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import u3
>>> d = u3.U3()
>>> print (d.getAIN(4))
0.305592048
>>> print(d.getTemperature())
303.5464268177748
>>> d.close()
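
The same session as a small standalone script, assuming the installs above (the 303.5 reading suggests getTemperature returns kelvin):

# read an analog input and the internal temperature from the first U3 found
import u3

d = u3.U3()
try:
    print("AIN4:", d.getAIN(4))                # volts on analog input 4
    print("Temperature:", d.getTemperature())  # kelvin, from the internal sensor
finally:
    d.close()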

Docker Snippets

Docker Compose Projects

We recommend always using a project name with the docker-compose -p argument.

# start two separate instances of the same docker-compose project
docker-compose -p dev_instance_1 up
docker-compose -p dev_instance_2 up

Docker Compose nicely defines and contains your services and is pretty straightforward. It is not designed for large-scale deployment though, and a big gotcha is that if you run multiple instances on a single computer/server, you may destroy parts of your existing instance. So take care, and ALWAYS use project names if you are running multiple instances on the one server. If you are using docker-compose for your development (as we do), you will likely run into this.

Volumes across Instances

The danger of multiple instances extends further if you are defining volumes. For this reason, if you are using volumes in your multi-instance development, consider using extensible YAML files as detailed in the Docker docs.

Restart Single Worker using Docker Compose

docker-compose -p dev_instance_1 restart webserver

Access docker command line

List the running containers, and then connect using docker exec.

docker ps
CONTAINER ID   IMAGE                         COMMAND                  CREATED       STATUS                  PORTS                                         NAMES
dbc1effc3903   dev_project/nginx:latest           "nginx -g 'daemon of…"   3 weeks ago   Up Less than a second   443/tcp, 0.0.0.0:81->80/tcp, :::81->80/tcp    dev_instance_1_nginx_1

# select identifier from list
docker exec -it dbc1effc3903 /bin/bash
bash-5.1# echo here you are on the docker cli

Putting the I back in CI

Have you ever considered where the I is in your Continuous Integration (CI) solution?

The challenge in CI is what it is you are continually doing. This is important because the integration is the key, and a good manual integration setup is more effective than a poor continuous something-or-other.

Continuous Integration has been a part of software development for decades, but it has more recently become a buzzword, and hence a must for larger software teams, lest you be seen as not doing software correctly! The unfortunate side effect can be that something is put in place to ensure the implicit KPI is met, and it stops there.

The C in CI

The C in CI is simple, as continually doing something does not really involve much more than, say, configuring a Jenkins environment or online pipeline to kick off some scripts to build something. This is probably why, often enough, teams develop a CI which is effectively a build setup to produce whatever monolithic blob of software or embedded image is needed for the end product. This is important, naturally, but if this is where the CI stops, then the integration is limited or non-existent, and you have a continuous build (CB) setup rather than a continuous integration setup.

A fundamental problem with these monolithic builds, built regularly, often as nightly builds, is the ramifications of a single failure. If your build takes hours to perform, and a line of code fails towards the end of the build, the build that a bunch of team members have been waiting for, to verify their integration, becomes effectively unusable and is delayed by another day. Hence, trying to do your integration using these continuous builds eventually grinds to a halt as a useful tool; they become a continuous annoyance and time eater rather than a way to improve your development.

A primary reason this happens is that you need a monolithic build of some sort to make a release. So this is done, and once it is done, it is understandable that one thinks CI is done, and no actual integration is performed. Furthermore, anything monolithic allows easy addition of non-modular components (for example, software with excessive dependency chains), so it is ‘quick and easy’ to add; before long, untangling is too hard, any attempt to modularise and improve integration is seen as too time-consuming, and the train continues along its path of least resistance.

The I in CI

Unlike the Continuous, the Integration is unfortunately more involved and does require some thinking and planning, not something all engineers are fond of or interested in. Some build systems, like the Yocto Project, do lend themselves to helping you achieve more modular integration setups, but it requires skill, effort and the desire to actually do it, so even setups using advanced build systems can fall into the trap of not utilising well-targeted integration.

The integration needs to be multi-fold while targeting the end product, splitting the device software into manageable sets of sub-components which can work and be tested independently of the final system. It should also leverage the individual software component tests, so that the integration stage only needs integration testing, rather than diluting it by lumping software tests into the integration effort.

Put the I back in CI

So, taking a step back, for successful software development, including embedded development, you need these things.

  1. A build setup to generate your releases and system test images, with emphasis here on the word system. This does not need to be continuous, and probably should be less continuous, timed instead to suit effective iteration.
  2. A build setup for individual software components, with emphasis here on individual. This should run very often and very fast, and whenever possible for an embedded device should run on the development host rather than on the embedded or emulated embedded device (or you risk losing the fast). It should also include effective unit and component tests.
  3. Multiple integration setups, simply because you cannot effectively integrate if you do everything at once. This should involve:
    3a. groups of components built and tested on the development host.
    3b. groups of components built and tested on an emulated embedded device.
    3c. groups of components built and tested on a physical embedded device.

Regular Iteration

We will call 1 and 2 Regular Iteration. These steps should be done at regular intervals, feeding from individual software builds into system builds. It is tempting to call this Continuous Iteration, just to put an annoying twist on CI :-), but RI is maybe a little less confusing.

Continuous Integration

Our last group, 3, is what we are going to consider Continuous Integration, with the integration fairly and squarely at the centre, because the emphasis here is to integrate in numerous ways, targeting specific feature sets, to ensure a robust system release through early detection of issues before system testing.

The Why

The positive side effects of multiple integration setups are:

  • Finding issues will be faster
    • previously, simple build issues would be found late in a slow, large build; now most build issues will be found in fast, small, individual build steps
  • Isolating issues will be easier
    • with targeted integration setups, triaging of many issues will be limited to those components, and many subtle system errors will be ironed out at this stage
  • Less time is spent building huge software blobs/releases
    • having actively decided that building large blobs does not have to happen all the time, we free resources for faster ways to find issues
  • More time is available to test your huge software blob/release
    • because we spend less time building large blobs, we free resources to test the blobs that are built at less regular intervals

Yocto Project ® and all related marks and logos are trademarks of The Linux Foundation. This website is not, in any way, endorsed by the Yocto Project or The Linux Foundation.

Page by Page Angularjs to Angular 2+ Migration

If you have worked with network-capable embedded devices, you will have worked with web interfaces to control them, as a web UI is convenient.

If you have an interface written in AngularJS, then you may be considering updating to Angular as a natural progression.

There are numerous examples online of controlled and gradual ways to do this, and they seem very valid, but most of them require a consistent architecture on the AngularJS side.

If your main aim is to brute-force replace routes/pages one by one, i.e. create a new Angular page to replace each AngularJS one, then coercing AngularJS into Angular ‘style’ is probably not good value for money.

Instead, here is a summary of some sites we found which together will allow you to:

  1. upgrade your entire AngularJS site into a parent Angular site.
  2. route/redirect to the base page of your AngularJS site (and effectively hand over the router).
  3. install an event dispatcher from AngularJS to Angular for information that the new pages need from the old ones.

The first two steps are detailed very nicely in this reference: https://medium.com/lapis/running-angularjs-1-6-in-angular-5-side-by-side-d2ed771e2a8f

Doing that will get you an Angular project with your original AngularJS running entirely inside it.

Step 3 is the ‘tunnel’ between the two, and involves a rudimentary dispatcher from AngularJS to Angular.

Remember that we are deprecating AngularJS, so keeping things simple means we can get to the end result faster.

On the Angular side, in a component.ts of your choosing, add:

import { Component, HostListener } from '@angular/core';
---snip---

@Component({
  selector: 'your-component',
  templateUrl: './your.component.html',
  styleUrls: ['./your.component.css']
})
export class YourComponent {

  constructor( ) { }

  // listen for the CustomEvent dispatched on window by the AngularJS side
  @HostListener('window:message', ['$event']) messageListener(event: any) {
    console.log("Your message has arrived, decode the data and act on it");
    if (event.detail.your_msg_code === 'user_deleted') {
       // etc
    }
  }

---snip---
}

The HostListener reference is https://angular.io/api/core/HostListener

The relevant piece here is ‘window:message’, where window is a system string, as shown in the reference above, and message is the event name the AngularJS dispatcher (see further down) uses in its CustomEvent.

On the AngularJS side, you need to dispatch the event, as follows.

angular.module('Main', ['ngTouch'])

        .controller('MainController', ['$window', '$scope', '$q', '$http', '$location', '$interval',
          function ($window, $scope, $q, $http, $location, $interval) {
        
---snip---

        $scope.someFunction = function () {
        
          // data here is some variable/object
          yourDispatcher({'your_msg_code': data});

---snip---
        }

        function yourDispatcher(data) {
          // message and detail are part of the custom event so need to be just that.
          var event = new CustomEvent('message', {
            'detail': data
          });
          window.dispatchEvent(event);
        }

---snip---

        
        }]);

The use of CustomEvent and dispatchEvent was gleaned from here https://stackoverflow.com/questions/41050573/when-angularjs-controller-meets-postmessage-events

and reference documentation is at https://developer.mozilla.org/en-US/docs/Web/API/EventTarget/dispatchEvent and https://developer.mozilla.org/en-US/docs/Web/API/CustomEvent

Finally, you now have the means to create new pages in Angular and replace the old ones one by one.