Apr 25

Fun with templates

As you may be aware, I maintain dbus-cxx, and I’ve been working on it lately to get it ready for a new release. Most of that work is not adding new features, but updating the code generator to work correctly. However, this post is not about that work; it is about the work that I am doing on the next major version of dbus-cxx (2.0). Part of this work involves using new C++ features, since libsigc++ now requires C++17 to compile. With variadic templates (available since C++11), we can have more than 7 template parameters to a function. (The limit of 7 was arbitrarily chosen by libsigc++; it’s not a C++ limitation.)

Because of this, however, some of the code in dbus-cxx needs to change in order to work correctly. The main portion that I’m working on right now has to do with getting the DBus signatures to work properly. Here’s a small piece of code that is currently in dbus_signal.h (after running dbus_signal.h.m4 through m4):

  /** Returns a DBus XML description of this interface */
  virtual std::string introspect(int space_depth=0) const {
    std::ostringstream sout;
    std::string spaces;
    for (int i=0; i < space_depth; i++ ) spaces += " ";
    sout << spaces << "<signal name=\"" << name() << "\">\n";

    T_arg1 arg1;
    sout << spaces << "  <arg name=\"" << m_arg_names[1-1] << "\" type=\"" << signature(arg1) << "\"/>\n";
    T_arg2 arg2;
    sout << spaces << "  <arg name=\"" << m_arg_names[2-1] << "\" type=\"" << signature(arg2) << "\"/>\n";
    sout << spaces << "</signal>\n";
    return sout.str();
  }

This method is generated once for each number of arguments that we support. The important part is that a T_argN local is declared once for each argument. With variadic templates, writing each argument out by hand like this is no longer possible; the way to get the individual types out of a variadic parameter pack is recursion.

Recursion + templates is not something that I’m very familiar with, so this took me a while to figure out. However, I present the following sample code for getting the signature of a DBus method:

  #include <cstdint>
  #include <iostream>
  #include <string>

  inline std::string signature( uint8_t )     { return "y"; }
  inline std::string signature( bool )        { return "b"; }
  inline std::string signature( int16_t )     { return "n"; }
  inline std::string signature( uint16_t )    { return "q"; }
  inline std::string signature( int32_t )     { return "i"; }
  inline std::string signature( uint32_t )    { return "u"; }
  inline std::string signature( int64_t )     { return "x"; }

  template<typename... argn> class sig;

  // Base case: no arguments left, so the signature is empty
  template<> class sig<>{
  public:
    std::string sigg() const {
      return "";
    }
  };

  // Recursive case: peel off the first argument and recurse on the rest
  template<typename arg1, typename... argn>
  class sig<arg1, argn...> : public sig<argn...> {
  public:
    std::string sigg() const {
      arg1 arg{};
      return signature(arg) + sig<argn...>::sigg();
    }
  };

  int main(int argc, char** argv){
    std::cout << sig<uint32_t,uint32_t,bool,int64_t>().sigg() << std::endl; // prints: uubx
  }


This took me a few hours to figure out exactly how to do, so I’m at least a little proud of it! The other confusing part that I had to work out was how to combine one recursive call (the overloaded signature()) with the recursive call for the templates, which leads us to the following observation:

It’s recursion all the way down.

Mar 15

One large program or many little programs?

Earlier this week, I came across this question on softwareengineering.stackexchange.com.  This has some relevance for me, since at work two of our main projects follow the two sides of this design philosophy: one project is a more monolithic application, and the other follows more of a microservices model (i.e. many small applications).  There are some reasons for this which I will now attempt to explain.

Option 1: Monolithic Application

The first project that I will explain here is the one with a more monolithic application.  First, a brief overview of how this project works.  Custom hardware (running Linux) collects information from sensors (both built-in and often third-party over Modbus), aggregates it, and classifies it.  The classification is due to the nature of the project – sensor data falls into one of several classes (temperature, voltage, etc.).  This information is saved off periodically to a local database, and then synchronized with a custom website for viewing.  The project, for the most part, can be split into these main parts:

  • Data collector
  • Web viewer
  • Local viewer(separate system, talks over ethernet)

Due to the nature of the hardware, there is no web interface on the hardware directly; it is on a cloud server.

Now, the data collector application is a mostly monolithic application.  However, it is structured similarly to the Linux kernel in that we have specific ‘drivers’ that talk with different pieces of equipment, so the core parts of the application don’t know what hardware they are talking to, they are just talking with specific interfaces that we have defined.

In this case, why did we choose to go with a monolithic application?  Well, there are a few reasons and advantages.

Reason 1: As primarily a data collector device, there’s no real need to have different applications send data to each other.

Reason 2: The development of the system is much easier, since you don’t need to debug interactions between different programs.

Reason 3: Following from the first two, we often have a need to talk with multiple devices on the same serial link using Modbus.  This has to be siphoned in through a single point of entry to avoid contention on the bus, since you can only have one modbus message in-flight at a time.

Reason 4: All of the data comes in on one processor, there is no need to talk with another processor.  Note that this is not the same as talking with other devices.

Reason 5: It’s a lot simpler to pass data around and think about it conceptually when it is all in the same process.

Now that we have some reasons, what are some disadvantages to this scheme?

Disadvantage 1: Bugs.  Since our application is in C++(the ability to use C libraries is important), a single segfault can crash the entire application.

Disadvantage 2: The build can take a long time; the incremental build and linking isn’t bad, but a clean build can take a few minutes.  A test build on Jenkins will take >10 minutes, and it can still take several minutes to compile on a dev machine if you don’t do parallel make.

Overall, the disadvantages are not show-stoppers(except for number 1, there is some bad memory management happening somewhere but I haven’t figured out where yet).  The separation into three basic parts(data collection, local GUI, web GUI) gives us a good separation of concerns.  We do blend in a little bit of option 2 with multiple applications, but that is to allow certain core functionality to function even if the main application is down – what we use that for is to talk with our local cell modem.  Given that the data collection hardware may not be easily accessible, ensuring that the cellular communications are free from bugs in our main application is important.

Option 2: Multiple Applications

If you don’t want to make a monolithic application, you may decide to do a lot of small applications.  One of my other primary projects uses this approach, and the reason is due to the nature of the hardware and how things need to interact.

In our project with multiple applications, we have both multiple compute units and very disparate sensor readings coming in.  Unlike the monolithic application, where data is easily classified into categories, this project takes in many more kinds of data.  This data can come in on any processor, so there is no ‘master’ application per se.  The data also needs to be replicated to all displays, which may (or may not) be smart displays.  We also want to insulate ourselves from failure in any one application: a single bug should not take down the entire system.

To handle this, we essentially have a common data bus that connects all of the processors together.  We don’t use RabbitMQ, but the concept is similar to their federation plugin, in that you can publish a message on any processor and it will be replicated to all connected processors.  This makes adding new processors extremely easy.  All of the data is basically made on a producer/consumer model.

Advantage 1: Program resiliency.  With multiple applications running, a bug in one application will not cause the others to exit.

Advantage 2: You can easily add more processors.  This is not really a problem for us, but since data is automatically synchronized between processors, adding a new consumer of data becomes very simple.

Advantage 3: Data can come and go from any connected system, you need not know in advance which processor is giving out information.

This design is not without some caveats though.

Disadvantage 1: Debugging becomes much harder.  Since you can have more than one processor in the system, your producer and your consumer can be on different processors, or you could have multiple consumers.

Disadvantage 2: Because this is a producer/consumer system (the only way I can see to effectively scale), there’s no easy way to get data directly from a specific application (e.g. no straightforward remote procedure call over the network).


There are two very different use cases for these two designs.  From my experience, here’s a quick rundown:

Monolithic Application

  • Generally easier to develop, since you don’t have to figure out program<->program interactions
  • Often needed if you need to control access to a resource(e.g. physical serial port)
  • Works best if you only have to run on one computer at a time

Multiple Applications

  • Harder to develop due to program<->program interactions
  • Better at scaling across multiple computers
  • Individual applications are generally simpler

Due to the nature of engineering, there’s no one way to do this that is best.  There are often multiple ways to solve a given problem, and very rarely is one of them unequivocally the best solution.

Jan 31

Counting lines of code

So a few years ago, a coworker and I had to figure out how many lines of code we had. This was either for metrics or because we were curious, I can’t really remember why. I came across the script again today while going through some old code. Here it is in all its glory:


let n=0; for x in "$@"; do temp1=`find $x | grep '\\.cpp$\|\\.c$\|\\.java$\|\\.cc$\|\\.h$\|\\.xml$\|\\.sh$\|\\.pl$\|\\.bash$\|\\.proto$'`; temp=`cat /dev/null $temp1 | grep -c [.]*`; let n=$n+$temp; if [ $temp -gt 0 ]; then printf "%s: " $x ; echo $temp; fi; done ; echo Total: $n

This took us at least an hour(or perhaps more), I’m not really sure.

Anyway, it was after we had done all of that work that we realized that wc exists.

Apr 09

Bitcoin Mining for a Useful Purpose?

So I was thinking about this the other day when I came across this article on Slashdot, which points out that GPU prices are high due to the demand from Bitcoin (and other cryptocurrency) mining.  This got me thinking: what’s the point of this?  What if we could do something useful (well, more useful) than mining virtual currency?  I’ve been running BOINC for probably 12+ years now, doing calculations for SETI@Home.  Originally, I wasn’t even using the BOINC client; SETI@Home had their own standalone software that has since been superseded by BOINC.  Given that the original software was used until 2005, I have probably been doing this for 15+ years at this point (logging into the SETI website indicates I have been a member since December 2005)…

But I digress.  The question really is, could we mine cryptocurrency as part of the normal BOINC process?  It seems like this would have a number of benefits for people:

  • For mining people, they can still mine coins
  • For projects on BOINC, they gain computing power from people wanting to mine coins at the same time
  • This could result in more computing power for a “good” cause as well, instead of what is(in my mind at least) a rather useless endeavor

I’m not exactly sure how this would work, as I don’t really know anything about blockchain.  Could perhaps Ethereum be used to provide people with “compute credits” that would allow this scheme to work?  It could also provide a good way of storing the calculation results, and have them verifiable.

Feb 19

Intergalactics Source Code

Does anybody out there have the original source code to the Java game Intergalactics?  I was able to pull the (compiled) client off of SourceForge, but without a server it’s not too useful.  I did start updating the client to work properly over the summer along with a new server implementation, but it would still be interesting to get all of the original source code.

Anyway, if you do happen to have the original code, I would be grateful.  Intergalactics was always a nice fun timewaster.  It wasn’t too complicated but it did require a certain amount of strategy.

Jul 21

NTP Woes

For the past year or so, I’ve been battling to get NTP fed from GPSD.  This has proven to be somewhat harder than I imagined.  GPSD is supposed to feed NTP directly through shared memory, but this was not happening.  I haven’t been trying to do this the entire time, but it’s been a problem for at least a year!

The simple situation is as follows: I have a device with a GPS receiver on it in order to get the time.  NTP is also running to sync the time whenever the device connects to the internet (most of the time, we are not on the network).  This is a very vanilla Debian 8 (Jessie) system on an ARM processor, so we’re not custom-compiling any of the standard packages.  The GPS is also hotplugged, so we have a udev rule that calls gpsdctl with the proper TTY when the USB device is activated:

(udev match options here) SYMLINK+="gps%n", TAG+="systemd", ENV{SYSTEMD_WANTS}="gpsdctl@%k.service"

According to all of the literature on the web, we should just be able to do this by adding the following to /etc/ntp.conf:

#gpsd SHM
server 127.127.28.0 prefer
fudge 127.127.28.0 refid GPS flag1 1

(note that we are setting flag1 here, as the system that this is running on has no clock backup).

This allows NTP to read from a shared memory address that is fed by GPSD.  You can check this with ntpq -p:

     remote           refid      st t when poll reach   delay   offset  jitter
 SHM(0)          .GPS.            0 l    -   64    0    0.000    0.000   0.000
-altus.ip-connec    2 u    7   64    1  189.620    6.525   5.288
+ks3370497.kimsu    2 u    5   64    1  186.252   17.282  14.504
+head1.mirror.ca    2 u    5   64    1  186.503   15.792  14.691
*de.danzuck.eu   2 u    4   64    1  155.953    8.786  13.366

As you can see, in this situation there is no connection to the SHM segment, but we are synced with another NTP server.  However, we can verify that GPSD is running and getting GPS data by running the ‘gpsmon’ program(in the ‘gpsd-clients’ package).

The other important thing to note about the SHM segment is that its ‘reach’ value is always 0, and will never increase.  You can re-check NTP’s attempts to reach it at any time with ntpq -p.

This was very confusing to me: how are we not talking with the SHM segment?  This would also seem to work on some systems, but not other systems.  No matter how long I left it running, NTP would never get any useful data from the SHM segment.  Finally, I stumbled across a link, which I can’t find now, that said that GPSD must be started before NTP.  I then tried that on my system, by stopping my software(which is always reading from GPSD), stopping NTP and then stopping GPSD.  I then started GPSD, NTP, and my software(which will also initialize the GPS system to start sending commands).  This causes NTP to consistently sync with GPSD.

Do this like the following:

$ systemctl stop your-software
$ systemctl stop ntp
$ systemctl stop gpsd
# ... make sure everything is dead ...
$ systemctl start gpsd
$ systemctl start ntp
$ systemctl start your-software

For reference, here’s the ntp.conf that I am using(we only added the SHM driver from the standard Debian install):

# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help

driftfile /var/lib/ntp/ntp.drift

# Enable this if you want statistics to be logged.
#statsdir /var/log/ntpstats/

statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable

# You do need to talk to an NTP server or two (or three).
#server ntp.your-provider.example

#gpsd SHM
server 127.127.28.0 prefer
fudge 127.127.28.0 refid GPS flag1 1

# pool.ntp.org maps to about 1000 low-stratum NTP servers.  Your server will
# pick a different set every time it starts up.  Please consider joining the
# pool: <http://www.pool.ntp.org/join.html>
server 0.debian.pool.ntp.org iburst
server 1.debian.pool.ntp.org iburst
server 2.debian.pool.ntp.org iburst
server 3.debian.pool.ntp.org iburst

# Access control configuration; see /usr/share/doc/ntp-doc/html/accopt.html for
# details.  The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
# might also be helpful.
# Note that "restrict" applies to both servers and clients, so a configuration
# that might be intended to block requests from certain clients could also end
# up blocking replies from your own upstream servers.

# By default, exchange time with everybody, but don't allow configuration.
restrict -4 default kod notrap nomodify nopeer noquery
restrict -6 default kod notrap nomodify nopeer noquery

# Local users may interrogate the ntp server more closely.
restrict 127.0.0.1
restrict ::1

# Clients from this (example!) subnet have unlimited access, but only if
# cryptographically authenticated.
#restrict 192.168.123.0 mask 255.255.255.0 notrust

# If you want to provide time to your local subnet, change the next line.
# (Again, the address is an example only.)
#broadcast 192.168.123.255

# If you want to listen to time broadcasts on your local subnet, de-comment the
# next lines.  Please do this only if you trust everybody on the network!
#disable auth
#broadcastclient

TL;DR Having trouble feeding NTP from GPSD when you have a good GPS lock?  Make sure that GPSD starts before NTP.

Apr 09

QTest Project Setup

Recently, I was looking into how to set up a Qt project with unit tests.  The Qt-esque way to do this is to use the QTest framework.  While not a bad framework, the problem is that there isn’t a good way to integrate it within your project: you must link with the QTest library and have a special main() method in order to run the tests.  This brings up two problems:

  1. How do we run the tests if we have only one executable?
  2. How do we link with QTest only at certain times (e.g. we don’t want to link with QTest in the executable we ship)?

Searching for a solution to the problem, I found that some people create a .pri file listing the source files, which is then included by two projects: one for the main application, and one for the unit tests.  The disadvantage is that this makes it hard to work with Qt Creator, since it can’t automatically add files to the SOURCES and HEADERS lists.  So that’s not good.

The other solution that I came across was to add the files to each project by hand.  However, this adds a lot of overhead, and you must remember to update both projects whenever something changes.  So that’s not good either.

The solution that I came up with was to have three separate projects:

  • One project for the main code of our project.  This gets built as a static library.
  • One project to run the main code of our project.  This is the normal executable.
  • One project to run the unit tests.  This links with QTest, and doesn’t get installed by default.
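As a sketch, this three-project split can be expressed with a qmake subdirs layout (all names here are hypothetical, not from the actual repository):

```qmake
# top-level .pro: build the library first, then the app and the tests
TEMPLATE = subdirs
SUBDIRS = core app tests
app.depends = core
tests.depends = core

# core/core.pro: the main code, built as a static library
TEMPLATE = lib
CONFIG += staticlib

# app/app.pro: the normal executable, links the static library
TEMPLATE = app
LIBS += -L../core -lcore

# tests/tests.pro: the QTest runner; only this project links testlib
TEMPLATE = app
QT += testlib
CONFIG += testcase
LIBS += -L../core -lcore
```

Only the tests project ever sees testlib, and `make check` (from `CONFIG += testcase`) can run the tests without installing them.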

This is basically the same project setup as described here, although it wasn’t until after I had created my project that I realized it was the exact same thing(I did come across the link at first, but didn’t understand it at the time since it wasn’t using QTest).

With this project setup, you can edit the main code in Qt creator, do anything that you want, and still have everything link and test in a clean manner(no need to link with QTest in your main executable!)

The code for a sample project is up on github, to give you a general overview of how it all fits together.

Feb 04

Microchip Harmony and Bootloaders

I’ve been messing around with a Digilent Max32, and have been using Microchip’s MPLAB Harmony framework to program the device.  The framework is rather nice, but it can be a little confusing at times.  It took me a while to figure out how to work the bootloader on it, so this post will go into how to setup your projects.


  1. PIC32 based device.  In our case, this will be the Max32, but any PIC based device should work fine.
  2. MPLAB X, Microchip’s dev platform based on Netbeans.
  3. Microchip XC32 compiler.  This should be a part of your MPLAB X install, but if not you can download it from Microchip’s website.  I am using version 1.42.
  4. PIC Programmer.  I have an ICD3, but it should be possible to use a PICKit programmer as well.

Part I: Install Harmony

Before we can mess around with Harmony, we need to install the latest version.  In our case, we will be using the current beta version of 2.0.

  1. Download the new Harmony Configurator from Microchip’s website.
  2. Go to the ‘Downloads’ tab, and download the proper version for your system

    Step 1 and 2 – Download Harmony and Configurator

  3. Run the installer for Harmony.  Make sure that you remember where you installed it, you will need this information later.
  4. In MPLAB X, go to Tools->Plugins.

    Plugins location

  5. Go to the ‘Downloaded’ tab
  6. Click ‘Add Plugins’
  7. Browse to where you downloaded the Harmony plug-in to, and select it.

    How to install the downloaded plugin

  8. The plugin will now show up in your Plugins window.  Click the ‘Install’ button.
  9. The plugin will install.  Once it does, you will have to restart MPLAB X.

Part II: Create Bootloader project

Note that before you come here, you must have MPLAB X, Harmony, and the proper Harmony plugin installed.

  1. Create a new project by going to File->New Project.
  2. Select ’32-bit MPLAB Harmony Project’ and click ‘Next’

    Basic project configuration

  3. Fill in all of the information on this screen.  Don’t forget to set the PIC device that you are using.
  4. If the Harmony configurator does not open up for you automatically, go to Tools -> Embedded -> MPLAB Harmony Configurator

    Open up the Harmony Configurator if it closes

  5. Under the ‘Application Configuration’ section, we want to create an application(we will call it max32_bl) and click ‘Generate application code for selected Harmony components’.  We want to generate code for the bootloader in this area.

    Bootloader configuration #1

  6. Under the ‘Harmony Framework Configuration’ section, select ‘Bootloader Library’.  We will be using the USART bootloader.  Under the ‘Bootloader or Application?’ section, select ‘Bootloader’.  Also select ‘Ports’ under ‘System Services’.

    Bootloader configuration #2

  7. Click the ‘Generate Code’ button on the Configurator screen.  We will have a number of new C/H files for us to edit.  Also, there is now a custom linker script for us under the ‘Linker Files’ folder.
  8. Now that code has been generated, there are a few things that we want to do to make it clear what is happening.  First, we don’t need the legacy bootloader options.  So under ‘Header Files’, go to app/system_config/default/system_config.h, and comment out the preprocessor macros BOOTLOADER_LEGACY and BTL_SWITCH

    Delete/comment out these macros

  9. The first thing for us to edit is our max32_bl.c file.  This file contains generated code for us to modify.  We need to move the line that registers our MAX32_BL_Bootloader_ForceEvent callback into the application’s initialization function; otherwise, the bootloader code will not call our ForceEvent function.

  10. Now, let’s add a splash screen.  For now, we can simply add a simple function to print out to the UART, and then call that function once in the MAX32_BL_Tasks function, in the MAX32_BL_STATE_INIT case statement of our switch.  Here’s the simple UART printing function:
    static void writeStringToUART( const char* string ){
        int pos = 0;
        while( string[ pos ] != 0 ){
            while( DRV_USART0_TransmitBufferIsFull() ){}
            DRV_USART0_WriteByte( string[ pos++ ] );
        }
    }
  11. You should now be able to program your device with the bootloader.  Open up a terminal with a terminal emulator such as Putty, and if you have done everything correctly you should see a splash screen whenever you reset the processor.  On the MAX32, this is accomplished by pressing the reset switch.
  12. Let’s also make sure that we can get to the bootloader manually.  To do this, we will have to configure one pin on the chip as a GPIO input, with a pull-up resistor.  Go back to the Harmony Configurator screen, and go to the ‘Pin Settings’ tab.

    Pin settings

  13. Choose a pin to use as an input.  In my case, I chose RG6.  This corresponds to pin 52 on the Max32 board.  I will also enable the pull up resistor on this pin, so that when the pin is shorted to ground the bootloader will start.
  14. Go back to our max32_bl file, and modify MAX32_BL_Bootloader_ForceEvent to look like the following(change your pin if you used a different one).
    int MAX32_BL_Bootloader_ForceEvent(void)
    {
        bool rg6 = SYS_PORTS_PinRead( PORTS_ID_0, PORT_CHANNEL_G, PORTS_BIT_POS_6 );
        if( !rg6 ){
            writeStringToUART( "Button held, forcing bootloader\n" );
            return (1);
        }

        /* Check the trigger memory location and return true/false. */
        return (0);
    }
  15. Build and re-flash your device.  When you short pin RG6 to GND, the string “Button held, forcing bootloader” should now print out on your console when the bootloader starts.

Here’s the entire file for reference:

/*******************************************************************************
  MPLAB Harmony Application Source File

  Company:
    Microchip Technology Inc.

  File Name:
    max32_bl.c

  Summary:
    This file contains the source code for the MPLAB Harmony application.

  Description:
    This file contains the source code for the MPLAB Harmony application.  It
    implements the logic of the application's state machine and it may call
    API routines of other MPLAB Harmony modules in the system, such as drivers,
    system services, and middleware.  However, it does not call any of the
    system interfaces (such as the "Initialize" and "Tasks" functions) of any of
    the modules in the system or make any assumptions about when those functions
    are called.  That is the responsibility of the configuration-specific system
    files.
 *******************************************************************************/

/*******************************************************************************
Copyright (c) 2013-2014 released Microchip Technology Inc.  All rights reserved.

Microchip licenses to you the right to use, modify, copy and distribute
Software only when embedded on a Microchip microcontroller or digital signal
controller that is integrated into your product or third party product
(pursuant to the sublicense terms in the accompanying license agreement).

You should refer to the license agreement accompanying this Software for
additional information regarding your rights and obligations.
 *******************************************************************************/

// *****************************************************************************
// *****************************************************************************
// Section: Included Files
// *****************************************************************************
// *****************************************************************************

#include "max32_bl.h"

#include <stdio.h>

// *****************************************************************************
// *****************************************************************************
// Section: Global Data Definitions
// *****************************************************************************
// *****************************************************************************

// *****************************************************************************
/* Application Data

  Summary:
    Holds application data

  Description:
    This structure holds the application's data.

  Remarks:
    This structure should be initialized by the APP_Initialize function.
    Application strings and buffers should be defined outside this structure.
*/

MAX32_BL_DATA max32_blData;

static void writeStringToUART( const char* string ){
    int pos = 0;
    while( string[ pos ] != 0 ){
        while( DRV_USART0_TransmitBufferIsFull() ){}
        DRV_USART0_WriteByte( string[ pos++ ] );
    }
}

static void writeIntAsCharToUART( int i ){
    while( DRV_USART0_TransmitBufferIsFull() ){}
    DRV_USART0_WriteByte( i + 48 );
}

static void waitForUARTFlush(){
    while( !PLIB_USART_TransmitterIsEmpty( USART_ID_1 ) ){}
}

// *****************************************************************************
// *****************************************************************************
// Section: Application Callback Functions
// *****************************************************************************
// *****************************************************************************

/*******************************************************************************
  Function:
    int MAX32_BL_Bootloader_ForceEvent (void)

  Summary:
    Sets a trigger to be passed to force bootloader callback.
    Run bootloader if memory location == '0xFFFFFFFF' otherwise jump to user
    application.
*/
int MAX32_BL_Bootloader_ForceEvent(void)
{
    bool rg6 = SYS_PORTS_PinRead( PORTS_ID_0, PORT_CHANNEL_G, PORTS_BIT_POS_6 );
    if( !rg6 ){
        writeStringToUART( "Button held, forcing bootloader\n" );
        return (1);
    }

    /* Check the trigger memory location and return true/false. */
    return (0);
}

/* TODO:  Add any necessary callback functions. */

// *****************************************************************************
// *****************************************************************************
// Section: Application Local Functions
// *****************************************************************************
// *****************************************************************************

/* TODO:  Add any necessary local functions. */

// *****************************************************************************
// *****************************************************************************
// Section: Application Initialization and State Machine Functions
// *****************************************************************************
// *****************************************************************************

/*******************************************************************************
  Function:
    void MAX32_BL_Initialize ( void )

  Remarks:
    See prototype in max32_bl.h.
*/
void MAX32_BL_Initialize ( void )
{
    /* Place the App state machine in its initial state. */
    max32_blData.state = MAX32_BL_STATE_INIT;

    /* TODO: Initialize your application's state machine and other
     * parameters. */
}

static void printSplash(){
    writeStringToUART( "MAX32 Bootloader\n" );
    writeStringToUART( "Version " );
    writeIntAsCharToUART( MAJOR_VERSION );
    writeStringToUART( "." );
    writeIntAsCharToUART( MINOR_VERSION );
    writeStringToUART( "\n" );
    writeStringToUART( __DATE__ );
    writeStringToUART( " " );
    writeStringToUART( __TIME__ );
    writeStringToUART( "\n" );
}

/*******************************************************************************
  Function:
    void MAX32_BL_Tasks ( void )

  Remarks:
    See prototype in max32_bl.h.
*/
void MAX32_BL_Tasks ( void )
{
    /* Check the application's current state. */
    switch ( max32_blData.state )
    {
        /* Application's initial state. */
        case MAX32_BL_STATE_INIT:
        {
            bool appInitialized = true;

            if (appInitialized)
            {
                //print splash screen.  this will happen only once
                printSplash();
                max32_blData.state = MAX32_BL_STATE_SERVICE_TASKS;
            }
            break;
        }

        /* TODO: implement your application state machine.*/

        /* The default state should never be executed. */
        default:
        {
            /* TODO: Handle error in application's state machine. */
            break;
        }
    }
}

/*******************************************************************************
 End of File
*/

Part III: Create our main program

Now that we have our bootloader setup, we need to create the main program that will get loaded by the bootloader.

  1. Create a new project by going to File->New Project.
  2. Select ’32-bit MPLAB Harmony Project’ and click ‘Next’
  3. Fill in all of the information on this screen.  Don’t forget to set the PIC device that you are using.
  4. If the Harmony configurator does not open up for you automatically, go to Tools -> Embedded -> MPLAB Harmony Configurator
  5. Under the ‘Application Configuration’ section, we want to create an application (we will call it max32_program) and click ‘Generate application code for selected Harmony components’.  We want code for the Timer and USART to give us some code to work with.

    Main program configuration #1

  6. Under the ‘Harmony Framework Configuration’ section, select ‘Bootloader Library’.  We will be using the USART bootloader.  Under the ‘Bootloader or Application?’ section, select ‘Application’.

    Main program configuration #2

  7. Under the ‘Harmony Framework Configuration’ section, also make sure that you have drivers for the timer and USART 0 selected.  The USART should be running at the same baud rate as the bootloader.
  8. Go to the ‘Pin Settings’ page as we did before.  For the MAX32, pin RC1 is an output that drives an LED.  We will use RG6 as an input, as we did before on the bootloader.  Set those pins up accordingly.

    Pin diagram for the main program

  9. Click ‘Generate code’
  10. Your project is now created with some basic information.  Since we set up our timer to use interrupts, we now have a TimerCallback function in mainapp.c.  Let’s modify it to toggle the RC1 LED every time the timer fires.  The code will now look like this:

    static void TimerCallback ( uintptr_t context, uint32_t alarmCount )
    {
        static int state = 0;
        state = !state;
        /* drive the RC1 LED with the toggled state */
        SYS_PORTS_PinWrite( PORTS_ID_0, PORT_CHANNEL_C, PORTS_BIT_POS_1, state );
    }
  11. Build your project.  The hex file that is generated is what you want to send to the bootloader.  Note that this hex file can’t be sent as-is; you must convert it to binary and use Microchip’s protocol.  See the next section for some discussion.

Part IV: Bootloading

Now that we have our bootloader and a program, we should try to actually load information onto the device using the bootloader.  To do this, we will need a program to write the information.  You can use either Microchip’s version from AN1388, or the cross-platform version that I wrote using my CSerial library.

Whichever method you use, once you have built the actual program you need to send the hex file across to get the bootloader to write the data.  If everything has gone correctly, you will then be able to re-set the board and have your application running.
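As a concrete sketch of the hex-to-binary conversion step mentioned above (the filenames here are placeholders, and the actual transfer still has to follow Microchip’s AN1388 protocol on top of it), GNU objcopy can translate an Intel HEX file into a raw binary image:

```shell
# Create a tiny Intel HEX file standing in for the build output,
# then convert it to a raw binary image with objcopy (GNU binutils).
printf ':0400000001020304F2\n:00000001FF\n' > example.hex
objcopy -I ihex -O binary example.hex example.bin
```

The same `objcopy -I ihex -O binary` invocation works on the hex file that the MPLAB X build produces.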

One more tip: if you can program the entire chip at once (e.g. you have an ICD3), you can set the bootloader project as a loadable of your main project and flash both projects to the chip at the same time.  Since they get flashed to different locations, there’s no conflict.  Note that this won’t work with the lower-end programmers.


This tutorial probably skims over a few steps, but it should at least be enough to get people started with Harmony and its bootloader.  Questions?  Add a comment and I will see about expanding on them, either in this post or a later one.

Jan 07

Jenkins in the Cloud

At work, we use Jenkins to automatically build our software.  We use it to build both Java and C++ programs.  Since it’s useful to have a continuous integration server setup to build and test software, I figured that I would document a bit about what I have done on a new VM that I have setup.

This post is going to go into several different directions and technologies, so it may be a little rambling.

Setting up the VM

In order to create a hosted instance of Jenkins, the easiest thing to do is to create a VM to install Jenkins on.  Since many hosting providers don’t allow you to run Java on shared hosting, our options are to either create a VPS or create a cloud server.  I chose to create a cloud server with DreamCompute, as my web hosting is already through Dreamhost.  This gives us a new VM that we have full control over.  The normal VPS with Dreamhost is still a shared system at some level, and it is Ubuntu only; my preference is for Debian (I have had many weird issues with Ubuntu before).  Note that Jenkins will probably fail if it has less than 2 GB of RAM.  My first instance only had 512 MB of RAM, and as soon as I started to build something that required a bit of memory the VM immediately started thrashing.  My load average went to 11 with only 1 CPU!  (This means that I had enough jobs to keep 11 CPUs busy.)

I launched a new Debian 8 image on DreamCompute and created a little script to install the needed things for Jenkins and a few other dependencies that I needed.  This also sets Jenkins up to run on port 80, so that we can access it directly through our browser.
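The script itself isn’t reproduced here, but a minimal sketch of what it does might look like the following.  The package names, repository URL, and the port-80 redirect are my assumptions about what such a script needs, not the exact contents:

```shell
# Install Java and Jenkins from the Jenkins apt repository (run as root).
apt-get update
apt-get install -y wget gnupg openjdk-8-jre-headless
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | apt-key add -
echo 'deb https://pkg.jenkins.io/debian-stable binary/' > /etc/apt/sources.list.d/jenkins.list
apt-get update
apt-get install -y jenkins

# Jenkins listens on port 8080 by default; redirect port 80 to it so the
# instance can be reached directly in a browser.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
```

This is a provisioning fragment that has to run as root on a fresh VM, so treat it as a starting point rather than something to paste blindly.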

Now that we have Jenkins installed, we need to SSH in to finish the install.

Finishing the VM Setup

Now that the VM has mostly been set up, we need to SSH in to finish the install.  The first thing I had to do was to finish installing some needed parts (obviously, you have to wait until the setup script finishes before you can move to this step).

apt-get install cowbuilder
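cowbuilder also needs a base chroot image created before any package builds can run; a sketch of that step, where the distribution and base path are assumptions:

```shell
# Create the base chroot that clean package builds will be cloned from.
# 'jessie' and the .cow path are placeholders for your target distribution.
cowbuilder --create --distribution jessie \
    --basepath /var/cache/pbuilder/base-jessie.cow
```

This needs root and network access, and only has to be done once per distribution you want to build for.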

The next thing that we have to do is to setup Jenkins for the first time.  Get the initial admin password from the server:

root@jenkins:/home/debian# cat /var/lib/jenkins/secrets/initialAdminPassword

This hex string is our default password for logging into Jenkins.  We need to go to our public IP address to set this up.  Once I got there, I put in the admin password and set up Jenkins for the first time.

DNS Setup

Accessing the server by the IP is not particularly useful.  Fortunately, DreamHost has a quick guide on how to setup your DNS entries.  I then put the server at jenkins.rm5248.com.

Jenkins Jobs

I have quite a few projects on my GitHub.  Half of this endeavor is to make a nice place where I can make sure that software is building cleanly.  So I have added a few projects to have them build automatically.

C/C++ Projects

Now to the other big reason for doing this: enabling automatic builds and automatic packaging.  Building Debian packages can be a little tricky.  Some tools, like jenkins-debian-glue, exist, but they are not the greatest in terms of configuration and usage.  Problems that exist with jenkins-debian-glue:

  • Script-based.  This is not too bad, but it means that it is hard to split builds across multiple nodes.  For example, building ARM packages on an ARM node is, in my testing, almost 10x faster than emulating ARM on x86.
  • Configuration is not clear.  There are separate scripts to generate git snapshots and svn snapshots.  Not all of the configuration variables are documented either.
  • Requires a lot of manual configuration.
  • The default setup requires you to have two Jenkins jobs per project.  This can quickly go overboard and lead to an explosion of jobs.
  • Outputs of the build are not automatically archived to be able to be downloaded through the web interface

To be clear, jenkins-debian-glue is not the worst way to build Debian packages – but it could be a lot better.  So I wrote my own Jenkins plugin to do it.

Debian Pbuilder

This plugin is designed to make building of Debian packages very simple and straightforward.  Because it’s a plugin, it also makes running on other Jenkins nodes practical.  It is also in some ways a port of jenkins-debian-glue to Java instead of shell scripts.  Like jenkins-debian-glue, it uses cowbuilder to create the chroot environment and builds everything in a clean environment.  The existing Jenkins plugin for building packages doesn’t support this behavior at all, which makes builds awkward.

Anyway, this plugin fixes these issues and does everything in a nice way.  If anyone would like to test out this plugin, that would be very useful.  Since it is built through our Jenkins instance, we can create a new build of the project at any time.  Note that as of right now you have to install the plugin manually through Jenkins; it is not in the main Jenkins plugin repository at this moment, so it can’t be installed the easy way.

More information on how to use Debian-pbuilder will be coming soon. 🙂

Other Nodes

This is not strictly related to Jenkins in the cloud, but one thing that we have done at work is to get a dedicated ARM computer in order to build our ARM packages with Debian-pbuilder. We use the NVidia Jetson TK1 board as a Jenkins node.  This greatly decreases our build time from almost 1 hour to only 10 minutes!  The problem with what we were doing before was that because of how Debian-pbuilder and jenkins-debian-glue work, they create an (emulated) chroot environment on the system.  This is slooooooooooow.  Making the chroot on actual hardware greatly speeds up the building of projects.

Nov 28

SVN Checkout By Date Problems

Have you ever had to checkout an SVN revision by the date and gotten an error like the following?

bash$ svn co -r {2016-11-28} https://example.com/svn/foo
svn: E175002: Unexpected HTTP status 500 'Internal Server Error' on '/svn/!svn/me'

svn: E175002: Additional errors:
svn: E175002: REPORT request on '/svn/!svn/me' failed: 500 Internal Server Error

(This is often the case with Jenkins; see this issue for some more details.)

Why Jenkins checks out revisions by date is another matter(it’s stupid), but what is the original problem in the first place?  Well, if we look at the Apache error_log on the server, it will look something like the following:

[Mon Nov 28 08:01:24 2016] [error] [client] Could not access revision times.  [500, #175002]

Well, that’s not good.  What causes the problem exactly?  I’m not sure; but this only started after I committed a revision and the SVN command exited after printing out an error message, something about not being able to lock the database(I think this came from the server).  Things didn’t break all at once for some reason, but eventually all of the automated builds on Jenkins were failing.  The workaround specified in JENKINS-17431(edit the URLs to add @HEAD) works, but the fact that it randomly broke was bothering me.

Since this seemed to be related to a revision time problem, I tried to find out which revision was broken, which eventually led me to this blog post with the solution.  I’ve copied the relevant portions of the post below in case the post is ever removed:

In order to correct the problem the entries with the missing dates will need to be located.  This is easily done with the following command string which needs to be run on the Subversion server itself.

svn log --xml -v -r {2010-12-01T00:00:00Z}:{2011-01-13T15:00:00Z} file:///svn/repo

If there is any issue related to the time on a revision then the following error will be shown when you run the above command.

<?xml version="1.0"?>
svn: Failed to find time on revision 34534

The first step in making the necessary corrections is to edit the /svn/repos/mosaic/hooks/post-revprop-change file so that it contains the following information.

#!/bin/sh
exit 0

If the previous step had not been performed, the hook files would have prevented this change from actually occurring.  To make the change to the revision entry, the svn command is used on the server itself with the propset subcommand.  When entering the date, try to select a date that is in between the dates that precede and follow this revision entry.

svn propset -r34534 --revprop svn:date '2010-11-29T19:55:44.000220Z' file:///svn/repo

The interesting thing about this was that in my case, the particular revision that was bad had been committed over 3 years ago; why that one revision went bad is still a mystery to me.
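If you suspect the same problem, one way to hunt for bad revisions without bisecting date ranges by hand is to walk every revision and check its svn:date revprop directly.  A sketch of that approach is below; it demonstrates on a scratch repository, and on a real server you would point REPO_PATH and REPO_URL at the actual repository instead:

```shell
# Demonstration on a scratch repository; on a real server, set REPO_PATH
# to the on-disk repository path and REPO_URL to its file:// URL.
REPO_PATH=$(mktemp -d)/repo
REPO_URL="file://$REPO_PATH"
svnadmin create "$REPO_PATH"

# Walk all revisions and report any whose svn:date cannot be read.
youngest=$(svnlook youngest "$REPO_PATH")
for r in $(seq 1 "$youngest"); do
    if ! svn propget --revprop svn:date -r "$r" "$REPO_URL" >/dev/null 2>&1; then
        echo "revision $r has a bad svn:date"
    fi
done
```

Any revision this loop reports is a candidate for the svn propset fix described above.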