Apr 25

Fun with templates

As you may be aware, I maintain dbus-cxx, and I’ve been working on it lately to get it ready for a new release. Most of that work is not adding new features, but updating the code generator to work correctly. This post, however, is not about that work; it is about the work that I am doing on the next major version, dbus-cxx 2.0. Part of this work involves using new C++ features, since libsigc++ now needs C++17 to compile. With variadic templates (available since C++11), we can have more than 7 template parameters to a function. (This limit of 7 is arbitrarily chosen by libsigc++; it’s not a C++ limitation.)
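As a quick illustration (a sketch only; signal_proxy is an invented name here, not the actual dbus-cxx API), a variadic class template accepts any number of type parameters:

  // A variadic template takes any number of type parameters,
  // so a hard-coded limit of 7 simply disappears.
  template<typename... T_args>
  class signal_proxy { /* ... */ };

  signal_proxy<int, int, int, int, int, int, int, int> eight_args; // fine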

Because of this, however, some of the code in dbus-cxx needs to change in order to work correctly. The main portion that I’m working on right now has to do with getting the DBus signatures to work properly. Here’s a small piece of code that is currently in dbus_signal.h (after running dbus_signal.h.m4 through m4):

  /** Returns a DBus XML description of this interface */
  virtual std::string introspect(int space_depth=0) const
  {
    std::ostringstream sout;
    std::string spaces;
    for (int i=0; i < space_depth; i++ ) spaces += " ";
    sout << spaces << "<signal name=\"" << name() << "\">\n";

    T_arg1 arg1;
    sout << spaces << "  <arg name=\"" << m_arg_names[1-1] << "\" type=\"" << signature(arg1) << "\"/>\n";
    T_arg2 arg2;
    sout << spaces << "  <arg name=\"" << m_arg_names[2-1] << "\" type=\"" << signature(arg2) << "\"/>\n";
    sout << spaces << "</signal>\n";
    return sout.str();
  }

This method is created once for each overload that we have, and the important part is that a T_arg line is stamped out once for each argument. With variadic templates, that kind of per-argument expansion is impossible to write out by hand; the way to get the individual types out of a parameter pack is recursion.

Recursion + templates is not something that I’m very familiar with, so this took me a while to figure out. However, I present the following sample code for getting the signature of a DBus method:

  #include <cstdint>
  #include <iostream>
  #include <string>

  // One overload per basic DBus type, returning that type's signature code.
  inline std::string signature( uint8_t )     { return "y"; }
  inline std::string signature( bool )        { return "b"; }
  inline std::string signature( int16_t )     { return "n"; }
  inline std::string signature( uint16_t )    { return "q"; }
  inline std::string signature( int32_t )     { return "i"; }
  inline std::string signature( uint32_t )    { return "u"; }
  inline std::string signature( int64_t )     { return "x"; }

  template<typename... argn> class sig;

  // Base case: an empty pack contributes an empty string.
  template<> class sig<>{
  public:
    std::string sigg() const {
      return "";
    }
  };

  // Recursive case: peel off the first type, then recurse on the rest.
  template<typename arg1, typename... argn>
  class sig<arg1, argn...> : public sig<argn...> {
  public:
    std::string sigg() const {
      arg1 arg{};
      return signature(arg) + sig<argn...>::sigg();
    }
  };

  int main(int argc, char** argv){
    std::cout << sig<uint32_t,uint32_t,bool,int64_t>().sigg() << std::endl;
  }

Output: 
uubx

This took me a few hours to figure out, so I’m at least a little proud of it! The other confusing part that I had to work out was how to combine the overloaded signature() calls with the recursive template instantiation, which leads us to the following observation:

It’s recursion all the way down.
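As an aside: since dbus-cxx 2.0 already requires C++17 anyway, a fold expression could do the same job without the recursive class. A minimal sketch, building on the signature() overloads above (this is not the actual dbus-cxx code):

  // C++17 fold expression: concatenates the signature codes left to right,
  // yielding "" for an empty pack.
  template<typename... argn>
  std::string sig_fold(){
    return ( std::string{} + ... + signature( argn{} ) );
  }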

Mar 15

One large program or many little programs?

Earlier this week, I came across this question on softwareengineering.stackexchange.com.  This has some relevance for me, since at work two of our main projects follow the two sides of this design philosophy: one project is a more monolithic application, and the other follows more of a microservices model (i.e. many applications).  There are reasons for both choices, which I will now attempt to explain.

Option 1: Monolithic Application

The first project that I will explain here is the one with the more monolithic application.  First, a brief overview of how this project works.  Custom hardware (running Linux) collects information from sensors (both built-in and third-party, often over Modbus), then aggregates and classifies it.  The classification is due to the nature of the project: sensor data falls into one of several classes (temperature, voltage, etc.).  This information is saved off periodically to a local database, and then synchronized with a custom website for viewing.  The project, for the most part, can be split into these three main parts:

  • Data collector
  • Web viewer
  • Local viewer (separate system, talks over Ethernet)

Due to the nature of the hardware, there is no web interface on the hardware directly; the web viewer lives on a cloud server.

Now, the data collector application is a mostly monolithic application.  However, it is structured similarly to the Linux kernel in that we have specific ‘drivers’ that talk with different pieces of equipment, so the core parts of the application don’t know what hardware they are talking to; they only talk through specific interfaces that we have defined.  A rough sketch of the idea is shown below.
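Something along these lines (the names here are invented for illustration; this is not our actual code):

  #include <iostream>
  #include <memory>
  #include <string>
  #include <vector>

  // The core collector only ever sees this interface.
  class SensorDriver {
  public:
    virtual ~SensorDriver() = default;
    virtual std::string name() const = 0;
    virtual double read() = 0; // latest reading; units are driver-defined
  };

  // One concrete 'driver' per piece of equipment supported.
  class ModbusTemperatureSensor : public SensorDriver {
  public:
    std::string name() const override { return "modbus-temp"; }
    double read() override {
      // Real code would perform a Modbus register read here.
      return 21.5;
    }
  };

  int main(){
    // The core code iterates over drivers without knowing concrete types.
    std::vector<std::unique_ptr<SensorDriver>> drivers;
    drivers.push_back( std::make_unique<ModbusTemperatureSensor>() );
    for( const auto& d : drivers ){
      std::cout << d->name() << ": " << d->read() << "\n";
    }
  }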

In this case, why did we choose to go with a monolithic application?  Well, there are a few reasons and advantages.

Reason 1: Since the device is primarily a data collector, there’s no real need to have different applications send data to each other.

Reason 2: The development of the system is much easier, since you don’t need to debug interactions between different programs.

Reason 3: Following from the first two, we often have a need to talk with multiple devices on the same serial link using Modbus.  This has to be funneled through a single point of entry to avoid contention on the bus, since you can only have one Modbus message in flight at a time (see the sketch after these reasons).

Reason 4: All of the data comes in on one processor; there is no need to talk with another processor.  Note that this is not the same as talking with other devices.

Reason 5: It’s a lot simpler to pass data around and think about it conceptually when it is all in the same process.
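As for Reason 3, the single point of entry boils down to something like this (invented names, not our production code; a real implementation would sit on top of an actual Modbus library):

  #include <mutex>
  #include <vector>

  struct ModbusRequest  { int slave_addr; int reg; };
  struct ModbusResponse { std::vector<unsigned char> data; };

  class ModbusBus {
  public:
    // Every transaction is funneled through this one method, so only one
    // request/response cycle can ever be on the wire at a time.
    ModbusResponse transact( const ModbusRequest& req ){
      std::lock_guard<std::mutex> lock( m_busMutex );
      send( req );
      return receive();
    }

  private:
    void send( const ModbusRequest& ){ /* write to the serial port */ }
    ModbusResponse receive(){ return {}; /* read the reply */ }

    std::mutex m_busMutex;
  };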

Now that we have some reasons, what are some disadvantages to this scheme?

Disadvantage 1: Bugs.  Since our application is in C++ (the ability to use C libraries is important), a single segfault can crash the entire application.

Disadvantage 2: The build can take a long time; the incremental build and linking isn’t bad, but a clean build can take a few minutes.  A test build on Jenkins takes more than 10 minutes, and it can still take several minutes to compile on a dev machine if you don’t do a parallel make.

Overall, the disadvantages are not show-stoppers (except perhaps for number 1; there is some bad memory management happening somewhere, but I haven’t figured out where yet).  The separation into three basic parts (data collection, local GUI, web GUI) gives us a good separation of concerns.  We do blend in a little bit of option 2 with multiple applications, but that is to allow certain core functionality to keep working even if the main application is down; we use that to talk with our local cell modem.  Given that the data collection hardware may not be easily accessible, ensuring that the cellular communications are free from bugs in our main application is important.

Option 2: Multiple Applications

If you don’t want to make a monolithic application, you may decide to write a lot of small applications instead.  One of my other primary projects uses this approach, because of the nature of the hardware and how things need to interact.

In our project with multiple applications, we have multiple compute units and very disparate sensor readings.  Unlike the monolithic application, where data is easily classified into a few categories, this project takes in many more kinds of data, and that data can come in on any processor, so there is no ‘master’ application per se.  The data also needs to be replicated to all displays, which may (or may not) be smart displays.  We also want to insulate ourselves from failure in any one application: a single bug should not take down the entire system.

To handle this, we essentially have a common data bus that connects all of the processors together.  We don’t use RabbitMQ, but the concept is similar to their federation plugin in that you can publish a message on any processor and it will be replicated to all connected processors.  This makes adding new processors extremely easy.  All of the data flows on a producer/consumer model; a toy sketch of the API is below.
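In spirit, the API looks something like this (invented names; a toy, in-process version that skips the cross-processor replication the real bus does):

  #include <functional>
  #include <map>
  #include <string>
  #include <vector>

  class DataBus {
  public:
    using Handler = std::function<void(const std::string&)>;

    // Any application can consume data by topic, without knowing who produces it.
    void subscribe( const std::string& topic, Handler h ){
      m_handlers[topic].push_back( std::move(h) );
    }

    // In the real system this also forwards the message to every
    // connected processor; this toy version only delivers locally.
    void publish( const std::string& topic, const std::string& payload ){
      for( const auto& h : m_handlers[topic] ) h( payload );
    }

  private:
    std::map<std::string, std::vector<Handler>> m_handlers;
  };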

Advantage 1: Program resiliency.  With multiple applications running, a bug in one application will not cause the others to exit.

Advantage 2: You can easily add more processors.  This has not really been a need for us, but since data is automatically synchronized between processors, adding a new consumer of data is very simple.

Advantage 3: Data can come and go from any connected system; you need not know in advance which processor is giving out information.

This design is not without some caveats though.

Disadvantage 1: Debugging becomes much harder.  Since there can be more than one processor in the system, your producer and your consumer can be on different processors, or you could have multiple consumers.

Disadvantage 2: Because this is a producer/consumer system (the only way that I can see to effectively scale), there’s no easy way to request data directly from a specific application, e.g. there is no straightforward remote procedure call over the network.

Conclusion

There are two very different use cases for these two designs.  From my experience, here’s a quick rundown:

Monolithic Application

  • Generally easier to develop, since you don’t have to figure out program<->program interactions
  • Often needed if you need to control access to a resource (e.g. a physical serial port)
  • Works best if you only have to run on one computer at a time

Multiple Applications

  • Harder to develop due to program<->program interactions
  • Better at scaling across multiple computers
  • Individual applications are generally simpler

Due to the nature of engineering, there is no single approach that is always best.  There are often multiple ways to solve a given problem, and very rarely is one of them unequivocally the best solution.

Jan 31

Counting lines of code

So a few years ago, a coworker and I had to figure out how many lines of code we had. This was either for metrics or just because we were curious; I can’t really remember why. I came across the script again today while going through some old code. Here it is in all its glory:

#!/bin/bash

let n=0; for x in "$@"; do temp1=`find $x | grep '\.cpp$\|\.c$\|\.java$\|\.cc$\|\.h$\|\.xml$\|\.sh$\|\.pl$\|\.bash$\|\.proto$'`; temp=`cat /dev/null $temp1 | grep -c '[.]*'`; let n=$n+$temp; if [ $temp -gt 0 ]; then printf "%s: " $x ; echo $temp; fi; done ; echo Total: $n

This took us at least an hour (or perhaps more); I’m not really sure.

Anyway, it was after we had done all of that work that we realized that wc exists.
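Something like this would have gotten us most of the way there (roughly equivalent, minus the per-directory subtotals):

  # Count lines in all matching source files under the given directories.
  find "$@" -name '*.cpp' -o -name '*.c' -o -name '*.java' -o -name '*.cc' \
      -o -name '*.h' -o -name '*.xml' -o -name '*.sh' -o -name '*.pl' \
      -o -name '*.bash' -o -name '*.proto' | xargs wc -l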

Oct 17

Law Abiding Citizen

A few weeks ago, I came across this comment on Reddit about Law Abiding Citizen, and how some people think that the ending was bad.  So I would just like to say that, first of all, I do like the ending as it stands (see the parent comment).  It fits with the movie, especially since earlier in the movie Clyde explicitly told Nick that his objective wasn’t to kill him, it was to teach him a lesson.  It’s a bit hard to teach somebody a lesson if they are dead!

However, the two alternative endings are interesting, so here are some thoughts on how those might play out:

The same ending happens, but during the meeting with the city officials the Mayor basically sanctions higher-ups to stop Shelton by any means necessary because of how politically damaging it is becoming. She basically gives the go-ahead to have them kill him and make it look like an accident, but unbeknownst to them the same feed Shelton was watching in prison is being live-streamed to news outlets around the world, thereby showing the world that the people who swore to uphold justice will wilfully abandon their morals to save themselves, thus “bring the whole fuckin’ diseased, corrupt temple down…”.

This would be kinda cool, in a revenge sort of way, to perhaps show that corruption goes all the way up.  But I don’t really see how this could be a good ending in terms of teaching Nick a lesson; I think it would have to end with the feed going out but the bomb still going off, to show that once you break the rules there are consequences.

The original ending plays out the same, except at the end Foxx is sitting at his daughter’s recital, pleased as punch that he beat Butler, even though it was by straight up letting him die, and his tie suddenly tightens and chokes him to death (the same method that was foreshadowed by the CIA agent earlier in the film). Clyde still dies, but Foxx learns that even the DA is not exempt from ‘action without consequence’, so it’s a little easier to swallow.

So what would also make this very cool would be an ending similar to Inception: you don’t show the actual act; you have the tie start to visibly tighten, show Nick’s hand going up to his neck to play with the tie, and then cut out. Again, though, I don’t think that this fits with the theme of teaching Nick a lesson, since it’s hard to teach him a lesson if he’s dead.  It would be a cool ending though.

Apr 28

English sentences and punctuation

Barbara Bush died a few days ago, and The Onion had this to say about it:

Barbara Bush Passes Away Surrounded By Loved Ones, Jeb

This got me thinking a bit: how can we change the meaning of this sentence just by making some very minor edits to it?  As it stands right now, the comma at the end of the headline makes two different groups: Barbara Bush’s loved ones, and Jeb.  These groups are separate, and the headline would read the same if the comma were replaced with an ampersand (&).  What happens if we change the comma to a colon, though?

Barbara Bush Passes Away Surrounded By Loved Ones: Jeb

As a headline, this is now saying that Jeb is the one stating that Barbara passed away.  There’s no relationship between Jeb and her loved ones.  Now what would happen if we added more people to the end?

Barbara Bush Passes Away Surrounded By Loved Ones: Jeb, George

By having more than one person here, we are now defining who Barbara’s loved ones are.  At least, that’s what first comes to mind for me.

Anyway, I just thought that this was interesting.  And quite possibly confusing to people who are just learning English, as the punctuation makes a big difference in this case.

Apr 09

Bitcoin Mining for a Useful Purpose?

So I was thinking about this the other day when I came across this article on Slashdot that points out that GPU prices are high due to the demand for Bitcoin (and other cryptocurrency) mining.  This got me thinking: what’s the point of all this?  What if we could do something useful (well, more useful) than mining for virtual currency?  I’ve been running BOINC for probably 12+ years now, doing calculations for SETI@Home.  Originally I wasn’t even using the BOINC client; SETI@Home had their own standalone software, which has since been superseded by BOINC.  Given that the original software was used until 2005, that means I have probably been doing this for 15+ years at this point (logging into the SETI website indicates I have been a member since December 2005)…

But I digress.  The question really is: could we mine cryptocurrency as part of the normal BOINC process?  It seems like this would have a number of benefits:

  • For mining people, they can still mine coins
  • For projects on BOINC, they gain computing power from people wanting to mine coins at the same time
  • This could result in more computing power for a “good” cause as well, instead of what is (in my mind at least) a rather useless endeavor

I’m not exactly sure how this would work, as I don’t really know anything about blockchain.  Could Ethereum perhaps be used to provide people with “compute credits” that would allow this scheme to work?  It could also provide a good way of storing the calculation results and having them be verifiable.

Feb 19

Intergalactics Source Code

Does anybody out there have the original source code to the Java game Intergalactics?  I was able to pull the (compiled) client off of SourceForge, but without a server it’s not too useful.  I did start updating the client over the summer to work properly, along with a new server implementation, but it would still be interesting to get all of the original source code.

Anyway, if you do happen to have the original code, I would be grateful.  Intergalactics was always a nice fun timewaster.  It wasn’t too complicated but it did require a certain amount of strategy.

Dec 09

APT Repo is now live

Today, I created an APT repo for my projects.  At the moment it hosts only CSerial, but the intention is to put some other projects up at some point.  Note that because CSerial is built as both amd64 and armhf in the same repository, you may need to give the exact version to APT when installing: apt-get install cserial-dev=version

Versions can be seen by using the following APT command: apt-cache policy cserial-dev

There are actually two APT repos: one for nightly builds, and one for the releases.  As of right now, nothing is in the releases, as I need to fix a few bugs before that happens.

Everything in these APT repos is built from Jenkins.

The main website can be seen here: http://apt.rm5248.com/
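For reference, pointing APT at a repo like this normally takes just one line in a sources file; the suite and component below are placeholders, so check the site above for the real values:

  # /etc/apt/sources.list.d/rm5248.list -- SUITE and COMPONENT are placeholders
  deb http://apt.rm5248.com/ SUITE COMPONENT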

Sep 03

The Demise of Juicero

As you may have heard recently, Juicero is shutting down.  This was not entirely unexpected; their business model just seemed insane to start with.  As recounted in some of the Slashdot comments, Juicero went through more than $100 million in funding.

This brings up an obvious question:

HOW DO YOU SPEND THIS MUCH MONEY?

Really.  Over 100 million dollars to create a device that does nothing but squeeze juice?  Give me only ten million and I will fail faster for you!  I’m not quite sure what these guys were doing with all that money (although AvE’s video seems enlightening), but I’m beginning to think that they were simply trying to go too far at once.  Instead of making a small, feasible product at first, they sank a whole lot of money into design and not a whole lot into making an actually reasonable device.  The amount of money spent on this is just insane.

From the VC point of view, what they were probably thinking was “continuous revenue stream”, which would be reasonable from their view of getting their money back.  While the premise may have been good, the actual execution was very lacking.  After all, “aim for the stars, hope not to get the exact polar opposite of what you were actually trying to accomplish.”