What I Learned Today

WordPress tells me this is the 17th post I’ve written. That’s not true at all. I know it’s not true. This is the 31st post I’ve written. But the missing 14 or so are gone forever. Why? Because in my own hubris and impatience, I deleted them forever, accidentally, at 2am this morning.

Lesson 1: NEVER AUTOREMOVE/UNINSTALL/PURGE A PROGRAM WITHOUT UNDERSTANDING WHAT IT WILL TAKE WITH IT

Using Amazon Linux, I installed the (then) current versions of Apache, MySQL, PHP, etc. That’s fine. They’ve worked great. But a program I wanted to play with required PHP 5.4+, while I was stuck on 5.3. So I uninstalled PHP, but then needed to update Apache, MySQL, etc. – so I just uninstalled them all, purged everything, and installed newer versions.

I purged everything.

I purged everything.

I purged everything.

I purged the MySQL database.

All of my posts are – WERE – stored in that database. All of the data from my project management software were stored in that database. All of the user login data from my site were stored in that database.

Lesson 2: NEVER DO SOMETHING REALLY LATE AT NIGHT

When I told my wife I’d done this, the first thing out of her mouth: “Did you do this really late last night?” Yes. Yes I did. Just like she’s been reminding me not to do since she was my girlfriend. I know better. It happens A LOT (though never this critically – see lesson 3…). I know by now that when I wake up at 6am with 4 hours of sleep, work a full day, grab supper and run out to a function until 10pm, get home at 11, then work for 2 more hours until I’m passing out – that is not the time to do what I did.

Lesson 3: FREQUENT BACKUPS

The only reason this is post 17 and not post 1 is that I do have a backup from the end of February.

This is what I’m kicking myself the most over – because the very last thing I put into the MySQL database (I’m not kidding) was a reminder to myself to set up a regular backup system.

The irony of this is agonizing.

What I Lost

2 months of blog posts – my World War I series, “100 years later in Real Time” (each Saturday, releasing a mock news article about what had happened 100 years ago to the day). 2 months of project management work items/tasks/bug tracking/etc. Other things I don’t know about yet.

APPLICATION

1. Regular backups. Since I write bi-weekly, I’ll just do it after each article.

2. Don’t do anything after 11pm that can’t be CTRL-Z’ed.

3. Understand what the command I’m about to run will actually do.

4. Get away from my computer right now before I break it physically.

I had originally intended to spend this post discussing Ceedling and Unity testing, but now I won’t. Why? Over the weekend, I was reading the API guide for Arduino libraries, and realized they’re heavily focused on object-oriented programming. Synthduino was written in a functional style (it’s C) – while I’d used a basic struct to hold together a note’s frequency and duration, all the calls were global functions that accepted either a note or one of its members as a parameter. This ran contrary to the API guide.
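
To make that concrete, here’s a rough sketch of the difference. The names below are placeholders I made up for this post, not Synthduino’s actual functions:

    // Hypothetical sketch only; illustrative names, not the real Synthduino API.

    // The old, functional C style: a plain struct, plus global functions
    // that take the note (or one of its fields) as a parameter.
    struct NoteData {
        unsigned int frequency;   // Hz
        unsigned int duration;    // ms
    };

    void playNote(const NoteData *note);          // global function
    void playFrequency(unsigned int frequency);   // works on a single field

    // The Arduino-library style: data and behavior wrapped up in a class,
    // so callers work with an object instead of global functions.
    class Note {
    public:
        Note(unsigned int frequency, unsigned int duration)
            : frequency_(frequency), duration_(duration) {}

        unsigned int frequency() const { return frequency_; }
        unsigned int duration() const { return duration_; }

        void play() const;   // behavior lives on the object now

    private:
        unsigned int frequency_;   // Hz
        unsigned int duration_;    // ms
    };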


I did research, trying to find a good unit testing suite for C++. Turns out I’d somehow missed what should have been a top result: Google Test (aka gTest). I had gotten distracted by the Wikipedia list of suites, and worked my way through that instead. If only…

Anyway, I’ve spent the last 2-3 days rewriting Synthduino in C++, with classes, and rewriting the test suites. Fortunately, the logic and test data are the same, so I’ve got that going for me. Once I’ve spent more time with Google Test, I’ll try to do a better comparison between it and Ceedling/Unity.
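
If you haven’t seen Google Test before, here’s roughly what one of the rewritten tests looks like. The Note class is the same made-up placeholder from the sketch above, not the real Synthduino code; it’s just enough to show the shape of a gTest test case:

    #include <gtest/gtest.h>

    // Stand-in for the real class under test.
    class Note {
    public:
        Note(unsigned int frequency, unsigned int duration)
            : frequency_(frequency), duration_(duration) {}
        unsigned int frequency() const { return frequency_; }
        unsigned int duration() const { return duration_; }
    private:
        unsigned int frequency_;
        unsigned int duration_;
    };

    // Each TEST() macro defines one independent test case; the first
    // argument groups related tests into a suite.
    TEST(NoteTest, StoresFrequencyAndDuration) {
        Note a4(440, 500);   // A4 at 440 Hz, held for half a second
        EXPECT_EQ(440u, a4.frequency());
        EXPECT_EQ(500u, a4.duration());
    }

    // Linking against gtest and gtest_main provides main(), so this file
    // plus the two libraries is a complete, runnable test binary.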

Wherein our hero fights the dragons of cross-platform compatibility, libraries not being updated, inadequate documentation, and the fact that Arduino can’t do unit testing natively.


Why?

I’m currently working on the new release of my project Synthduino, and I’ve fallen in love with unit testing through various Ruby and Ruby on Rails projects I worked on last year. I decided it would be good to bring unit testing to Synthduino, not only to give me a clearer roadmap of development (unit tests help focus effort), but also to make sure my work is good quality (as I do want Synthduino to be used).

What?

I briefly considered writing my own framework, but I’m not really comfortable enough in C/C++ to do it, so I looked into existing frameworks. Based on what I read, I narrowed it down to two options: CppUnit and Ceedling. From what I gathered, CppUnit was the more established of the two, but Ceedling is built on Ruby and uses rake. I decided to try them both out.

Where?

My development platform spans 5 computers and 4 operating systems (Ubuntu, Lubuntu, Linux Mint, and Windows 7), with a git server running here. I needed something that would work reasonably well across all of that.

How?

I started out with CppUnit. The biggest issue I ran into was the complete lack of documentation (seriously – is there any? It’s advertised as a “C++ port of JUnit” – am I supposed to use the JUnit documentation?). I looked around online and found a lot of tutorials that all used completely different ways of doing the same things. On top of that, none of them actually worked on any of my machines – every attempt to do anything, no matter how minor, was a long string of compiler errors or stack overflows.

I looked to Ceedling. Right off the bat, I liked it more – it came with example projects and included documentation (though it was hard to find). I liked being able to use rake, which I’m familiar with, including setting up stubs for everything with a single rake command. My biggest complaint would still be that the documentation is a little sparse (and online it’s nonexistent). BUT it’s certainly sufficient, so I can’t complain too much. I’m able to get things done, make my tests pass, and so on.

The only bug I ran into was in functions that used printf – I spent about 4 hours tracing stack calls, overflows, missing symbols, and so on until I tracked it down: an error that had been fixed in Unity in August but hadn’t found its way into Ceedling until December (after I had downloaded it). Otherwise it’s been great.


Next time I’ll actually look at some of the testing.

Now that I’m starting the New Year, I’ve settled into an organization system that works pretty well for me. I thought I’d at least record it here in case anyone’s interested, including my future self, who may look back on this system as an unmitigated disaster and wonder what I was thinking.

WEEKLY PLANNER

The first part is a weekly planner I carry around – yes, a hard copy. It’s a little booklet folded down to 3.5 x 6 inches, which I’ve found is a good size to fit in my pockets. It has 2 pages for notes, then a list of long-term goals I have (with weekly milestones), and upcoming events. The main piece is a daily planner section in the middle (I make a new one each week). On the left page are all of my daily tasks, and on the right is a daily schedule. I update it diligently, and record my hourly usage (if I have time, I’ll upload a sample page).

The last page is a review section, where I total up how many hours I spent each day on tasks, and then write a review of the week, noting room for improvement, and so on. It faces the page with my 6-month goals, “this month” goals, and weekly goals (which also appear at the top of each daily task list, so I can make sure my daily tasks align).

KANBANFLOW

I use the Pomodoro technique to stay productive. Last year, I was only using my planner, but was not NEARLY as productive as I would have liked. The Pomodoro technique keeps me accountable, and assuages my guilt over doing things for fun (I’m a little obsessive about productivity). I work for 25 minutes, then take a 5-minute break. Every fourth break is 15 minutes.

I also use the kanban system, with each column being a day of the week (though the first is “today”, followed by “done”, and the last 2 columns are “next week” and “later”). I sort the cards by the order I need to do them and get started. The cards match tasks on the daily agenda in my weekly planner.

KanbanFlow combines a Pomodoro timer with a kanban board, so it’s great. I’m still working on totaling the numbers, but my average productivity has probably increased 8-10x so far this year!

OpenProject

Finally, to track work for software projects, I use OpenProject. It’s open source, I enjoy it, I’ve contributed to it (okay, I fixed a grammar mistake…), and it’s been very useful. I put a lot of detail in the user stories and tasks, and then reference the work package # in my planner and on my Kanban board.

That’s my org technique. Time will tell (or has already told, future self) how well it works. I imagine I’ll continue refining it, though – every week since mid-May 2014 I’ve made one minor tweak or another.

I didn’t want this to sound like an ad for AWS, but I’ve been very happy with it and do want to share my experiences.

EC2

EC2 – Elastic Compute Cloud – is the main “server” part of Amazon Web Services. This is the module where you spin up servers, select the hardware you want them to run on and the operating system (or variants), and assign virtual hard drives and other storage. It’s a lot of fun to work with. The documentation is fantastic, and includes several walkthroughs – “How to set up a Linux server”, “How to set up a LAMP server”, “How to install WordPress”, etc. In fact, you’re reading this on the real-world application of those walkthroughs.

These servers don’t automatically have persistent data, so when they’re shut down, it’s gone. See below…

My only complaint has to do with regions. AWS servers are located in regions, based on their datacenters, and essentially you can only have one server running in the free tier (yes, technically it’s by usage hours and storage etc., but it works out to about one server-month). In the online management portal, you can only see one region at a time. SO, when I spun up a server in one region, then shifted to another a few weeks later, I forgot I already had one going. I couldn’t see it anywhere – there’s no true MASTER view, only a master view PER REGION. This means that when I surpassed my free tier usage, I couldn’t figure out how to stop accumulating charges hourly. Finally I figured it out, but it would be SO helpful if they had a master view. For what it’s worth, they comped me a month’s worth of usage.

S3

S3 is the storage you get. The free tier gives you 30 GB – not bad at all. That’s persistent storage, so just in case my server goes out, I store important stuff there. I also symlink stuff that needs to stay in a certain place on the server, so I can spin up & down without a problem. I just load the persistent storage virtual HDD, and symlink back over.

Elastic IPs

Elastic IPs are GREAT, and the free tier lets you have one. Basically, the real IP address (and the public URL) of a server is created when the EC2 instance is created. SO if you terminate one, and start another, the IP address changes. This is not good for keeping the DNS updated. With elastic IPs, you get one that’s permanent and assigned to you, and you point it at the server you want. SO the DNS has your permanent elastic IP, and you control which server that elastic IP points to.

Email

I haven’t had any luck getting an SMTP server set up. I’m not sure if it has to do with Amazon’s anti-spam measures (a free server could easily be abused for spam), but when I have time I’m going to spin up a new instance and see what happens. I applied to Amazon to have the mail restrictions relaxed, but I’m not sure if that applies to this server, or future ones, or even if it’s EC2-related at all (they do offer mass emailing services for money).

All in all, I’ve been very happy with Amazon Web Services. My only real complaint is the lack of a global overview in EC2.

For the last few years, I’ve been running a server out of my house – Linux, running on my old laptop (a Dell Inspiron 1150 – currently ~$40 on eBay). It’s hosted git repos, and I’ve used it for email, file sharing, a wireless print server, etc. It’s been fun. It’s also hosted some web applications I’ve written for people.

In August, after having to update my IP address several times because my internet provider kept changing it, I decided it was time to migrate to a proper setup, and to use some of the domains I’ve bought over the years (including this one). While I briefly toyed with the idea of just pointing the DNS servers at my 10-year-old laptop, I decided to take the plunge and do “real” hosting online. Managing the hardware myself hasn’t been as much fun.

After looking around at various choices, I settled on Amazon Web Services. They offer a “Free tier”, which provides, among other things, 750 hours of monthly usage, 5 gigs of storage, and one “elastic IP” address. That last one lets the DNS servers find your server even though the specific IP address changes (it’s regenerated every time you launch a new server). You get a permanent IP that Amazon maps to the instance-specific one.

I’ve been using it now for about 6 weeks. Other than some confusion over extra billing charges (my fault, though Amazon comped me a full month of extra usage to cover it), it’s been smooth sailing. The documentation is great, and includes step-by-step walkthroughs for everything from starting a server up to installing WordPress. It’s easy to get up and going. And it’s SO MUCH FUN to have a ‘real’ server on the ‘real’ internet with a ‘real’ domain name.

I’ll talk more about specific perks, as well as issues I’ve found, later on.