managing your enthusiasm

If there is one thing likely to unsteady the ship as a sole founder, it's a lack of enthusiasm. No one there to push you on. No one to confide in. No one to help when things are just not working. All these things can lead you down the dark path.


I was reminded just how bad this can get over the last week or so when I got myself into a rut while trying to solve a problem with my code.

To be specific: I use fabric.js to draw shapes on an HTML5 canvas. However, I have pretty much written my own way to scale objects up and down, as I found that if you have an image as a shape background you get unacceptable pixelation. I tried the static and dynamic resize filters but they proved too slow. All was working fine, but things went wrong with group shapes, where resizing wasn't working correctly, amongst other things.
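For the curious, the general idea is something like the following. This is a heavily simplified sketch, not the actual Neonburn code: on each scaling event, bake the accumulated scale factor into the object's real width and height, so an image or pattern fill can be redrawn at the new size instead of being stretched (and pixelated) by the scale transform.

// A simplified sketch (not the actual Neonburn code): turn fabric.js
// scale transforms into real width/height changes on every scaling event.
var canvas = new fabric.Canvas('c');

canvas.on('object:scaling', function (e) {
  var obj = e.target;
  obj.set({
    width:  obj.width  * obj.scaleX,   // bake the scale into the size...
    height: obj.height * obj.scaleY,
    scaleX: 1,                         // ...and reset the transform
    scaleY: 1
  });
  obj.setCoords();   // keep the selection handles in sync
  canvas.renderAll();
});

Groups are where this falls apart, because each child carries its own transform relative to the group, which is roughly where my week disappeared.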

What I thought should have been an easy task turned into a nightmare. I was hacking away, essentially like a headless horseman, grasping at any straw to solve my problem without ever really thinking about the problem. The real issue was that I wanted to solve the big problem without really knowing what all the little problems were. You see this so much these days with code: folk expect to find the solution to their problem on StackOverflow, and if they do, they copy-paste the solution in without ever understanding the problem. The day always comes when you can't do this, or it doesn't really solve your problem, and you only realise too late. That's another story for another day though!

The more days that went past, the more frustrated I got with this problem. As each day ticked by, my daily schedule (which I talked about before) became just a worthless piece of paper that I filled out each morning, trying to pretend I was in control. Each day my enthusiasm was dropping exponentially. This is where it really hit home having no one to talk the problem over with; talking it over with friends is not much use, as no one is likely to understand what you are doing deeply enough to make any technical contribution. I was honestly at the point where I thought it was easier to just abandon Neonburn before I'd ever given it a chance. It's amazing how the mind works: you're happy to give up something you've given months and months of your life over to, just because of one little problem.

Thankfully I started making small wins, edging little by little towards a working solution, at which point my enthusiasm gradually started increasing. I started to understand the problem (I abstracted it mathematically rather than trying to write code to solve it directly). All of a sudden things were looking better, and it's amazing how good it feels getting these little wins. It's important to accept that just because you can describe a problem succinctly, it doesn't mean it's easy to accomplish technically: think hoverboard!

So I'm not entirely sure what the moral of this story is. It's probably along the lines of: give yourself little wins all the time, which of course means you need to set yourself smaller tasks. Walking away from something is actually the easy option, but then again so is battling away without really thinking about what you are fighting. I in no way believe this won't happen to me again, though. The only thing I remember thinking at the time that got me back on track was: this has happened before, and you always get it fixed, even when it seems impossible at the time (assuming you know it's at least possible). I don't want to trot out the whole "never give up" mantra as that sounds too much like self-improvement-guy; it's more "never give up without having a proper logical conversation with yourself about it". That's not as snappy though!

getting over the line

Why is getting over the line so difficult? When developing software, if someone asks how you are getting on, you inevitably hear "it's nearly done, just this, this and this to do". Next week, "this, this and this" are the exact same things. Your first thought is then to ask "do we really need these things done before we launch/release?" Surely if they're not essential then you just release? The problem is, this boils down to how you define "essential".


For example, I have a really hard time with the concept of an MVP (minimum viable product). Maybe I'm just not the customer for someone launching an MVP, as I would abandon it in minutes if it was (a) difficult to do what I wanted it to do, and (b) bug-ridden. The thing is, I honestly can't understand why anyone would tolerate an app with these shortcomings. Is it really the case that a product only has to be 10% better to convince someone to buy/switch? That seems low to me. I'd say in my case it'd need to be at least 50% better to motivate me to switch. (Those were made-up numbers!)

In reality, I think I'd be more encouraging of MVP+: basically an MVP that does something useful, where I'm not expected to deal with and work around shitty bugs all day, and where you wouldn't be embarrassed to ask someone to pay for it. I don't see a meaningful B2B product as something you can throw together in 2 weeks.

Having said all this, I think it's important as a founder to realise when you need to show what you have created; what is the point in doing it otherwise? I'm one of the guilty ones when it comes to holding off for an MVP++! There comes a day when you need to get off your arse and take a risk. Just minimise that risk, and don't turn customers off for good by releasing crap.

So in summary, if your product is good enough that you’d feel happy charging money for it (assuming you have a reasonable moral compass) then it’s definitely time to let the world see it, otherwise I’d think twice.

PS. 50% better than the others and I'd happily take money for it: www.neonburn.com. Haha.

stop trying to show how smart you are

Sitting feeling bright and somewhat giddy with my own self-confidence, I decided to dust down my copy of "C++ Template Metaprogramming" by David Abrahams and Aleksey Gurtovoy, thinking I really needed to be the master of something, and that C++ template programming was that "thing". I approached the task with some gusto for at least a few days… hours… minutes. However, as with most advanced study, you really have to be willing to dedicate your life to it. It's never long before the inevitable gloom of reality sets in and you realise that this is more than an afternoon's work. Without much control, your brain starts telling you that there are much more important things you could be doing. Thankfully for everyone else on your team, you stop and decide that it's just not worth it.

The fact is that writing code that can only be understood by people who have dedicated a fair chunk of their lives to the subject is never going to be the right choice. There is nothing worse than trying to understand code like this. There are times where it pays to be smart, like when you have a (much) faster algorithm, or when doing something clever saves you hundreds of lines of code. But most of the time you see this kind of code, it's people just trying to prove how smart they are.

Now C++ is not alone in this. Ruby has exactly the same problem: you can get stuff happening as if by magic. I could be wrong, but I imagine most people read code from top to bottom, working line by line; they don't expect code to be auto-generated or conjured up by some elaborate method_missing technique. Sure, I can maybe relax my vitriol for those building frameworks, where things are used in some generic, unknown context. However, the vast majority of applications out there don't have to deal with these problems. The biggest problem these applications face is that the developers creating them want to architect some elaborate framework to fit a very specific use case; it almost makes me cry. I'm not sure why, as a profession, we don't revel in an approach that oozes simplicity. This is certainly what the smartest developers I've worked with have always managed to do.

writing code used to be simple

Not that many years ago, though more than I care to remember, the process of writing code started with pen on paper. I'm not talking about 1970 here, where you had to apply for compute time on some mainframe to see your ideas come alive, but rather 1995, when the normal practice seemed to be encouraging us disobedient students to first distil our ideas (and code) on paper. I'm guessing this practice is now largely forgotten? Despite its disappearance it can still be good practice, but it is made difficult by the increasing complexity of modern code libraries and frameworks.

Take Ruby on Rails for example. This used to be the poster child for getting an application up and running quickly. It might still be true if you have years of experience developing Rails applications, but for a newbie, forget it, you are going to struggle. The introduction of the asset pipeline, amongst other things, has made learning Rails a labour of love.

It’s not just Rails though. A lot of the code that you need will already be wrapped up in a library written by someone else. Creating a modern software application is essentially just a case of rearranging these packages into a new unique order. In fact, the need for the order to be unique can probably be dropped.

Using these libraries is good though? Right? I mean we should never “reinvent the wheel”?

So how many times do you find yourself spending more time trying to figure out how the fuck to use some library than it would have taken to write it yourself? No one really expects there to be detailed documentation, which is just as well, as in general you'd be thoroughly disappointed. Maybe it's because I'm dumb, but figuring out how to use a library can be a complete time sink. I hate to encourage this, as it seems like I'm committing a heinous crime ('cause that's what I'm told I'm doing), but just write the code yourself if that gets things working quicker for you. Obviously don't rewrite the exact library you are shying away from, that would just be stupid, but if you only need a subset of its functionality then go for it.
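To make that concrete, here's a made-up example: if all you need from some utility library is a debounce function, a dozen lines of your own JavaScript may well beat an afternoon spent deciphering someone else's documentation.

// A hand-rolled debounce: delay calling fn until the calls stop
// arriving for waitMs milliseconds.
function debounce(fn, waitMs) {
  var timer = null;
  return function () {
    var self = this, args = arguments;
    clearTimeout(timer);
    timer = setTimeout(function () {
      fn.apply(self, args);
    }, waitMs);
  };
}

// Usage: don't fire the (stand-in) search until typing pauses for 300ms.
function search() { console.log('searching…'); }
var onKeyUp = debounce(search, 300);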

Never forget though, there will be a team of astronauts telling you that's not what you should be doing. They will be spouting dogma about this and that. However, if abandoning some well-written (and not so well-written) rules allows you to create a product that people use and love, then tell the astronauts there are people who want to hear their shit on the moon and let them go there.

the internet and i

If you are able to read this, please keep it to yourself; don't tell anyone about it. Your ability to possess information that others do not have access to is crucial to moving only in the forward direction. Yet there is very little information now that is not readily available to everyone. The internet is almost ubiquitous. Rewind a mere 15 years, though, and observe how dramatically the landscape has changed.

Back then a PC was something only the posh kids had. My school was fully laden with BBC Micros that you could only use to control some piece-of-shit circuit board that you had cobbled together to light up a few LEDs in the form of a traffic light.

I still remember to this day getting my first PC: a (big) Compaq laptop of sorts. It was purchased second-hand from a shop that specialised in guns, knives and guitars; I still have it somewhere. They probably didn't even know what it was. When I read stories of people having PCs in the 80s it makes me laugh, as in my world this stuff was sooo far out of reach you wouldn't believe it.

Still, what would the general public have done with a PC back then anyway (apart from gaming)? The internet was a luxury item at that point. Even at university only a few computers had internet access. Between '95 and '97, however, it exploded. You were faced with a computer lab full of students looking up porn and printing out the pictures (I mean, they were not much use unless you printed them out, right)! Those were the days of looking up whatever you wanted with reckless abandon, without feeling that someone or something was watching you; that was soon quashed by the corporate firewall and government legislation. Make no mistake though, we are still in the golden age of the internet. Laws and legislation will likely mean that the freedom we associate with the internet now will not be the same 30 years down the line.

What puzzles me most is how we ever got by programming without the internet. If tonight I decided that I wanted to learn OCaml to build some super useful web service, I would start by searching Google. I'd have binaries, sample programs and documentation in minutes. If I had a problem, I could search Google or ask a question on StackOverflow. The barrier to entry is just non-existent. I struggle to remember exactly what we had to do all those years ago. It must have been ridiculously hard compared to the present day. But this is good: we get information easily and move forward. That said, the internet does have its bad points.

We now struggle to get things done because we can listen to music online, watch videos, read blogs, not to mention the ease with which we can communicate with friends using social networks. What the internet gives with one hand it takes away with the other.

I'm often tempted to unplug the router and see how I get on with work minus the internet. Invariably, at the last minute, I find some compelling reason to have it switched on, then the thought of unplugging it fades into nothing and time just slips away again.

an afternoon with Don Knuth

It's not often you get the chance to spend an afternoon with probably the most famous hacker/developer/computer scientist the world has ever seen. However, last week I got the chance to do just that, when three colleagues and I had the great fortune to spend some time with Don Knuth.

For the uninformed IT "professionals" who have never heard of Don Knuth: this is the guy who brought us the idea of the analysis of algorithms and asymptotic notation (big-O notation), the Knuth-Morris-Pratt string searching algorithm, the Art of Computer Programming book series; the list goes on. He is not only an "algorithms guy" though; he also developed the TeX typesetting system and the METAFONT language used to define vector fonts. So basically, he is the most famous computing guy out there.

Knuth is a surprisingly easy guy to talk to. Sure, he can really lose you pretty quickly in a conversation, but he also has some great insights.

Our conversations tended to centre around algorithms. His next volume of the Art of Computer Programming will likely focus on constraint satisfaction problems and satisfiability problems, the former being something I worked on myself in the not too distant past. I asked what he thought was a good algorithm to teach people, and he said he thought the bipartite graph matching algorithm was a nice one in terms of beauty (he did mention another which escapes me now). Not everyone will find the algorithms stuff that interesting (you should!), but his view of beauty is maybe something more universal.

He also expressed a love for writing code: he said that when he gets up in the morning he thinks about writing code, and misses it on days when he doesn't get the chance. That is pretty cool by me, and sits in stark contrast to many academics. I got the feeling that he wasn't too keen on the "apps" developers, as he called them. My guess is that his thoughts lie with more meaningful problems than fart apps; however, people download them, so who are we to say. Still, there was definitely some lamenting going on about the fact that people use software without ever trying to understand what it is actually doing. That is, without having at least a high-level view of the data structures and algorithms that make the said piece of software useful. Having this kind of understanding allows you to select the right tools for the job. In my experience, people who have this knowledge and understanding are far better developers, which is likely why Google, Microsoft, Facebook et al. try to attract developers with it.

He was telling us that he watched The Social Network on the plane on the way over. He said he thought it was great how Mark Zuckerberg was also someone who, like him, just liked building stuff; this was something Zuckerberg said himself at Startup School 2010. What is even cooler is that Mark Zuckerberg actually sent him a copy of the latest Art of Computer Programming book and asked if he would sign it.

So Don Knuth himself will have long forgotten who I am, but at least I will be able to recollect, years down the line, this encounter with a computing genius.

academics don’t care and industry doesn’t have the time

I've been reading some interviews recently (Coders at Work) and one interviewee (I can't remember who, but I think it was Fran Allen) suggested that in the last 20 years programming languages have not progressed in the leaps and bounds they appeared to in the early days. Are they right?

Personally, I think this might be a good call. How different are current programming languages from C? OK, today's popular duo, Java and C#, have garbage collection, so we don't have to deal with those troublesome pointers. Then again, both LISP and Smalltalk have had garbage collection for a long, long time. C# has lambda expressions and other higher-order functions, but again, this stuff has been around in LISP since day dot. It's almost as if the "C-style" languages are battling to catch up with the likes of LISP (a language I've never used other than for some emacs hacking). LISP has been around since 1959, so why has it taken so long for people to realise that many of its features are incredibly useful? Have we been held back by the fact that most academics don't really care too much about this kind of stuff, and so don't push it, while industry is too busy telling us something needed to be done last week?

First, I know there are academics who do care about this stuff, but I don't think there are enough of them. And by virtue of being academics, they are not exposed to many of the problems faced by your everyday software developer, and as such, maybe they don't see the future so well. So is it up to industry to innovate at this level? If it is, I don't see us moving far forward from our current situation in the next few years. Why?

Well, most small to medium-sized software houses are not exactly making money in large enough quantities to warrant throwing it away on research that some other company making similar software will use to create a similar or better product. So instead of spending money pushing forward the state of the art, you would be insane not to spend it on making a better product. This, combined with crazy schedules, does not leave much time for forward thinking.

So it appears we need to rest our hopes on the large enterprises like Microsoft, Google, et al. It's fair to say that these companies invest a significant amount of money in research, some of which goes toward trying to make programming languages better (F# and Go, for example). However, taking these two languages as examples, F# doesn't appear to push that many boundaries, and while I can't comment too much on Go, as I've not looked at it closely, it also doesn't seem to include many radical switches. Maybe radical does not sit well with shareholders, I dunno.

The thing is, I'm not professing to have much insight on this either, and I'm not even sure I know what I mean by a radical switch. I just know that it doesn't appear academia or industry are moving this forward quickly enough, if the last 20 years are anything to go by. The only remaining vehicle for change is the programming "community" as a whole, but how much traction we can get is debatable. That said, we do need to move in a different direction, I'm sure of that, and maybe things like multi-core processors will force us kicking and screaming in that new direction. Then again, maybe the problem is even deeper than this, and a switch away from the whole von Neumann architecture is required. Who knows? Well, I hope someone does!

sticking with what you know

There comes a time in every programmer's life when they have to learn new things and step outside the box. Yeah, it's difficult, for sure. It's all too easy to create the latest application in your software empire using a language you've been developing in for the last 10 years. The real problem, however, is thinking this is the only choice. When is it time to abandon this certitude?

First, we cover forced abandonment. This is when you are pushed kicking and screaming into pastures new, whether you like it or not, i.e. the new job. Here, not only is the new-language curve ball thrown (viciously), but you also get a whole new set of business rules into the bargain. So what do you do? You program the new language like the old one, merely translating the syntax in your head. This is not the best way to learn a language though. Why? Well, consider those C programmers trying to program imperatively in Java, Java programmers in JavaScript, C++ programmers in Ruby, and so on. When there is a change in paradigm, this mapping strategy just doesn't work; a similar situation exists with languages that have a more powerful expression set. It also encourages people to learn just enough to get the job done, without understanding what is really happening, or realising that there may have been a better way using the "unmappable" features of the new language. A better approach is to write something small, and new, that lets you explore the language's features. I'm sure most people can think of something they could write. Furthermore, if you can make it useful to other people, or even your new employer, then everyone's a winner! This is something I touched on before.
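As a contrived little illustration of the mapping trap, here is the same job done twice in JavaScript: once as a transplanted Java programmer might write it, and once using what the language actually gives you.

// Summing an array "in Java", just with JavaScript syntax:
var numbers = [1, 2, 3, 4];
var total = 0;
for (var i = 0; i < numbers.length; i++) {
  total = total + numbers[i];
}

// The same thing leaning on the language's own features:
var total2 = numbers.reduce(function (sum, n) {
  return sum + n;
}, 0);

Neither is wrong, but if you only ever write the first, you never find out the second exists.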

For many people, though, this is the only time they will ever consider abandoning their language. This is sad, and a poor characteristic in a programmer, and to be honest I just don't understand it. That's not to say I don't accept that some people do programming just as a job, then go home and don't think about it. But it's like most things in life: it's nice to progress, no?

As a programmer there will also be other signs that the tide is turning, and you don’t have to be too alert to spot these. Previously I wrote “Perl is Dead, Long Live…Perl?” and being a big Perl fan it was sad to see the language apparently dying, so I know what it’s like. Some signs to look out for may be:

  • the language features are not moving on (Java watch your back) – the people who created it no longer care,
  • the community surrounding the language is dwindling – the people who use it no longer care,
  • there is little in the way of choice when selecting libraries/frameworks – the experts have fled,
  • other programmers have never heard of it – there is no buzz,
  • jobs using it are few and far between – businesses have given up on it, the death knell.

However, this is all not to say that you should give up on your language just because it's no longer cool; popularity is by no means a great indicator that something will suit your needs. It need not be the case that you give up on your language of choice; instead, you could contribute and drag the language forward. But be careful with that one.

Finally, any decent employer will want to see that you are continually developing your skill set – their business needs are continually evolving, so why aren’t you? You are much more likely to land a better job if you contribute to your own education in some way. It looks good and it’s also something to talk about.

So go out and learn something new today, and stop sticking with what you know.

programming just isn’t that hard!

Programmers at times can take themselves, and their abilities, a little too seriously. The fact is that programming, in general, is just not that difficult. Sure, there are parts of it that are tricky but, at the risk of over-generalising, the capabilities required of your run-of-the-mill programmer are not that high. "Are you sure?" I hear you say.

Well, plucking figures right out of the air, I'd say around 90% of applications involve a simple CRUD model. So what we are essentially doing is gathering data, processing that data, and writing it to the database. Two-thirds of this process is pretty simple, i.e. gathering the data and writing it to the database, which leaves the data-processing phase as the only real home for hardness.
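To show just how thin that CRUD layer usually is, here's a minimal sketch using Node and Express (the routes are hypothetical, and the "database" is just an in-memory array to keep the example self-contained):

// npm install express
var express = require('express');
var app = express();
app.use(express.json());

var items = [];  // stand-in for a real database table

// Create: gather the data, (trivially) process it, write it
app.post('/items', function (req, res) {
  var item = { id: items.length + 1, name: req.body.name };
  items.push(item);
  res.status(201).json(item);
});

// Read it back
app.get('/items', function (req, res) {
  res.json(items);
});

app.listen(3000);

Swap the array for a real database and you have the skeleton of most of the applications out there.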

Again, in most situations the processing of data is pretty simple, with no complex database manipulation or algorithmic mind games. Consider your typical web application: the processing of data is minimal, with little to no algorithmic work involved at all. With Twitter there is literally nothing to do; Google, on the other hand, has lots to do: it has to make those search results shine. This is not to say that developing Twitter is simple, as the scaling issues will make your head hurt. But scaling problems only affect a very, very small number of sites out there; it's just that, due to those sites' ubiquity, they are the ones we hear most about.

If all this programming nonsense is so easy then surely it’s difficult to make bad software?

Nope. The fact is that the only people who care what the code looks like are other developers. The code underneath could be shitter than an incredibly shitty shit and the end user wouldn’t know. As long as it carries out the task that they require the software to do, in a reasonably efficient and user friendly way, no one really cares. Oh apart from other developers.

Obviously, nicely structured code that is easy to understand, free of bugs, and a maintenance dream is a good building block. However, it is by no means a guarantee that you are on to a winner. Marketing, user experience and coolness are all equally important; actually, they are probably much more important. I obviously can't say for sure, but I would imagine there are plenty of successful software products that are badly written but tick these boxes; maybe even most successful products, as they are free from the burden of the studious programmer.

So, essentially what I’m trying to say is that despite programming not being that difficult there are many other more important factors that contribute to the success of a software product. Many software products do their job, but the one that will be successful is the one that does it well.

All this is not to say that programming well is not important. On the contrary, it’s important to other developers who you work with and that is not to be underestimated. This is a topic for discussion another time though!

Finally, for those non-programmers out there, don't start shouting "If it's so simple, why does it take so long?" Just because something is easy doesn't mean there are not lots of easy things to do. I mean, hammering a nail into a piece of wood is pretty simple, right? But if I asked you to hammer 10 million nails into a bit of wood, it would take you a long time. Remember this, marketers and project managers.

strong coupling and web development

We are all well aware of the dangers of strong coupling, the worst for me being the increased maintenance cost of such a system. Much has been written about coupling, and measures such as dependency injection are used to help control it.
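For anyone who hasn't met the term, here's a throwaway sketch of dependency injection in JavaScript (all the names are made up): the object doesn't construct its own collaborator, it gets handed one, so the collaborator can be swapped without touching the object itself.

// A made-up in-memory store; a real one might talk to a database.
function MemoryStore(rows) {
  this.rows = rows;
}
MemoryStore.prototype.fetchAll = function () {
  return this.rows;
};

// Report doesn't know or care which store it gets...
function Report(store) {
  this.store = store;  // ...the dependency is injected
}
Report.prototype.render = function () {
  return this.store.fetchAll().join('\n');
};

// Wiring things together becomes the caller's job:
var report = new Report(new MemoryStore(['row 1', 'row 2']));
console.log(report.render());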

However, as far as a Google search leads me to believe (though I didn't search too hard), not much has been said about the strong coupling that can exist between HTML, CSS, and JavaScript files. Maybe I'm doing something wrong, or at least something different from others, but many times in recent weeks I have found the dependencies between these files to be rather tight and restrictive.

To understand what I'm talking about, let's take a look at an example. Consider the following snippets from HTML, CSS, and JavaScript files respectively:

<div id="main_wrapper">
    <h2>Header</h2>
    <div id="content"></div>
</div>

#main_wrapper {
    color: #FFFFFF;
}

$('#main_wrapper').css('color', '#000000');

As you can see, the id is referenced in three different places. Well, so what?

The problem, as I see it, comes from the fact that it is more than likely these files will be maintained by different people. For example, a designer is likely to maintain the HTML and CSS files and a developer the JavaScript file. In the days before JavaScript proliferation, this maybe wasn’t such a problem. However, with the rise of jQuery and its awesomeness the landscape has changed a little.

In fact, jQuery actually compounds the problem, as we have all become accustomed to seeing JavaScript code littered with things like $('#main_wrapper'), i.e. CSS selectors referenced throughout the code base. What this means is that when the designer changes the structure of the HTML file, or the class names and ids, it can have unexpected side effects.
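What makes it worse is that the failure is completely silent. A contrived example: suppose the designer renames #main_wrapper to #page_wrapper in the HTML and CSS, but nobody touches the JavaScript.

// jQuery happily operates on an empty selection: no exception,
// no console warning, the call just does nothing at all.
$('#main_wrapper').css('color', '#000000');

// The only clue you get is the selection size:
console.log($('#main_wrapper').length);  // 0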

The easy (partial) solution to all this is that when you create ids and classes you never change or remove them, or their container; or at least you never change them without searching globally rather than just locally. I'm not sure how restrictive this is though; maybe those with far more experience of this than I have can offer up some more constructive thoughts? Are designers just not used to such constraints? I dunno.

To me it just seems like a hassle having to keep track of selectors, ids, and classes in so many different places. It goes against the spirit of good programming, where we try to localise the side effects of any change. With the above way of working this doesn't seem possible, and combined with the divided responsibility between designers and developers, I imagine it causes an increase in the number of bugs and in the cost of maintenance.

Maybe others don’t see this as an issue at all? Maybe I’m doing designers around the world a disservice? You decide.

At the moment I can’t offer up any better solutions, as I have not really thought about it too much yet. When I do though, I’ll be sure to report back 😀