Sunday, December 16, 2012

How much should a library wrapper be documented?

I was looking for a module on npm to install my Node.js application as a Windows service. Pretty straightforward stuff really, and I came across winser, which is a nice module that easily allows you to do this. The documentation even shows how to register your application as a service at install time using the npm "postinstall" script feature, which I must admit is very nice.

All had been going well with winser, but when I installed it in the testing environment things suddenly took a turn for the worse. All of a sudden the test box became unresponsive and then rebooted itself, only moments after I had installed the module. I knew this environment was not set up correctly, and my application self-terminates when it cannot find the correct settings or start up as expected. The problem, which I didn't know about at the time, is that nssm, the library winser relies upon to register the application as a service, has more configuration options. An important one is its restart throttling mechanism. If your application exits within a threshold period, nssm throttles the restarts; if it exits after that period, nssm assumes the application started successfully and restarts it immediately. This is nssm's default behaviour when an application exits, and the default threshold is 1500ms. Unfortunately my application would exit after 1500~1600ms... which led to the application tight-looping on restarts... which led to excessive messages going to the Windows logs and CPU consumption that crashed my test server.

To me this shows that sometimes important default settings/behaviours are not obvious when they should be to a developer. If I had known the default behaviour I would have looked to change it, but I did not. After reading a lot about nssm (probably more than I should have needed to) I am now excited about the next version, nssm 3, which will allow you to control a lot of these settings without editing the Windows registry.
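Until then, nssm 2.x reads these settings from the registry. Here is a minimal sketch (the service name "myapp" and the 10 second value are made up for illustration) of raising nssm's documented AppThrottle value from a Node script run with administrator rights:

var exec = require('child_process').exec;

// nssm reads its per-service settings from this registry key.
var key = 'HKLM\\SYSTEM\\CurrentControlSet\\Services\\myapp\\Parameters';

// Raise the throttle threshold to 10 seconds so an early exit like
// mine (~1600ms) is still treated as a failed start and throttled.
var cmd = 'reg add "' + key + '" /v AppThrottle /t REG_DWORD /d 10000 /f';

exec(cmd, function (err, stdout, stderr) {
  if (err) {
    return console.error('Failed to set AppThrottle: ' + stderr);
  }
  console.log('AppThrottle raised to 10000ms for service "myapp"');
});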

Even though it is easy to wrap up functionality to make it available across programming languages, it is important to carry the documentation and usage notes of the original library over into the wrapper.


Saturday, December 1, 2012

NodeJS connection times out after 2 minutes

I've been working on an application that is a REST based API. It is fairly simple, with the only exception that it transfers largish amounts of data (50MB+). This is not particularly hard for Node.js to do, and I would say a great fit, as you really can build an application with a small memory footprint. I was really pleased to be able to stream the data back to the client with the application using less memory than the data sent. At the time I was using wget to call the API with simple requests so I could test that my code was working. This is pretty straightforward, and wget is a great little tool when developing this type of stuff, as most browsers would crash when they tried rendering the 50MB of data. So I had a really good way of testing the API each time I added a part to it.

All was going well until I came across a strange situation after completing development of another API call. This new call would time out at 2 minutes. I was perplexed, as nothing had gone awry, but the call involved a lot of computation, which meant no data would be sent to the client for several minutes. First I thought it was wget's timeout... but its default is 15 minutes, so no way was it in the wrong. Wget has a few timeout settings, so I had a go messing around with them, but no joy... no matter what I did, this new API call would always time out at 2 minutes. So I tried a browser on the off chance something was screwy with wget, but it had the same issue. It occurred to me then that it had something to do with Node.js and an internal timeout. I read the http.Server documentation inside out and there is no mention of a connection timeout when sending data... I was confused and annoyed at this point. This feature/behaviour is simply not documented in the Node.js API. After a bit of searching I found some information that the response connection times out after 2 minutes (120 seconds).
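You can reproduce the behaviour with a tiny server - a sketch, with an arbitrary port, whose handler stays silent for longer than the two minute window:

var http = require('http');

// Nothing is written to the socket for 2.5 minutes; with the default
// timeout the connection is destroyed at the 2 minute mark and the
// client never receives a response.
http.createServer(function (req, res) {
  setTimeout(function () {
    res.end('too late');
  }, 150 * 1000);
}).listen(8080);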

If you look at the source code of http.js you will see at line 1700 (v0.8.15) the code responsible for the timeout. Unfortunately there is not much in the code that states why the timeout is hard coded.

socket.setTimeout(2 * 60 * 1000); // 2 minute timeout

The documentation does mention the http.ServerRequest.connection object but makes no mention of the http.ServerResponse.connection object. I must admit I am surprised there is no mention of this, as how else would a response get back to the client? The corrective action is pretty straightforward: set the timeout on the response's connection.


var http = require('http');

http.createServer(function (req, res) {
  // Never time out (res.setTimeout() does not exist in v0.8).
  res.connection.setTimeout(0);
  /*
     Do stuff
  */
}).listen(8080);
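With that change in place the slow call completes: the client simply waits out the computation and then receives the data, instead of being cut off at the 2 minute mark.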


I did hunt around the code base to really understand why a timeout of 2 minutes is defined. The best I could find is the following, which comes from this commit: https://github.com/joyent/node/commit/7a2e6d674a94e01a17e856b4d51ec229fad9af51



Default to 2 second timeout for http servers
Taking a performance hit on 'hello world' benchmark by enabling this by
default, but I think it's worth it. Hopefully we can improve performance by
resetting the timeout less often - ideally a 'hello world' benchmark would
only touch the one timer once - if it runs in less than 2 seconds. The rest
should be just link list manipulations.


Do note the committer, ry, mistakenly wrote seconds instead of minutes in the commit notes, but it would appear the reason is performance. It has been interesting investigating the cause of my grief, but I don't feel the need to remove the 2 minute timeout, as it does serve a purpose, and my situation of keeping a response open through long periods (minutes) of no data being written is not a common use case.


Tuesday, November 22, 2011

Thoughts on LESS CSS

So if you don't know about LESS CSS, it is a way to generate CSS from a LESS file, which consists of a loose set of rules. First impressions: this seems really powerful - a way to generate CSS using programming logic. The problem is that it is limited to generating CSS and does not add functionality to CSS itself. So even though there is, or maybe has been, hype around it, for a developer who is not a CSS developer it has little to no value. In fact I would go as far as to say that it has limited use even for CSS developers. The reason I say this is that a website design tends not to have a lot of styling which can be reused, nor is it easy to predict the reuse. The only real exception is websites with a big templating base, such as Wordpress. I can see making different colour themes for a template being really useful, but that is where it ends. So for me it gets put in the really cool category, but I don't have a business case for it.
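To illustrate the colour theme case, here is a minimal sketch (assuming a recent version of the less npm package, with made-up class names): a single @brand variable drives all the generated CSS, so re-theming a template is a one-line change.

var less = require('less');

// One variable drives the whole theme; change it and every rule
// below is regenerated.
var theme = [
  '@brand: #336699;',
  '.button { background: @brand; }',
  '.button:hover { background: darken(@brand, 10%); }',
  '.link { color: lighten(@brand, 20%); }'
].join('\n');

less.render(theme, function (err, output) {
  if (err) throw err;
  console.log(output.css); // plain CSS, ready to serve
});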

Friday, October 14, 2011

Are things getting more usable? Or just not for developers?

I was reflecting on the past week and recalled a number of UX/usability issues with products that I thought should be easy. Maybe things are being made so easy that it is making them too hard for someone like me to use? I was using Microsoft's Outlook 2007 email client and all I wanted to do was view the HTML source of an email. Previously this was pretty straightforward and obvious to do (well, from my recollection), as I didn't need to search the interwebs for an answer. The obvious choice of something being under the view menu was no longer applicable, and I eventually found it in a sub menu under Other actions (http://www.technologyquestions.com/technology/microsoft-office/61907-view-html-source-outlook-2007-a.html). So why wouldn't this functionality be under the view menu? I don't know the reasons why Microsoft put this functionality in such an obscure location, but it gets me thinking it is hiding functionality from the user. Probably the real answer is that a developer (me) is no longer an audience which gets functionality aimed at it. Instead it is "hidden" or obfuscated from me: it is there but no longer obvious. This is the same with Windows Vista/7, which I do my best to skin back to Windows 2000 so I can quickly find functionality and be productive. Windows Explorer used to be a great place to manage files, but now, with My Computer having taken over, I no longer find it a productive way to manage files.

I understand I am not the target audience. Scratch that: I bought the product, so should it not be aimed at me? This is where I feel UX in software is failing, because it can not be customized for me. I was using my accounting software Xero today and all I wanted to do was search over my expenses. Fairly simple functionality, a search over a table, right? Maybe it's just that I want something that they have yet to implement? Put it on the suggestion list, you say? The real issue is that my needs will never match everyone else's.

This brings me to the thought that UX/usability should be focused less on a generic customer and more directed at each customer. You might be thinking "Hey Ross, to do that you'd need mega bucks." Well, I would agree with you, but I think this is the direction software needs to move in. We have seen a lot of movement recently with enterprise software being pressured to deliver consumer software type features. In the enterprise world you would create one page/module for only a small number of users, and it would do exactly what they needed to be productive. Whereas with consumer software you do not have the ability for that level of customization to occur. So instead consumers create new work-flows to get around those needs. If they are lucky there is a product that better suits their needs and they migrate to it. This is what consumer software needs to learn from the enterprise. Software needs to be customized to me.

Friday, October 7, 2011

Reflections on MAX 2011

Well, I just got back from MAX and I would have to say it was probably my worst experience. Not because of the event in any way, but because I got really sick with a virus/flu/MAX killer bug at 4am on Monday, day 1 of MAX. So I missed the entire day and only pulled myself together on day 2 to see the keynote and stick around for the famous "sneaks". I was about all out of energy once the sneaks had ended, and was bitterly disappointed I was going to miss the MAX bash. It looked really fantastic as the bus took me back to my hotel to rest and recover. Come day 3 I had to pull out, as I was travelling home the next day back to New Zealand, which is about 14 hours of flying. So in reflection it was my worst MAX, but that had more to do with me than the conference. I can't say it enough that it sucked being sick at MAX. When I had moments of feeling well I popped online to see what was up and caught up on the day 1 keynote. I have to give Adobe kudos for getting these resources up so quickly.

I read a blog post which I think best reflects my opinion of the event in general.
http://www.rblank.com/2011/10/06/thoughts-on-adobe-max/

The only additional thing I'd like to say about the event is: where was Adobe's CEO Shantanu Narayen? At each of the past three MAXes he started the day 1 keynote. So why was he missing? At the first MAX I attended, in 2008, I was like "who is this Shantanu guy?", as he was not a familiar face to me, but I got used to seeing him at the start of each day 1 keynote. Bit of a mystery to me.

The one thing on my MAX wish list was to see a new developer focused tool for HTML/JavaScript development. Yes, Dreamweaver can do it, but its audience is not developers. I want an awesome HTML/JS Eclipse plugin similar to what Flash Builder is for Flex/ActionScript. Well, we can all dream.

All that said, I'm looking forward to next year and the innovations ahead.

Monday, June 27, 2011

What I want in JavaScript

I was reading a Coding Horror post on JavaScript which went over how it became popular, how it came to be, and the troubles it faces in the future. I reflected on this point of view, thinking a language's success can have very little to do with the language itself. JavaScript is no Smalltalk when it comes to languages, and one could argue it is just a child of previous modern day languages. That said, I'm not that interested in new language features being added to JavaScript, but I would really like to see new deployment formats. Inline JavaScript and the HTML script include element are too focused around the HTML document. A clean separation between layout and code should be created so as to prevent and control JavaScript execution. It is far too easy to include JavaScript libraries, and too easy for conflicts to go unnoticed until it is too late.
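A minimal sketch of the kind of conflict I mean (the library contents are made up): two scripts included on the same page share one global namespace, so the later one silently clobbers the earlier one.

// library-a.js
var util = { trim: function (s) { return s.replace(/^\s+|\s+$/g, ''); } };

// library-b.js, included later on the same page
var util = { log: function (msg) { console.log(msg); } };

// application code - the breakage only shows up at the call site
util.trim(' hi ');  // TypeError: util.trim is not a function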

I would like to see a compiled format for the language which is sandboxed, so you can be sure no one else overwrites a feature. Something like the SWF format would be ideal, as you have a lot more control over the binary. I know a lot of js files are minified and sent compressed via the web server, but that gives you no control over the internals.
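The closest thing we have today is the module pattern - an immediately invoked function that hides its internals in a closure. It is a convention rather than a real sandbox, which is exactly the gap I am describing:

var myLib = (function () {
  var secret = 42;  // invisible outside the closure
  return {
    answer: function () { return secret; }
  };
})();

myLib.answer();  // 42
// 'secret' cannot be reached, but nothing stops another script
// from reassigning the myLib global itself.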

Also, it would be great to see a new HTML feature designed around the JavaScript which is being run, so you could state restrictions on the code. This would bring HTML applications in line with apps that are built and compiled for deployment on other platforms, such as iOS and Android.

Even though I dread the thought of HTML, CSS, & JavaScript being the base of modern applications, that doesn't mean it will not happen. With web developers now pushing the HTML platform past a document format syntax and towards being the base of applications, we should expect it to grow as well.

Saturday, May 14, 2011

New tech but lack of tooling

I was flicked an email to the Rome interactive music video. I was not sure what to expect, but I was most impressed by it. This really is a great showcase of mixing HTML5 and WebGL. The music was really cool... might even get myself a copy of the single/album. But once I had played it a couple of times through, I was thinking: exactly how easy is this? Changing pixels on the screen is one thing, but how long did it take them to do it?

My experience with JavaScript is one of a love hate relationship... but I'll leave that for another post. So I was wondering how they generated all the JavaScript files for the models etc. It's cool to have something that can play/run it, but how about something productive to generate it? I was hoping the behind the scenes tech vid would shed some light on this, but it didn't... other than going on about WebGL and HTML5. As a developer at heart I want to know: how hard was it? Like really, how hard was it? It is really cool that you can do this, but having something render your vision is one thing... how you create it in a form to render is what interests me. Which really comes down to tooling.

Time will tell if this is just one cool example/demo application of these techs destined for history, or if there is a larger movement out there with tooling to follow. At the end of the day I'm always thinking about productivity, and great tools make great techs hum. So for the meantime I'll wait and see what great tooling comes out on top for creating such visionary pieces of work.