Apr 16 2012

... are we really arguing about semicolons now?

Category: What NOT To Do | JavaScript | Matt @ 14:40

I know I’m late to the party, but this issue reported in Bootstrap has created quite a stir across the Interwebs and Twitterverse.  Chants of “Down with Semicolons!” by the “JsHipsters” and of “LEARN 2 CODE NEWB” by the “JsVets,” though entertaining, aren’t really doing anyone in the community any good.

More...


Oct 21 2009

Please stop over-using the var keyword

Category: What NOT To Do | Matt @ 05:21

I just watched an excellent presentation by Jimmy Bogard on UI testing with ASP.NET MVC.  While I really enjoyed the presentation and got a lot of really good ideas for how to improve how I work with MVC, I saw something that I really did not like: lots of var keywords.  Here’s an example (recreated from memory so it might not be exactly right) of one of the methods he showed during his presentation:

...

var id = UINameHelper.GetName(expression);

var func = expression.Compile();

var value = func(model);

...

What’s wrong with that code?  Well, unless I know the API, which I don’t, I don’t know the types of the variables.  I can make certain assumptions about their types (the first is probably a string, the next is presumably a Func<T1,T2>, the third I’m not sure about), but I don’t know for sure unless I inspect the code further.  Sure, IntelliSense is going to help, but I don’t have IntelliSense when I’m watching a presentation or looking at a code sample over someone’s shoulder.  I have the code, and that’s it.  The code should be readable without an IDE.  It should be just as clear when I’m using Notepad++ or looking at it on your blog as when I’m digging through it with VS.NET.
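To make the point concrete, here’s a self-contained sketch.  The real GetName/Compile calls come from Jimmy’s API, which I don’t have, so the Model class and its property are stand-ins I made up:

```csharp
using System;
using System.Linq.Expressions;

class Model { public string Name { get; set; } }

class Program
{
    static void Main()
    {
        Expression<Func<Model, string>> expression = m => m.Name;

        //With var, the reader has to infer every type from the API:
        var func = expression.Compile();
        var value = func(new Model { Name = "abc" });

        //With explicit types, the code documents itself:
        Func<Model, string> compiled = expression.Compile();
        string result = compiled(new Model { Name = "abc" });

        //Will write 'abc' twice
        Console.WriteLine(value);
        Console.WriteLine(result);
    }
}
```

Both halves do exactly the same thing; only the second half tells you so without an IDE.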

There are times when the var keyword can make code more readable.  A great example is when working with complex generics:

//Less readable
Dictionary<int,Pair<Foo,Bar>> lookup = new Dictionary<int,Pair<Foo,Bar>>();

//More readable
var lookup = new Dictionary<int,Pair<Foo,Bar>>();

So, a request to all my fellow developers out there: please use the var keyword carefully.  I don’t think it should ever be used when assigning a method’s return value to a variable.  I think it should only be used when it actually *improves* the readability of the code, and never when it takes away from it.

Thoughts?  Is var the best thing since sliced bread or the worst thing since IE6?  When do you think it’s ok to use var?


Oct 12 2009

The additive nature of bad code

Category: What NOT To Do | Matt @ 03:06

It really shouldn’t be a surprise, but I have recently witnessed firsthand how a little bad code sprinkled here and there quickly adds up to a lot of BAD code that’s painful to work with.  That’s the scenario we’re facing now with our ASP.NET MVC application.  Our team conducts peer reviews for every bug/feature, but we’ve still built up a small mountain of technical debt that we’re going to have to pay off sooner or later.  Our once-clean code base has become complex and difficult to work with.  Even simple changes are starting to take progressively longer to implement.

How did this happen?  There are probably several reasons, but I think one cause is that we’re still too lax in our peer reviews.  If we don’t see something that’s HOLY CRAP THAT’S BAD, we let it slide, even if we don’t really like the solution.  At least, I think that’s how I’m contributing to the problem.  Starting today, though, I’m going to resolve to be a lot more strict in my reviews.  If something doesn’t feel right, even if I can’t put my finger on it immediately, I’m going to spend the time to find a better solution.  The alternative is what we’ve done: start with good code, then progressively keep adding a little bad code here and there until we’ve ended up with a big heaping mess of poorly structured code.


Sep 30 2009

Please don’t listen to Joel Spolsky...

Category: What NOT To Do | Matt @ 00:32

I’ve been debating whether or not I even wanted to touch Joel’s latest words of “wisdom”, but I finally decided that it’s just too damaging to ignore.  The issue has been beaten to death (my favorite, the truth, another take, etc.) and most of the good points have been covered, so I won’t get too in depth.  To sum it up, I have lost all confidence in Joel as a guy to listen to as far as development goes.  Yeah, he’s clearly a smart businessman, but a developer he isn’t.

You know, now that I’ve said that, I’m not even sure how smart of a businessman Joel actually is.  If he were as smart as he thinks he is, he would know that we as an industry have TONS of empirical evidence showing that fixing a bug becomes exponentially more expensive the further you get from the point where it was introduced.  Doing simple things, like, you know, testing your code properly, is a great way to find bugs early so that they can be fixed cheaply.  Throwing code at the wall and certifying it with “WORKED ON MY MACHINE ONCE!” is a great way to add costs downstream.

Joel also seems to still be laboring under the delusion that good software development practices, such as writing tests and applying good design principles, actually slow you down.  That’s simply not true.  If you aren’t experienced or just aren’t willing to take the time to learn how to apply them properly, sure, you’re going to flail around aimlessly.  If you take the time to master them and improve your skills, you will actually produce better code faster than you were before. 

I guess that’s enough of a rant for today.  I’m frustrated for two reasons.  First, I used to look forward to Joel’s infrequent posts.  His posts used to make me think, and I was often able to take something of value away from them.  Today, I can’t remember the last time I read one of his posts and felt like I learned something.  Nearly every time he posts or opens his mouth now, I feel the overwhelming urge to scream.  I truly hope the next generation of developers is smart enough to recognize Joel for what he is: a former developer who has failed to stay current, a former developer who no longer wishes to improve himself, and a businessman, selling bug tracking software, who really shouldn’t be giving advice to software developers.


Apr 17 2009

How to build an MVC application WITH 200% MORE SUCK

Category: What NOT To Do | Matt @ 03:06

Obligatory apology for lack of posting: yeah, life is sucking pretty hard right now.  I have a paper due in two weeks, a 40-minute presentation on deep web crawling to give on Wednesday, a new .NET information retrieval application that has to be complete in about three weeks, a thesis that has to be defended within three weeks, a baby that’s apparently on the way (RIGHT NOW), two journal articles to submit for publication, a new Umbraco site to finish up… my life sucks.

But it doesn’t suck as bad as Kobe.  I am continually baffled by the things that come out of Microsoft.  You have great guys like Scott Gu, Scott Hanselman, etc., all of whom seem to really “get it” and are driving things in a positive direction.  Then, you have things like Kobe.  I haven’t looked at the entire thing, and after reading this, I don’t plan to.  I just don’t understand how the division that saw the light and adopted jQuery is the same division that’s kicking out crap like Kobe.  It. Doesn’t. Make. Sense.

So anyway, if you want to build a terrible application, it sounds like Kobe is a great place to get ideas.  You should also check out Oxite.


Mar 4 2009

What's wrong with this code?

Category: What NOT To Do | Matt @ 04:05

I just learned something new about C#.  You can cast an array of any reference type to an array of objects, like so:

object[] oa = new string[] {"abc", "def", "ghi"};

//Will write 'abc'
Console.WriteLine(oa[0]);

No compiler warnings, no runtime errors, everything is happy.  You can then modify the array, like so:

oa[0] = "xyz";

//Will write 'xyz'
Console.WriteLine(oa[0]);

Again, no errors anywhere, things just work, and you (hopefully) already knew that.

But what about this:

oa[0] = 5;

//Uhh...
Console.WriteLine(oa[0]);

What's going to happen there?  No compiler warnings, but runtime EXPLOSION. I have never encountered an ArrayTypeMismatchException in practice, but there it is. Neat!  Thanks, Channel 9!
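If you want to watch the explosion without crashing anything, here’s a minimal sketch that catches it:

```csharp
using System;

class Program
{
    static void Main()
    {
        object[] oa = new string[] { "abc", "def", "ghi" };

        try
        {
            //The compiler sees object[] and says OK, but the runtime
            //knows the array is really a string[] and refuses the int:
            oa[0] = 5;
        }
        catch (ArrayTypeMismatchException)
        {
            //Will write 'BOOM'
            Console.WriteLine("BOOM");
        }
    }
}
```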


Feb 17 2009

Code that makes you go 'hmmmmmmm'...

Category: What NOT To Do | Matt @ 02:31

Or, in my case, "AAAAAAAAAAAAAAAAAAAHHHHHHHHHHHHHHHHHHH".  Why would someone do this:

DateTime.Parse("1/1/1998")

Just... why?!?  WHY?!?!  KAAAAAAAAAAAAAAAAAHN!!!
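If the date really is a fixed constant, just construct it; and if you absolutely must parse a string, at least pin down the format and culture so the code doesn’t break the day it runs on a server with different regional settings.  A sketch:

```csharp
using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        //Best: no parsing at all
        DateTime d1 = new DateTime(1998, 1, 1);

        //If the value really does arrive as a string, be explicit:
        DateTime d2 = DateTime.ParseExact("1/1/1998", "M/d/yyyy",
                                          CultureInfo.InvariantCulture);

        //Will write 'True'
        Console.WriteLine(d1 == d2);
    }
}
```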

Also, for the record, "Start Date" definitely does NOT equal "StartDate".  This is the problem with using magic strings in code. 

And yes, I'm behind on blogging.  School is killing me.  I seriously spent about 10 hours over the weekend doing one homework assignment, and I get one assignment like that a week in my Information Retrieval class.  :(


Feb 3 2009

It's official: Joel Spolsky and Jeff Atwood do not understand testing (or math, apparently)

Category: What NOT To Do | Matt @ 02:04

I have now listened to Stack Overflow Podcast #39, and things were indeed every bit as bad as they seemed from the transcript that I linked yesterday.

Before I dive into a rant, let me preface this by saying that I have been a fan of Joel on Software since I was an undergrad.  I have always thought that Joel was a smart guy with good advice.  I think a lot of people in this industry still listen to him and put a lot of stock into what he says.  And that’s what makes me angry: here’s a respected guy spouting his mouth off about things he doesn’t understand and giving out terrible advice.  If you’re just an average Joe on the web with a blog, that’s fine, but if you’re Joel Spolsky, it’s irresponsible and dangerous.  Anyway:

Joel and Jeff spend a good portion of the podcast trashing testing, test-driven development, SOLID, and a lot of other things.  Which is all well and good, except that they clearly do not understand what they’re talking about.  Joel in particular strikes me as someone who clearly doesn’t “get it” when it comes to testing and test-driven development.  He describes some contrived example of how hard it would be to unit test a feature of CoPilot and uses that as support for his argument, but again, his ignorance comes shining through.  Joel, if your code is so tightly coupled that testing it requires that much overhead, your design has failed.  If good design principles and test-driven development had been applied, I guarantee you that there would be a clean, simple way to test the compression logic change, and that testing it would have been easier and faster than building the entire app, firing it up, and making manual comparisons to decide whether or not the code works.
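I have no idea what CoPilot’s code actually looks like, so this is purely hypothetical, but if the compression logic sat behind a seam like this, verifying a change to it would be a five-line unit test instead of a manual build-and-click session:

```csharp
using System;

//Hypothetical seam -- I have no idea what CoPilot's internals look like.
public interface ICompressor
{
    byte[] Compress(byte[] input);
    byte[] Decompress(byte[] input);
}

//The implementation under test; the pass-through bodies are placeholders.
public class NullCompressor : ICompressor
{
    public byte[] Compress(byte[] input) { return input; }
    public byte[] Decompress(byte[] input) { return input; }
}

class Program
{
    static void Main()
    {
        //The "test": a round-trip check that never touches the UI
        ICompressor compressor = new NullCompressor();
        byte[] original = new byte[] { 1, 2, 3 };
        byte[] roundTripped = compressor.Decompress(compressor.Compress(original));

        //Will write 'True'
        Console.WriteLine(roundTripped.Length == original.Length);
    }
}
```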

They go on to talk about education, and Joel makes the idiotic statement that Google is the lone example of math being applied in computer science that actually made someone substantial amounts of money.  What the hell?  I have news for you: it’s all math.  Even FogBUGZ, the flagship product of Fog Creek (Joel’s company), makes heavy use of Monte Carlo analysis, WHICH IS MATH.  Computers aren’t magic boxes that you feed pixie dust into and out pops AJAX.  Well, my computer isn’t, at least.  Maybe it’s defective... hrmm...  Oh, and yes, when I think Computer Science, I think Yale.  NOT.  Seriously, Joel, what the heck... did you drink (a lot) before you did this podcast?!?

All in all, I would say that my overall respect for Joel (and Jeff) dropped from an A+ to about a C-.  Those guys are way out of touch with reality.  Fortunately, I think a lot of people saw right through their ignorance, as is demonstrated by the sheer volume of comments here and some of the feedback here.  Seriously guys, buy a book on testing (and math!).  Or, I don’t know, read a blog, or maybe listen to Hanselminutes.

</rant>

On the brighter side, at least not all of my heroes are giving bad advice.  Here's a nugget from Eric Sink on why you should pay attention to what other developers are committing.  It's good advice, and if you're on a small-ish team, something you can easily do while drinking your morning coffee/Red Bull.


Jan 27 2009

The pain and horror of hand-coded data access code

Category: Databases | What NOT To Do | Matt @ 07:51

At my day job, we finally decided to cull out a couple of columns that were no longer needed in our database schema.  We could have left them in, but since they were in a table that typically ends up with several hundred million rows, we thought it *might* actually save us some space in the long run.  Besides, taking out a couple of columns should be easy, right?

WRONG

Let's see what all this change actually entailed.  Obviously we had some code that was still wrapping these properties in our object model, so I had to remove the properties.  Then I had to update their tests (no, this does not mean that Evil Rob was right).  Then I had to update the data access code, and that's when things took a turn for the hilariously painful. 

Our data access code is very, very old-school.  Our Data Access Layer (DAL) is a single, massive repository that provides access to *everything* in the database.  The interface for the DAL is 1,342 lines long!  THE INTERFACE!  When the decision was made five or six years ago to consolidate all the data access code, we failed to realize that the simple schema we started with was likely to balloon, and that every single module in the system was going to want slightly different views of things... fast forward to today, and yeah, our data access interface is 1,342 lines of code and comments.

So, I had to update the interface.  Do you know how well Visual Studio works on files that are 1,342 lines long?  NOT VERY WELL.  Unfortunately, it gets worse: we have *two* classes that implement the DAL interface.  One is a stub that is used in testing.  The stub is 4,153 lines long.  Yeah, it’s a complicated stub.  It tries to mimic a lot of the things that the database does, so that developers working on modules in the system can’t do things that the real DAL won’t allow.  So, I had to update the stub.  Again, how well do you think Visual Studio works with a file of that size?  I’ll give you a hint: there is about a 10-second delay between my fingers hitting the keyboard and a burst of text appearing on my screen.  Side note: some of this may be due to ReSharper, but since using Visual Studio without ReSharper is like rubbing a cheese grater on your brain, I refuse to disable it.

Alright, so I update the stub DAL.  Then, I update its test fixture, which is 4,107 lines long.  /scream

That's out of the way, so now I'm ready to hit the real DAL, an 11,882-line monster.  Again, we did things old-school: nothing goes directly to the tables; everything goes through stored procedures.  Recall that I just removed two columns/properties.  Well, I have to change things in four places in the DAL!  I have to change the code (removing references to the deleted columns/properties, removing all the 100% obsolete methods, etc.).  Then I have to change all the stored procedures, deleting those that are completely obsolete and altering those that just referenced the deleted columns.  Then I can actually delete the columns from the table.  THEN I have to update the unit tests (which was actually the easy part; thank God we write decent unit tests).

Finally, it is done, and I'm ready to commit.

Oh, wait, I'm NOT finished, there's a third class that implements the DAL interface: a web service wrapper class!  See, web services were all the rage five or six years ago, and we had to provide a mechanism for people to hook in to our application and access its data, so we stuck a web service on top of it.  This seemed like a great idea when we had about 5 tables.  It doesn't seem like such a good idea now that we have about 30...

Fortunately, updating the web service was the easy part.  The end result?  My SVN commit is going to be a 15,563 line diff that changes 40 files (some classes were rendered obsolete by the removal of these properties; it's kinda complicated).  All of this so that I could take two columns out of the database. 

The moral of this story is simple: DO NOT HAND-CODE YOUR DATA ACCESS LAYER.  It is error-prone, it is hard to maintain, and it will come back to bite you.  Even if you think "meh, I only have four tables, I don't need an ORM or code generation or anything", DO NOT DO IT.  Use some sort of tool that will make your life easy when it comes time to make changes down the road.
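For comparison, here’s roughly what the mapping looks like in something like Castle ActiveRecord (written from memory, so the attribute details may be off):

```csharp
using Castle.ActiveRecord;

//Approximate Castle ActiveRecord mapping, from memory -- details may be off.
//The point: removing a column means deleting one property here, not
//hand-editing a 15,000-line stack of interfaces, stubs, and sprocs.
[ActiveRecord("Documents")]
public class Document : ActiveRecordBase<Document>
{
    [PrimaryKey]
    public int Id { get; set; }

    [Property]
    public string Title { get; set; }

    //The two doomed columns would just be two more properties here
}
```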

I think I'll try repeating the same process on another project here that uses ActiveRecord, and post my experiences with that.  I suspect that it will be a very different story.


Jan 7 2009

When good architectures go bad...

Category: Best Practices | What NOT To Do | Matt @ 17:44

Today has not been a fun day.  I have spent most of today and a large part of yesterday trying to fix a problem in our system.  The problem seems very simple at first, and indeed we came up with a dozen or so ideas for solutions to the problem.  In the end though, none of the ideas could be implemented given our time constraints.  Why?  How could something that seems so simple be so difficult to fix?  The answer lies in a decision that we made four years ago, a decision that seemed like a great idea at the time: we chose the wrong architecture.

Four years ago, we were designing the "final" version of a massive text and data mining platform after going through three previous major prototype efforts (each consisting of about 4-8 months of development and culminating in a somewhat-working product).  Based on our successes and failures in the previous prototypes, we decided we wanted to try something new.  Instead of writing a bunch of architectural infrastructure, we would jump on the web services bandwagon, make everything a service, and stick it all in IIS.  It seemed so simple at the time... we would go with the pipe & filter pattern, using asynchronous web services as the transportation mechanism.  The feeling was that we didn't really need anything overly reliable or performant.  Our system was expected to take weeks to run, processing hundreds of thousands of documents.  We did the math and thought "yeah, this architecture should handle that fine."  Plus, we really thought that using IIS and web services would reduce the amount of architectural and infrastructure plumbing and management we would have to do.  Everything would be loosely coupled (asynchronous and all), and indeed, it was and is to this very day.  It would be robust in the sense that it could recover from errors, and to a degree, this is true.  If something goes wrong while processing a document, the system will eventually try again.  And since we were using IIS as the host, we wouldn't have to write our own hosting services, and again, we sort of hit the mark.  But there were problems.  Oh man, there are still problems.

Fast forward four years, and I am convinced that our architecture has been more of a hindrance than a help.  Everything is asynchronous and decoupled, but it was a lot of work to get it there.  Did you know you can't send SOAP messages that are 50 MB long by default (at least you couldn't with .NET 1.1)?  We found that out the hard way.  Did you know that the XML serializer, which .NET uses for web services, fails to escape a whole slew of characters?  Again, we found that out the hard way after a lot of painful debugging.  Do you know what happens when you fire a document off to an asynchronous pipeline?  Neither does the process that sent it!  Is it in there?  Did it come out the other side?  Should I resend it?  The only way we could address that was by "guessing" how long it would take the document to make it through, then essentially looking for it on the other end.  Did it come out?  No?  Then resend it!

And that, my friends, is what I have spent the last two days working on.  Let's think about that strategy for a second.  We send a document to a pipeline, wait for some amount of time, then look to see if it has come out the other end.  If not, surely that means something went wrong, and the document died somewhere in the pipeline in a burst of exceptiony goodness.  Right?  WRONG.  The document may very well still be in there.  Someone may have fed it a patent document containing a massive DNA sequence.  One of the processes in the pipeline may be faithfully chugging away, trying to figure out what the various letters in the sequence mean.  But we don't know that.  All we know is that the document never made it out the other end, so we have to assume the worst and send another copy in.  Great.  Now we have a second thread faithfully chugging away on the same DNA sequence.  Again, we wait, then look to see if it has come out the other end.  No?  Send it again!  We now have three copies of the document eating up three threads on a four-core machine.  One more pass like that, and we have effectively clogged the pipeline.  Throw in 8 more copies for good measure, and you can rest assured that the pipeline is now permanently blocked until IIS is reset.  This is the bug I've been trying like crazy to fix for two days: how does our controlling process (which we weren't even supposed to have to create, according to our original architectural grand vision) know what's going on?  There's no good answer.  I thought of a few hacks, but most wouldn't work.  The hack I went with was basically to try to detect documents that *might* contain genetic sequences and ignore them.  In a system that will see hundreds of thousands of documents in a week, I'm pretty confident that things will be filtered that shouldn't have been.
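In hindsight, what the controlling process needed was positive status reporting instead of guess-and-resend.  Something as simple as this hypothetical heartbeat tracker (sketched with collections we didn't have back in .NET 1.1) would have let it tell "still chugging on a DNA sequence" apart from "died in the pipeline":

```csharp
using System;
using System.Collections.Generic;

//Hypothetical sketch: each filter reports progress under a document id,
//so the controller only resends documents that have actually gone silent.
public class PipelineTracker
{
    private readonly Dictionary<Guid, DateTime> _lastHeartbeat =
        new Dictionary<Guid, DateTime>();
    private readonly object _lock = new object();

    //Filters call this periodically while processing a document
    public void Heartbeat(Guid documentId)
    {
        lock (_lock) { _lastHeartbeat[documentId] = DateTime.UtcNow; }
    }

    //Filters call this when a document exits the pipeline
    public void Complete(Guid documentId)
    {
        lock (_lock) { _lastHeartbeat.Remove(documentId); }
    }

    //The controller resends only when a document has gone quiet
    public bool ShouldResend(Guid documentId, TimeSpan silence)
    {
        lock (_lock)
        {
            DateTime last;
            if (!_lastHeartbeat.TryGetValue(documentId, out last))
                return false; //already completed (or never sent)
            return DateTime.UtcNow - last > silence;
        }
    }
}
```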

Anyway, the moral of this rambling post is simply this: architecture, especially in an enterprise application, is critical.  You do not want to get this piece wrong, or whoever takes over for you when you finally go insane from all the hacks you've had to implement to work around the danged architecture will pay for it.  Think through everything: how it will work under normal conditions, how it will work under load, how it will work when under attack, how it will respond to every conceivable error, how flexible it needs to be, how difficult it will be to maintain... do not skimp on this step, or you will be sorry.
