Product Review: NDepend 4

I have to start this off with how I obtained a copy of NDepend Professional, as it was actually a little funny. As I was going through and cleaning out email, I came across a message that looked like it could be spam, but I don’t usually see spam offering licenses for software development tools. Taking a chance and opening it, I found some misspelled words and some odd grammar, but overall everything seemed like it was from Patrick Smacchia through @NDepend support. I responded to the email, and it actually was valid. Sometimes it does pay to follow up on “spam”. Purchasing a license for myself was already on my list for this year, so although I received the license for free (hence the disclosure at the end of the article), it was already on my radar as a worthwhile product.

So being a regular follower of Ayende Rahien and enjoying his reviews of projects, I decided that RavenDB would be the first project I try running through NDepend. Being the bleeding edge kind of developer I like to be, I went out and pulled down the repository for changes through 2/8/2013 on the master branch.

While that was being pulled down, I went out and downloaded NDepend from the website. Considering that there’s a license file for the Professional edition that needs to be saved alongside the executable, I’m not sure I agree with the need to enter a License ID in order to download the program. Anyway, I proceeded with the download and found it was a zipped folder. Apparently the install instructions are basically an XCopy deployment. That’s useful I suppose, but there’s a mention not to put it into a folder under %ProgramFiles%, which says to me it’s built on the notion of always having rights to modify files in its folder structure. Normally I like to deploy things to Program Files as it makes clean up easier, but in this case I’ll throw it into my C:\Utils folder. Like I mentioned, installing the license meant saving the emailed XML license alongside the unzipped executable files. I realize this tool is meant for developers, but a rough install process doesn’t usually bode well for the experience of the actual program. Admittedly this process isn’t so bad, but for a product labeled as version 4 I’d think more thought would be put into an installer, or into a way to install the license while running the application. On the flip side, keeping it as an XCopy deployment makes including it in a code repository nice and simple for creating reports as part of a build.

With everything downloaded and having built RavenDB in regular Release mode (not one of the version specific releases, just plain “Release”), it’s time to kick open Visual NDepend. There’s a bouncing red arrow indicating I can open a Visual Studio project or solution. Okay, that makes it nice and easy.


I click on the link and point it at the RavenDB solution file. That was pretty simple, except I’m presented with a few errors about having multiple, different versions of files:


Considering there are test files and Silverlight versions, I’m not too surprised there are multiple, different files for those assemblies. I guess this approach might not be the best start. Maybe I’ll just start with the RavenDB server. Removing all the assemblies from here, I add everything in the Bin folder of the Raven.Server project. That is more than just the Raven assemblies, but I want to understand what the dependencies on those other libraries are, right? After clicking OK and letting it churn for a little bit, I’m presented with a “What do you want to do now?” type of dialog along with an HTML report:



I think for now I’ll show the Interactive UI Graph. I can definitely tell there’s a lot of information here, but this feels a little overwhelming at first glance.


I’m going to save actually digging into most of the information for next time, but for now, here’s what I’m seeing in the main view:

  • Based on the number of “N/A because one or several PDB file(s) N/A” remarks for things like Lines Of Code and Lines Of Comment, I’m guessing I probably should have only pulled in those assemblies that are actually built by the project.
  • Based on the “N/A because no coverage data specified” remarks I’m guessing I could include the unit test assemblies to show code coverage. Definitely have to explore that further.
  • There seem to be a lot of items that violate the queries/rules. It’s not obvious why some of the rule groups have the orange-ish outline to them and others do not.
  • As the mouse moves over lines and boxes in the graph, a context sensitive help box appears that gives a rough idea of what the item represents. The help box also has some links to dig deeper into understanding the relationships.

There’s definitely a lot to this program, but I think I’ve covered a good first impression of it so far. Not too bad without any actual instructions on how to use it.

Disclosure of Material Connection: I received the product mentioned above for free in the hope that I would mention it on my blog. Regardless, I only recommend products or services I use personally and believe will be good for my readers. I am disclosing this in accordance with the Federal Trade Commission’s 16 CFR, Part 255: “Guides Concerning the Use of Endorsements and Testimonials in Advertising.”


Why do teams make their jobs so hard?

The more developers I meet, the more I realize that developers continue to make their own lives difficult. They simply accept that the (technical) environment they work in is all that they have. They don’t bother learning more advanced things about it, or even trying to customize it in any way. It’s even worse when an entire team chooses to just use the default environment and everybody is left on their own to develop the product however they want. Granted, there is something to be said for just getting something out the door, but if you just open up your IDE and get coding without ever customizing it to how *you* code, you’ll soon be running into headaches.

I may end up making this a series of posts so I can simply hand this out to future team members so they can understand how their life could be vastly improved. The first tool I’ll dig into is StyleCop.

If your team is doing manual code reviews (like every team does, right?), I strongly hope that the reviews don’t cover things like code formatting or naming conventions. Make use of a tool that can enforce those rules for you automatically. This is especially important for a team of developers where everybody has a different preferred programming language. As an example, my current team consists of developers who have backgrounds in VB6, Clojure, Ruby, and Java. That alone is a plethora of patterns for how the code should be formatted. Because everybody’s background drives how they format their code going forward, you can wind up with a lot of source code commits that are purely reformatting so the next developer can understand the code. Why waste the resources when you can have a tool tell the developer they aren’t following the rules of the team prior to even checking in?

There are a lot of rules that some people don’t agree with, but I’ve found good reasons for nearly every single rule that comes with StyleCop. One of the key things to do when using StyleCop is to never change the default settings that are installed on the machine. StyleCop allows you to create a settings file for a folder (and all of its child folders) that overrides its parent. My general approach is that in the root of every project folder I place the Settings.StyleCop file that should apply to every single project.

Now, in that base settings file, I typically change the following options:

  • Options –> uncheck “Cache StyleCop analysis results”. I’ve yet to hit a project that takes a long time to analyze, so there’s no point in caching results.
  • Settings File –> ensure that “Merge with settings file found in parent folders” is the selected option.
  • Rules –> C# –> Detailed Settings –> check “Analyze generated files”. This way if you’re doing some sort of XAML work, any XAML-generated fields will be flagged as part of the naming conventions.
  • Rules –> Documentation Rules –> Detailed Settings –> check “Ignore privates”, “Ignore internals” and uncheck “Include fields”. My main reasoning for turning off this level of documentation is that the classes should be very small and concise to adhere to the Single Responsibility Principle. If the class is so massive that you need to document your fields and all the private members, then you really should be having some lengthy conversations in your code review.
  • Rules –> Documentation Rules –> Element Documentation –> Check SA1609, SA1610, SA1628, SA1629, and SA1630. This way you have everything documented (a) for Intellisense, and (b) so anybody can look at the (hopefully up-to-date) documentation and know what to expect when working with the item.
    • Depending on the team/project, I uncheck SA1600, SA1601, and SA1602. This way StyleCop doesn’t complain about having to document every public or protected element, but when there’s some level of documentation it must meet the criteria.
  • Rules –> Documentation Rules –> File Headers –> Uncheck SA1633, and check SA1639. I don’t believe every file needs to have documentation about the file. The rest of the documentation should describe all of that pretty clearly. But again, if there is any level of documentation in the file header it needs to conform.
  • Rules –> Readability Rules –> Regions –> Check SA1124. For the love of a higher deity, please stop using regions. If you need regions, your code is tackling too much.
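As a concrete sketch of what that base settings file can look like, here is a minimal Settings.StyleCop that disables the analysis cache and the SA1633 file-header requirement, written out with a shell heredoc. The element and rule names (WriteCache, FileMustHaveHeader) are my assumptions based on StyleCop 4.x settings files; in practice the safest way to produce this file is the StyleCop settings editor itself:

```shell
# Sketch: write a minimal Settings.StyleCop override at the root of the
# source tree. Element/rule names assumed from StyleCop 4.x-era files --
# verify them through the StyleCop settings editor.
cat > Settings.StyleCop <<'EOF'
<StyleCopSettings Version="105">
  <GlobalSettings>
    <!-- Options -> "Cache StyleCop analysis results" unchecked -->
    <BooleanProperty Name="WriteCache">False</BooleanProperty>
  </GlobalSettings>
  <Analyzers>
    <Analyzer AnalyzerId="StyleCop.CSharp.DocumentationRules">
      <Rules>
        <!-- SA1633: don't require a documentation header in every file -->
        <Rule Name="FileMustHaveHeader">
          <RuleSettings>
            <BooleanProperty Name="Enabled">False</BooleanProperty>
          </RuleSettings>
        </Rule>
      </Rules>
    </Analyzer>
  </Analyzers>
</StyleCopSettings>
EOF
```

Because child folders merge with this file, a sub-project can still tighten or loosen individual rules without touching the machine-wide defaults.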

That’s quite a bit to configure, but it seriously makes your code at least consistent. Not only for yourself, but for your team.

Side Note: For those that use StyleCop, you’ll notice that this means SA1101: PrefixLocalCallsWithThis stays enabled. Yes, this is intentional, because it does improve readability when I’m looking at the code outside of an IDE. Is the line BuildDetails.Name an instance property digging into the Name property, a static property digging in, a static class, or what? By saying this.BuildDetails.Name it is now much more obvious where I need to look for that code.


Is There A Good Way To Send A Mass Email?

This is going to be more about the options that are out there rather than true technical details.  I’ve done all of these in the past, but never really put thought into how the receiver sees these types of emails until lately.

I’m pretty open with handing out my email when a company wants to get it on a form so they “can keep in touch.”  Since GMail is my primary client, its capabilities let me deal with the bulk of the incoming junk messages pretty efficiently without a lot of hassle.  Also, their spam protection has been pretty top notch from what I (haven’t) seen.  All that being said, the general filtering capabilities are the same as any other client: when messages arrive, perform specific tasks on them.  Common filters that people tend to have include “Sent only to me”, “I’m in the TO:”, “I’m in the CC:”, “Sent from somebody/some domain”, “Does it have an attachment”, etc.  When it comes to sending mass emails there are a few different options, and each has an impact on how the receiver’s email client deals with it.

All email addresses in the TO:/CC: field

Most people (and especially companies) shy away from sending a mass email with everybody in the TO: or CC: field these days*.  Why?  What if I’m a rival company to you and I put my email address on your mailing list.  Now you send me an email, along with the rest of your huge list.  I can now look over that entire list of email addresses, add them to my own mailing list, and start soliciting to them.  Or I could take that list and sell it off to other companies that will spam you.  Granted, any company that uses this practice is most likely winding up in a spam folder anyways, but it does mean others now have your email address when you didn’t intend for them to get it.

As a business this is the worst way to mass email your clients.  Yes, the recipient’s email client can process the message based on its rules with no problem, but they may have a rule similar to “If I’m not the only recipient, mark the message as low priority and move it to this folder I only read during a blue moon on the fifth Tuesday of the month.”

* Although I’m sure we still have that parent/sibling/grandparent/friend/third-cousin-twice-removed that still sends those absolutely funny photos.  Just as annoying, but not the types of mass emails I’m talking about.

All email addresses in the BCC: field

Well, if we don’t want one recipient to see everybody, let’s just put the entire list in the BCC field since that isn’t sent to the recipients.  Problem solved, right? I suppose from the business’s point of view, it does take care of the concern about people stealing the list of recipients.  But it raises a couple more issues.  First, if you don’t have anybody in the TO: field, a lot of spam filters will deem the email much more likely to be spam.  There’s a number of other criteria that will determine if it is flagged as spam, but not having one email address in the TO: field doesn’t help.  The other issue is that it essentially gives a big middle finger to the recipient if they make use of rules to filter the deluge of emails.  Not having their email address show up in any of the normal fields means that the recipient needs to filter on the other criteria like who sent it.  Why is this an issue? Here’s an example:

Running my own business and domain I am going to receive a ton of email that I actually want to receive (current client emails, newsletters, convention information, partner programs, etc.), but I’ll also receive emails that I should be aware of but didn’t necessarily think about.  Things like emails to the ‘webmaster’ or ‘admin’ of my site are sent to my inbox because of a catch-all email account.  To quickly spot these types of emails it’s nice to be able to have a filter that has rules along the lines of “If <my email address> does not appear in the TO: or CC: field, flag the email for manual investigation”.  Since no email address shows up in the email this filter now catches all those BCC’d emails and flags them even though it’s a regular, legitimate email.

Or another example: One of the trainers I communicate with at my gym regularly sends out newsletter-like emails, but about 1/5th of the time he also sends out emails directly to me.  The problem is that he tends to use BCC in both cases.  When trying to prioritize my inbox, I would like to quickly skip the newsletters but actually know what’s going on with the other emails.  Since the BCC hides any indication of whether I was the only recipient or part of a mass mailing, trying to create a filter is nigh impossible.  That leaves wasting the time to look at each email.  Granted, I know the business value I’m getting by keeping in contact with him, so I tolerate it, but for most businesses I look at it as a sign of not really putting any thought into how their customers perceive them.

Mail merge each address individually

This option seems to have the best of both worlds, but it presents a different issue.  When you perform a mail merge, you’re sending out nearly identical emails to lots of people.  There are a number of spam filters that will start flagging these emails as spam because a large number of extremely similar emails are coming through in such a short amount of time.  Larger companies that send out newsletters and such have realized this and send out batches.  You have likely noticed this if you subscribe to coupon or deal based newsletters that generally arrive in your inbox at the same time every day, but vary by a few hours.

The other issue with performing mail merges is that the normal consumer tools (Microsoft Word and Outlook) are very limited in the customization of the mail merge.  For example, the subject line cannot be merged.  You also cannot attach files as part of the merge.  To have better dynamic content (say, optional paragraphs), it has to be part of the merge data rather than flags controlling whether to add it or not.  Once the merge is completed it sends out every email at the same time rather than batching them automatically.

Sure there are some open source projects out there for advanced mail merges, but it takes that extra effort to get that type of functionality.  Most people will only need these tools once or twice, and thus never think about using them regularly.  If the usual tools had some of these features built in the experience would be much better for both the sender and the recipient.
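To make the batching idea concrete, here is a rough sketch of a merge-and-send loop that pauses between batches.  Everything in it is hypothetical: the addresses.txt format, the batch size, and send_one, which stands in for whatever actually delivers the message (a sendmail call, an SMTP API, etc.) and here just prints the merged text:

```shell
# Hypothetical batched mail merge; send_one is a placeholder for real delivery.
# Sample recipient list: one "email<TAB>name" pair per line.
printf 'alice@example.com\tAlice\nbob@example.com\tBob\n' > addresses.txt

send_one() {
  # Merge the fields into a message; a real sender would hand this to an MTA.
  printf 'To: %s\nSubject: Hello, %s\n\n' "$1" "$2"
}

batch_size=50
count=0
tab="$(printf '\t')"
while IFS="$tab" read -r addr name; do
  send_one "$addr" "$name"
  count=$((count + 1))
  # Pause between batches so spam filters don't see one burst of
  # near-identical messages arriving at the same instant.
  if [ $((count % batch_size)) -eq 0 ]; then
    sleep 60
  fi
done < addresses.txt
```

The per-recipient merge also means each person sees their own address in the TO: field, which sidesteps both the BCC and the shared-list problems above.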

Which option to go with?

So which method should people go with? It really depends on what you’re looking to achieve.  As an end user, I want to see my address in either the TO: or CC: field somewhere.  If I’m BCC’d on an email, I’m likely going to mark it as spam and not even bother reading it. The Mail Merge option would allow for a single individual to appear in one of those fields, but of course it has the limitations I mentioned above.

I know I tend to be an extreme power user for some things, but what other gripes do people have when it comes to handling and filtering email?


That Conference: Git More Done

The final session I attended was presented by Keith Dahlby.

  • Config
    • git help config
    • git config -l
    • You have the system level configs, user configs, and repository level configs
    • git config -e --global (will open up the global config in the default text editor)
    • configs are pretty similar to INI files
    • core.editor = <path>
    • diff.renames = false|true|copies
    • difftool.prompt = true|false
    • mergetool.prompt = true|false
    • mergetool.keepBackup = true|false
    • help.autocorrect = 0|N|-1
    • log.date = default|relative|local|iso|rfc|short
    • aliasing
  • Named Commits
    • SHA1
      • or the unique initial substring (6 often sufficient)
    • Symbolic references:
      • branch (moves with commits)
      • Tag (remains static)
    • remote references
    • Head = what’s checked out
      • reference to a branch
      • Arbitrary commit (detached HEAD)
        • on commit, no branch to update
    • ORIG_HEAD = “undo” for big HEAD changes
      • Saved before reset, merge, pull, etc.
    • FETCH_HEAD = last fetched from remote
      • git fetch <url> <branch>
    • MERGE_HEAD = incoming merge heads
    • Relative naming of commits
      • suffixes
        • ~ = parent; ~n = nth-generation grandparent
        • ^=first parent; ^n = nth merge parent
        • 3 commits ago = HEAD~3 = HEAD^^^
        • @{n} = nth prior value for that ref
        • Undo last commit = git reset HEAD@{1}
      • git help revisions
  • Local workflow
    • Ridiculously cheap – use liberally
      • Write SHA to file in refs/heads
    • Branches to clean up
      • git branch --merged
    • Keep Master clean!
    • New topic branch from master
      • git checkout -b topic master
    • Commit more than feels natural
    • git stash
      • Stash away work in progress
      • Typical workflow
        • git stash save “comment…”
        • switch to fix and then switch back
        • git stash pop
      • git stash save --include-untracked
      • git stash list
      • git stash show -p stash@{1}
      • git stash drop stash@{1}
      • git stash branch
    • Temporary Commits
      • Advantage: WIP lives in branch
      • commit, then git reset HEAD~ to go back
    • Reset path in index to match commit:
      • git reset <commit> -- <paths>…
    • Unstage:
      • git reset HEAD -- staged-file.txt
    • Reset HEAD reference to commit:
      • git reset [--<mode>] <commit>
    • Discard previous commit:
      • git reset HEAD~
    • soft reset = move HEAD; don’t reset index/work tree. Essentially an uncommit
    • mixed mode = reset index but not work tree (default). Unstage
    • hard mode = reset index and work tree, discarding changes (ignores untracked files); VERY DESTRUCTIVE!!!
    • git add --patch allows for overlapping changes that should be in different commits
      • reformatting
      • refactoring
      • changing functionality
      • Allow for “hunks” (sections of diff) to stage
      • Key operations: y/n, a (all)/d (delete), s (split), e (edit)
      • Also can be done on reset, checkout, and stash save
    • Rewriting History
      • Permanent when pushed
        • Until then, pretend you were perfect
      • git commit --amend (easiest way to change the last commit)
      • alias.cia = commit --amend -C HEAD
        • git cia -a --reset-author
      • git cherry-pick
        • apply changeset(s) elsewhere
          • like to the wrong branch
      • merge vs rebase
        • think of it like shuffling a deck of cards (merging) vs cutting a deck (rebasing)
        • rebasing is a replay of commits
        • rebase --interactive allows for replay with modifications
        • rebase --autosquash … use “fixup! <message>” and git will automatically combine a commit with one from a while ago
        • config rebase.autosquash
  • Fixing “oops”
    • git reflog
      • ls -R .git/logs
        • HEAD, heads, remotes
      • git reflog --all
  • Finding bugs
    • git bisect
      • binary search through a commit space
      • git bisect visualize = view = overview in gitk/log
      • git bisect skip (current version can’t be tested)
      • git bisect run my_script = automated bisect
  • http://github.com/dahlbyk/Presentations
  • http://github.com/dahlbyk/posh-git
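To tie the local-workflow notes together, here is a minimal runnable sketch in a throwaway repository: topic branch off the mainline, stash the work in progress, restore it, then make a temporary commit and “uncommit” it with a mixed reset. One assumption worth flagging: newer versions of git spell the talk’s `git stash save "…"` as `git stash push -m "…"`, which is what’s used here.

```shell
# Throwaway repository to exercise the local-workflow notes above.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo v1 > file.txt
git add file.txt
git commit -qm "initial commit"

# New topic branch; keep the mainline clean
git checkout -q -b topic

# Work in progress gets stashed away, then restored
echo wip >> file.txt
git stash push -m "wip on file"   # newer spelling of: git stash save "..."
git stash list
git stash pop
grep -q wip file.txt

# Temporary commit, then a mixed reset to "uncommit" it:
# HEAD moves back one commit, but the work tree keeps the change
git commit -qam "temp: wip"
git reset -q HEAD~
grep -q wip file.txt
```

After the final reset the change is back in the work tree as an unstaged edit, which is exactly the “WIP lives in a branch” temporary-commit trick from the session.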

That Conference: Design for Software: A UX Playbook for Developers

This session was done by Erik Klimczak. I got into it late, and it was standing room only, so I had to jot notes down on my phone. Hopefully these still make sense and there’s not too many typos/autocorrects.

  • James Webb Young quote about ideas
  • Sketch out ideas
  • Sketch early & often to identify edge cases quickly
  • Define patterns, be consistent, reduce steps
  • Storyboarding as a form of sketching
  • Drawing app in context of environment will help define usage
  • Tools: sharpie, moleskine, non-photo pencil, neutral gray pen
  • Wireframing to identify interactions, pattern layout, and get stakeholder buy-in early
  • Good wireframe: content, interaction, layout, hierarchy, functionality
  • Prototyping: whole point is to be wrong
  • Paper prototyping gives a sense of relative size. Think about things like touch screens and apps.
  • Prototyping is about faking it. Be clever, not complicated
  • Colors: start with solids and add gradients if necessary
  • Be aware of the psychology of colors
  • Use tints of color to add more colors to your palette of 2-3 colors
  • Shades of gray for the chrome of the app
  • Typography: if done right, you don't see it.
  • For print use a serif body font with sans-serif headers. Generally the opposite for digital
  • Pick a typography and stick with it.
  • Rhythm & Scale calculator in Chrome app store
  • Pick a single family with lots of variations to give variety.
  • Visual Communication: telling the most with the least amount on a screen at a time.
  • Basically logically organizing UI.
  • If things look the same or are grouped together, the user expects them to behave the same.
  • Consistent margins, spacing, type size...
  • Reducing colors, and redundancy will greatly help.
  • Grids actually help break up a UI to help with consistent layout
  • Motion: easing should be between 250-350 milliseconds to always feel snappy
  • Animations don't always have to be complete for a user's mind to fill in the blanks.
  • Use motion for long running operations
  • Artefact Animator for C#/XAML
  • Interaction design: user flow is when the interface gets out of the way so the user feels productive
  • Avoid modal states & pop ups
  • Use color cues (red X)
  • Don't block UI
  • Read About Face 3
  • Solicit feedback: if the user doesn't know they can do something or doesn't feel confident about an action, they won't perform the action
  • Mouse adorners, touch/click states, audio, inline feedback (invalid input, password strength), wizards, refresh animations for long operations all are great forms of feedback
  • UX patterns: don't reinvent the wheel
  • Most every pattern you'd need is out there
  • He has a forthcoming book: Design For Software

That Conference: Erlang Goes to Camp!

This session was presented by Bryan Hunter. I figured it’d be interesting to learn more about functional programming in general and how it can interact with .NET more.

  • It’s a functional language
  • Grew out of Prolog
  • Erlang is great at pattern matching
  • OTP (Open Telecom Platform) has libraries, practices, and styles
  • ERTS (Erlang Run-Time System) is more like an OS rather than a VM running on an OS
  • Erlang is open, proven and cross-platform. It simplifies writing reliable, concurrent, distributed systems. Its pattern matching is FTW.
  • On Github
  • Compiled on any platform will run on any other platform
  • Erlang community is more in the Mac & Linux world, but runs great on Windows
  • Erlang supports concurrency very well; especially compared to other languages
  • Concurrency is hard because of sharing state.
  • .NET 4.0 thread allocates 1MB…Erlang Process (Thread) allocates 1KB
  • Distribution was built-in from the start because every process is isolated and everything is done via message passing
  • One Erlang process can essentially watch another process, even on another machine
  • It’s possible to have a true cluster in that there is no master process/node
  • Editing erlang
    • The shell
    • Text editor + command line + shell
  • Every Erlang statement is terminated by a period
  • Every variable is upper case
  • Every “atom” is lower case (like function or key word)
  • Lists are defined with []; like L=[100, 200, 5000].
  • Tuples are {}, like Person = {100, “Joe”}.
  • When a node pings another node, it essentially gossips the known nodes to the new one. As soon as nodes know about each other (and the cookies match), there’s full trust to do anything.


That Conference: Testing Code From The Pit Of Despair

This session was presented by Phil Japikse.

  • http://www.telerik.com/zerotoagile for a podcast
  • Legacy Code == any code not under test, has a high technical debt, is icky
  • If the code is working and nobody asks you to change it, why change it?
  • Why change code
    • Killing bugs
    • Adding Features
    • Optimize Resources
    • Removing FUD
  • Make one change at a time
    • Fix or add just one thing
    • Test after every change
    • Commit after every passing test
  • Tools of the trade
    • TDD
      • Building up of the Lego blocks of code
      • Inside-out building
    • BDD
      • Don’t start with individual pieces to build the whole
      • Outside-in building
    • Mocking
      • Fakes, Stubs, and Mocks
      • Fakes have working implementations. They’re Hardcoded, not suitable for production
      • Stubs provide canned answers
      • Mocks are pre-programmed with expectations. Essentially creating a specification. They also record behavior
    • Dependency Inversion
    • UI Testing
      • Typically Manual
      • Often tested by customers
      • Tools becoming better
  • Testing and Refactoring
    • Be safe; if you don’t touch the code it won’t break
    • Safety Blanket: Integration tests
    • Code Isolation and Refactoring
      • find a change point
        • Finding the point that breaks
      • create a seam
        • the weakest link of the system/class/method
        • use Dependency Injection
        • use commercial mocking
      • add a new test
        • Use test-eventually style
      • refactor
        • Extract Methods
        • Reduce Complexity
        • DRY
        • SOLID
      • Make sure you’re actually isolated
        • Continue to drill down
        • If code is self documenting, we don’t need as many code comments
      • Fix the target issue
        • Code is isolated, tests are in place, switch to T/BDD
        • It’s turtles all the way down
  • Test/Behavior Driven Development
    • Write a failing test
    • Make the test pass
    • Refactor: Remove magic code; add use cases

That Conference: Managing the .NET Compiler

This session was presented by Jason Bock

  • Talking about Project Roslyn
  • The compiler (csc or vbc) is simply a black box
  • There are 44 switches you can throw at csc!
  • Microsoft wants to stop all the re-implementations of the parsers, thus what Roslyn is meant to be
  • CTP == Could Trash PC
  • Only support for C# & VB initially
  • Can be used for things similar to StyleCop like enforcing consistency.
  • Can be used for a deep understanding of Code
    • Like a WCF Operation Contract that is one way, but is returning a value
    • Can provide quick fixes based on the analysis
  • No time schedule for release

That Conference: Truth and Myth in Software Development

This session was presented by Leon Gershing (aka Ruby Buddha). The subtitle is “Truth, Myth and Reality.”

  • Git Immersion and Ruby Koans
  • “Do as I say or…”
    • You’re inferior
    • Your priorities are off
    • You can’t be successful
    • You are worthless
    • You’re not a developer
  • A different way of looking at the world is actually a good thing, stop being negative about it
  • Don’t be so dogmatic about something, it just takes practice.
  • Testing, like everything else, is a tool to help you get better at what you do
  • It’s okay to listen to dogma, but don’t just accept it at face value. Always evaluate the information.
  • Everyone must find their own path; their own subjective reality
  • Myth: that which preceded you is invalid.
    • Reality: waterfall is dead. Long live Agile!
    • Agile is Dead…in the same way that punk is dead
  • Agile manifesto was written by 10 hippies…
  • Perception is Reality
  • Where are you heading?
    • Only you can decide where you want to go
    • No one can tell you the answer except yourself
    • Changing your mind about your current path doesn’t bind you to that path, it changes the direction of the path
  • Find forced mentors
    • people that inspire you on any level
    • Have lots of them
    • remember that celebrity is just being popular; it doesn’t mean you want to be just like them
    • Seek out wisdom from all places; from people and places of all ages
  • Wisdom, like mustaches and mullets, can’t be given. It must be earned.
  • read the poem “The Perfect High” by Shel Silverstein
  • Pro-tips:
    • Read more code than you write, not books or anything else, just read code
    • Write code, and then write it again, then delete it, then write it again, then delete it, then write it again, then…
    • Play with others. There is a big difference between knowledge and wisdom.
    • Trust your instincts.
    • Think AND Feel. It lets you understand everything and have some empathy
    • Enjoy now.
    • Don’t rely on your own excuses (I don’t have the time).

That Conference: Vim–An Introduction for Visual Studio Developers

Presented by Michael Eaton. I attended this session mainly because I’ve realized how much I rely on the mouse lately and how much it slows me down. I enjoy working on my Lenovo ThinkPad because it has the little mouse stub in the keyboard so my hands never have to travel far from the keyboard, but it still takes time just to navigate around using the cursor. Not only is that slow, but it’s been killing my index finger to fly across a screen.

  • Everything comes down to “My <insert topic here> is better than yours!”. It really just comes down to “Use what you want to use”
  • Vim is definitely far better for touch typists than for hunt-and-peck typists.
  • Vim is designed to be really close to the home row.
  • Vim was written by Bram Moolenaar and others.
  • vim/gvim/vsVim/viEmu
  • Command mode, insert mode, visual mode, execute mode
  • [i] to go into Insert Mode
  • [Esc] to go back into Command Mode
  • In Command Mode, [Shift]+[A] goes to the end of the line and switches into Insert Mode
  • Basically look for a Vim tutorial
  • vim is a pure command line editor
  • gvim is the graphical vim editor so it’s slightly friendlier GUI version of vim
  • http://github.com/alanstevens/KickassVim
  • http://vim.org
  • http://pragprog.com/book/dnvim/practical-vim


That Conference: Designers? We Don’t Need No Stinkin’ Designers!

This session was presented by Jon von Gillern. He admits much of the presentation is based on Mark Miller’s presentations on good UX.

  • Wrote Nitriq and Atomiq
  • Why focus on UI?
    • Happy customers == more money
    • Good UI is taken for granted, bad UI is easier to notice
    • Poor UI adds up
  • How can we measure UI?
    • Keystrokes
    • Mouse Travel distance
    • Gaze Shift (eye travel distance)
    • Recall Rules of interaction
    • Find signal among noise (don’t make the user think about what they’re looking for)
    • Variance from user mental model (does the UI behave how they expect it to?)
    • Mental Time = Total Time – Physical Time
  • Mental Costs – things that prevent users from getting into the “flow” of being productive
    • Loss of focus
  • Good UI is:
    • Clear (in purpose)
    • Concise
    • Consistent
  • Tools for UI
    • Contrast
      • visual difference between two elements
      • “Visual Weight should match information relevance” – Mark Miller
      • Eyes are attracted to greatest contrast (similar to why young kid’s shows are generally primary colors because they are so different)
      • Nitriq example: Actions have more contrast, Information has less contrast
      • Google “WCAG 2.0”
        • Relative Luminance
        • Contrast Ratio
      • Black Text vs White Text article
        • Green, Red, Blue for the order that the eye can tell differences in shades.
      • Testing Tools
    • Color
      • Can represent differences
      • Lots of color == Lots of noise
      • Work with a limited color palette
      • Hue, Saturation, Lightness (HSL) vs Hue, Saturation, Value (HSV)
      • Pick a hue for a color, and add others by changing the saturation & value
      • Kuler
      • Read the article on Wikipedia
      • ~5% of humans have some form of color deficiency
      • Use Red & Blue, not Red & Green to account for color blindness
    • Size
      • Easier to find large items
      • Important information should be larger
      • Relationship to precision
        • Common actions should be larger
      • Test on the worst device to find out if the sizes still work.
    • Ordering
      • You can scan up & down faster than left-to-right
    • Motion
      • Effective for helping guide gaze
      • great for entrance & exits
      • use acceleration/deceleration (easing)
      • Fade in with size at the same time to affect flow documents
    • Shape
      • Icons are a kind of shape
      • Depend on context (flow chart, etc)
    • Fonts
      • Sans-serif: quick to read/recognize
      • Serif
        • better for long chunks of text
        • Character by character legibility (“1” vs “l” vs “I”)
      • Dyslexia
        • Kerning, Symmetry
        • Comic Sans actually helps with this…or just buy a specialized font
    • Parallel vs Serial
      • Don’t display information in serial if it can be done in parallel
        • Modal Dialogs
        • Combo Boxes when seeing all options
    • Shadow/Glow
      • Can help differentiate UI that “sits on top” of other UI
      • Glow can help diffuse similar colors that sit on top of other items
    • Gradient
      • Transferring gaze
    • Proximity
      • Similar data should be close
      • Similar actions should be close
      • Vice Versa
      • Bad proximity causes gaze shifting
    • Interaction Patterns
      • http://quince.infragistics.com
      • Remember Parallel vs Serial
      • Just because it’s a pattern doesn’t mean it’s good
      • Immediate, continuous feedback
        • Progress Indicators
        • Preview Hinting
    • Know your users (who, how many, how often)
      • Discoverability
      • How often specific features are used
      • gather telemetry
      • Don’t be afraid to steal ideas
    • UI Checklist:
      • What are the 3 most important screens
      • What are the 3 most important pieces of data
      • What are the most important actions
      • Are you using the same terms as your users
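For reference, the relative luminance and contrast ratio the session pointed at via “WCAG 2.0” can be computed in a few lines. This is my own sketch of the spec’s math, not code from the talk:

```python
def _linearize(channel):
    # Convert an sRGB channel (0-255) to its linear-light value per WCAG 2.0.
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r, g, b):
    # The weights mirror eye sensitivity: green dominates, then red, then blue.
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(rgb1, rgb2):
    # WCAG contrast ratio: (L1 + 0.05) / (L2 + 0.05), lighter luminance on top.
    l1, l2 = relative_luminance(*rgb1), relative_luminance(*rgb2)
    if l1 < l2:
        l1, l2 = l2, l1
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background hits the maximum ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
```

WCAG 2.0 considers 4.5:1 the minimum ratio for normal body text, which makes it easy to sanity-check a palette programmatically.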
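The pick-one-hue palette advice above can be sketched with Python’s standard colorsys module; the specific hue and the saturation/value steps below are illustrative assumptions, not values from the talk:

```python
import colorsys

def palette_from_hue(hue, steps=4):
    # Hold one hue constant and derive related colors by varying only
    # saturation and value; returns RGB tuples scaled to 0-255.
    swatches = []
    for i in range(steps):
        saturation = 1.0 - i * 0.2  # progressively more washed out
        value = 0.6 + i * 0.1       # progressively brighter
        r, g, b = colorsys.hsv_to_rgb(hue, saturation, value)
        swatches.append(tuple(round(c * 255) for c in (r, g, b)))
    return swatches

# Four related swatches built from a single blue-ish hue (0.6).
print(palette_from_hue(0.6))
```

Because every swatch shares the same hue, the result stays a limited palette with low noise, which is exactly the point of the advice.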

That Conference: Roughing It: .NET development from the command line

This session is presented by Jon Polfer.

  • In the beginning was the Command Line
  • Programming is about using words; why not use them as the primary way to interact with the computer?
  • Vi/Vim for Unix; VsVim for Vi bindings in Visual Studio; Vi-Emu plays nicer with ReSharper
  • Graphical Vi-Vim Cheat Sheet
  • Using MSBuild
  • CScope
    • Have to create a cscope.files file to tell CScope what files/paths to look into
    • cscope -b -q for building the index only and quietly
  • http://db.tt/PQ2HWqjk for all the code, examples, & slides
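Maintaining that cscope.files list by hand gets old quickly, so it can be generated. This is my own sketch, not from the session; the helper name and the extension list are assumptions:

```python
import os

def write_cscope_files(root=".", extensions=(".c", ".h"), out="cscope.files"):
    # Walk the source tree and record every matching file, one path per
    # line, which is the format cscope reads its file list from.
    with open(out, "w") as listing:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if name.endswith(extensions):
                    listing.write(os.path.join(dirpath, name) + "\n")
```

Run write_cscope_files() from the repository root, then build the index only and quietly with cscope -b -q as noted above.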

That Conference: How to Rapid Prototype Like A Boss

The next several postings are likely to be brain dumps from sessions I attend at That Conference, so they may not make a ton of sense to those who didn’t attend. But then again, why didn’t you go? It’s being held at a water park in Wisconsin Dells in the summer, so you can bring your family along with you.

This session was presented by Matthew Carver.

  • Author of forthcoming book “The Responsive Web”
  • Wireframes
    • Can be truly rough hand drawings or digital layouts that are still cheap to produce
    • Until a line of code is written, everything is just theory
    • Wireframes can become a “Comp”: Comprehensive Layout
  • Comps
    • Comps are simply a snapshot of the UI, but they lack the full interactions and responsiveness
    • They’re too rigid
  • Why Prototype
    • Fill in the gaps
    • Can anticipate actual scope of the project
    • Lets site/content managers start testing out ideas before any actual code is written
  • Tools
    • Foundation 3 by Zurb
      • Uses SASS
    • Bootstrap by Twitter
      • Uses LESS
      • Not built specifically for prototyping
    • HAML
      • Specifically for Rails, but makes for quick view development
    • SASS
      • A CSS3 extension
  • Basics of Prototyping
    • Reach conclusions from the wireframing
    • Placeholder images: placehold.it/300x300 == a 300 x 300 image
    • Foundation has a number of built-in button styles, which gets more into art direction than just prototyping
    • Visibility classes for different browsers/sizes


Well I certainly feel accomplished

Another year drawing to a close and looking back I certainly have done quite a bit over the past 12 months.  The one I’m currently proudest of:

Introduction To Artificial Intelligence - Statement of Accomplishment for Steven Evans

For those that didn’t know, Stanford University tried something slightly new in online education.  They offered a few classes completely free to anyone, and each had two tracks: a Basic one where you only had to watch the video lectures and answer quiz questions after each video, and an Advanced one where you also had to do some homework each week.  I only took the Basic track, but that doesn’t mean it was easy.  There were Q&A discussion boards set up for each individual video, so if you got completely stuck you had a large population of people going through the material at the same time as you to help explain it.

The experience was quite interesting in that rather than having an hour (or three) long lecture video, the instructors broke the lectures into videos 2 to 8 minutes long out on YouTube.  That way, if you struggled to understand a concept, you could easily re-watch the specific video.  This was quite handy on several occasions since most of the class involved digging up my knowledge of probability math.

Overall though the class was quite an experience.  Keep an eye out for future offerings if you’re interested in learning an interesting university-level topic for free.


Are auto-updating applications really that useful?

Being incessantly nagged gently reminded by over a dozen applications (and some of their plug-ins) in the past week that there are updates available, I really have to question whether all these auto-updating (or even just auto-checking for updates) applications are actually worth the hassle.  Sure, when you’re developing an application you’re thinking “My users are never going to bother checking for bug fixes on their own, we’ll have the application call home and check for updates itself.”  That’s all well and good until you realize that you are not the only one in the world developing an application, and you’re not the only developer with that thought.  When a single application notifies you that it can be updated, that’s no big deal.  When you have a dozen of them over the course of a week that need to be updated, that becomes a major pain.  Especially for those users that still have UAC turned on.  Granted, I seem to be in the very obscure minority of users who keep UAC on, but it’s still an unnecessary step to require admin rights when the installer never writes to a protected file/folder/registry setting/etc. and really should never need them.

I think the biggest issue with installers seems to be that those that list the application in Add/Remove Programs Programs and Features are actually creating a registry value in HKEY_LOCAL_MACHINE.  If only a purely user-based installer existed.  Or better yet, if Programs and Features listed user-based installs as well as machine-based installs.  I think ClickOnce can support user-based installs, but not everybody wants to use ClickOnce, not even in the .NET environment where it’s part of the toolset.  Most teams seem to turn to easily scriptable options like NSIS or WiX.  In all honesty I haven’t looked at ClickOnce since it first came out with .NET 2.0/VS2005.  I could only find one situation where it actually fit my needs, but outside of that I constantly hit limitations and pain points with it.  Perhaps those have been fixed by now, but I haven’t had a need to write an installer in recent projects.  Outside of the .NET world (especially in open source applications that build installers), NSIS seems to be pretty common.  Ed. note: usage of tools is purely subjective conjecture based on looking over code bases, noticing the standard UI rendered by the tools, and some of the fingerprinting the tools leave in the installer artifacts.  I may be missing other common installer creators (InstallShield is out there, but is quite expensive) or grouping them together incorrectly.  A problem with this is that different installer types are created.  MSIs are recognized as installers and thus require admin rights in order to run.  NSIS outputs executable installers, and depending on the flags in the script, the admin rights prompt may not occur until the installer first tries to write to a protected area.  I’ve also seen this hang up the installer or cause it to outright cancel the install.  Talk about a bad user experience.

I’m not really sure what point I’m looking to make with this post, but it’s a rant I wanted to get off my mind.


Don’t use ‘entities’ as the namespace for an Entity Framework model

In a .NET 3.5 project I created an Entity Framework model and wanted to consolidate the generated classes into a namespace that seemed appropriate to categorize them.  So I chose the namespace ‘entities’.  Apparently that’s an unwritten no-no and I’m the only one who’s run into it (or at least the only one with no shame in admitting I ran into this error).  The problem is that it causes an “UpdateModelFromDatabaseException” error when trying to update the model from the database (like you couldn’t guess that from the exception name…).  Outside of that issue everything seemed to be working correctly with the model.  All database calls were working, no compile errors, no odd behaviors in the classes themselves.  It just wouldn’t update the model.  So when googling didn’t turn up any answers, I turned to StackOverflow.  Like a number of my questions, it wound up not getting any answers (and very few views at that).  Thus it came time to start experimenting by repeating the steps in creating the model but changing small things.  After a few hours it wound up being the namespace name.

So like title says: Don’t use ‘entities’ as the namespace for an Entity Framework model.


Make Sure Your Open Source Project Actually Builds!

I suppose this post could apply to any project, but since I like to dig into open source projects quite a bit that’s what I’m focusing on.

How hard is it to bring a new developer into your open source project?  Outside of the core IDE of choice for the language, what else needs to be installed on the developer’s machine in order to get it to build?  If you have to tell them to install a few versions of .NET, Ruby, Python, and then a slew of other tools, why not just put it in a README file at the root of the project instead of relaying that information to every developer who has to ask you directly?  If I can pull the source down from the repository of choice, I should be able to get up and running with the code base without having to spend even an hour trying to figure out how to build the project.  If it’s a .NET application, I would expect to open it up in Visual Studio, press Ctrl+Shift+B, and have a successful build.  Even better is if there’s a build script and some batch/shell script that will run the build.  That way the developer can find out if the way they’re compiling the code is the same as what’s available as an official release.  There have been a number of projects that work 99.999% of the way I need them to, but there’s one small feature I need to add or comment out before recompiling the rest to fit my need.  If your official releases are strictly Visual Studio builds, then mention it.  Yes, that means *gasp* documenting your project.  Heaven forbid.  A little 1-2 line README file that’s not likely to get out of sync with the code isn’t going to kill you.
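As a sketch of that one-step build script idea, here is roughly what I have in mind; the script name, solution name, and MSBuild arguments are placeholder assumptions, not from any particular project:

```python
import shutil
import subprocess
import sys

def build(solution="MyProject.sln"):
    # Locate MSBuild on the PATH up front so a missing tool produces a
    # clear message instead of a cryptic failure partway through.
    msbuild = shutil.which("msbuild")
    if msbuild is None:
        sys.exit("msbuild not found on PATH; see the README for required tools")
    # One step: compile the whole solution in Release configuration.
    return subprocess.call([msbuild, solution, "/p:Configuration=Release"])
```

Drop something like this (say, build.py) next to the solution and a new developer’s first build is one command, and it fails with a readable message when a prerequisite is missing.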

Even worse is if a 3rd party library needs to be installed for the code to compile correctly.  Include it in the repository if possible (when licensing doesn’t prevent it), or at least have a README or REQUIREMENTS type document that lets me (as a developer new to the project) know what else I need to install.  Say you’re developing a WPF project and you’re using a Codeplex project full of controls; let me know to install it.  Don’t leave a missing reference in Visual Studio for me to spend extra time figuring out what it’s supposed to be.

All in all, I suppose this really boils down to asking “Can you build your project on another machine in one simple step?” or “Does your project pass #2 on The Joel Test?”  It really shouldn’t be that hard.


If you’re going to start a new OSS project, do your research first

I’m a fan of open source software the same as many developers.  So much so that I like to subscribe to the RSS feeds from sites like Codeplex to keep an eye out for new and interesting projects.  I have to admit that there are some cool projects that look like they could have some real potential, but I’m noticing that the projects fall into one of the following categories (ordered by perceived number of projects):

  1. So-and-so’s utility class(es)
  2. So-and-so’s school project.  This includes user group projects.
  3. Control(s) for a popular product (typically Sharepoint)
  4. Contrib projects to existing projects (both open & closed source)
  5. Port of another project (typically projects that are in another language)
  6. Proof-of-concept project
  7. Other minutia that generally has only one-off type usage.

The reason for this post is the prevalence of projects that fall into number 5 above.  I’ve come across a number of projects that are ports of Rails’ Migrations into the .NET space.  The problem as I see it is that a couple of them were started up because the originator of the project wasn’t even aware that other projects existed.  I’m all in favor of the “competition is best for the consumer” approach, but having multiple projects that do nearly the exact same thing because neither was aware of the other just creates lots of rework for no good reason.

So if you’re thinking of starting up an open source project, please spend at least 10 minutes flexing your Google-fu muscles to see if a project already exists that can fit your needs.  If there are similar projects, but lacking some key features, at least contact the developers behind the project to see if it’s on their list of features to implement.  If they’re not going to implement them, then either code the feature up and contribute it back to the project yourself, or fork the project and go down your own path with it.  If you can’t find a project that’s going to fit your need, then by all means do your own project, but at least acknowledge similar projects and explain how yours is different.  Not only does this help show that you’re intentionally going down a path that’s probably been trodden before, but it also establishes who your arch-enemy projects are going to be.  Don’t be afraid to have an arch-enemy project as it gives a great frame of reference in what you’re looking to achieve with the project.


Codemash retrospective

So I’ve been posting all these posts about the sessions from Codemash, but that’s not all that I learned during my time there.  So I thought I’d do an “outside the session” retrospective.

  • No matter how well the facility prepares, 700+ geeks are going to kill the network.  It may have been the wireless access points, but no matter the case people quickly became used to near dial-up speeds again because of so much traffic.
  • I really need to realize I know more about some topics than I think I do.  I went to a couple sessions that looked like they should be higher level and more technical, but ended up focusing on the 150-level type of coding.
  • Great presenters follow the “what am I going to tell you, tell you, what did I just tell you” pattern for presenting.  They also are aware of their time constraints and will make sure to allocate enough time for Q&A, or provide a way to contact them for follow up.
  • Developers have the odd behavior of always sitting near the back of the room.  Some of it makes sense when the power source is in the back or the projector screen is so huge that it would cause neck problems by sitting up front.  But it makes no sense when technical issues crop up, like the mic not working, since sitting closer makes it easier to hear the presenter.  (Ironically, I’m sitting in the back row while I’m writing this up.)
  • A developer conference being held in Sandusky, OH in the middle of January definitely doesn’t sound like a bright idea, but it’s obviously a great conference.  It sold out in about a month.  Maybe it’s the waterpark that’s attached to the hotel.  Speaking of which…
  • I now understand the draw of sitting in a hot tub outside in freezing cold weather.  The hot tubs in the Kalahari waterpark all have an outside area and the feeling of sitting out there was amazing.  That almost made me want to skip some extra sessions to head out to them.
  • Enter The Haggis is an awesome band.  Edge Case Software paid for the band to come out to the conference and perform at the after party after the first day of the actual conference (day 2 if you attended the Pre-compiler).  You can never go wrong with a band that has a bagpipe in it.
  • Conference sessions are only valuable to you for the first few years of learning a new technology.  Either make use of Open Spaces (provided the conference has it), or network with others.  Open Spaces is great because if there’s something you want to talk about, get it posted on the open space board.  Maybe a time slot has no valuable conference session to you, so check out what’s going on with Open Spaces.  You can learn quite a bit more that way, or meet people you normally wouldn’t.  In my case I caught Jeremy Miller giving a presentation on Storyteller and his goal with it.  And for networking, I got several contacts for the Madison .NET User Group to hopefully get some more prizes coming in.
  • Speaking of Jeremy Miller, I have to admit I’m now wondering why he’s considered such a big deal.  Sure he has done some great things in the open source field, but his presentation skills are kind of lacking.  He has a lot of great ideas, but he takes forever to present them.  I now understand where his monolithic, thousands of words type of blog posts stem from.  Seriously, you’re talking to technical people who are children of the Internet.  Our attention spans aren’t that long.  Keep it moving and vary it up a bit.
  • You can (usually) quickly tell how old some of the pictures are for those that use actual images of themselves on Twitter.  Those that have a somewhat recent picture made it easy to recognize them.  Although it does give you that surreal, “Where the hell do I know you from” feeling when you are sitting right next to the person but can’t place a name with a face.
  • Vendors that use Twitter to give away prizes at the conference can be very annoying.  In the case of Pillar, they had a retweet message about being entered to win a Kindle.  The problem is that people just kept retweeting it, even after the conference was over.  Seriously annoying, as all it did was spam the #codemash hashtag.  Telerik took a better approach in that only those that attended the session knew what specific @’s and #’s to put in a message to be entered for a prize.

Overall an outstanding conference.  Definitely looking forward to attending next year, although I think I may actually spend more time in sessions on languages I don’t normally use everyday.


The Problem of Developers and the English Language

This is more of a rant on development rather than useful code, but hopefully it helps provoke some thought.  And while I may be targeting the English language in this post, I believe the other languages run into this issue just as much.

The very first post I made on this blog was a quote from the book "1984" about a pared down language because (a) it surprised me that everybody focuses on the Big Brother aspect of the story rather than the redefined communication, (b) an explicit language where each word can only mean one thing makes a lot of sense, and (c) having words that are meant to be a scale (good, better, best) actually share the same root word (good, plusgood, doubleplusgood) is more in line with the Latin roots of the English language.  Think back to when you were first starting to understand all the oddities that make up the English language.  I before E except after C.  To pluralize a word, add an S; except if it ends in these letters, then do this instead.  There are a lot of rules to the language that don’t make a lot of sense.  Although I’m pretty sure Ph.D.-bearing learned people can give me the reasoning, the general answer to why it must be done this way is “Because that’s how it’s always been done”.  Outstanding.  Way to think outside the box.

When you're gathering requirements for a problem, how often do the developers interpret a different meaning than what the end users actually meant?  Like referring to the "home page" of a site as the first page a logged-in user sees versus the first page that every user sees.  When the manager is talking about the project, are they talking about the entirety of building the software, the Microsoft Project file, the Visual Studio project file, or some other meaning entirely?  It gets even worse when a word is used that has a completely different meaning in another language (which at least one project has run into).  Assuming you're programming only in the English language, you have somewhere in the range of 475,000 to 600,000 words to work with.  Not only that, but more words are added to the collegiate dictionaries every year.  And then there are words that are commonly used that don't even exist in a standard dictionary.  So why must we overload the same words over and over again?  Stop being lazy and calling every application that serves out data or hosts another application a "service".  Give it a unique name.

Microsoft’s been taking flak about overly descriptive (but entirely accurate) developer product names.  Sure it’s easier to simply say “Astoria”, “Geneva”, or “Longhorn”, but unless you’ve heard of them before you have no clue what they’re actually for.  Now hearing “ADO.NET Data Services”, “Claims-based Identity Management”, or “Windows Vista”, you actually have some idea what’s being talked about without having to spend a lot of time digging into what the product actually is (albeit, not a much better idea in the case of Geneva…).  Sure we need to account for being able to talk about things abstractly in some cases, but we should be able to categorize whatever we’re talking about in a similar way that biologists categorize plants and animals.  IIS is a type of web server, which is a type of server, which is a type of computer, etc.  StructureMap is a type of IoC container, which is a piece of .NET software, which is a development tool.  Although there’s a lot of overlap when describing software, it seems like there could be an easier way to describe and categorize specific software.

If you really think about it, each and every word only exists because a group of people have agreed on a general meaning for it.  Words like "blog" and "podcast" were created to sum up new trends in technology that had not been defined at the time.  All it takes is for somebody to come up with a word and others to start using it for it to catch on.  In much the same way Scott Hanselman wants to have a word that says “I’m a technical person and know what I’m talking about”, I think I’ll start using Newspeak terminology to better describe parts of the software I write.