2010/01/15

Codemash: Analyzing and Improving ASP.NET Application Performance

This was presented by James Avery.  He talked about his experience with all the different sites he’s run, including TekPub.  Overall it wasn’t an extremely informative session, but I still picked up a couple of worthwhile things.

  • Tricks really aren’t helpful. 
  • Don’t pre-optimize.  Build it and then measure it.
  • If you don’t want to corrupt the code base, spike it out and then measure it.
  • A great process involves Measuring, Modifying, and Measuring again.  This gives you benchmarks to verify.
  • Caching is essentially cheating.
  • He used a tool called Pylot, which is written in Python.  It’s an XML-based test runner for retrieving data, and an extremely simplistic program.
    • If you want to know if Pylot does X, the answer is “No”.  Well, mostly.
    • Use this when nothing else is running against the system.
  • Another tool is Ants Profiler (costs money).  Or dotTrace.  Either tool for profiling your code.
    • Use in conjunction with SQL Profiler to see all the calls.
    • The reason to combine with SQL Profiler is for when you find code that’s called a lot of times and is slow, check the SQL calls that are happening repeatedly.
  • Another tool is Fiddler or other network monitoring tools.
  • Can make use of DataLoadOptions in Linq-To-SQL and the LoadWith<> method to specify tables that should be eager-loaded.  It won’t always work as Linq-to-SQL sometimes will ignore the recommendation.
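The DataLoadOptions point can be made concrete with a hedged sketch; the DataContext and entity names below are invented for illustration, not from the talk:

```csharp
// Sketch only: NorthwindDataContext, Order, and OrderDetails are assumed names.
var db = new NorthwindDataContext();

var options = new DataLoadOptions();
// Ask Linq-to-SQL to fetch each Order's details along with the Order itself.
options.LoadWith<Order>(o => o.OrderDetails);

// LoadOptions must be assigned before the first query executes.
db.LoadOptions = options;

var orders = db.Orders.ToList();  // details come back eagerly (usually)
```

As the bullet above notes, this is only a hint; Linq-to-SQL can still decide to ignore the recommendation for some query shapes.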

Codemash: Credit Crunch Code

Paying Back the Technical Debt

Presented by Gary Short.  The talk was about the shortcuts we have to make when coding and how we can start making up for them.  It’s great learning something from a guy with a strong Scottish accent.  He has an outstanding ability to present, though.

  • Ward Cunningham was the one who coined the term “technical debt” in 1992
    • Came up with the term as he was doing work for a financial institution, so it was very similar to borrowing money.
    • With financial debt, it can be like “virtual debt” in that you aren’t earning as much money as you could on the money that incurred the debt.
  • By incurring the technical debt, you can cause the product to be shipped late (causing a bad brand name), a loss of market share, or loss of excitement to work on the product.
  • Just as financial debt isn’t all bad, technical debt isn’t all bad.
    • If you need to hit a milestone by a certain time to get the company buy-in, it’s fine as long as you pay it back.
  • The only safe amount of technical debt in a codebase is 0.  But that’s like saying a codebase has 0 bugs.
    • Really just need to get it to a point where you don’t notice the issue, even if it’s really there.
  • You don’t want a logarithmic graph describing the technical debt; you want a sawtooth graph.  So every couple of months, take some time to pay it down.
  • It’s difficult to calculate the technical debt.  He has a nice, complicated formula that factors in things like employee costs, hardware costs, software licenses and software brand costs.
  • Waterfall methodology is an anti-pattern
    • Because you’re making all decisions up front, the costs go up as the project goes further.  Since you can’t change, it’s an insane amount of cost.
    • Because the subject matter expert has to go through a learning curve (no matter what they know), there’s going to be things that aren’t known up front.
  • Agile isn’t necessarily the fix to the Waterfall anti-pattern
    • Try to incorporate elements of Agile into Waterfall so that the cost isn’t so high
  • Not-Invented Here anti-pattern
    • Development teams spend time developing software which is not core to the problem they are trying to solve.
    • Bunch of reasons why not to use a 3rd party product.
    • When a developer works on a component, finds it’s harder than initially thought, and then buys the 3rd party product anyway, they’ve incurred major technical debt.
    • Buy the components if it truly will save time.
    • Use open source if you can.
    • If these don’t work, then write your own.
  • Objects that are in code together stay together anti-pattern
    • The classic car example.  Cars have no behaviors because a car is an inanimate object.  Actors perform behaviors on the object.
    • It’s technical debt because it’s a simpler object graph.  We repay it later because of the cost of adding functionality.
    • Because adding functionality becomes so hard, it damages the brand.
    • It’s not horrible because it allows you to quickly get to market.  Just realize the cost it will incur.  Decide if the debt is worth it.
  • Sensitive Tests anti-pattern
    • tests that are sensitive to context/interface/database
    • make sure tests are as isolated as possible.
  • How to spot technical debt
    • technical debt is pretty much invisible.
    • Take the astronomer approach to finding technical debt: just as astronomers find planets that can support human life indirectly, look for its indirect effects.
    • Using things like the burndown graph to find when issues are coming up.
      • The average productivity should stay pretty consistent
      • If it’s going down, then technical debt is accumulating because it’s harder to add features
      • the number of tests per feature should be pretty similar.
      • if the number is increasing per feature, it shows how difficult adding new features is becoming.
      • if the average team morale is decreasing, it is a good sign things are getting worse and harder.  Measure this by having team members rate their happiness on a 1-5 scale anonymously.
  • Technical debt is a silent killer
  • http://c2.com/cgi/wiki?WardExplainsDebtMetaphor
  • http://www.martinfowler.com/bliki/TechnicalDebt.html
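The burndown-style heuristics above (velocity trending down as debt accumulates) could be sketched as a quick check; the iteration numbers below are invented sample data:

```csharp
// Requires: using System.Linq;
// Flag accumulating technical debt when average velocity trends downward.
double[] velocity = { 21, 20, 19, 17, 16, 14 };  // story points per iteration (made up)
int half = velocity.Length / 2;

double earlyAverage = velocity.Take(half).Average();
double recentAverage = velocity.Skip(half).Average();

// A sustained drop (here, more than 10%) suggests features are getting harder to add.
bool debtAccumulating = recentAverage < earlyAverage * 0.9;
```

The 10% threshold is an arbitrary choice for the sketch; the talk’s point is only that the trend, not any single iteration, is the signal.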

On a side note.  Andy Hunt’s keynote over lunch was outstanding.  He talked about the mother of all bugs: the human mind.  It essentially covered all the psychological defects that exist in the human mind.

Codemash: 0-60 With Fluent NHibernate

This was presented by Hudson Akridge.  He contributes to Fluent NHibernate.

  • Automapping
    • the classes need to be public (classes were A and B)
    • all properties need to be virtual
    • in the configuration

      Fluently.Configure()
          .Database(SQLiteConfiguration.Standard.InMemory())
          .Mappings(mapping => mapping.AutoMappings.Add(AutoMap.AssemblyOf<A>().Where(x => x.Namespace == "…"))
              .ExportTo(Directory.GetCurrentDirectory()))
          .BuildConfiguration();
  • Fluent Mapping
    • Needs a parameterless constructor
    • Nested Expression Exposition
    • Nested Mapping
    • Reveal using string based names
  • www.fluentnhibernate.org
  • http://github.com/jagregory/fluent-nhibernate
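To make the fluent mapping bullets above concrete, here’s a hedged sketch; the Person entity and its members are assumptions, not from the talk:

```csharp
public class PersonMap : ClassMap<Person>
{
    public PersonMap()  // the mapped entity needs a parameterless constructor
    {
        Id(x => x.Id);
        Map(x => x.Name);

        // "Reveal" maps a private member by its string-based name.
        Map(Reveal.Member<Person>("_internalCode"));

        // Nested expression exposition: mapping a component inline.
        Component(x => x.Address, part =>
        {
            part.Map(a => a.Street);
            part.Map(a => a.City);
        });
    }
}
```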

The talk was given to the Chicago ALT.NET group back in July 2009.  You can catch the video here: http://chicagoalt.net/event/July2009Meeting060withFluentNHibernate.

Codemash: Software Design & Testability

This session was presented by Jeremy Miller

  • Testability & Design is not about mock objects/interfaces.  It’s about finding/removing/preventing defects and better partitioning of responsibilities.  Boils down to “Divide & Conquer”.
  • “I don’t code, I ship software”
    • Take advice from Mary Poppendieck about optimizing the whole of the project, not just the code.
  • Good Code vs. Bad Code
    • The quality of the code has a dramatic impact on how productive the developer can be.
  • Interesting quote from Michael Feathers: “I don’t care how good you think your design is.  If I can’t walk in and write a test for an arbitrary method of yours in five minutes…”
  • First causes of good design
    • Feedback, Cycle Times, Batch Size
    • Orthogonality
      • Being able to focus on features independent of other features.
    • Reversibility
      • The ability to change a decision that’s been made about the code.
    • Understandability
      • Is the code readable, and can it be followed easily?
  • Writing Automated Tests
    • Set the system into a known state using known inputs
    • It has measurable outcomes
      • State based tests.  I.E. Calling a method to write a file results in a file actually being written.
      • Interaction based tests. I.E. Verifying that a method is actually called.
  • What makes tests better
    • Repeatable
      • Running the same test over and over always has the same result
    • Runs Fast
      • If you can’t run the tests quickly, you won’t run them that often.
    • Easy to write
    • Easy to understand
      • If you can’t quickly and easily glance at the unit test to verify what the test is aimed at, it’s not easy to understand
  • Systems that are hard to test
    • invoicing rules engine
    • rules defined in an xml file
    • invoice data read from stored procedures
    • no seams in the application to break it apart.
  • How do I test
    • WorkflowProcessor
      • Basic workflow logic
      • State persisted to the database
      • Sent emails at various times
  • Things that are hard to test
    • The database
    • Active Directory
    • Web Services
    • Messaging
    • Windows Services
    • Remoting
    • WCF
    • WPF/WinForms
    • System.Web Namespace
    • External System
    • Chatty APIs
  • “Isolate the ugly stuff.  That essentially comes down to anything written by Microsoft”
  • Test small before testing big
    • Verifying that the pieces actually fit together
      • Start working on the pieces you already know how they work.
      • It’s just the small pieces you’re working on, not the entire system as a whole.
    • Code from the bottom up…
      • start with the small pieces
    • …or from the top down
      • Use mock objects to flesh out the API and how things will fit together.
  • Talked about Dependency Injection (with a rather strange bias towards StructureMap. Gee…wonder why…).
  • Keep a short tail
    • Can you pull it off a shelf and work with it without pulling in extra classes?
    • Isolate the Churn
      • Isolate when there’s a lot of change.

2010/01/14

Codemash: T4 Code Generation with Visual Studio 2008

Presented by Steve Andrews.

  • What is it?
    • Was originally a DSL for creating code based on your model.
  • Demo
    • Need a template directive
      <#@ template language="C#v3.5" #>
    • Output directive
      <#@ output extension=".csv" #>
      one,two,three
      four,five,six
    • Expression blocks
      <#= DateTime.Now.ToString() #>
    • Statement blocks
      <# for (int i = 1; i < 6; i++) { #>
      Line <#= i #>
      <# } #>
    • Class feature blocks
      <# HelloWorld(); #>
      <#+
      private void HelloWorld()
      {
      }
      #>
    • Include directive
      <#@ include file="_IncludedFile.tt" #>

      remove the custom tool for the included file if you don’t want it to run.
    • Assembly include directive
      <#@ assembly name="System.Xml" #>
      <#= new System.Xml.XmlDocument().ToString() #>
    • Import directive
      <#@ assembly name="System.Xml" #>
      <#@ import namespace="System.Xml" #>
      <#= new XmlDocument().ToString() #>
  • T4 Toolbox
    • Can generate multiple outputs per template
    • Since the generator has to create a file by default with the same name, use it as a log of the output.
    • Can create classes within the template that can extend Generator
    • Can extend templates as well.
  • Custom Directive Processor
    • Need to create some registry keys for the custom directives
  • Custom host for running T4.
  • Debugging T4
    • There’s a registry setting you have to change from 10 to 2 if targeting above .NET 3.0

Have to say that I thought I knew quite a bit about T4, but it was really a great presentation overall.

Codemash: Powershell: Ten things you need to know

Presented by Matthew Hester and Aaron Lerch.  It’s interesting because much of the audience didn’t really seem to have even touched PowerShell.

  • What is it?
    • Rich script environment
    • Bulk operations
    • Interactive environment
  • The Active Directory Administrative Center that ships with Windows 2008 R2 is all Powershell driven.
  • When to use it
    • consistent, repeatable tasks
    • talking with Active Directory, registry, WMI and others natively
    • to create aliases for commands that you’re used to.
  • Make use of cmdlets (a verb-noun syntax) and their parameters (name-argument pairs).
  • Making use of the pipeline
  • Talked about the Powershell ISE that comes with version 2.
  • Useful commands:
    • get-module -ListAvailable
    • help "command" -Examples/-Detailed/-Full
  • Providers
    • Get-PSProvider
    • Custom
      • DriveCmdletProvider
      • ItemCmdletProvider
      • ContainerCmdletProvider
    • The ShouldProcess capability means it handles the -WhatIf flag
  • Use PSCmdlet when creating your custom cmdlets.
    • System.Management.Automation is the assembly to reference
  • PSHost for hosting the PowerShell runspace.
  • The Windows Troubleshooting Pack is built on Powershell.
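The PSCmdlet bullet above can be sketched in C#; the verb/noun and parameter here are invented for illustration:

```csharp
using System.Management.Automation;  // the assembly to reference

[Cmdlet(VerbsCommon.Get, "Greeting")]
public class GetGreetingCommand : PSCmdlet
{
    [Parameter(Position = 0, Mandatory = true)]
    public string Name { get; set; }

    protected override void ProcessRecord()
    {
        // WriteObject emits the result onto the pipeline.
        WriteObject("Hello, " + Name);
    }
}
```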


On a side note, this was after lunch, where Hank Janssen gave a keynote on PHP and Microsoft.  Pretty interesting stuff.  It’s great to hear about the efforts that Microsoft makes with the open source projects that are out there.  Although he didn’t plan on some of his demos going that badly after the crowd killed the internet connection.

Codemash: Building maintainable ASP.NET MVC

This was presented by Chris Patterson from RelayHealth.  He’s an active open source developer, so that’s probably why his name sounds so familiar to me.  He’s also a Visual C# MVP.

  • Since much of the audience weren’t ASP.NET MVC programmers, he covered a lot of the basics.
  • Talked about some of the MvcContrib/ASP.NET MVC 2 features.
  • Controller should be the only thing that talks to the Domain Model.  The Controller, View Model, and View all have an idea of each other.
  • View Models should only have data.  No behaviors.  It should be a flattened structure.

Overall the presentation was aimed at those who’ve never looked at the framework before.  Maybe if I hadn’t already done a project using this technology it would have been worthwhile to me.  But overall it wasn’t quite what the session abstract stated.  So that means it’s time to invoke the law of two feet and go find a session where I’ll actually learn something.

Codemash: Agile Iteration 0

This was presented by Ken Sipe from Perficient.  It basically covered how to get Agile accepted into a business, starting out with the first iteration and what it all includes.  Overall the presentation was great, but he was a little scatterbrained in actually presenting it.  It didn’t flow very well, as it felt like he was jumping back and forth quite a bit.

  • Talked about waterfall development being the standard way companies generally tackle development.
  • The iterative process’s biggest usefulness is the feedback.  But companies don’t like it because they can’t plan long-term goals of when things will be delivered.  Since priorities can be shifted around, knowing what will be delivered is nearly impossible.
    • Talked about college experience when finding out you have a paper due in 3-6 months.  Most people finish it the night before/week before/weekend before.  When the instructor requires certain aspects of it at given points throughout the course, it’s an iterative approach.  You get feedback from the instructor earlier and can make the paper that much better.
    • Similar approach with the NASA Apollo missions.
  • Finding the iterative length is really about what feels right for the company.  Longer iterations means less feedback.  Shorter iterations means not being able to get things done.
  • Iterative delivery helps develop trust between the development staff and the business.  Building that trust is essentially actually doing what you say you’re going to do.
  • Agile does not mean purely evolutionary design, no documentation, no architecture, or cowboy development.
  • Pair programming is the extreme form of code review.
  • Programming engages the logical side of the brain.  Taking a break causes it to disengage and lets the creative side tackle the problem.  Pair programming allows a full brain to tackle a problem.
  • Pair programming actually yields about 1.5 developers’ worth of work, not just 1.
  • “Bus Number” – the number of people on the project that need to be hit by a bus before the project can’t continue forward because expertise is gone.
  • Agile at the micro view:
    • Initial opening meeting
      • *everybody* is there to make sure everybody’s on the same vision of the project.
      • Agree on acceptance criteria
      • Agree on the priority
      • Break into groups to figure out what resources already exist or come up with estimates
    • Opening meeting
      • Should be attended by Developers, DBA, User/BA, Architects, QA
      • When using a BA or having limited contact with the user, a common failure point is not meeting the acceptance criteria.
      • Assign tasks.
    • Standup Meetings
      • Can be sabotaged by the Project Manager because they’re trying to get and provide too much information.
      • Only pigs are allowed to talk.  Chickens don’t talk.
    • Closing meeting
      • What was accepted by the user?
      • What is the velocity?
      • What architecturally significant things have changed?
      • It is a general quality check of the iteration:
      • Were the estimates accurate?
      • Is the team performing as expected?
      • Is QA catching bugs that weren’t functional bugs?
      • Are functional bugs making it to QA?  Are the unit tests not being effective?
  • Agile at the macro view:
    • Starting is the hardest part.  Having a mentor who’s been through it is the greatest help.
    • Pre-iteration 0
      • Project inception
      • Stake holder level
        • Business opportunity/concerns
      • Collection of stories
      • Estimating ROI and project justifications
      • Building up team/resources
        • co-ownership of code
        • prepared to steal tasks
        • pairing capable
        • expected velocity.  Adjust story alignment and release plan.
        • Team phases:
          • forming
            • excitement/optimism
          • storming
            • resisting task, disunity
          • norming
            • constructive criticism
          • performing
            • self directed
      • iteration sizing
      • initial list of risks
      • release plan
    • iteration 0
      • 2 story development approaches
        • Majority of the stories upfront with the major understanding that you will likely discover more
        • some stories upfront to prime the pump, with the intent that you’ll have a trailing analyst
        • Either approach needs an Analyst, BA or PM to keep feeding stories into the next iteration.
      • Automate as many things as possible.  Not just in software, but also bringing new people onto the team.  Reduce as much manual work as possible to a simple button click/one command line.
      • Perform some spikes to learn about the new tools for the project
      • Build system/continuous integration is a necessity.
      • Set up reporting: burn down charts
      • Get a story repository/wiki going
      • The general standards, annotations, and upfront patterns (MVC, presentation model, logging in all aspects…)
    • Feature slip is those features that don’t get completed in an iteration but were meant to be part of it.  Similar to RUP’s time slip.
    • If you have humans doing regression testing, you will fail.
    • Talked about different options in working with QA in the iterations.  Will post pictures later.

Codemash Keynote: Mary Poppendieck – Five Habits of Successful Lean Development

What a way to kick off the actual part of the Codemash conference with a keynote from one of the major voices of the Lean development segment.  None other than Mary Poppendieck.  Some of the points of her talk:

  • Key tenets of Lean:
    • Eliminate Waste
    • Focus on Learning
    • Build Quality In
    • Defer Commitment
    • Deliver Fast
    • Respect People
    • Keep On Improving
    • Optimize the Whole
      • Think about the entire process, not just the software
  • The 5 Habits
    • Purpose
      • Many developers are only doing their job because somebody told them to do it.  Sure, coding is fun, but do they really know why they’re doing it?
      • Keep the development staff near the customers to focus on what the end users actually NEED vs keeping them completely separate and all the requirements are tossed over the wall where the developed software is most likely never used.
      • Essentially just get the developers involved in the business to better understand what the needs of the business are.
      • Possibly have the developers actually do the job that they’re developing the software for to understand the pain points.  e.g. call center software.
      • If nobody’s requesting new software features, it means nobody’s using the software.  It doesn’t mean the software is feature complete.
      • That’s why open source software seems to be so easily programmed.  Those programming it are those that use it because they need to use it.  They know where the pain points are.
    • Passion
      • Developers like getting passionate about their work.  If they’re not passionate about their purpose at the company, they tend to get passionate about the minutia of development tasks.
      • Cost Center Disease – focus on cost reduction instead of delivering value.
        • Typical places this is found: IT departments, government organizations, some consulting firms
        • The problems include not being able to focus on giving better customer outcomes, no real engagement with customers.
      • Example of a passion – the Launchpad developers back when it was a for-pay development package.  Even though it was for-pay, it was very open source style based.
      • Typically if you are very passionate about something (like programming) it’s hard to make a living at it because you enjoy it too much.
    • Persistence
      • There’s no substitute for being careful and doing really good work.
      • The most accomplished people need around ten years of “deliberate practice” before becoming world-class.  This is also known as the ten-year rule.
      • Identify a specific skill that needs improvement.
      • Devise (or learn from a teacher) a focused exercise – designed to improve the skill.
      • Practice repeatedly.
      • Obtain immediate feedback – adjust accordingly.
      • Focus on pushing the limits – expect repeated failures
      • Practice regularly & intensely – perhaps 3 hours a day.
      • Open source development is a good way to learn – you generally have a teacher, are challenged, get immediate feedback, and dedication to the project.
      • Improvement kata
        • Visualize perfection – visualize what the ideal world is
        • Have a first hand grasp of the situation – understand how you can improve the situation
        • Realize that there’s a huge gap between the first 2 steps – find the minor steps in between the two points
        • understand obstacles that come up between the minor steps and overcome them.
      • Dijkstra’s Challenge
        • If you want more effective programmers, you will discover that they should not waste their time debugging – they should not introduce bugs to start with.
        • Find as many bugs early so the “code freeze” phase can be brought down to less than 10% of the release cycle.  Typical is around 30% of the cycle, sometimes 50%.
    • Pride
      • Story about a philosopher asking 3 stone cutters what they were doing.  “I’m cutting stones”, “I’m earning a living”, and “I’m building a cathedral”.
      • Move responsibility and decision-making to lowest possible level. 
        • “If you’re a manager, your job is to be lazy.  Have those lower than you helping drive decisions”
      • Litmus test for those with pride: how do people handle their frustration with their job? Do nothing, complain about it but overall do nothing, or find a way to fix it.
    • Profit
      • The examples given are large companies, consistently profitable, they dominate their industry and for a long time.  The front-line people are highly valued, expected to make local decisions and effectively engaged in delivering superior customer outcomes.
  • Talked about Tandberg’s successful implementation of Lean
    • Talked with really individual, front-line workers
    • The workers knew by heart the company’s one-line pitch for what its purpose was.
    • The workers also knew exactly why they were doing their job.  They were passionate about what they did because they knew why their job was in place.
    • http://www.pvv.org/~oma/SoftwareDevelopmentAtTandberg_March2009.pdf

Overall really great information presented.  If you ever have the chance to listen to Mary give a presentation, don’t miss out.

2010/01/13

Codemash Pre-compiler: Software Craftsmanship

This was presented by Steve Smith (@ardalis | Codeproject.com) and Brendan Enrick (@brendoneus | NimblePros.com).

  • What is It?
    • Software Craftsmanship Manifesto
  • Why practice?
    • Basically it’s like Microsoft Certification tests; memorize the techniques so that when it’s time to solve a problem, you have a number of techniques you know like the back of your hand rather than having to look it up each time.
    • Using katas helps improve your “muscle memory” when it comes to tackling similar problems.
    • If you’ve already explored the problem domain using one approach (strictly OO, the .NET 2.0 way), you should already have the unit tests that, in theory, can be reused when you rework the underlying logic (using LINQ).

Katas practiced:

  • Bowling game (as Uncle Bob has tackled the kata).
    • the TDD approach that Uncle Bob took to tackle the problem
    • Took an approach with Mike Letterle that might have been a little more complicated than it needed to be, but was a good exercise in thinking it through.
  • A grocery shopping register.
    • having to account for discounts like “Buy N and get the N+1th free” and “Buy N for $M”.
  • FizzBuzz
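As a hedged sketch of the grocery kata’s “Buy N for $M” rule (the prices and quantities are invented, and this is just one way to model it):

```csharp
// Total price when a "bundleSize for bundlePrice" discount applies.
decimal PriceFor(int quantity, decimal unitPrice, int bundleSize, decimal bundlePrice)
{
    int bundles = quantity / bundleSize;    // full discounted bundles
    int remainder = quantity % bundleSize;  // leftovers at the regular price
    return bundles * bundlePrice + remainder * unitPrice;
}

// e.g. 7 cans at $0.50 each with "3 for $1.00": 2 bundles + 1 single = $2.50
```

The “Buy N and get the N+1th free” rule reduces to the same shape, with bundleSize = N + 1 and bundlePrice = N * unitPrice.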

Overall it was a cool session, but more for talking with developers outside my normal circle and getting a different point of view on how to architect the code.  It was great taking a TDD approach to tackling these katas, though, especially pairing with a Java developer on the grocery store kata.  I kind of wish I had gone to a different session to actually learn more content, as I’m thinking I’ll probably end up in the coding room one of the days to pair with people, just to learn more from them and to actually do some pair programming, since it’s such a foreign concept every place I’ve worked.  It’s amazing how much it helps to think through the implications of a certain architecture before ever actually laying out the code.  Although if you have two developers that overthink the problem, it can lead to issues.  Amazingly, I was able to not do that at all today and instead focused on “this is what’s being asked for, so let’s actually deliver that”.

Codemash Pre-compiler: Software Engineering Fundamentals Workshop: OOP, SOLID, and More

It’s that time for Codemash again.  So I’ll be doing some stream of conscious type blog posts again.  If others find this information kind of interesting I’ll expand on it some more, but mostly it’s for my own recollection.  The first session I attended during the pre-compiler was “Software Engineering Fundamentals Workshop: OOP, SOLID, and More” presented by Jon Kruger.  He kindly has the slides up at http://jonkruger.com/solid/OOP-SOLID-CodeMash.pptx.  Some of the points of his presentation:

* Started out talking about OOP.  Good analogy between Legos and code.  Glued together Lego pieces are like tightly coupled code.

* “Just because you are using an OO language does not mean that you are doing object-oriented programming.”

* “Avoid Not-Invented-Here syndrome”.  Promote reuse of objects, not just methods. And not via copy-and-paste.

* Great example of Encapsulation –> home electrical wiring.  You have abstraction/interface on top of abstraction/interface on top of abstraction/interface.  You only need to know what’s under the hood if you really HAVE to, otherwise go with the easiest interface.

* Object-oriented programming is about behavior, not just fields.  Encapsulation is about hiding the fields so that the only interaction is via behaviors.

* Rethink object inheritance by using composition.  It helps break down large methods.  It’s like going to a grocery store to choose what food is available to you vs. being a farmer tied to what food you grow.

* Easier programming does not mean you shouldn’t learn new techniques/programming languages/etc.  It’s sometimes FAR easier to learn something new to help you out overall.

* The SOLID talk is very similar to what I presented to the ALL.NET group awhile back, but focused more on “Don’t do something that you really don’t need to do because it doesn’t fit your situation” and “Think before you implement.”

I have to give kudos to Jon for integrating the pairing exercise into the session.  It was a great opportunity to take an existing (crappy) application and refactor it using the techniques he had just presented.  I paired with Curtis Mitchell, which was a great time, despite the fact we didn’t get a chance to dig far enough in to actually fix much of the code.

2010/01/11

Best Macro to use in Visual Studio

I wrote this little macro awhile back, and after getting burned by yet another accidental hit of the F1 key and waiting an agonizing amount of time for Help to come up, I realized it's time to get this macro out there so I can "install" it wherever I'm at.  Whenever I'm looking up documentation on a .NET class, I copy the class name, go to Google, and generally the answer is within the first couple of links.  So why not automate that?

To add it to your Visual Studio macros, bring up the Macro Explorer panel (View –> Other Windows –> Macro Explorer; or Alt + F8).  Then edit a module and add the sub below to it.



Public Sub SearchWord()
    Dim objDocument As EnvDTE.Document = DTE.ActiveDocument
    Dim sSearchText As String

    Dim currentSelection As TextSelection = objDocument.Selection

    If currentSelection.Text <> "" Then
        sSearchText = currentSelection.Text
    Else
        Dim objTextDocument As EnvDTE.TextDocument
        Dim objTextSelection As EnvDTE.TextSelection
        Dim lineNumber As Integer
        Dim colNumber As Integer

        ' Get the text document
        objTextDocument = CType(objDocument.Object, EnvDTE.TextDocument)
        objTextSelection = objTextDocument.Selection
        colNumber = objTextSelection.ActivePoint.DisplayColumn
        lineNumber = objTextSelection.ActivePoint.Line

        ' Select the word under the cursor
        objTextSelection.WordLeft(False, 1)
        objTextSelection.WordRight(True, 1)

        sSearchText = objTextSelection.Text

        ' Restore the original cursor position
        objTextSelection.MoveToDisplayColumn(lineNumber, colNumber)
    End If

    System.Diagnostics.Process.Start("http://www.google.com/search?q=" + sSearchText)
End Sub


Save the module and close out of the Macro editor. Back in Visual Studio, open the keyboard options (Tools –> Options –> Environment –> Keyboard). Remove the existing F1 binding by searching for “Help.F1Help” in the “Show commands containing” textbox and clicking the Remove button.  Now search for the macro you just created, press the F1 key in the “Press shortcut keys” textbox, and click “Assign”.



Now whenever you hit F1 it will highlight the current word and search for it in Google in your default browser. Pretty simple stuff!


EDIT (2010-01-18): Based on feedback I updated the macro to not change the selected text if there's already something selected, and won't leave an item selected if you didn't have something selected already.

2010/01/07

Speeding up podcasts within iTunes

What started out as a simple Superuser question became a long trek to find a solution.  The only answer at the time pointed me to do batch processing through Audacity, but I was trying to avoid automating the GUI as much as possible.  Although I know how to make use of AutoIT and AutoHotKey to automate a GUI application, it’s not the approach I wanted to take with this.  The machine I sync my iPod and iTunes with I use quite a bit so I don’t want to have miscellaneous windows popping up on me and worrying about my current activity accidentally overriding what the script is doing.  Yes, using the iTunes COM SDK causes iTunes to open up if it isn’t already, but I’m okay with that since I keep iTunes open most of the time to update my podcasts so it doesn’t affect me very much.

So all that being said, as you probably guessed since I actually posted this, I came up with a solution that doesn't involve AutoIT/AutoHotKey.  Doing a little digging I found an alternative to Audacity (which only operates through a GUI) called SoX (which is command-line driven).  It suffers the same problem as Audacity: neither can distribute compiled binaries that work with MP3s because of the wonderful licensing around the format.  Thankfully SoX is open source, and others have already dealt with this issue, so I'll make use of their outstanding work.  If you really want to do the work yourself, take a look at the steps in this Code Project article.  I attempted them, but ran into a number of issues when trying to compile everything myself.  It may have been my very rusty C++ skills, but who knows.  So instead I downloaded the output of those steps, which the article's author published (the sox.zip link).  It is a number of versions behind the current release of SoX, but for my needs it's fine.  To follow what I've done, download that file as well and put it somewhere on your machine.
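Stripped of the iTunes plumbing, the per-file conversion boils down to a single SoX invocation: `sox <input> <output> tempo <factor>`.  A quick sketch of that call (in Python here; the file names are hypothetical, and sox must be on your PATH or referenced by full path for the commented-out call to actually run):

```python
import subprocess

def build_sox_tempo_cmd(sox, source, dest, tempo):
    # SoX's "tempo" effect speeds playback up without raising the pitch.
    return [sox, source, dest, "tempo", str(tempo)]

cmd = build_sox_tempo_cmd("sox", "episode.mp3", "episode-fast.mp3", 1.5)
# subprocess.run(cmd, check=True)  # uncomment once sox is available
print(" ".join(cmd))
```

At a tempo of 1.5, a 60-minute episode plays back in 40 minutes (60 / 1.5).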

The next step was creating a PowerShell script to get all the podcast tracks that iTunes has downloaded and modify them.  I'm not sure why, but this appears to only work in PowerShell v2.  I accomplished it with the following script (note the line pointing to where sox.exe exists), which I called "SpeedUpPodcastsIniTunes.ps1":

 

# Comment that will be applied to all podcast files that will be updated.
$modificationComment = "::Modified By Powershell Script::"

# Location where sox.exe exists on your machine.
$soxFile = "C:\Path\To\Sox.exe"

# The file types to modify. Make sure that Sox.exe can handle the file types
$extension = ".mp3"

# This is a list of all the podcasts that should have every file modified. This is an
# opt-in process for each podcast. The podcast names are case sensitive.
# Format should be like the following:
# ... = "Podcast 1", "Podcast 2", "Podcast 3", ...
#$podcastsToAffect = ".NET Rocks!", "RunAs Radio", "The Thirsty Developer - Podcast"
$podcastsToAffect = "The Thirsty Developer - Podcast",
".NET Rocks!",
"RunAs Radio",
"Hanselminutes",
"Herding Code"

# The tempo factor controls how fast to play the podcast. A value of 1.0 is the
# current speed; 1.5 plays at 150% of the original speed, so the overall length
# is about 67% of what it currently is.
$tempoSpeed = "1.5"


$itunes = new-object -com itunes.application
if($itunes -ne $null)
{
    Write-Host "iTunes is running..."
    # Sources.Kind == 1 (ITSourceKindLibrary)
    $itunesLibrary = $itunes.Sources | Where-Object { $_.Kind -eq 1 }
    Write-Host "Retrieving Podcasts"
    $podcastsPlaylist = $itunesLibrary.Playlists |
        Where-Object { [string]::Compare($_.Name, "podcasts", $True) -eq 0 }
    # Tracks.Kind == 1 (ITTrackKindFile)
    Write-Host "Filtering Podcasts"
    $downloadedTracks = $podcastsPlaylist.Tracks |
        Where-Object { ($_.Kind -eq 1) -and ($_.Podcast -eq $True) -and ($podcastsToAffect -contains $_.Album) } |
        Where-Object { (([string]::IsNullOrEmpty($_.Lyrics) -eq $True) -or ($_.Lyrics.Contains($modificationComment) -ne $True)) -and ([System.IO.Path]::GetExtension($_.Location) -eq $extension) }

    Write-Host "Processing..."
    $downloadedTracks | ForEach-Object {
        if($_ -ne $null){
            $trackLocation = $_.Location
            $currentLyrics = [string]::Empty
            if([string]::IsNullOrEmpty($_.Lyrics) -ne $True){
                $currentLyrics = $_.Lyrics
            }

            $tempFile = [System.IO.Path]::GetTempFileName()
            # GetTempFileName() actually creates the file, which we don't need
            [System.IO.File]::Delete($tempFile)
            # Give the temp file an appropriate extension because Sox.exe requires it.
            $tempFile = $tempFile + $extension

            Write-Host Converting `( $_.Name`)
            & $soxFile $trackLocation $tempFile tempo $tempoSpeed
            Write-Host Done!

            # Delete the current file because Move() won't overwrite an existing file.
            [System.IO.File]::Delete($trackLocation)
            [System.IO.File]::Move($tempFile, $trackLocation)

            # Update the Lyrics to mark the file as having been changed.
            $currentLyrics = $currentLyrics + "`r`n" + $modificationComment
            $_.Lyrics = $currentLyrics

            $_.UpdateInfoFromFile()
        }
    }

    # clean up memory
    [void][System.Runtime.InteropServices.Marshal]::ReleaseComObject([System.__ComObject]$itunes)
}


You’ll notice that I’m making use of the Lyrics section of the podcast to track if it’s been modified by this script before.  I initially was using the Comments, but apparently some of the podcasts actually fill in that information and there’s a limit of 256 characters on the field.  My first trial runs ended up with a few “Cannot change the Comment” errors being thrown by the COM interop because of that limitation.  So far I haven’t run into an issue with the Lyrics section.



The final step is setting up a scheduled task to run this script on a schedule.  For the command on the scheduled task itself, it looks like this:



C:\Windows\system32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive "C:\Utils\Scripts\SpeedUpPodcastsIniTunes.ps1"



And the Start In directory is set to `C:\Windows\system32\windowspowershell\v1.0`.



So far it seems to fit my need pretty well.  It does have a noticeable lag when it’s filtering the podcasts, but not too bad.



NOTE: The first time you run this, it’s advisable to delete all but the most recent track for a podcast.  It does take several minutes to convert the files, so even a handful of them can take a good deal of time.

2009/06/04

Switching to the @Mixero Twitter Client

*NOTE* I have only watched one of the videos on Mixero's site for features/usage, so this is nearly all from my own experience with the client after a single day.

I had heard about the Mixero Twitter client what seems like a few weeks ago now (about a week, according to when I first tweeted about it), and this morning I was pleasantly surprised with an invite code.  Now, I wouldn't consider myself a heavy user of Twitter, but I seem to be more active than those I talk with in person regularly (outside of my friend Curt).  I like to see what people have on their minds, so I enjoy watching the stream of tweets float by.  Previously I used TweetDeck to keep up on things, but I quickly hit a limit on the number of columns I could effectively keep open at any point.  Since a TweetDeck group only exists as long as it's visible, it was inefficient to remove a group for a short time: if I wanted it back I'd have to recreate the whole thing.  Major pain.  Even with the application full-screened, the number of columns caused the horizontal scroll bar to appear, and the notifications for @replies and direct messages weren't exactly noticeable unless you kept those columns open as well.  So that horizontal scroll bar was used far more than necessary, often for no good reason.

That’s part of the reason I’m digging Mixero so far.  The notifications section is just plain awesome.  Direct Messages and @replies appear as little speech bubbles off your avatar.  Pretty cool stuff right there.  When you have unread items in either, the icon visibly changes.

image

Like TweetDeck, you can create groups to easily manage your stream.  The nice thing in Mixero is that you can create a group without keeping a visible display of its updates open at all times.  So you can have a group set up for future use, like people you know who generally talk about conferences when you're only interested in that information a couple months out of the year.  Set up the group and forget about it until conference season comes around again.  You can also associate a 48x48 picture with a group.  They have a limited canned selection to choose from, but they also perform a Google search based on the group name and return the first ~8 results to choose from.  These are shown in the Active List.

An interesting feature of Mixero is the "Active List" and Contexts.  I haven't quite found a good use for multiple contexts yet, because I like to have as much information as possible available when I open the client, without having to click and change things.  But each context has an associated Active List.  An Active List can hold groups or individual users, and it gives you a count of how many unread tweets there are from those groups/users.  A nice, easy way to see at a glance whether there's anything to read up on.

The next cool feature of Mixero is when you open up a user or group’s status, you have the option of creating a new window that’s movable separate from the main client bar.  This means you can move a window full of updates wherever you want to on your screen.  Keep a couple important ones open at all times, cover your screen with windows (similar to TweetDeck running maximized), or only open up the groups as you read through them.  Because I like to clear groups of tweets at a time, I’ve gone with the “Cover your screen” approach.

image

I haven't really run into any bugs per se, but more like "well, that's a strange way to behave" moments.  For instance, the only difference between a read update and an unread one is that unread updates are in black text while read updates are in ~56% gray text.  Personally I like TweetDeck's visible read/unread indicator, but that may just be my familiarity with it talking.

If they gain insight from (read: steal) some of TweetDeck's features, it will definitely become the key Twitter client for power users.  Start following them on Twitter now and soon you may get an invitation code to start using the client as well.

2009/05/30

Chicago Code Camp Retrospective

Today was a long day, because I made the trek down to the Chicago Code Camp.  Aside from the two hour drive each way, it was a good experience.  Here's a summary of the sessions.

Trends in Continuous Integration with Software Delivery

This was presented by Sean Blanton.  Essentially it was about the benefits of having a build server in your environment that creates more than just a nightly build.  There was very little technical information in the session; most of it was a higher-level view of the need for CI.  Of course the concept of build automation came up, but he also brought up workflow automation.  The pipeline concept that Cruise uses is a good example of it: a build happens and its outcome determines the next action.  Run the unit tests, run integration tests, measure code coverage, email status/reports of the build somewhere, create the installation package, deploy to another environment, etc.  All part of the workflow automation.

Guarding Your Code with Code Contracts

This was presented by Derik Whittaker.  The topic was the Code Contracts project that came out of Microsoft DevLabs.  It's going to be part of Visual Studio 2010, but is available now.  Of course it's still pretty early in development, so the interface and functionality are a little clunky and quite likely to change.  I recall reading an article about it not that long ago, but I can't seem to find it at the moment.  Overall the project seems awesome.  There are two extremely awesome things Derik brought up in the presentation.  One was the ContractInvariantMethodAttribute: it inserts a call to the marked method prior to every return in every other method of the class.  This comes in handy when you want to ensure that a class remains in a valid state after any method call, and it saves the developer from having to manually add that call to every method.  The other awesome thing is that the contract calls can undergo static analysis.  Being able to compile the code and see where there are violations in called methods is simply brilliant.  Granted, they currently only show up as warnings in VS, but still awesome.

Testing GUIs

This was presented by Micah Martin.  During the session I re-read the abstract of the session and wished there was a little more detail about it.  Basically it dealt with reworking the UI in Ruby applications (both rich client and web apps) using a framework called LimeLight.  While I'm pretty sure I'd never end up using the framework, Micah did a pretty good job with the presentation despite the feeling that nearly the entire audience was expecting something else.  About the only thing I got out of the session was a reminder that I still want to learn Ruby at some point.

MassTransit

This was presented by Dru Sellers.  Mass Transit is a messaging bus that promotes the publish/subscribe design pattern in a very decoupled way.  It's under development by Dru and Chris Patterson.  Having read a few posts about it didn't really shed the light on what the project is or how it's meant to be used quite the same way that Dru explained it.  It was a very informal type of presentation, more like a group talk with Dru leading most of it.  While I can't currently see the need for a framework like it in most solutions I've worked with, it will be an interesting project to keep in mind for the future.

Developing Solid WPF Application

This was led by Michael Eaton.  Despite being the last session of the day, Michael managed to present some great material.  He took a WPF application written the way a WinForms programmer would approach it: everything in the code-behind, very simple use of bindings, extremely painful to unit test in any fashion.  Taking this horrible code, he refactored it to make better use of WPF features like RoutedUICommands and better bindings, as well as decoupling the code into an MVC pattern.  While that pattern can work, he then moved on to MVVM.  Unfortunately he did such a thorough job explaining things that he ran short on time.  Being that late in the day it was also hard to stay focused on the presentation, despite how great the material was.

2009/05/01

Updating the Last Modified Date of a directory

Working with a lot of compressed (.ZIP) folders can have some interesting side effects when you decompress them.  Windows will list all the folders (and subfolders) as having a last modified date of when you decompressed the file.  That may be well and good for most people, but it annoyed me to no end because I typically sort directories by the Last Modified Date as I work with the most recent files the most often.  Plus, downloading a file that hasn't been updated in a couple years and unzipping it can cause some confusion when you see the folder as being last modified today, but every file in it was created/modified a few years ago.  Why not have the folder actually reflect the date it was last modified by the files, not the OS?  That makes better sense to me, so I created a dead simple console app to do just that.

The console application will recursively go down a directory structure and give you a status of the folders and the Last Modified Date's new value.  By default, it will go down a maximum of 300 folders, which should more than cover most directory trees.  If you want to limit it to only a couple levels, you can call the application from the command line, passing in a number after the folder, and it will recurse only that many levels.
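The core idea can be sketched in a few lines; this is my own rough sketch of the approach (in Python), not the app's actual code:

```python
import os

def update_dir_mtimes(root, max_depth=300, depth=0):
    """Set each directory's last-modified time to the newest
    last-modified time of anything inside it, recursively."""
    if depth >= max_depth:
        return None
    newest = None
    for entry in os.scandir(root):
        if entry.is_dir(follow_symlinks=False):
            # A folder's effective timestamp is whatever its subtree yields.
            ts = update_dir_mtimes(entry.path, max_depth, depth + 1)
        else:
            ts = entry.stat().st_mtime
        if ts is not None and (newest is None or ts > newest):
            newest = ts
    if newest is not None:
        os.utime(root, (newest, newest))  # (atime, mtime)
    return newest
```

Empty folders are left alone, since there are no files to derive a date from.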

image

One thing you'll notice in the output (more so if you run it in a console window rather than from the context menu that's part of the installer) is that the path names are rarely wider than the screen.  There are a few different ways to accomplish this, and since I'm not a fan of depending on system libraries in managed code, I took the approach of implementing the function myself based on code I found online.  I can't recall where I originally came across it, but I fixed a couple logic errors that were in it.  Take a look at the DirectoryUpdater.CompactFilePath() method if you're interested.
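The general technique is to keep the root and the tail of the path and elide the middle.  A minimal sketch of that idea (my own, and much simpler than the actual CompactFilePath() implementation):

```python
def compact_path(path, max_len, sep="\\"):
    """Shorten a long path for display by eliding its middle,
    e.g. C:\\...\\Projects\\readme.txt."""
    if len(path) <= max_len:
        return path
    root, _, _ = path.partition(sep)
    keep = max_len - len(root) - len(sep) - 3  # leave room for "..."
    if keep <= 0:
        return path[:max_len]
    return root + sep + "..." + path[-keep:]

print(compact_path(r"C:\Users\me\Documents\Projects\Foo\readme.txt", 30))
```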

I also built a Windows Installer Xml (WiX) installer because it's so much easier to just right-click on a folder and tell it to update the last modified date.  A custom action in the installer creates the appropriate registry keys for the context menu action.  As I'm still extremely new to WiX, the only way to have it install them is to choose a Custom install and change the "Context Menu" item to "Will be installed on local hard drive".  Hopefully I'll get that figured out at some point.

image

image

Source code download from here.
Executable download from here.
MSI Installer download from here.

2009/03/15

Permanently Get Rid of "Unblock" Button on Downloaded Files

I'm really surprised at myself for waiting so long before I removed this friction point from my daily environment.  Like almost all developers, we look for the path of least resistance when we're trying to get work done.  And I'm going to be generous and say that we're all smart enough to know that we take some level of risk with downloading and executing files from the Internet.  So why should we be bothered by Windows telling us that a file we just downloaded (especially intentionally just downloaded) may be unsafe and tries to protect us?  I don't care, I want to execute it without all the extra safety precautions.  I don't want to have to go into every file I download and click on the "Unblock" button so it will function as expected.  I've read some posts that this is unique to Internet Explorer only, but that's not so.  My browser of choice is Firefox (haven't had the desire to switch to Chrome yet) and it does the same thing.  This is especially true when downloading CHM help files as nothing loads because the content is blocked.  Even more so when you unzip a file that is still "blocked" and all the expanded files are now also "blocked."  You end up with a bunch of file properties that look similar to this with that not so attractive "Unblock" button at the bottom:

image

So how can you permanently get rid of it?  Windows is attaching an NTFS alternate data stream to the file that records which zone the file originally came from.  The easiest way is to edit your Group Policy settings (normally handled by your domain settings if you're on a corporate network, but they also exist on a personal machine).  Since this isn't normally something configured for home use, I'm not entirely sure which versions of Vista include it; the disclaimer on the setting says at least XP Professional SP2.  Anyway, you will need to run %windir%\System32\gpedit.msc as an administrator.  Next, navigate to Local Computer Policy -> User Configuration -> Administrative Templates -> Windows Components -> Attachment Manager.  The setting "Do not preserve zone information in file attachments" is most likely at a status of "Not configured"; go into its properties and change it to Enabled.  No restart is necessary to apply it.
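For a single file, that zone information lives in an alternate data stream named Zone.Identifier attached to the downloaded file.  Its contents are just a tiny INI fragment; ZoneId 3 is the Internet zone:

```ini
[ZoneTransfer]
ZoneId=3
```

Deleting that stream from a file is exactly what the "Unblock" button does; the Group Policy setting simply stops the stream from being written in the first place.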

image

From that point on, any file you download will no longer have to be "Unblocked." 

2009/03/12

The difference in random number generation in .NET

The next project I plan on working on is going to rely heavily on a Monte Carlo method of populating the initial state.  Knowing this, I started digging into the different ways of generating random numbers in .NET.  Everybody immediately goes with System.Random when starting out since it's readily visible in a new class file.  This is all well and good, but what it gives you is a pseudo-random number.  It's actually pretty interesting to pop open Reflector and look at what it's doing under the hood.  It uses the system's tick count to seed the generator if you don't provide a seed yourself, which means processes (or even threads) creating an instance in the same tick will produce the same sequence of "random" numbers.  Well, that can be a problem...
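The pitfall is easy to demonstrate in any language; here's a quick Python analog, seeding two generators identically just like two System.Random instances constructed in the same tick:

```python
import random

# Identical seeds yield identical "random" sequences.
a = random.Random(12345)
b = random.Random(12345)

seq_a = [a.randint(0, 99) for _ in range(5)]
seq_b = [b.randint(0, 99) for _ in range(5)]
assert seq_a == seq_b  # not so random after all
```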

So how do we get a more *random* random number?  That's where System.Security.Cryptography.RNGCryptoServiceProvider comes into the picture.  Digging into this class, it's making an external call, so it's not quite as easy to figure out where it gets its randomness.  It really doesn't matter too much, because it does come up with better randomness.  The drawbacks are that the way you work with it is quite different from System.Random, and it's quite a bit slower.  With RNGCryptoServiceProvider you work with a byte array and do the conversions to whatever type you want.  So if you build your classes against System.Random and later want to switch to RNGCryptoServiceProvider, you have your work cut out for you.  Or what about using RNGCryptoServiceProvider in production for "better" randomness, but System.Random when unit/integration testing, because you can control it?  There isn't a common interface for that case, but I've been tinkering with such an interface most of the day.  Once I have it fleshed out more I'll post it here.

Now back to the slowness of RNGCryptoServiceProvider vs. Random.  A few useful answers on Stack Overflow mentioned it being slower, but never gave any indication of how much.  I've seen too many statements like that where the difference came down to maybe a few hundred milliseconds (not a noticeable problem in the applications I've worked on).  So I decided to toss together a test and find out just how much slower it is.  Here's what I did to test it:

namespace ConsoleApplication1
{
    using System;
    using System.Diagnostics;
    using System.Security.Cryptography;

    class Program
    {
        static void Main(string[] args)
        {
            Stopwatch stopwatch = new Stopwatch();
            RNGCryptoServiceProvider gen = new RNGCryptoServiceProvider();
            Random generator = new Random();
            //Since I'm only working with int for both randomizers...
            byte[] randomValues = new byte[sizeof(Int32)];

            //Step the iterations up from 1 to 100,000,000, advancing by a power of 10
            for (int iterations = 1; iterations <= Math.Pow(10, 8); iterations *= 10)
            {
                Console.WriteLine(String.Format("Iterations: {0}", iterations));

                //Start the RNGCryptoServiceProvider timing
                stopwatch.Reset();
                stopwatch.Start();
                for (int i = 0; i < iterations; i++)
                {
                    gen.GetBytes(randomValues);
                    BitConverter.ToInt32(randomValues, 0);
                }
                stopwatch.Stop();
                TimeSpan rngRandom = stopwatch.Elapsed;
                Console.WriteLine(
                    String.Format("\tRNGCryptoServiceProvider:\t{0}", stopwatch.Elapsed));

                //Start the System.Random timing
                stopwatch.Reset();
                stopwatch.Start();
                for (int i = 0; i < iterations; i++)
                {
                    generator.Next();
                }
                stopwatch.Stop();
                TimeSpan sysRandom = stopwatch.Elapsed;
                Double speedFactor = Convert.ToDouble(rngRandom.Ticks) / Convert.ToDouble(sysRandom.Ticks);
                Console.WriteLine(
                    String.Format("\tSystem.Random:           \t{0}\t~{1:0.00}x faster",
                                  stopwatch.Elapsed,
                                  speedFactor));
                Console.WriteLine();
            }
            Console.ReadLine();
        }
    }
}

And here's the results:

image



The times varied a bit on my Core 2 Duo 2.50 GHz, 32-bit Vista laptop, but this run isn't far from the norm.  It wasn't until 100,000 iterations that either took more than 1/10 of a second to finish, but at that point System.Random was ~279.2 times faster!  The big thing is that even though the number of iterations was growing by a factor of 10, RNGCryptoServiceProvider's time seemed to increase by more than a factor of 10.



So I guess the bottom line in the System.Random vs. RNGCryptoServiceProvider slowness argument is that at small counts it doesn't greatly matter.  But if you're going to be generating more than 100,000 numbers in a very tight loop, it might be worth sacrificing "true" randomness for speed.

2009/01/09

Code Mash session: A Programmer's Guide To User Experience

  • Look for the most experienced, honest, and knowledgeable people when coming up with UX
  • When "interviewing" people about what they're looking for in the app, have a conversation.  Work through scenarios: "Process a credit card", "Answer a support call"
  • Put all scenarios, at a very high level, into a specification document
  • Give a one-sentence description of what the app does
  • Group features together into sub-projects.  Go over these sub-projects with the users to see if they really fit together and make sense to the user.
  • "Is this REALLY something we're going to need?" <-- feature that you might want to take away/never show to the user
  • "Is this something we CAN'T do without?"
  • "Maybe we need it..." <-- nice to have's
  • Use a sharpie marker for designing the interface so you don't get into the nitty-gritty details
  • No laptop during the design phase.  Gets you really thinking about the UI from the user's perspective
  • Use native controls for web pages because people know what the controls look like and know what to do with them.
  • Typography is extremely important.  Serif fonts are useful for print/small text sizes (< 14pt); sans-serif is better suited for headers.
  • Black on white is not always readable.  Use a dark gray, #333 or so
  • 1.5em line spacing helps improve readability
  • Whitespace is helpful because it improves readability
  • Blur the design.  Can you still tell what the point of the design is?
  • Designing interfaces follows the same pattern as an Agile methodology: iterations are necessary to build them out appropriately.
  • Tossing the UI in front of a user is a great way to verify usability
  • Watching a user is sometimes the best way to find out if the UI is really working.
  • Paper prototyping can be one of the most useful techniques, and it doesn't cost much in development time.

Overall the session was pretty decent.  A little short on length without the Q&A though. 

Code Mash Keynote #3: JavaScript Will Save Us All

The keynote was given by Eric Meyer.  After a few days of getting up MUCH earlier than I'm used to, I was mostly awake for the keynote.  So here are some notes from it:

  • "How I learned to stop worrying and Love the DOM"
  • Typeface.js
  • Squirrelfish - JS engine in webkit.
  • The canvas tag can do amazing things.  IE doesn't support canvas, but there's a JS library that will convert it to VML
  • dean.edwards.name/ie7 - fixes CSS issues of IE5 & IE6
  • Bluff - JS port of Graff graphing engine from Ruby
  • Web browsers can soon become "Speaking Browsers" in that they will read off the content to the user
  • Microformats are useful, but they're generally invisible to the user.  There's a Firefox plug-in that will pick them up, but it groups all of them on a page together and it isn't always obvious for the user to keep an eye on the bar.
  • Processing.js is an interesting project that makes use of the canvas element
  • Objective J came about as a way to carry Objective C to the browser
  • 280slides.com is a presentation software that's entirely web based
  • The IETF takes the "Innovate first, standardize second" approach, as opposed to the W3C's "Standardize first, innovate second"