Wednesday, November 28, 2007

CAPTCHA foiled by Mechanical Turk

An article on CGI Security about CAPTCHA highlights a very obvious flaw in the whole philosophy of CAPTCHA. For those too lazy to click the link and read the two paragraphs: the article describes how some very inventive worm writers enlist the help of unwitting visitors to pornographic web sites, offering to reveal a pornographic image piece by piece as the visitor breaks a code (which is, of course, the CAPTCHA the worm is currently facing at the site where it is trying to create a bogus account).

I have to concede this is a very clever use of the Mechanical Turk idea. CAPTCHA is designed to separate humans who are genuinely trying to sign up to a site and create accounts from computer-based worms that are trying to create bogus accounts for their writers' evil ends. CAPTCHA actually stands for Completely Automated Public Turing test to tell Computers and Humans Apart, and there lies the flaw: the Turing test is designed to distinguish between humans and computers, but if the computer you're testing can enlist, en masse, large numbers of humans to take the test for it, then the test is rendered redundant.

This raises some very interesting concerns for identity verification. Everyone wants their site to have the smallest possible barrier to entry, and even something as simple as a CAPTCHA can sometimes prove to be a barrier, turning away a small number of genuine users as false positives. So what's the alternative? I'm not quite sure. Maybe something like Windows CardSpace, or a competing identity management protocol, that allows a third-party company trusted by both the provider and the client to validate credentials. Of course, there would have to be overwhelming take-up of the same protocol set by both providers and clients, which seems to me to reduce to a chicken-and-egg kind of problem. Also, the cost of maintaining the system's integrity by the trusted third party would need to be recouped from either the client or the provider. I have to confess I'm not really an expert on identity management, so I'd love to hear others' thoughts.

Web Directions North

Well, it's official: I'm going to Web Directions North 2008. Last year I went to Web Directions South and really enjoyed it; this year I was in Cyprus during Web Directions South, so I'll have to settle for Vancouver. It's a real pity, because I would have loved to have heard Chris Wilson (IE Platform Architect at Microsoft) presenting on moving the web forward. I must say, he is a brave man, as a lot of people in the crowd would not exactly have been Microsoft fans, and would blame IE's standards issues for making their jobs harder, but I think his slide deck, and his audio interview with Kevin Yank from SitePoint, state the dilemma fairly well.

So what am I looking forward to at Web Directions North?

I decided that, considering I am travelling so far for this conference, I should probably attend the workshops, so I've booked myself in for Transcending CSS and Designing Really Usable Websites. As a developer I feel that one of my weaknesses is web design, and as such I think that these workshops will be hard, but rewarding.

As far as the conference sessions go, all of them sound really interesting, but I'll just highlight a few that I am really excited about. Being interested in security, the Ajax Security session is looking good. It will be interesting to see Silverlight and Adobe AIR compared. Given my background in PDA development, I am interested to see where Mobile Web Design & Development is going to go, and given the presenters (Dave Shea and John Allsopp), Where's Your Web At should be very interesting indeed.

 


Speaking on Visual Studio 2008

I have become a big subscriber to the Chris Hewitt school of self-education. I have been working with Chris at Readify for about 3 years now, and I remember back when I first joined he told me, "If you want to learn a new technology, simply book yourself in to do a presentation on it in 2 months' time". It is so true: the fear of making a complete fool of yourself in front of your peers is ample encouragement to get yourself across any topic. I have done this a few times now with technologies like Ajax, Vista, and SQL Server, and I am now doing it for Visual Studio 2008, which has just RTM'ed. I have been playing with the beta versions of VS2008 for a while, and today I am installing the RTM version. I am speaking at the Victoria .Net SIG in December. There are about 5 speakers doing 20-minute segments on "What's new in Visual Studio 2008 ..."; I am doing "What's new in the Visual Studio 2008 IDE".

Check the Victoria .Net SIG web site closer to the date for more details, both the date and place are still up in the air at the moment.

Tuesday, November 27, 2007

ASP.Net and Standards (cont.)

My colleague Damian Edwards pointed me to an article on MSDN (ASP.NET and XML) that explains the behaviour I described in my previous post on ASP.Net and Web Standards. From the article...

"If you submit an ASP.NET Web page to a validation service such as the W3C Markup Validation Service, ASP.NET might render a version of the page that does not conform to XHTML standards. This is because the validator service does not report itself as a browser type that ASP.NET recognizes, such as Internet Explorer or Mozilla. When ASP.NET cannot recognize the browser type, it defaults to rendering downlevel markup, which does not include XHTML-conformant elements and attributes, or features such as cascading style sheet styles."


The article also discusses how to configure ASP.Net using a browser capabilities file to force it to render valid XHTML. I'll have a play with this tonight when I get home.
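The shape of that fix, as I understand it, is a browser definition file dropped into the web application's App_Browsers folder to upgrade the capabilities of the default (unrecognised) browser. Something along these lines; the file name and exact capability values here are my assumption from a first read, so treat it as a sketch to verify rather than a recipe:

```xml
<!-- App_Browsers/Default.browser (hypothetical file name) -->
<browsers>
  <browser refID="Default">
    <capabilities>
      <!-- Render with the XHTML-aware writer instead of downlevel Html32TextWriter -->
      <capability name="tagwriter" value="System.Web.UI.HtmlTextWriter" />
      <capability name="ecmascriptversion" value="3.0" />
    </capabilities>
  </browser>
</browsers>
```

If this works as advertised, even a user agent ASP.Net doesn't recognise (like the W3C validator) should get the uplevel XHTML rendering.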

Cross Site Scripting Detection Tool

Just discovered a very interesting tool from Microsoft to combat cross site scripting vulnerabilities.

The XSS Detect Code Analysis tool has been released in beta; it performs static code analysis to find potential XSS vulnerabilities within ASP.Net applications. I ran it on my standard XSS test application and it detected the standard XSS mistakes. I will say this though: it is still in beta, and when I tried to run it over our real site, the tool managed to crash Visual Studio. I can't wait till it RTMs; I'll be following this tool with a lot of interest.

Saturday, November 24, 2007

ASP.Net and Web Standards

We were having a discussion the other day on our internal tech list about web standards and validation, and I made the comment that I like to treat web standards validation like compiler warnings: in production code, ideally you should not have any, because even though the vast majority of them are benign, and browsers will happily ignore them, they could be hiding 1 or 2 that are going to cause you grief. A colleague of mine, Darren Neimke, challenged me to show any production code of mine that would stand up to this test. So I hurriedly searched around for something and ended up sending him a link to my very-much-work-in-progress site that I set up to play around with some Ajax stuff, scottbalwin.com.au, just to show him that I do try to put my money where my mouth is. Of course I did double-check that the front page validated; in fact I checked a few of the links, and all that I checked validated fine. What I hadn't quite expected was for Darren to take the time to go through all of my links until he found one that didn't. On one of my Ajax playground pages I had 6 validation errors. OK, my bad, so what was I doing wrong? Well, it turns out that 2 of those errors were due to me using a code syntax highlighting control that, in and of itself, produces standards-compliant HTML, but when you wrap it inside a <pre> tag, as the sample that comes with it suggests, you get an error saying a <div> isn't allowed inside a <pre> in XHTML Transitional. OK, simply change the <pre> to a <div style="white-space : pre"> and the problem goes away with no loss of formatting... Yay! But what about the other 4 errors?

What was really weird is that when I did my usual test of opening the page in IE from my dev environment, viewing the source and then pasting the source into the validator, it validated perfectly. So, confident that I'd fixed ALL the issues, I uploaded it to my website. I then used the validator to test the URL, and lo and behold, 4 errors... doh! It took me a while to figure out what was going on, but eventually I realised that some controls in ASP.Net can emit different HTML depending on the user agent that is requesting the page. A quick little test on my own machine confirmed it: if the user agent is IE (or Firefox for that matter), the calendar control emits perfectly standards-compliant XHTML Transitional code, but for reasons best known to a small group of developers at Microsoft, if the user agent is "Unknown", or whatever the W3C validator claims as its user agent, it emits non-standards-compliant HTML. So I am currently trying to figure out the best way to fix this issue so that even unknown user agents receive valid HTML.

I now understand why people who are really serious about standards often end up writing their own ASP.Net controls in place of the standard Microsoft ones.

Oh, and by the way Darren, don't bother checking to see if my blog validates, because it doesn't. That is simply due to laziness: I have just selected a standard template from Blogger, and haven't attempted to beat it into standards compliance. One of these days...

Tuesday, November 20, 2007

Vote below the line campaign (more info)

I found a really good article in The Age that explains just how important your preferences are.

Friday, November 16, 2007

Slow and steady fixes the bug

I just received a notification from Microsoft that a bug I reported in the XML subsystem of SQL Server 2005 in March of 2006 has finally been fixed and will ship with the RTM (I assume SQL Server 2008 RTM). It took them a while, but they eventually got onto it. It was an extremely obscure bug, but it seems I wasn't the only one to see it.
Unfortunately, because I no longer work for that client, I will not be able to verify the fix. I guess that's part of being a consultant.

Thursday, November 15, 2007

Ballarat .Net SIG

Big thanks to Damian for inviting me to speak at the Ballarat .Net SIG. There was a small but responsive group of people, a good venue, and of course pizza and soft drink: all the ingredients for a good user group meeting. I am always amazed at just how much you learn when you speak at these kinds of events.

Sunday, November 11, 2007

Performing at the Hispanic Fiesta

 

Niki & I are performing again, this time as part of the Spanish Club's participation in the Hispanic Fiesta (otherwise known as the Johnston Street Fiesta or the Spanish Festival).

Details

where: Hogar Espanol (The Spanish Club), 59-61 Johnston Street, Fitzroy

when: 3:30 pm, Saturday and Sunday, the 17th and 18th of November 2007

Pose #4

Vote below the line campaign (cont.)

The group voting tickets for the 2007 federal election are now available online at the AEC website, and I urge anyone who has read my previous post on the topic of voting below the line, and is still thinking of voting above the line, to download the group voting ticket for their own state to see just how their preferences are being redistributed. Even if you are going to vote below the line, it is good to have a look at whom the different parties are preferencing. As I take a quick look over the Victorian paper, I can't help but wonder at some of the deals that must have gone on under the hood, or even at the reason some of the parties exist in the first place.

Friday, November 09, 2007

Speaking in Ballarat

I was just doing some research for my presentation next week at the Ballarat .Net Special Interest Group, trying to get some information on running Visual Studio 2008 on Vista (I usually only run beta software on VPCs, so I have no first-hand experience of this as yet). When I clicked on one of the links from my search, I found it was a blog post by Charles Sterling advertising said event, which reminded me that I should probably put in a plug for it on my blog, so that all two people who read my blog (yes, that includes me) know what I'm up to. So here it is:

 

“Vista as a Software Development Platform”
with Scott Baldwin from Readify

&

“What’s new for Web Developers in Visual Studio 2008”
with Dave Glover from Microsoft Australia

When: Wednesday 14th November, 5:30pm for light dinner and drinks followed by presentations from 6:00pm to 6:30pm and 6:40pm to 7:40pm

Where: COMMANDER CENTER, BALLARAT, 1 Ripon Street North

 

I find it funny that the last time I did this very same talk, back in June, it was also in partnership with Dave Glover, but last time he was talking about PowerShell, and this time it's Visual Studio 2008 enhancements.

Tuesday, October 30, 2007

Evidence Based Scheduling

I've always loved Joel Spolsky's common-sense approach to the software development lifecycle, and his latest article on evidence based scheduling is no exception. It takes a realistic approach to quantifying unpredictable elements of team development such as interruptions, meetings and the odd rebuild of your development environment. Estimating is always a big problem, and most developers (including myself) are usually quite optimistic about how long it will take them to write a particular piece of code. This leads to an unending conflict between developers and the managers who want to know when something will be delivered, or how much a feature will cost to create. I think Joel offers a really practical approach to this dilemma, and I am now trying to think of ways to integrate some of these ideas into a TFS project template.

The only thing I don't think Joel covered is how to go about predicting for the very first iteration, the very first time you start using EBS. I think you probably need to pick a number, or range of numbers, to seed your velocity history. You probably need to be a bit pessimistic to begin with, but by the next iteration you'll have some better numbers to work with.
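To make the idea concrete, here is a rough sketch (in JavaScript purely for illustration; Joel's article is language-agnostic, and all the numbers below are made up) of the Monte Carlo simulation at the heart of EBS: for each round, divide every estimate by a randomly chosen historical velocity (estimate / actual) and sum, giving a distribution of possible totals rather than a single date.

```javascript
// Monte Carlo sketch of Evidence Based Scheduling.
// velocities: historical (estimate / actual) ratios; < 1 means we under-estimated.
function simulateShipDates(estimates, velocities, rounds) {
  const totals = [];
  for (let i = 0; i < rounds; i++) {
    let total = 0;
    for (const est of estimates) {
      // Apply a random historical velocity to this estimate.
      const v = velocities[Math.floor(Math.random() * velocities.length)];
      total += est / v;
    }
    totals.push(total);
  }
  totals.sort((a, b) => a - b);
  return totals; // sorted distribution of possible total hours
}

// First iteration: no history yet, so seed with a pessimistic range (my suggestion above).
const seedVelocities = [0.5, 0.6, 0.7, 0.8];
const estimates = [4, 8, 2, 16]; // task estimates, in hours
const outcomes = simulateShipDates(estimates, seedVelocities, 1000);
// The median outcome is a reasonable "50% confidence" total.
const median = outcomes[Math.floor(outcomes.length / 2)];
```

After a real iteration you would replace the seed velocities with measured ones, and the distribution tightens up on its own.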

Saturday, October 27, 2007

Vote below the line campaign

Who do you prefer

Yes, it's that time again. The 2007 Australian federal election campaign is in full swing, and for the vast majority of Australians it will come down to a choice between the boxer in the red shorts in the right corner, John Winston Howard, and the boxer in the blue shorts in the... other right corner, Kevin (I speak Mandarin) Rudd. Now, I'm under no illusions here. Although I am an active member of The Australian Greens, I realise that the next Prime Minister of Australia will be either Rampaging Rudd or Horrible Howard. However, the balance of power in the Senate is up for grabs, and that is where I think the Greens can make a real difference. Beyond the Democrats' failed attempts at "keeping the bastards honest", I think The Greens can not only keep them honest, but also keep them on track on important issues such as climate change, social justice, transparency, and workplace relations. The subtle thing a lot of people don't realise is that the only reason The Greens don't already have more influence in the Senate is the above-the-line preferential voting system, and the preference deals that go on behind the scenes that ordinary voters are completely oblivious to.

I have blogged about this before, when the Family First senator Steven Fielding was elected to the Senate in 2004 purely on Labor preferences, even though he received little more than one fifth of the primary votes that the Greens' lead Senate candidate, David Risstrom, received. All the Labor voters I know were appalled when they realised they were responsible for this miscarriage of democracy. Some have even vowed never to vote 1 for Labor again; others have decided to vote below the line and distribute their own preferences.

 

Now, I know that The Greens are no strangers to preference deals, and there are some in the party, like me, who are frustrated when deals are done. I personally think the whole preferential voting system needs overhauling, and have said so in the past at branch meetings. That issue is an entire blog post on its own; in this post I want to concentrate on what can be done this election to get who we really prefer into the Senate.

 

Now don't get me wrong, I don't resent Family First coming onto the political scene. I think that one of the major problems with the Australian political climate is that there is a lack of representation, and Family First do represent a valid cross-section of Australian society who are entitled to their say. In fact, my parents and my sister all support and campaign for Family First. It's just that Family First don't have the numbers and support to legitimately hold any seat in the Senate in their own right. So what can we do?

 

I personally think that people need to stop voting above the line in the Senate, and to analyse their how-to-vote cards for the lower house as well, to determine if they really want their preferences to go the way their party of choice is suggesting. I concede that the preferential voting system is very difficult to understand, and it is so much easier to put a single digit in the section above the line, but you need to realise that this gives the party you voted for the power to redistribute your preferences as they see fit. I don't trust any party, not even The Greens, to redistribute my preferences. To aid understanding of the preferential voting system, the Australian Electoral Commission has educational material, and if you are still thinking about voting above the line, then be sure to check out the AEC's 2007 election website closer to the election, when they will post the group voting tickets for the Senate that show how your preferences will be redistributed if you choose to vote above the line. It is instructive to look at the AEC's 2004 election website, and in particular the Victorian group voting ticket, to see how the Labor party preferenced Family First above The Greens.

 

I would like to use this blog post to start a campaign to get as many people as possible to vote below the line in the forthcoming election, so that the behind-the-scenes wheeling and dealing over preferences is rendered meaningless. Please link to this post, or write your own blog post urging people to vote below the line, and let your preferences be known.

 

Saturday, October 06, 2007

Performing on the fringes

Niki and I have been asked to perform some tango in a production by one of our close friends at this year's Melbourne Fringe Festival. The show is called Movimientos, and there are some really great dancers involved, so it is worth checking out.

Pose #5

 


Saturday, September 15, 2007

On Holidays

I'm currently on holidays in Cyprus. This is the time when my non-geek blog tends to get a bit more love than my geek blog. Although I do intend to finish off a couple of geek blog posts while on holidays, I will be writing about all my adventures on musings of a morbid mind.

Tuesday, September 04, 2007

Defense in Depth - (Part 2)

In my previous post I described a three-pronged approach to software security that is summed up by "Constrain, Reject and Sanitize". In this article, I'll discuss the "Constrain" part in some more detail.

 

Constrain

Developers are usually focussed on what their end users are going to want to do with the system, and this generally does not (in most cases at least) involve launching XSS or SQL injection attacks against the system; in fact, the target audience for most software generally knows nothing of these things. So when developers design a piece of software, they are looking at it from the perspective of a benign user who just wants the software to work. It goes without saying, however, that the benign user doesn't want their personal details divulged to hackers. So there comes a time when the developer of the system has to consider what a hacker might want to "inject" into their software. There is a general principle that ALL user input should be considered evil until proven otherwise. The first part of this process is to constrain ALL user input.

 

There are a number of ways that input can be constrained, based on the type of information you are expecting.

1. If the field value is a single or multiple selection from a fairly small, well-defined set of values, only allow the user to choose from this set of values. This can be done using UI elements such as a list of radio buttons, a group of check boxes, a list box or a drop-down list.

2. If you require more freedom than this, but the data has a strict pattern that you can check for, then ensure that you validate the data the user enters. This can take a number of forms:

a. Data of a particular type (i.e. decimal / integer / date) should be cast to that type as soon as possible, and the user notified if the cast fails.

b. Valid ranges and lengths of all data should be enforced, e.g. an age field may be required to be > 18 but < 130, a name field may be limited to 30 characters or fewer, etc.

c. Regular expressions should be used for things like email addresses, post codes, Tax File Numbers, etc.
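As a sketch of the casting and range checks above (in JavaScript purely for illustration; the field names and limits are mine, and in ASP.Net you would lean on the built-in validator controls plus server-side checks):

```javascript
// Constrain an "age" field: cast early, then enforce the valid range.
function validateAge(raw) {
  const age = Number(raw);               // cast as soon as possible
  if (!Number.isInteger(age)) {
    return { ok: false, error: "Age must be a whole number." };
  }
  if (age < 18 || age > 130) {           // enforce the valid range
    return { ok: false, error: "Age must be between 18 and 130." };
  }
  return { ok: true, value: age };
}

// Constrain a "name" field: enforce a maximum length.
function validateName(raw) {
  if (raw.length === 0 || raw.length > 30) {
    return { ok: false, error: "Name must be 1-30 characters." };
  }
  return { ok: true, value: raw };
}
```

The key point is that the cast and the range check happen before the value is used anywhere else.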

 

This validation MUST, at the very least, occur on the server side. There is a trend to validate client side using JavaScript, and this can make a big difference to the responsiveness of your application, and even to the load on the server; however, it is a serious mistake to have only client-side validation. In the first place, the user may, for various reasons, have disabled JavaScript; in fact, if you are being hacked and you are validating using JavaScript, then the first thing the hacker will do is turn off JavaScript. Secondly, because of the nature of the web, you can never be sure exactly what you are talking to. It may say that it is IE in the HTTP headers, but in reality it could well be a program written by the hacker specifically for the purpose of fooling your application.

 

This kind of validation is called "white list" validation, because it looks at the problem from the point of view of a set of allowable formats for the input data; anything that does not satisfy this format is rejected outright. For example, if you have a post code field, no one is going to be able to write any kind of attack that contains only 4 characters, all of which are numeric (0-9). Similarly, it is not possible to form an attack that looks enough like an email address to validate against a decent email-checking regular expression.
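The post code and email examples above, sketched as white list regular expressions (the email pattern here is a common simplified one, not the last word on email validation, which is a rabbit hole of its own):

```javascript
// White list validation: the input must match the allowed format exactly.
const POSTCODE = /^[0-9]{4}$/;                                  // Australian post codes: exactly 4 digits
const EMAIL = /^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$/; // simplified email shape

function isValidPostcode(s) { return POSTCODE.test(s); }
function isValidEmail(s)    { return EMAIL.test(s); }

// No XSS payload survives a 4-digit whitelist:
isValidPostcode("3065");                       // true
isValidPostcode("<script>alert(1)</script>");  // false
```

Note the anchors (^ and $): without them the pattern only needs to match somewhere inside the input, and an attacker can smuggle a payload around it.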

 

The issue is, though, that some fields don't really lend themselves to this form of validation. For instance, description fields generally need to be large text fields that can contain virtually any type of character. Even so, I would suggest you attempt to define a set of allowed characters and limit what the user can type into these fields. This article is not specific to web development; however, I do want to say something specific to web development here. Sometimes you want to give your users the ability to enter rich HTML content. This is all well and good, but of course it makes constraining the input quite difficult. You have to allow tags, which means that you are potentially opening yourself up to cross site scripting attacks. You might say that we can just reject any <script> tags, and we will get on to the rejection phase in my next post, but if you spend a bit of time looking at the XSS Cheat Sheet you'll very quickly realise that there are literally hundreds of ways to phrase an XSS attack. Also, as the web evolves, and browsers implement new standards (and new proprietary tags), the list of possible attack vectors grows without bound, so a site that may have been safe in the IE5 days may, without any extra work, be vulnerable if the end user is using Firefox 2.0 or IE7. This is a difficult problem, and one solution I have seen in the past is for the rich editor control to use its own format for storing the rich content. The idea here is to have a format that allows for a supported subset of HTML. Content is stored in the backend in this format and only transformed into HTML when it needs to be rendered. This is still white list constraining, and it works because the format the control stores its data in does not have the syntax to support tags it doesn't know about.

 

The final thing I want to do is give some idea of where this constraining should be done. The defence in depth paradigm requires that input be checked at every boundary. So the first boundary would be the client side, i.e. when the user first enters the data into the UI. Secondly, the server side should validate the instant it receives any data. From this point onwards, the data should be checked at the boundary of every layer until it is finally placed safe and secure into the database. Often, if your application is designed well, you should be able to re-use validation logic between layers. This ensures that if the user finds a way to inject data at a lower level, the data is still constrained.

 

In summary, constraining input is your first (and in my opinion best) line of defence against potentially malicious users. Next I will discuss the rejection phase.

 

 

Sunday, August 26, 2007

Virgin Blue - Sexist Policy

I travel Virgin Blue quite frequently, but today, on my way home from a visit to Sydney, I ran foul of one of Virgin Blue's policies, one that I find overtly sexist.

I had boarded the plane and was sitting in the very last row, in the aisle seat. I was thinking I must have been extremely lucky when it appeared that no one else would be sitting in my row. It meant I could get out my laptop and not have to worry about bumping elbows with the person next to me as I wrote a blog post I am currently working on. Just as they were closing the doors, one of the stewards came down the back with a young boy who looked about 8 years of age, and sat him in the window seat. Still good: the middle seat was still vacant, so I would still not be bumping elbows with anyone. It appeared that the woman who was supposed to be sitting in the middle seat had not shown up. It was at this point that the stewards asked the woman sitting in front of me to change to the middle seat between me and the child. When I looked quizzically at the stewardess, she revealed that "it is Virgin Blue policy to ensure that a male is not sitting next to an unaccompanied minor". I couldn't believe it!

I was not given any reason for the policy. I can only assume that, with all the media hype around child abuse, Virgin Blue management feel it is safest to treat all men as potential child molesters, and the obvious assumption, based on a 1920s understanding of gender, is that it is better to have a female sitting next to a child, because obviously a woman would never abuse a child.

I expressed my indignation to the stewards, not that it really bothered me who I sat next to on my hour-and-a-half flight from Sydney to Melbourne, and told them I thought it was sexist. One of the male stewards agreed with me and encouraged me to file a complaint form, which I did.

When all is said and done, it may seem like a silly little thing to complain about, and not something I should get too upset about, but what bothers me is what it says about the kind of society we are becoming. What this kind of policy does is make males feel uncomfortable around children. We are already at a point where the first reaction when an adult male hugs or shows any kind of affection towards a child in public is one of suspicion. I fear that attitudes like this will feed into the already stark gender imbalance in our education system, and into other social activities involving children, meaning that a generation of children will grow up not quite knowing how to have healthy relationships with adult males.

Saturday, August 18, 2007

Defense in Depth at Tech Ed

Last week I attended Tech Ed Australia, and as usual at Tech Ed, I found it impossible to stick to one track, instead picking and choosing interesting bits and pieces from almost all of the tracks. One thing that I am starting to get really interested in is security, particularly code security. One thing that I don't think a lot of developers are really aware of is that you can have the best infrastructure security money can buy, you can have firewalls and DMZs set up so tight that no unauthorized traffic can possibly get through, but it takes just one bad line of code to blow a hole so wide in your defenses that it renders these measures meaningless.

The buzz these days is about defense in depth, which roughly translated means having multiple layers of checks and safeguards, so that if a bad line of code upstream lets an attack through, there is a high probability that code further down will pick it up and dispense with it appropriately. I attended a number of security seminars at Tech Ed, and I'd like to expound on a concept that I have been using in my current role; it was really good to hear it codified into three basic development practices at Tech Ed.

 

The three pillars of this approach to code security are Constrain, Reject and Sanitize, and I want to explain in this series of articles how these concepts fit in with day-to-day development, and how they can form part of a defense in depth approach to your products.

 

I'll start in this article by explaining what the three terms mean. I'll then go on and spend an entire article on each of these three concepts, explaining how they can fit into the software development life cycle.

 

Constrain

At the root of almost all code-level attacks is user input. SQL injection exploits the single quote (') special character in the SQL language; cross site scripting (XSS) exploits the web browser's propensity to render HTML tags. Generally speaking, the developers of the system never intended the input to be in this format in the first place. The constraining of data should be done at the point where the user is entering the data into the system. This is commonly referred to as "white list" checking.

 

Reject

There are certain known common attack vectors that should be rejected outright. If you are accepting input from a web site comment field, and the user types in "<script>...</script>", chances are you are not really going to want that user's comment, and the safest thing to do is to reject it and tell the user that the input was not acceptable. The rejection of suspect data can be done at multiple phases, such as code layer boundaries, or even as an aspect of the way your system works. This is commonly referred to as "black list" checking.
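A minimal sketch of that black list check, in JavaScript for illustration (and deliberately incomplete: the patterns here are a tiny sample, which is exactly why black lists alone are never enough):

```javascript
// Black list rejection: refuse input containing known-bad patterns outright.
const KNOWN_BAD = [
  /<\s*script/i,      // script tags, including whitespace tricks like "< script"
  /javascript\s*:/i,  // javascript: URLs
  /on\w+\s*=/i,       // inline event handlers such as onerror= (crude: will also flag "month=")
];

function rejectSuspectInput(comment) {
  // true means the input should be rejected and the user told it was not acceptable
  return KNOWN_BAD.some((pattern) => pattern.test(comment));
}
```

Because the list can only ever cover attacks you already know about, rejection works best as one layer among the three, not on its own.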

 

Sanitize 

Sanitizing the user input is what I like to call the last line of defense. This is usually done just before user input is presented to the attack target (i.e. SQL Server in the case of SQL injection, or the browser in the case of XSS). Sanitizing information is usually done by escaping potentially malicious data before presenting it. As an example, in the case of XSS, where the browser is the attack target, a string such as <script>...</script> is rendered harmless if it is HTML-encoded to &lt;script&gt;...&lt;/script&gt;.
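That encoding step, sketched in JavaScript (in ASP.Net you get this for free via HttpUtility.HtmlEncode / Server.HtmlEncode, so roll your own only as a learning exercise):

```javascript
// Sanitize by escaping the characters HTML treats as markup.
function htmlEncode(s) {
  return s
    .replace(/&/g, "&amp;")   // must come first, or we double-encode the other entities
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

htmlEncode("<script>alert('xss')</script>");
// → "&lt;script&gt;alert('xss')&lt;/script&gt;"
```

The browser now displays the text of the script rather than executing it, which is exactly what "rendering the attack harmless" means here.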

 

Putting them all together

By now you may be able to see the idea of "defense in depth" starting to form. In the first place, we constrain, only allowing data that we expect to be entered. Secondly, we reject any known attack vectors, so even if we can't constrain every field in our application (or a lax developer forgets to constrain one), we are at least protected against known attack vectors. Finally, if constraining and rejecting fail to pick up a potential attack vector, then sanitizing the output will render the attempted attack harmless.

Monday, August 06, 2007

Readify Developer Network Launched

An ambitious project by Readify is about to start up. The Readify Developer Network gives Readify staff the opportunity to learn from and present in front of their peers, and it is open to anyone who is interested.

I'll be speaking on the Ajax Control Toolkit in Melbourne on the 1st of November.

Saturday, July 14, 2007

Developing on Vista - Part 3

Why Move to Vista?

Presenting at Victoria .Net SIG

My initial shock at how few people are actively developing on Windows Vista has been tempered by the realisation that I am involved in a very different culture here at Readify, and also the knowledge that Windows Vista is not as revolutionary as it should have been.

 

The Readify Culture

At Readify we market ourselves as being "Technology Readiness Experts", and as such we are always playing with the absolute latest technologies that are made available to us, often way before they are released commercially, and sometimes even before they are stable enough to be used in production environments. This creates, out of necessity, a really vibrant culture of communication. We have a great technical mailing list where we are all avidly asking and answering technical questions, and discussing the ins and outs of the latest software we're grappling with. As much as I am excited to try out the latest things, I am also fairly risk averse. Being a part of the dynamic culture at Readify has given me the confidence to go out on a limb and install Vista, knowing that others have gone before, and that help is only ever an email away. I sometimes wonder whether, if I weren't working for Readify, I'd still be developing on Windows XP.

The act of installing Vista has also meant that my own knowledge and experience has increased rapidly, because I am the sort of person who likes to attempt to find the solutions to his own problems before sending out an SOS distress signal. This is why I felt confident standing up in front of a group of 50 or so of my peers and telling them that they can develop on Windows Vista.

 

Vista's Missed Opportunities

As with most commercial products, there are real world pressures on the development cycle that tend to drive product development far more strongly than any utopian, computer-theory ideal about how things should be, and Vista is by no means an exception.

For me there are two major features, dropped from Vista because of commercial realities, that would have made the imperative to upgrade even stronger: firstly, the fact that Vista was originally supposed to be a fully managed operating system, and secondly, the new file system, WinFS. I guess it's no use crying over spilt milk, but I do remember feeling a sense of disappointment as these features were dropped from Vista.

Also, in their haste to get Vista out the door, Microsoft released a product that, in my opinion, was not as stable as it should have been. It is only now, with many updates installed, that I feel Vista is finally starting to behave to my satisfaction.

 

So Why Bother?

I guess so far I really haven't been making that good an argument for people to upgrade to Windows Vista, and even less of an argument for people to use it as their primary development environment. To be honest, we've had some consultants at Readify who, after trying Vista, have decided to go back to developing on Windows XP. I'm no Microsoft Evangelist, and I wouldn't recommend that everyone use Vista just for the sake of having the latest and greatest, but I think that there are some very good reasons to make the switch.

 

UI Enhancements

There has been a lot of work done on the way users interact with the desktop environment and how windows are displayed to the user. Some of these features are merely eye candy, and others won't even work unless you have a decent graphics card, but if you do have the hardware, the way in which you interact with the desktop will be enhanced, and you'll find yourself more productive for it.

 

There are some really simple ideas, such as being able to click on the representation of your application after you press ALT+TAB, which I always thought was missing from XP, and which I sorely miss when I am forced to use XP at a client's site or on a friend's machine.

Integrated Search

Desktop searching is nothing new to Windows, but in Vista, Microsoft have made some significant improvements to the performance of the search, and they've integrated it into the OS. Probably my favourite feature of Vista is the fact that search is available from the start menu.

When you click on Start (or hit the start menu key), the start menu opens and places the focus in the search text box. You can then start typing, and Windows dynamically finds the best match in your Programs, Files and communications.

Vista Search

As you can see from the image above, you no longer have to remember where the Microsoft Office installer placed Microsoft Word, because you can just click Start and type "word". You don't even have to wait for the search results to return. Vista automatically places the best match at the top and selects it, so if you hit the "Enter" key immediately after you type "word", Microsoft Word will start to run. This saves so much time looking for programs that you don't use often enough to place on the quick launch menu.

 

Better Security

As discussed in my previous post, I believe UAC is a step in the right direction. A lot of people are frustrated with it, but if people learn to live with it and use it correctly, it will benefit them. Given that ideally people should NOT have been running as an Administrator when they're only running user applications anyway, Vista makes it so much easier to run as a normal user that I personally don't want to switch back to developing on XP as a normal user.

 

Better System Maintenance

There are a lot of system maintenance enhancements that generally go unnoticed (as all good features should). The fact that, by default, a hard disk defragment is scheduled to run once a week means that your hard disk remains optimized without you having to wait for a defrag. There are many such background optimizations aimed at keeping Vista stable and performing at its peak.

 

A Word about Performance

Many people have cited performance as a major reason not to upgrade (or in some cases to downgrade to XP), and this is a valid concern. It's true that older machines struggle with Vista, and that things sometimes feel slower on Vista than on XP. My personal experience has been mixed: it started out OK, but as I installed more software it did get to a point where it was a bit sluggish. I simply went in, had a look at all the services that were running, and switched off any that I didn't need, and my laptop is quite responsive now. There are also some really cool enhancements in Vista specifically aimed at performance. For instance, ReadyBoost enables you to plug in a high speed USB 2.0 flash memory device to act as another level of caching before data is required from the disk.

 

Leading Edge

At Readify we like to be on the "bleeding edge" of technology, trying out things before they are released to the masses. Not everyone is this keen to try out pre-release software, but whether you like Windows Vista or not, Vista has launched and it is here to stay. Yes, there are some problems with it, but there will be improvements as time goes on, and hopefully some of the things we find frustrating now will be resolved in service packs. As IT professionals, people will look to us for guidance, and we need to be able to direct them. I would strongly suggest that if you are thinking of upgrading your development PC, this is the perfect opportunity to take on Vista.

 

 


Friday, June 22, 2007

Developing on Vista - Part 2

To UAC or not to UAC?

Presenting at Victoria .Net SIG

When I presented on this at the Victoria .Net SIG meeting a week ago, I wasn't quite sure of the general understanding amongst the group about User Account Control (UAC), so I asked the question, "Who here knows what UAC is?", and about 5 (out of 50) people put up their hands. I had intended to launch straight into my argument, but with the vast majority of people not knowing what UAC was, I had to explain a bit about UAC first, so I'll do the same here.

If you are already familiar with UAC, feel free to skip over this explanation and go on to Developing with UAC.

 

What is UAC?

You may have seen the very funny Mac vs PC advertisement that makes fun of UAC, although it is a bit of an exaggeration. Mal-ware has been a serious problem for Windows (as Apple are not backward in pointing out), and the reasons mal-ware is such a problem stem from the way in which the vast majority of Windows PCs are used. By default, when you create a new user on a new Windows-based machine, they are set up as an Administrator (i.e. they have unfettered access to any part of the system: files, registry, hardware, etc.). This means that, unlike Linux or Unix based operating systems, you do not need to log in as a user with higher permissions to install software or make system-level changes. That significantly lowered the barrier to the general population being able to use PCs, and is part of the reason for the success of Windows as an OS in the home user market. It has also meant a lower barrier of entry for mal-ware, which is why we have such a problem. In a Unix or Linux environment, if a user accidentally runs a virus, or if a piece of mal-ware infiltrates the system through a user application, the most it can do is infect files that the user has control over. It won't infect any system files, because a Unix/Linux user would never dream of running as an administrator when performing normal tasks.

It's not as though Windows XP forces you to run as an Administrator; in fact, in many corporate environments that have a dedicated IT department controlling their network, users are configured to run as normal users on their own machines. Tasks like installing software and configuring networks and drivers are generally handled by the IT department using various remote administration technologies. This makes for a much more secure corporate environment, and means that users can't install malicious software accidentally.

Ideally everyone should be running as a normal (non-administrative) user when doing normal day-to-day activities, even if the computer is their own personal home PC. The reason people don't do this is twofold: firstly, because it is the default when installing a new operating system, and secondly, a lot of applications have been written for Windows that, for no good reason, require Administrative access to system resources. UAC is Microsoft's first attempt at fixing this problem.

In Vista, by default, when you create an Administrative user, Vista will generate two separate user tokens: one that has Administrative access to the system, and another that has normal user (restricted) access. When the user is logged into Vista, by default the system uses the token associated with the normal user, but if an application attempts to do something that requires elevated privileges, Windows Vista will warn the user with a dialog asking if they started the program, and if it's OK to use the Administrator token to perform the operation. This means that if malicious software tries to infiltrate the system without being detected by the user, as soon as it attempts to do anything that requires administrative access, its cover will be blown and the user alerted to what's going on.

As with many things in Windows, you can actually turn UAC off completely; however, this just puts you back into the position of running Windows XP with Administrative privileges. A better approach, if an application requires administrative permissions (and really the only types of applications that should are system utilities performing low-level administrative tasks, not your general run-of-the-mill user application), is to use the "Run As Administrator" option to start the application. This means that you give that application permission to run using your administrative token, while everything else uses your normal user token, protecting you from malicious software.

Now I'll be the first to admit that UAC isn't perfect, and especially as a developer, I do find myself getting a lot of UAC warnings as I perform certain tasks, but I personally think it's a step in the right direction. I also believe that for most normal users, these UAC warnings will happen so infrequently that they shouldn't be a problem.

 

Further reading on UAC:

 

Developing with UAC

Visual Studio 2005

When Vista first came out and the early adopters in the development community first started using it as their primary development platform, there were many issues. One thing that did happen when you attempted to run Visual Studio as a normal user was that it would display a warning saying "You should run Visual Studio as an Administrator". In fact this was the official line from Microsoft, and as far as I'm aware still is. I also believe that this is WRONG! Service Pack 1 and the updates for Windows Vista have fixed a lot of problems, and it is now possible to run Visual Studio and perform MOST of the tasks you do as a developer in normal user mode. The majority of things work just as they did on Windows XP. Developing console applications, WinForms applications, WPF applications, and many other types of applications DOES NOT require you to run Visual Studio as an Administrator. However, there are some scenarios where you may still need to run Visual Studio as an Administrative user.

Most of the problems revolve around trying to debug into processes that you don't own. For instance, if you are developing an ASP.Net application and you are using IIS7 instead of the Visual Studio Development Web Server (Cassini), then debugging requires you to attach to the w3wp process. Not surprisingly, this requires Administrative privileges, and currently there is no way from within Visual Studio to elevate those permissions on the fly, so if you are intending to do this, you'll need to start Visual Studio as an Administrator.

Alternatively, you could use Cassini to host your application while in development. Cassini quite happily allows you to attach and debug into it, as it is started by the same user that started Visual Studio. It also has the advantage of allowing the "edit and continue" feature of Visual Studio debugging (something IIS7 or IIS6 can't do), and it is generally a little faster. Cassini will behave identically to IIS7 in the vast majority of scenarios; however, there are some deficiencies. It can't use the HTTPS protocol (in this case I usually find myself adding a configuration setting to determine whether I am in development or production, and switching to HTTP when in development), and it can't emulate subdomain scenarios properly, so if you need to test Single Sign On scenarios across subdomains, you will need to use IIS7. There are also some other subtle differences between the way Cassini processes the ASP.net pipeline and the way IIS processes it.

I am a firm believer in testing your application as close to your final production environment as is practicable, so I do recommend that final-stage development testing is done against IIS, and not Cassini. But you should only need to debug using IIS7 if you find a problem that occurs only when the application runs under IIS7 and does not occur under Cassini.

Another thing that I have found to be problematic is installing assemblies into the GAC. Installing assemblies into the GAC is by its very nature an administrative task, and requires elevated privileges. I have seen many projects that, as a post-build step, automatically deploy the newly created version of a shared assembly to the GAC. Obviously, if you are running as a normal user, this post-build step will fail, thus causing the build to "fail". Now, personally, I would question the need to install libraries to the GAC during development, and would encourage development teams to look at ways of privately deploying shared libraries during the development phase instead. Adding a probing element, or a codebase element, to the application's configuration file will allow you to share common assemblies amongst applications during development. However, if you insist on installing libraries to the GAC in a post-build step, then you will have to run Visual Studio as an Administrator for that project to build.
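As a sketch of the private-deployment approach, a probing element in the application's configuration file tells the runtime to search additional sub-folders (relative to the application base) for assemblies; the "SharedLibs" folder name here is just an example:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <!-- Probe the SharedLibs sub-folder (an example name) for shared
           assemblies instead of installing them to the GAC during development -->
      <probing privatePath="SharedLibs" />
    </assemblyBinding>
  </runtime>
</configuration>
```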

There are some other minor glitches left over as a result of Visual Studio 2005 not quite being Vista-ready (see Issues running Visual Studio 2005 on Vista as a normal user). Even running Visual Studio 2005 with elevated permissions, there are still some minor problems on Vista (see Issues running Visual Studio 2005 on Vista With Elevated Privileges), but these issues occur infrequently enough, or there are adequate workarounds, for them not to be a problem (at least in my experience).

 

SQL Server 2005

When Vista first hit the streets, SQL Server 2005 Service Pack 1 had already been out for some time, but unfortunately there were still lots of incompatibilities with Windows Vista. Most of us were forced to run SQL Server Management Studio as an Administrator to tackle some of these issues. Service Pack 2 fixes a lot of these problems, and certainly after running the user provisioning tool there is no longer any need to run SQL Server Management Studio as an Administrator. There are still some issues, mainly around Microsoft Visual Studio for Applications (VSA) used in SSIS script tasks, but beyond this there are few significant problems.

See Windows Vista Considerations in the Service Pack 2 Readme document.

 

Why not just turn UAC Off?

Many developers argue that they are experienced computer users and as such feel somehow immune to mal-ware; besides, they run anti-virus software, so they are protected against malicious software attacks. They also argue that they constantly need to perform tasks that require administrative privileges, and therefore feel that turning UAC off altogether will save them heaps of time and put an end to all the UAC warnings.

Although it's dubious, let's assume for a moment that the first argument holds some water. I hope by now I have been able to convince you that you don't need to run Visual Studio as an Administrator all that often. I would also argue that the effort of clicking "Continue" from time to time on UAC dialogs, when you do need to perform administrative tasks, is not going to cost you that much time in the grand scheme of things.

Another argument to keep UAC switched on is put very eloquently by Ian Griffiths in his post UAC: Don't be part of the problem. Ian argues (as I did in my explanation of UAC) that part of the reason people feel they have to run as an administrator on Windows is that there are so many end user applications that require Administrative privileges for no good reason. He then goes on to argue that the reason so many applications require elevated privileges is that they were written by developers who were running as Administrators on their own machines at the time of development, and were completely unaware that the code they were writing would not run without elevated privileges. He quite rightly says that this sort of behaviour in the Linux/Unix/OS X development community would be frowned upon, and the resultant applications would just not get used. Please DO NOT turn off UAC before reading Ian's article.

I would also argue that always running Visual Studio as an Administrator is just as bad (at least from the point of view of Ian Griffiths' argument) as turning UAC off altogether, because when you debug your applications, they will always be launched with elevated privileges, and you will never see potential normal-user problems.

I come from a position of having developed as a normal user on Windows XP, and it is so much easier to develop with UAC on Vista than as a normal user on Windows XP. It is also much more secure. If you were developing as a normal user on Windows XP and needed to run something with elevated privileges, you would have to use the "Run As" command, which would require you to type in your administrator password every time. The more times you are forced to type in a password, the more chance there is of over-the-shoulder credential theft. UAC simply requires you to validate that it was you who invoked the task that required the elevation of privileges.

 

Tips to help you stay on the wagon

Here are a few tips to help you run on Windows Vista and navigate the potential problems of UAC.

  • Add Visual Studio 2005 to the quick launch menu

Quick Launch

This not only makes it quick and easy to find, but on the rare occasions when you do need to run Visual Studio as an Administrative user, you can simply right click the icon and select "Run As Administrator".

Run As Administrator

  • If you find yourself needing to perform lots of administrative tasks, simply run a command prompt with elevated privileges and keep it open as you develop. Anything you run from this command prompt will automatically have administrator privileges, and you won't get asked to confirm the elevation of privileges.

I am sure there are a lot of people out there developing on Vista who have come up with some really good strategies, and I'd love it if you could leave any other tips or suggestions as comments on this post.

 

Conclusion

If there is one thing I hope you take from this post, it's that you don't need to turn UAC off to develop on Vista, and that you don't need to run Visual Studio 2005 as an Administrator all of the time.

 

Links

 


Wednesday, June 20, 2007

Developing on Vista - Part 1

As promised, I am putting together a series of articles around setting up your development environment on Windows Vista. I have been using Visual Studio 2005 and SQL Server 2005 (amongst other development tools) on Windows Vista since November 2006, and I must admit that the experience hasn't been all plain sailing.

Pasha Bulker 2

However, with the release of service packs for both Visual Studio 2005 and SQL Server 2005 that address some of the problems, the situation now is much better than when I started. There has also been a lot of misinformation disseminated about how Visual Studio 2005 and SQL Server 2005 interact with UAC, leading to some confusion about how we should run them.

In Part 1, Installing the toolset, I hope to give some guidance around getting your development environment installed and running. In Part 2, To UAC or not To UAC?, I'll explain the issues around User Account Control (UAC) in some detail, and hopefully convince you that you don't need to run Visual Studio 2005 as an Administrator all the time. In Part 3 I'll attempt to answer the question Why Move to Vista?, and in Part 4, Tidying up, I'll discuss what's not supported, give some general tips and tricks for developing on Windows Vista, and link to some further reading that might be useful.

Installing the toolset

As developers we all have various tools and utilities we just can't live without. I am a .Net developer, and as such I wish to limit this discussion to the Visual Studio 2005 and SQL Server 2005 toolset. I will talk about a few of the other tools that I have had dealings with, but it will by no means be a complete list. I encourage anyone who has experience (good or bad) with a particular tool on Windows Vista to leave a comment on this post.

Choose a Vista

The first step when taking the plunge into Windows Vista is to decide which SKU of Windows Vista is appropriate. The good news is that Visual Studio and SQL Server are supported on all SKUs of Vista. There are, however, some issues when running Visual Studio 2005 on versions of Vista that don't support Active Directory, so my general advice is to choose one of the following versions.

Order of Installation

Since I started using Visual Studio 2005 and SQL Server 2005 back in January 2005 (yes, that's right, almost a year before they RTM'ed), I have installed the toolset many times, and have had some success as well as some pain. I have developed the following order of installation based on this experience, and also on my experience since starting to use Vista (November 2006).

  • Set up IIS 7
  • Install SQL Server 2005 Service Pack 2
  • Install Visual Studio 2005 Service Pack 1 With Vista Updates
  • Install any other tools that you can't live without.

I will go through these step by step.

Set Up IIS7

By default, IIS7 is not installed when you first set up Windows Vista. This is part of Microsoft's attempt to reduce the potential attack surface of the base OS and make it a more secure environment for everyday users. However, if you're intending to do any ASP.Net development, or want to run SQL Server Reporting Services, you'll need to install it and configure it appropriately.

 

To install IIS7, go to Start->Control Panel->Programs and Features, and select "Turn Windows Features on or off". In the "Windows Features" dialog, select "Internet Information Services", and ensure that you enable the following features as a minimum.

  • IIS 6 Management Compatibility
    • Without this, you can sometimes get cryptic errors about not having Microsoft FrontPage Extensions installed when performing certain tasks
  • .Net Extensibility
  • ASP.Net support.

Figure 1 shows how this might look.

 

IIS7 Setup

Figure 1

 

Install SQL Server 2005

Obviously you need to decide what SKU of SQL Server you want to use. For developers this usually comes down to a choice between SQL Server 2005 Developer Edition and SQL Server 2005 Express Edition. I personally prefer to run SQL Server Developer Edition in my primary development environment because I often use features that are not available in the Express version. This does not mean that you can't install SQL Server Express as well if you need to develop against it, but I would suggest installing the Developer Edition first. So the sequence would be...

User provisioning allows you to add Vista users to the sysadmin fixed server role; make sure you provision the user you develop under.

* Be careful when performing the user provisioning immediately after the Service Pack install. Because the service pack requires you to stop ALL SQL Server services, and doesn't re-start them before asking you to perform user provisioning, the provisioning will fail unless you explicitly go and re-start the SQL Server services. Failing that, you can always run the User Provisioning tool from your SQL Server installation at a later point:

C:\Program Files\Microsoft SQL Server\90\Shared\SqlProv.exe

Install Visual Studio 2005

Again, you'll need to choose which SKU of Visual Studio you wish to run, but once you have made the choice, the installation procedure is as follows.

If you wish to start writing .Net 3.0 applications, then you'll also need to install the following

NB: the extensions for WPF and WCF are a CTP release, and as with most CTP software there are some problems with it (running either on XP or Vista). As far as I'm aware, this is the state it will remain in until Orcas is released (I guess MS need to have something to encourage people to upgrade to Orcas when it comes out).

Install any other Tools you can't live without

At this point you can install any other tools, utilities, and plug-ins that you normally use. Just check with the manufacturers about each tool's compatibility with Vista. I will discuss in a later post some tools that have issues, but my experience on the whole is that most tools work fine on Vista.

 

Conclusion

This will get you to a point where you are ready to start developing on Vista. Most things should work just as they did on Windows XP; however, keep in mind that these tools were specifically written for Windows XP, and there are still some notable exceptions to this rule, which I will cover in my next post.

 


Tuesday, June 19, 2007

.Net SIG Presentation Wrap up

Well, I must say I got a bit of a shock when I asked "How many people are using Vista as their primary development environment?" and 4 people out of 50 put up their hands. Maybe it's just the Readify culture, but most of us at Readify are using Vista as our primary development environment (some at Readify have been using it since before RTM), and I had expected that a significant proportion of developers would by now be using Vista as their development platform.

 

This surprise sparked a question: "Why is it that so few developers are running Vista as their development platform?" I'm not suggesting that Vista will suit everyone's needs, or that people should upgrade just for the sake of upgrading, but developers are generally NOT technophobes, and are usually fairly quick to adopt new technologies that increase their productivity, or even just for the novelty value.

 

I guess there are numerous reasons why this might be the case, but one that occurs to me is that the information required to set up your development environment on Windows Vista seems to be spread around various places on the web, making it a significant research task to find all the details. There has also been some misinformation that has led to confusion. This I feel I am in a position to rectify. As a result, I have decided to do a series of posts aimed at accumulating best practices around setting up your development environment on Windows Vista.

 

As I promised in my presentation, here is a link to the slides. However, over the next few days I will put together a series of posts that cover roughly the same concepts as my presentation, but go a little more in-depth.

Friday, June 08, 2007

Latent bug syndrome

Today I have fallen foul of what I call "Latent Bug Syndrome". Hopefully this will help people understand just how complex software engineering is, and why bugs occur in software despite the best efforts of all involved. Let me explain.

Let's say you have a system like the following.

SystemA

where System A represents a fairly complex, fully featured, fully tested product that exposes certain functionality to the end users. Now, System A can be operating 100% perfectly (or at least for the overwhelming majority of scenarios that are encompassed by the exposed feature set); however, there may still be a "latent bug" hidden deep within the underlying logic of this system that is never exposed to the end user, because the exposed feature set never actually allows the system to get into the state that would cause this bug to exhibit. As a really contrived example, let's take the classic divide-by-zero problem:

 

public static int DoCalc(int val)
{
    return 42 / val;
}

This function operates as expected under all conditions except where val = 0, in which case it throws an exception. If the exposed feature set only allows the user to select numbers from a drop-down box that contains the numbers 1, 2, and 3, then every possible test that a tester can do will NEVER produce an error. Keep in mind this is possibly the simplest example one can find; in real life the scenarios are far more convoluted. The wrong conclusion to draw from this is that System A has no bugs.

Now let's suppose that System A is augmented with System B, as shown below.

systema_b

Now, System B has the potential to produce conditions in System A that not only were never tested, but were never even thought of by the original developers and testers of System A. In my contrived example, a user input into System B may very well cause 0 to be passed as the val parameter. The thing is that the developers (and testers) of System B are usually different to the developers (and testers) of System A, and often don't have access to the source code; even if they do, they don't have time to follow every code path through the interactions with System A, accounting for every possible state System A could be in, to find such a bug.

If the testers of System A are doing a good job, they may actually pick up the bug, and then of course the blame game starts: who is responsible for this bug, who's going to fix it, why wasn't this bug picked up before, etc.

So this happened to me today. Fortunately the bug never made it to a production system (and no it wasn't the divide by zero bug suggested above, it was infinitely more intricate), but it did make me aware that I had made some assumptions about System A that turned out to be less than 100% accurate.

 

So how can one guard against The Latent Bug Syndrome?

 

System A

  • Thorough unit tests of all public (and sometimes even private) methods exposed by System A will catch a vast number of these bugs.
  • Better documentation of publicly exposed methods to help developers integrating with your system understand what assumptions you are making about the input to a method, and what state you are expecting the system to be in.
  • Test Driven Development could actually go a long way to eradicating this altogether, because in TDD you only write code to satisfy the tests: nothing less, nothing more. So you don't get the situation where there is code that, under all conditions tested, is never executed. However, I have yet to see a company embrace TDD to that extent, as the sheer time involved is usually far too much to justify commercially.

System B

  • Developers and testers of System B need to have a good understanding of how System A works, what it was intended to do, and how they are changing the way in which System A is being called. Understanding this may help you know where problems are likely to occur.
  • Know what your assumptions about System A are... and constantly question them.
  • Again unit tests are your friend.
  • Wrapping your interaction with System A is also a good way of being able to respond to changes in System A if they occur.
  • Thorough end user testing (there is NO substitute).
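The wrapping point above can be sketched like this (a minimal, hypothetical example reusing the contrived DoCalc from the earlier post; the names are mine, not from any real system):

```csharp
using System;

// Hypothetical gateway that System B owns. Every call System B makes into
// System A goes through here, so assumptions about System A live in one
// place, and a change in System A only has to be absorbed once.
public class SystemAGateway
{
    public double Calculate(double val)
    {
        // State the assumption explicitly instead of leaving it latent:
        // the calculation is only meaningful for non-zero input.
        if (val == 0)
            throw new ArgumentOutOfRangeException("val",
                "System A's DoCalc is undefined for zero.");

        return DoCalc(val);
    }

    // Stand-in for the real System A entry point.
    private static double DoCalc(double val)
    {
        return 42 / val;
    }
}
```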

Wednesday, June 06, 2007

VS2005 Project Templates Failing

I came across a problem recently where I went to create a standard Console Application project in Visual Studio 2005, and to my alarm I got a message saying:

 

---------------------------
Microsoft Visual Studio
---------------------------
Could not find file 'C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\ProjectTemplatesCache\CSharp\Windows\1033\ConsoleApplication.zip\ConsoleApplication.csproj'

---------------------------
OK
---------------------------

 

This took me aback a bit, as creating a Console application is one of the most basic tasks you would ever want to do in Visual Studio 2005. My next thought (after trying exactly the same thing 5 or 6 times) was to run Visual Studio with elevated privileges, but alas this still didn't fix the problem. I checked, and sure enough I could still create a Windows application no problems (with or without elevating my privileges), so I just put it down to some kind of Vista weirdness and continued on (using WinForms apps instead of console apps whenever I wanted to create a quick test). Today, however, I discovered that I could no longer create Web Service Applications, as they crashed with a very similar error, and with a presentation on setting up your development environment on Windows Vista looming, I decided that I had better get to the bottom of the problem... and so I did.

 

It turned out to be the Guidance Automation Toolkit and Guidance Automation Extensions packages I recently installed that, for some reason, had done some damage to the Project Template System in Visual Studio. Now, I guess fair is fair: both of these packages are in beta, and there are some known issues around running them on Vista (unfortunately I was not aware of these issues at install time). One of the big issues with the Guidance Automation Extensions is around uninstalling it. You get so far, and then a dialog pops up, no title, no message, just an OK button, and it stops uninstalling. Fortunately I found a blog post by one Greg Duncan on how to uninstall the Guidance Automation Extensions.

 

I can now happily create console and web service applications again, but more importantly, stand up in front of the Victoria .Net SIG next Tuesday and talk confidently about running Visual Studio 2005 on Vista.

Tuesday, June 05, 2007

Speaking at Victoria .Net SIG

Just a quick note to say that I'll be speaking at the next Victoria .Net SIG meeting on the 12th of June at the Microsoft building, Level 5, 4 Freshwater Place, Southbank. Turn up for pizza and soft drink at 5:30 pm.

 

The topic of my discussion will be setting up your development environment on Windows Vista. Hope to see you there.

Thursday, May 03, 2007

IT: a series of stops and starts

Is it just me, or does anyone else feel that working in the IT industry is a series of stops and starts? What I mean by this is that when I start on something that I think is going to be (or at least should be) fairly straightforward, quite often, very early on in the piece, I come across a problem that consumes way more time than I'd expected.

For example, today I was asked to fix a bug that was occurring on a project that I was involved with no less than a few weeks ago. I knew the codebase had changed significantly, as a merge with the main branch had been done. What I didn't expect was that I'd spend the first 3 and a half hours trying to get my machine to a point where I could compile and then run the project. Now, part of the problem was my grappling with the backup and restore features of SQL Server, and one could argue that I probably should understand those better (and now do), but the rest of the time was spent grappling with bugs in TFS that cause issues when trying to get the latest version of all files in large projects.

Then there are other times, like right now, where I am trying to do a few things on my PC, and even though I have a fairly new Dell D820 laptop with 2GB of RAM and 4GB of ReadyBoost, the hard drive is churning continuously (not sure why), and it takes up to 10 seconds to switch between tasks. This may have something to do with the fact that about 20 minutes ago I had to actually reset my computer after it crashed while trying to use Explorer. I am fast approaching "a sad realisation" that although Windows Vista may be more secure, it is not in any way speedy or stable. I really think Microsoft rushed it out the door, and the end user is paying the price. It's times like these when I start to think about work in other sectors. Hmmm... maybe I'll start up a Tango dance studio.

Sunday, April 22, 2007

Tango performance

Just to show how behind the times I am, here is my first ever appearance on YouTube. This is a Tango performance Niki and I did last weekend at our friend Yeow's birthday party. Niki and I have been dancing Argentine Tango for 6 years now, and it is becoming an all-consuming passion.

 

 


Wednesday, April 18, 2007

It's had its fair chance, time to switch back

That's it, I've had it with IE7. I used to use Firefox for the vast majority of my work until IE7 came out, and in the spirit of giving IE7 a fair trial, I decided to use it as my default browser. I started this back when I was still running Windows XP and using IE7 Beta 2, and the odd crash was acceptable for a beta product. I liked the tabbed browsing, and thought that they actually did some things slightly better than Firefox. I was impressed with the efforts made in standards support, and for this reason I persisted with it, even though the rendering engine is significantly slower (up to 5 times slower in some cases). When I started running Vista in November last year, I noticed that from time to time IE would crash (even though it was now out of beta), but I thought "patience... Microsoft are fairly good at stabilizing their products... eventually, and the release of Vista was a rather rushed one, just so they could get it out the door before the end of the year, so I can expect that there may be some bugs that they find and fix along the way". Well, it's now mid April, and Internet Explorer is still crashing quite frequently, and today, as I was browsing some Microsoft web sites, it crashed again. This is the final straw; I think they've had long enough to fix these problems. I've just switched back to Firefox as my default browser.

Monday, April 16, 2007

Blog posts as an indication of my state of mind

When I started blogging way back at the start of 2004 (see http://scottbaldwin.blogspot.com/2004_03_01_archive.html; I'd like to say before blogging became popular, but that would definitely be a lie), I was in a very stressful job that I didn't really enjoy. It was taking up far too much of my time, and was making me feel quite frustrated with the IT industry, to the point where I was almost ready to leave the field for good. So even though I found out about blogging through my work in the IT field, the first blog I started was nothing to do with IT at all; in fact, musings of a morbid mind was where I intended to post all of my political rantings and ravings, my general thoughts on topics (generally outside of IT), family and travel photos, etc., and it was just a general writing exercise for me. I started my geek blog Daily Dribblings of a Demented Developer shortly after (May 2004), but it was always the poor cousin of my musings (btw, no prizes for guessing that alliteration is one of my favourite literary devices). I started work for Readify at the start of 2005, and since then my writings on my geek blog have steadily increased, and my writings on my musings blog have decreased, to the point where I noticed today that my geek blog has almost caught up, and, what's more surprising, the blog articles I've got in the pipeline are almost ALL for my geek blog.

It's amazing the amount of difference working for a good company can make. There are still aspects of the IT industry I'm not enraptured about, and I still get stressed from time to time when things don't work the way I expect them to, but I have come to realise that there is a lot I do enjoy about IT.

Sunday, March 25, 2007

Fixing the WCF - SSIS Web Service Task problem the easy way.

Literally minutes after publishing my post on calling a WCF Web Service from the SSIS Web Service Task, I checked back with the forum post that I'd mentioned in that blog post (http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=1023821&SiteID=17), only to discover that Uwe Heinkel had given me the piece of information I was missing in the first place. So the crux is: I don't have to hand craft the dtsx XML file to get it to play ball.

 

I simply remove the following line from the WSDL file:

<xsd:import schemaLocation="http://localhost:50344/MyWebService/Service.svc?xsd=xsd2" namespace="http://schemas.datacontract.org/2004/07/"/>

and it all just works as expected.
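For context, that import sits inside the wsdl:types section of the WCF-generated WSDL, roughly like this (abbreviated sketch; the exact sibling imports in your generated file may differ):

```xml
<wsdl:types>
  <xsd:schema targetNamespace="http://tempuri.org/Imports">
    <xsd:import schemaLocation="http://localhost:50344/MyWebService/Service.svc?xsd=xsd0"
                namespace="http://tempuri.org/" />
    <!-- Deleting this import is what lets the SSIS designer parse the WSDL: -->
    <xsd:import schemaLocation="http://localhost:50344/MyWebService/Service.svc?xsd=xsd2"
                namespace="http://schemas.datacontract.org/2004/07/" />
  </xsd:schema>
</wsdl:types>
```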

 

damn.... I was having soooo much fun wading around in the dtsx xml file.

Calling a WCF Web Service from the SSIS Web Service Task

I have recently been doing some work using SSIS, and this is really the first time I've used it for anything serious (i.e. beyond doing the tutorials that come with SQL Server 2005). I have also recently been learning WCF, and as we needed to create some Web Services to deliver the information to SSIS, I decided that this would be the perfect opportunity to put my newly acquired WCF skills to good use. Now, I always like to start things out very simply, which in this case meant prototyping the SSIS Web Service Task with the most basic WCF Web Service I could possibly create, thinking that once I got this working, I could then move on to the actual task I wanted to achieve with confidence... unfortunately, this is where my whole plan started to unravel. For those following at home, here are the steps I took.

1. Create a new WCF Web Service in Visual Studio 2005 by selecting File->New->Web Site, then choosing a WCF Web Service. (Note: .NET 3.0 and the related Visual Studio extensions are required for this. Also, choosing a file-based or IIS-based web service shouldn't matter.)

2. Open the web.config file and change the binding to basicHttpBinding (although the same end result is still achieved with the default wsHttpBinding).

3. Expose the metadata over HTTP GET by adding the following line to the service behavior in the config file:

<serviceMetadata httpGetEnabled="true" />

4. Run the service.

5. Create a new SSIS project in Visual Studio by selecting File->New->Project, then choosing Business Intelligence->Integration Services from the menu.

6. Add an http connection that points to the wsdl file for your service, and a Web Service Task to the Control Flow that uses the http connection.

7. Edit the Web Service Task, download the WSDL exposed by your service, and save it locally to a file by setting OverwriteWSDLFile = true on the General page.

8. Select Input, and then select 'MyService' from the drop down.
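Steps 2 and 3 amount to a web.config roughly like the following (a minimal sketch: "MyService"/"IMyService" are the Visual Studio 2005 WCF template's default names, and the behavior name is my own; your generated config will have more in it):

```xml
<system.serviceModel>
  <services>
    <service name="MyService" behaviorConfiguration="MyServiceBehavior">
      <!-- Step 2: basicHttpBinding instead of the default wsHttpBinding -->
      <endpoint address="" binding="basicHttpBinding" contract="IMyService" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="MyServiceBehavior">
        <!-- Step 3: expose the metadata (WSDL) over HTTP GET -->
        <serviceMetadata httpGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```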

 

At this point I received an error message:

 

Item has already been added. Key in Dictionary: ‘DataContract1’ key being added : ‘DataContract1’

It then would NOT populate the Method drop-down, so I was unable to select a Service Method to call, and because you are confined to selecting from the drop-down, there is no possible way to make any of the Web Service method calls.

 

Now, I'm the superstitious kind when it comes to writing software. My firm belief is that if you can't get the most basic Web Service working, then you may as well not try to get your complex web service, the one that wants to pass around meaningful data, working; you'll end up in tears.

A Google search on the problem yielded the following forum post, http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=1023821&SiteID=17, which was helpful in revealing the underlying problem. The problem is with the generated WSDL and corresponding schemas. I'm not sure exactly what is wrong with them, but for some reason the UI in Visual Studio doesn't like the way WCF publishes its metadata, in particular the way it publishes metadata about its DataContracts (complex types). So I thought I'd make the default Web Service that WCF generates even simpler by removing any trace of the DataContract. This worked fine; I could call a WCF Web Service from within SSIS no problems... with the restriction that I could only pass primitive types to/from the web service. Now, this might be fine for your purposes, but I required something a little more powerful, so I decided to persist with figuring out the solution.

 

I tried a number of things with the generated schemas, but I think in the end I'd exhausted my limited understanding of WSDL and XSD, and was no further ahead. Next, I remembered a very simple statement by a colleague of mine, Andrew Ball, who was at the time giving me some SSIS advice. He said "... under the hood an SSIS package (dtsx file) is just a big xml file". Armed with this, I began to see the issue as more of a tooling issue, and thought that maybe the problem was purely with the designer. So I cracked open a sample that Andrew had sent me and found where the various Web Service properties were being set. I also cracked open good old Reflector and took a look at the way in which complex types were handled in Microsoft.SqlServer.WebServiceTask.dll, which is responsible for the Web Service Task in SSIS. About half an hour later (give or take 10 mins) I had the following XML:

<WSTask:MethodInfo WSTask:MethodName="MyOperation2" WSTask:MessageName="">
  <WSTask:Documentation>
    Basic Hello world service call that uses a DataContract
  </WSTask:Documentation>
  <WSTask:ParamInfo WSTask:Name="dataContractValue" WSTask:Datatype="DataContract1" WSTask:ParamType="Complex" WSTask:SeqNumber="0">
    <WSTask:ComplexValue>
      <WSTask:ComplexProperty WSTask:Name="FirstName" WSTask:Datatype="string" WSTask:ParamType="Primitive">
        <WSTask:PrimitiveValue>Scott</WSTask:PrimitiveValue>
      </WSTask:ComplexProperty>
      <WSTask:ComplexProperty WSTask:Name="LastName" WSTask:Datatype="string" WSTask:ParamType="Primitive">
        <WSTask:PrimitiveValue>Baldwin</WSTask:PrimitiveValue>
      </WSTask:ComplexProperty>
    </WSTask:ComplexValue>
  </WSTask:ParamInfo>
</WSTask:MethodInfo>

 

When placed inside a Web Service Task element, this can be used to successfully call the web service with the DataContract.

 

So my conclusion is that there is a bug in the SSIS Web Service Task UI that doesn't handle populating the Method drop-down correctly (I'd trust WCF before I'd trust SSIS), but this is merely a bug in the UI, not in the way the Web Service Task actually goes about calling the web service.

 

 

Added after post (literally minutes after): For the easy way to solve this problem see http://sjbdeveloper.blogspot.com/2007/03/fixing-wcf-ssis-web-service-task.html

Thursday, March 01, 2007

Flickr becomes alive

Just found a really neat plugin for Windows Live Writer called flickr4writer that allows you to add images from your Flickr photos by simply clicking "Insert->Flickr image...", so to celebrate, here is one of my Flickr photos.

 

Pose #6

Tuesday, February 27, 2007

Molly helps Microsoft with Standards

This is huge... I've been up against some major deadlines, so I haven't been keeping up with all my blog reading, but today I decided that before I went home I was just going to catch up on a few of the more important ones. And so, I was reading the IEBlog when I stumbled across this article, the basic crux of which is that Molly E. Holzschlag has "signed on with the Internet Explorer team on a contract basis to work on standards and interoperability issues".

 

I actually heard Molly speak at the Web Directions South conference last year; she is a really inspiring speaker, and someone who tirelessly advocates for better adoption of Web Standards. It'll be interesting to follow her progress at The Daily Molly.