Friday, 17 December 2010

Oakton hiring in India - and there's a billboard to prove it!

My employer Oakton is currently doing a big recruitment drive in India... this reminds me of when one of my previous employers made a 50-metre-long company logo out of vinyl, hoping it would appear on Google Maps. We spent all day laying it out and the plane didn't even fly over!


How to Use BAPI_CUSTOMER_FIND to search for Customers in SAP

BAPIs are a useful way of retrieving data from SAP for testing purposes - even if they are not the recommended way for external systems to retrieve data from SAP (you should use Enterprise Services for that). I hit a quirk in one of these BAPIs today: "BAPI_CUSTOMER_FIND" wouldn't allow me to perform a wildcard search on the Customer Name (the KNA1.NAME1 field), even though exact matches worked fine. It turns out there is a field "PL_HOLD" in the input parameters that must have a value of "X" for wildcard matches to work at all.

So the process is:

  1. Work out the table and field name that you want with SAP transaction /nse11
  2. Test the BAPI as follows:
  3. Make sure that MAX_COUNT is 200, or the desired maximum number of return values (use 0 for an unlimited number of results)
  4. Make sure PL_HOLD is "X" to enable wildcard matching
  5. Put the table name (e.g. KNA1 for the customer master), field name (e.g. NAME1 for customer name) and the wildcard in the SELOPT_TAB table
  6. Run your BAPI to perform the search.

Of course, you wouldn't need to worry about this if you were just using SoapUI and the CustomerSimpleByNameAndAddressQuery enterprise service, as it has no such flag to enable the wildcard searching - but that's another story.
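To make the steps concrete, here's roughly what the test inputs might look like in the function module test screen - a sketch only; the SELOPT_TAB column names (TABNAME, FIELDNAME, SELECT_VALUE) are my recollection and should be verified against the structure in your system:

```
IMPORT parameters:
  MAX_COUNT = 200            " cap the number of hits (0 = unlimited)
  PL_HOLD   = X              " enable wildcard matching

TABLES parameter SELOPT_TAB (one row per search criterion):
  TABNAME      = KNA1        " customer master table
  FIELDNAME    = NAME1       " customer name field
  SELECT_VALUE = *SMITH*     " wildcard pattern to match on
```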


Tuesday, 14 December 2010

Entity Framework "4.5" coming out in Q1 2011 as a Feature Pack/Update

The Community Tech Preview 5 (CTP5) of the Entity Framework Feature update (aka EF 4.5 - name to be confirmed) was released for download last week, according to the ADO.NET team blog.

This has facilities to create databases based on:
  1. A "Code First" approach, where the code defines the model, which in turn can generate the database for you. This includes the ability to define your model using the "Fluent API" rather than an entity diagram.
  2. A "Model First" approach, where the normal edmx EF Designer is used to create the model, and the database can be generated from that.
I'm also looking forward to any performance improvements the guys at MS incorporate into the RTM build.
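As a taste of the "Code First" approach, here's a minimal sketch against the CTP5-era API. The entity and context names are my own illustration (not from the CTP), and the model-builder type name shifted between CTPs - so treat this as a sketch rather than gospel:

```csharp
// Minimal "Code First" sketch (CTP5-era API; entity and context
// names are illustrative only).
using System.Data.Entity;

public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Fluent API: configure the model in code rather than in an .edmx diagram
        modelBuilder.Entity<Customer>()
                    .Property(c => c.Name)
                    .IsRequired();
    }
}
```

The database itself can then be generated from this model when the context is first used.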


Monday, 13 December 2010

How to spot a fake SanDisk SDHC Card

I recently had the misfortune of purchasing a fake 32GB SDHC card for my HTC Desire. I only found out a few weeks after my purchase, when I started to notice corruption in some of the mp3 files I was copying over to the card. Files on the Android-based phone would sit on the card for a minute or two after copying and then disappear. To confirm the card was a fake, I formatted it and then copied some large files onto and back off the card. The copy process failed when trying to read the files back off the fake media.

Apparently, the dealers in China often rip the cards out of old GPS machines and relabel them with a fake serial number and SanDisk logos.

After looking into the topic, it turns out there are some telltale signs that give a fake card away:
  1. The serial number is on the back, not the front, so fraudulent sellers can display the card in photos without giving the game away.
  2. The SDHC logo is not clearly printed and may appear blurred or smudged.
  3. The white printing on the card is a stark white rather than a muted white colour.
See below for a photo of the fake and the real card side-by-side. The real card is on the left, the fake is on the right hand side:

The only guaranteed way of getting a real SDHC card is to deal with a local dealer who has a legitimate address in Australia and whom you can follow up with through the Australian Competition and Consumer Commission (ACCC) if you are sold a fake.

You have been warned!

Performance and the Entity Framework (EF) 3.5 - How to Lazy Load Fields within the one table

There are 2 different camps when it comes to application performance:
  1. Functionality should be delivered first and performance optimized later.
  2. We should constantly strive to test and improve application performance throughout development - this is particularly important when dealing with new or unproven technologies.
While it is good to be proactive about performance, it can sometimes be a hindrance to the project and to delivering functionality the client can actually use. Clearly Microsoft took the first approach with the first version of the Entity Framework (EF 3.5). As a hybrid of these two approaches, I strongly believe in a Proof of Concept based on core use cases for every project, aimed at proving integration approaches and the expected performance of the end system. This helps you develop some golden rules/rules of thumb for that particular implementation and can help you avoid larger-scale issues down the track.

Performance approaches aside, one of my clients recently had a performance issue with a system based on the Entity Framework 3.5. Many of the general issues with EF performance are well documented and I will not detail them here - however, there are some golden rules that apply to any database-driven application:
  1. Minimize the amount of data you bring across the network.
  2. Minimize network "chattiness", as each round trip has an overhead. You can batch up requests to address this.
  3. JOIN and filter your queries to minimize the number of records SQL needs to process in order to return results.
  4. Index your DB properly and use Indexed (SQL Server)/Materialized (Oracle) Views for the most common JOINs.
  5. Cache data and HTML that is static so you don't have to hit the database or the ORM model in the first place.
  6. Denormalize your application if performance is suffering due to "over-normalization".
  7. Reduce the number of dynamically generated objects where possible, as they incur an overhead.
  8. Explicitly load entities rather than loading them through the ORM (e.g. via an ObjectQuery in Entity Framework) when the ORM outputs poorly performing JOINs or UNIONs.
One thing I noticed in this application that violated Rule 1 was the use of an EF entity "SystemFile", which had a field called "Contents" holding large binary streams (aka BLOBs) that were pulled from the database every time the entity was involved in a query. The Entity Framework doesn't support lazy loading of individual fields per se - but it does support loading entities separately.

Using this concept, the most obvious steps seemed to be:
  1. Remove the "Contents" field from the "SystemFile" entity so it doesn't get automatically loaded when the entity is referenced in a LINQ to Entities query.
  2. Create an inherited entity "SystemFileContents" that just holds the contents of the file, so the application can load it only when needed.
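In code, the split would then be consumed along these lines - a sketch only, with the context variable and the FileId key assumed from the entity names in this post:

```csharp
// Everyday queries touch only the lightweight SystemFile entity,
// so the BLOB column never comes across the wire...
var file = context.SystemFiles.Where(f => f.FileId == fileId).First();

// ...and the heavyweight contents entity is loaded in a separate,
// explicit query only when the file is actually needed.
var contents = context.SystemFileContents
                      .Where(c => c.FileId == fileId)
                      .First()
                      .Contents;
```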
This was fine - but then my Data Access Layer wouldn't compile and I received the following error:

Error 3034: Problem in Mapping Fragments starting at lines 6872, 6884: Two entities with different keys are mapped to the same row. Ensure these two mapping fragments do not map two groups of entities with overlapping keys to the same group of rows.

After a little investigation, I found there are a few different approaches to this error:
  1. Implement a Table Per Hierarchy (TPH) inheritance mapping. This would mean I could make some database changes and move the file binary contents into a separate table. After that I could make the parent "SystemFile" class abstract and refer only to 2 new child classes, "SystemFileWithContents" and "SystemFileWithoutContents".
  2. Simply split the table into 2 different entities with a 1:1 association, rather than an inheritance relationship, in the Entity Framework model.
Option 2 was the best in terms of minimizing code impact, as this application had been in development for over a year. To this end, I followed the advice on adding multiple entity types for the same table.

The designer in Visual Studio 2008 doesn't support this arrangement (though the designer in Visual Studio 2010 does) - so you have to modify the XML file directly and add a "ReferentialConstraint" node to correctly relate the 2 entities.

We add the referential constraint to inform the model that the ids of these two types are tied to each other:

<Association Name="SystemFileSystemFileContent">
  <End Type="SampleModel.SystemFile" Role="SystemFile" Multiplicity="1" />
  <End Type="SampleModel.SystemFileContent" Role="SystemFileContent" Multiplicity="1" />
  <ReferentialConstraint>
    <Principal Role="SystemFile"><PropertyRef Name="FileId"/></Principal>
    <Dependent Role="SystemFileContent"><PropertyRef Name="FileId"/></Dependent>
  </ReferentialConstraint>
</Association>

This reduced the load on SQL Server and the web server, as the binary data no longer had to be dragged across on every query against the SystemFile table. Any performance improvement must be measurable - so the team confirmed this with scripted Visual Studio 2008 load tests, which had a customer-validated test mix based on the expected usage of the system.


Friday, 19 November 2010

SharePoint - How to get an SPUser Object based on a Person Field in a SharePoint List

A colleague asked me today how to determine the email address of the user mentioned in the "Responsible" field of a SharePoint list. He was trying to develop an event receiver that would email a particular user if they were mentioned in that field. To this end, here is a tiny method I wrote to get an SPUser based on a Person field in a SharePoint list. The code first obtains a reference to the relevant SPField and then uses its "GetFieldValue" method to get the SPFieldUserValue, which wraps the SPUser:

        /// <summary>
        /// Gets the SPUser object from a person field (e.g. the "Modified By" field)
        /// so we can determine the email address or other details of that user.
        /// </summary>
        /// <param name="listItem">The SPListItem containing the person field, e.g. from web.Lists[0].Items[0]</param>
        /// <param name="fieldName">The name of the person field, e.g. "Modified By"</param>
        public static SPUser GetSPUserFromPersonField(SPListItem listItem, string fieldName)
        {
            var personField = listItem.Fields[fieldName];
            return ((SPFieldUserValue)personField.GetFieldValue(listItem[fieldName].ToString())).User;
        }
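A usage sketch for the original scenario - the "Responsible" field name comes from the question, and properties.ListItem assumes the method is being called from within an event receiver:

```csharp
// Inside an event receiver: find the responsible user and grab their email
SPUser responsibleUser = GetSPUserFromPersonField(properties.ListItem, "Responsible");
string emailAddress = responsibleUser.Email;  // e.g. to send them a notification
```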


Tuesday, 16 November 2010

Exception when connecting WCF client to SAP WS-Reliable Messaging enabled web service - "Invalid WS-RM message. There are no WS-RM headers within SOAP message."

One of the SAP Business Process Management (SAP BPM) WSDLs consumed by a SharePoint web part was re-created by a member of our development team yesterday. My .NET client application then refused to operate with the new SAP SOAP endpoints and began to spit out the following error in the SOAP response (as captured by WireShark):

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  ...
  <faultstring xml:lang="en">Invalid WS-RM message. There are no WS-RM headers within SOAP message.</faultstring>
  <detail xmlns:yq1="http://sap-j2ee-engine/error">Invalid WS-RM message. There are no WS-RM headers within SOAP message.</detail>
  ...
</SOAP-ENV:Envelope>

WS-RM (WS-Reliable messaging) is a protocol that allows messages to be transferred reliably between nodes that implement this protocol in the presence of software component, system, or network failures.

MSDN has a brief mention of the potential problem. As described in the article:

"Both products also support WS-ReliableMessaging 1.0. However, the implementations are not interoperable. Do not use WS-ReliableMessaging 1.0 when exchanging messages between SAP and .NET Framework."
In fact, even though SAP and WCF both support WS-ReliableMessaging 1.0, you cannot use it between them - it will just give you an error like the one above. You can either turn WS-RM off or use version 1.1 of WS-ReliableMessaging for your SAP to .NET WCF communications.
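On the WCF side, forcing WS-RM 1.1 means using a customBinding - a sketch only; the binding name is illustrative, and the encoding/transport elements should be matched to your actual SAP endpoint:

```xml
<bindings>
  <customBinding>
    <binding name="SapReliableMessaging11">
      <!-- The default (WSReliableMessagingFebruary2005) is WS-RM 1.0,
           which SAP and WCF cannot interoperate on - force 1.1 instead -->
      <reliableSession reliableMessagingVersion="WSReliableMessaging11" />
      <textMessageEncoding messageVersion="Soap12WSAddressing10" />
      <httpTransport />
    </binding>
  </customBinding>
</bindings>
```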


Monday, 25 October 2010

Fix - "An error occurred during the processing of /_catalogs/masterpage/CompanyNameHomePage.aspx. Code blocks are not allowed in this file." - SharePoint 2007 Exception

Today I had an urgent call from the Support desk at my current client - and had to be pulled out of a meeting to help resurrect a corporate intranet. All pages in the corporate intranet were down and all were giving the same error:

"An error occurred during the processing of /_catalogs/masterpage/CompanyNameHomePage.aspx. Code blocks are not allowed in this file."

It turned out one of the support guys had checked out a page while viewing the master page library and checked it back in. There were no actual changes to the page code at all - but to try to fix the problem, they attempted to restore previous versions from the library's version history. It made no difference.

When I saw the error, I recognized immediately that something had been unghosted (i.e. the page was now being served from the content database rather than the filesystem). The SharePoint page parser was now detecting the inline script and failing, because file-system-served files are inherently trusted while content database files are inherently untrusted.

The fix was to just reset the specific page to the site definition to effectively reghost it (so it was the same as the one deployed by the original feature). The steps are:
  1. Start Internet Explorer.
  2. Browse the SharePoint site to locate Site Actions.
  3. Click Site Actions, and then click Site Settings.
  4. On the Site Settings page, click Reset to site definition under the Look and Feel option.
  5. On the Reset Page to Site Definition Version page, type the URL of the page (e.g. /_catalogs/masterpage/CompanyNameHomePage.aspx) in the 'Reset specific page to site definition version' textbox, and then click Reset to reset the page to the site definition version (re-ghosting the page).

This resolved the issue immediately and I had a call just 10 seconds later thanking me for fixing it :o).
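Incidentally, if you ever need to re-ghost pages in bulk rather than through the UI, the same reset can be done through the object model via SPFile.RevertContentStream - a sketch, with the site URL and page path assumed:

```csharp
// Sketch: re-ghost a single customized page programmatically
using (SPSite site = new SPSite("http://intranet"))
using (SPWeb web = site.OpenWeb())
{
    SPFile page = web.GetFile("_catalogs/masterpage/CompanyNameHomePage.aspx");
    if (page.CustomizedPageStatus == SPCustomizedPageStatus.Customized)
    {
        // Discard the customized (unghosted) copy in the content database
        // so the page is served from the site definition again
        page.RevertContentStream();
    }
}
```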


Friday, 8 October 2010

Fix - InfoPath Browser Forms Error - "There has been an error while loading the form. A required resource could not be downloaded. To try to resume the download, refresh the page."

If you receive the following error in an InfoPath 2007 Browser based form:

There has been an error while loading the form. A required resource could not be downloaded. To try to resume the download, refresh the page.

Straight after the user sees this error, you will also find entries like the following in your SharePoint ULS logs regarding the "Canary" timing out:

10/07/2010 10:53:54.89 w3wp.exe (0x0D70) 0x1804 Forms Server Forms Services Runtime 5ajc Medium The Canary has timed out for form 6bbd1ceb-7956-49de-aaa5-015d7d94d2b2:ver:

10/07/2010 10:53:54.90 w3wp.exe (0x0D70) 0x1804 Forms Server Forms Services Runtime 7tel Assert WARNING: Invalid Canary for view file. StackTrace: at Microsoft.Office.InfoPath.Server.Controls.ResourcePage.GetSolutionAndVerifyCanary(HttpContext context, String solutionId, SPSite contextSite, ResourceErrorType& error) at Microsoft.Office.InfoPath.Server.Controls.ResourcePage.HandleViewFile(HttpContext context) at Microsoft.Office.InfoPath.Server.Controls.ResourcePage.<>c__DisplayClass2.b__0() at Microsoft.Office.Server.Diagnostics.FirstChanceHandler.ExceptionFilter(Boolean fRethrowException, TryBlock tryBlock, FilterBlock filter, CatchBlock catchBlock, FinallyBlock finallyBlock) at Microsoft.Office.InfoPath.Server.Controls.ResourcePage.ProcessRequest(HttpContext context) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplicat...

This is a result of the session timing out (the default is 20 minutes). Simply increase the session timeout based on your users' standard behaviours (e.g. going out to lunch while filling out a form, or taking a very long time to fill out a form). We increased ours to a 1-hour session timeout.
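For reference, the relevant setting is the ASP.NET session timeout in the web.config of the SharePoint web application - a sketch (the value is in minutes):

```xml
<system.web>
  <!-- Raise the session timeout from the 20-minute default to 1 hour
       so browser forms survive a lunch break -->
  <sessionState timeout="60" />
</system.web>
```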


Sunday, 3 October 2010

How to Check that Windows Server 2008 Network Load Balancing (NLB) is using "Sticky Sessions"

Today I wanted to validate - for peace of mind - that our 3 load-balanced SharePoint Web Front End servers were using sticky sessions (I already knew they were using SQL Server for session state). Here is how to check:

  1. Start - Control Panel - Administrative Tools - Network Load Balancing Manager
  2. Check that the Affinity is set to "Single" for each server in the Network Load Balancing Column:


Friday, 1 October 2010

Back from our China Trip 2010 (Hong Kong, Shanghai, Dali, Lijiang, Guilin, Shangri-La)

I'm back in Oz after my second trip to China with my wife and kids. The main driver for the trip was to show my father- and mother-in-law their new grandson in my wife's old home town of Shanghai. We also ventured to other areas of China.

Rather than the historical centres of Beijing and Xian (as we did in 2007), we went to some more scenic and less populated areas in the south of China - and were inspired by the BBC documentary "Wild China" to go to Shangri-La and Guilin. Of course you can't do everything you want (especially with two young kids) - but we saw as much as we could over 26 days. Highlights of my China 2010 trip were:
  1. Hong Kong Disney
    We got some free tickets as my brother-in-law has some good contacts in Hong Kong. One of the first things I noticed in the hotel is that Disney provides you with bath robes (for the pool areas) - the large number of people walking around in robes made the hotel look disturbingly like a Disney-inspired mental institution!

    The fireworks were great on the first night:

    In the Jungle area, the whole family loved the Leaky Tikis
    as well as the Jungle Cruise (complete with cannibals, crocs, elephants and flames coming out of the water at the boat)

    We sat on the Alice in Wonderland ride 3 times because Heidi loved it so much. The Lion King musical show was also great fun for the kids.

  2. Lantau Island (and the Big Bronze Buddha). Probably my favourite part of Hong Kong, with some nice quiet walking tracks and cool breezes away from the bustle of the city.

  3. Heidi's Princess Photo Shoot in Shanghai. China's one-child policy makes any form of "child idolization" by parents acceptable (we got the shoot as a gift from her uncle in China). Heidi spent almost 3 hours and struck 100 poses for her photo shoot in various dresses and with swords. The story book you get at the end is amazing (in Chinese and in English).

  4. Dali and Cangshan Mountain, fishing birds on Erhai Lake, and horse riding. One of the most pristine and tranquil places I've visited in China (though it was tough carrying both Zach and Heidi up the steep and slippery stone steps as they refused to walk). There's even a giant Chinese chess board nestled up in the mountains. The pineapple and watermelon we purchased on the way down the mountain were bursting with flavour.

    The fish-catching ospreys were also pretty amazing.
  5. Staying in an old-style Chinese Hotel in Lijiang old town (kung-fu movies had been shot there) - and getting lost in the maze of stone walkways and waterways.

  6. The Tiger Leaping Gorge. The amount of water passing through after heavy rainfalls was impressive - but so were the landslides that happened a couple of weeks before we got there (and the tunnel system made available to pass around the landslide areas).

    Video showing the sheer volume of water in the gorge can be seen here
  7. Herds of random animals (yaks, pigs, goats) stopping the cars on a 2-lane highway in Shangri-La. They were a bit better behaved than the animals in Yellowstone National Park in the US - at least they had farmers cracking them with sticks to send them on their merry way. Some yaks jumped in front of our bus though - thankfully we swerved in time.

    The scenery around Shangri-La (esp. the clouds hanging around the mountains when at 4000m altitude) was amazing.

    The local shops try to flog oxygen to all the tourists before you get there. My recommendation is not to buy it - though the oxygen tanks are fun to play with.
  8. Guilin Boat Cruise

    Even the view from the hotel was amazing

    Guilin was very popular with the western tourists (affectionately known to some as White Devils). We even ran into some Indian Sydney-siders who live in the next suburb to us.
  9. Shanghai Expo 2010. It was fun - but it was hugely busy (600,000 people visit per day), and there was no way I was going to wait 4 hours to see the Germany exhibit. We got into the Spain exhibit right away when we visited in the evening, though - one of its features was a giant puppet baby.

  10. My son Zach's new remote control car, toy guns and a giant teddy bear. My father-in-law got some presents for the kids and they are still fighting over them. Zach looks like a little Arnie when he's carrying the gun around.
  11. Eating snake, dog, frog, jellyfish and snails. I've had dog before - it's like beef but with a stronger flavour - quite nice. Snake is all bones and no meat, so I couldn't approve.
  12. The Stone Forest in Kunming, and Heidi and Zach dancing to Chinese techno. The Stone Forest had some very nicely manicured gardens and some of the most fragrant flowers and blooms I've ever seen.

    There was also a bridal expo in Kunming with a red carpet. The kids couldn't resist dancing their booties off on stage. As usual they attracted a lot of cameras and a crowd. Everyone loves to take photos of them - the most common things said being "piao liang" ("beautiful!") and "yung wa wa" ("foreign doll").

    You can check out Zach's dance class moves on youtube here:
  13. The Chinese Massage House of Pain. I've had a few foot massages in China - but the most painful massage I've ever had (with my wife and brother-in-law) was in a place called the Double-Rainbow Massage Parlour (sounds dodgy, I know). I think they surgically enhance their elbows so they can cause as much excruciating pain as possible for extended periods of time.
During the trip to China, I lost a few kilos because we walked so much, and Lisa and I carried the kids around so much. Also, Chinese food has way too many bones and not enough meat (in my opinion, not my wife's). Add the fact that we were sweating 24x7 due to the pre-typhoon conditions (36-degree heat and humidity covering most of China) - and it was almost like a month-long weight-loss bootcamp.

Next holiday will be to Phuket, Thailand (i.e. something closer to what we did in Vanuatu in May 2009)
...Perhaps then Lisa and I will be able to breathe and relax rather than acting as human donkeys for our children :o)

Friday, 27 August 2010

Microsoft Tech Ed 2010 Gold Coast Meter-Maids in SMH...a bit of poor form?

I'll leave you to be the judge with these photos from Tech Ed 2010.

The organizers were fully aware of what they were doing, and I thought it was on par with what you see at any motor show - so I didn't think much of it.


Wednesday, 25 August 2010

Just got 900/1000 for the BizTalk 2006 R2 Exam (70-241) at Australian Tech Ed 2010 on the Gold Coast!

The BizTalk exams don't have any official training materials. After trawling through 2 Apress BizTalk books (including Apress Pro BizTalk 2009), all the Microsoft 2009 Virtual Labs and an SOA book (~1300 pages in total), I sat the exam this afternoon.

As usual, many of the questions were slightly ambiguous and I did a lot of "um-ing" and "ahh-ing" before I decided to click the "Exit Exam" button. Thankfully the numbers went my way. An updated MCP logo including my BizTalk cert will appear soon!


Monday, 23 August 2010

"The one-way operation returned a non-null message with Action=''." Error when calling a SAP Web Service from a MS .NET WCF Client

I received the following exception today when attempting to call a new service in SAP PI (Process Integration - the equivalent of MS BizTalk in the SAP world):

The one-way operation returned a non-null message with Action=''.

Using WireShark, I looked in on the HTTP traffic:

Looking closely, it turns out that SAP returns an empty SOAP Body element in the response (i.e. NOT just a blank string/nothing) as per the screenshot below:

There is a hotfix available for this issue - however, the simplest fix is to handle the System.ServiceModel.ProtocolException and effectively ignore this specific exception.

//Execute the call, swallowing the spurious ProtocolException caused by SAP's empty SOAP body
            var client = WCFClientFactory.CreateClient(CredentialType.PI);
            try
            {
                // ... invoke the one-way service operation on the client here ...
            }
            catch (System.ServiceModel.ProtocolException ex)
            {
                //Just handle this specific exception - otherwise bubble up
                if (!ex.Message.Equals("The one-way operation returned a non-null message with Action=''."))
                {
                    throw;
                }
            }


Thursday, 12 August 2010

Microsoft Windows Azure Platform WILL be available as an on-premise solution - via a Turnkey "Appliance"

One of my biggest qualms about recommending a cloud-based solution (especially for sensitive data) is that the solution is then completely bound to hardware under a particular vendor's (e.g. Microsoft's or Amazon's) control. This in turn makes it a hard sell to clients, who typically like to have complete control over their data (whether this concern is well-founded is another debate).

Last month, Microsoft made moves to circumvent this reaction by announcing an "Appliance" - a specially configured box that will be able to be purchased for use in-house. Costs have not been announced - and it is only available to a select group of pilot customers (in a "Limited Production Release") such as Dell, eBay, HP and Fujitsu.

However, it is a move in the right direction - making the Azure business model appropriate in more business contexts, without the variable (and somewhat unpredictable) expenses normally associated with a cloud-based (or any transaction/CPU-cycle/storage fee-based) solution - while still retaining many of the seamless scalability advantages offered by Azure.

For more details on the Azure appliance, see:


Team Foundation Server 2010 (TFS) Licensing Whitepaper

You can find a very comprehensive document on licensing TFS 2010 from Microsoft at the following location

See pages 15-20 for details on how TFS is licensed. You get 5 CALs out of the box, which is a good deal for smaller development teams - plus no CALs are required for basic use of the work item lists. It also details the User vs Device CAL conditions and how the External Connector License works.

Note that you must get CALs for "internal" users - internal users cannot use the External connector license.  The definition of External users is as follows (though this definition is somewhat vague and open to interpretation):
"External users are defined as users that are not employees of the organization or its affiliates, nor are they employees of the organization’s or its affiliates’ onsite contractors or agents."


Tuesday, 10 August 2010

SharePoint 2007 - Finding Group Membership of Users (Active Directory or SharePoint) through UserProfileService.asmx

By using the SharePoint 2007 (and above) out-of-the-box "User Profile" web service (UserProfileService.asmx), you can easily obtain information about the group membership (both Active Directory and SharePoint groups) of a particular user - not just their basic user profile information (e.g. Mobile, Manager). This can be done with the GetCommonMemberships() method.

In this way, without code and without directly accessing Active Directory/LDAP, you can find the group memberships of a user for consumption in an InfoPath form by consuming it as a Web Service-based datasource.
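If you did want to consume it from .NET code, a sketch might look like the following - the proxy namespace/class comes from whatever web reference you generate, and the MembershipData property names should be verified against your generated proxy:

```csharp
// Query a user's AD and SharePoint group memberships via the OOTB web service
var svc = new UserProfileWebService.UserProfileService();
svc.Url = "http://portal/_vti_bin/UserProfileService.asmx";
svc.Credentials = System.Net.CredentialCache.DefaultCredentials;

foreach (var membership in svc.GetCommonMemberships(@"DOMAIN\jsmith"))
{
    // Source distinguishes AD distribution lists from SharePoint site groups
    Console.WriteLine("{0} ({1})", membership.DisplayName, membership.Source);
}
```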

For implementation details of a code-free InfoPath consumption of this web service, see:


Sunday, 8 August 2010

Location of Microsoft.SqlServer.Dts.Runtime when Programmatically Running an SSIS 2008 Package

If you are having problems finding the DLL to reference to programmatically run an SSIS Package, you should find it in the following location:

%Program files%\Microsoft SQL Server\100\SDK\Assemblies\Microsoft.SQLServer.ManagedDTS.dll

Take note that you may have several directories under %Program files%\Microsoft SQL Server\ if you have upgraded through different versions of SQL Server. Look through each of these to find the correct version of Microsoft.SQLServer.ManagedDTS.dll. Assuming you installed on C:\, you should find it in:

For SQL 2005:
C:\Program files\Microsoft SQL Server\90\SDK\Assemblies\Microsoft.SQLServer.ManagedDTS.dll
For SQL 2008:
C:\Program files\Microsoft SQL Server\100\SDK\Assemblies\Microsoft.SQLServer.ManagedDTS.dll

If you still can't find the Assembly, make sure you've installed the Client Tools SDK as shown below:
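Once the assembly is referenced, running a package programmatically only takes a few lines - a sketch, with the package path being illustrative:

```csharp
using System;
using Microsoft.SqlServer.Dts.Runtime;

class PackageRunner
{
    static void Main()
    {
        // Application is the entry point into the SSIS runtime
        var app = new Application();
        Package package = app.LoadPackage(@"C:\packages\ImportData.dtsx", null);

        DTSExecResult result = package.Execute();
        Console.WriteLine("Package finished with result: " + result);
    }
}
```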


Entity Framework 4 Limitation - [System.NotSupportedException] - "LINQ to Entities does not recognize the method 'System.String ToString()' method, and this method cannot be translated into a store expression."

I received this System.NotSupportedException at runtime (not design time) today when attempting to convert a nullable integer field to a string to populate an ASP.NET MVC 2 dropdown list:

"LINQ to Entities does not recognize the method 'System.String ToString()' method, and this method cannot be translated into a store expression."

I was consuming an Entity Framework data model indirectly (via a view model) through a call to the Html.DropDownListFor() method:

<div class="editor-field">
    <%: Html.DropDownListFor(model => model.Year, ViewData.Model.YearList) %>
    <%: Html.ValidationMessageFor(model => model.Year) %>
</div>

The best workaround I've found for this issue (which keeps all processing in SQL Server rather than doing a client-side evaluation/enumeration) is to use the SqlFunctions.StringConvert method:

public IEnumerable<SelectListItem> YearList
{
    get
    {
        var list = new WorkforceEntities().Collections;
        return list.Select(a => new SelectListItem()
        {
            Text = SqlFunctions.StringConvert((double)a.FinancialYear),
            Value = SqlFunctions.StringConvert((double)a.FinancialYear.Value)
        });
    }
}

Unfortunately this is a limitation of Entity Framework versions 1 and 2, as .ToString() is not one of the supported CLR-to-database canonical model translations.

Ensuring that server-side evaluation takes place would matter even more if we were filtering this list - but the golden rule and preferred outcome are the same: minimize the amount of data going across the wire.

Checking SQL Profiler on the SQL Server side, this evaluates to the following T-SQL:

SELECT 
[Distinct1].[C1] AS [C1], 
[Distinct1].[C2] AS [C2], 
[Distinct1].[C3] AS [C3]
FROM ( SELECT DISTINCT 
 1 AS [C1], 
 STR( CAST( [Extent1].[Quarter] AS float)) AS [C2], 
 STR( CAST( [Extent1].[Quarter] AS float)) AS [C3]
 FROM [cfg].[Collection] AS [Extent1]
)  AS [Distinct1]

This uses only SQL Server canonical functions - as performance best practice dictates.


Saturday, 7 August 2010

SSIS 2008 - "Unspecified error" and "Could not find installable ISAM" Errors After installing Office 2010 and Visual Studio 2010

I recently installed Office 2010 on one of my development machines. Suddenly, all of my SQL Server 2008 SSIS packages started to generate errors (when attempting to preview data outputs) along the lines of:

Error at PackageName [Connection manager "ImportFileSource"]: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.

An OLE DB record is available. Source: "Microsoft Access Database Engine" Hresult: 0x80004005 Description: "Unspecified error".

Error at ImportDataTaskName [OLE DB Source 1 [3639]]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "ImportFileSource" failed with error code 0xC0202009. There may be error messages posted before this with more information on why the AcquireConnection method call failed.
Exception from HRESULT: 0xC020801C (Microsoft.SqlServer.DTSPipelineWrap)

The SSIS package tasks that had the issues were using a connection string pointing to the Office 12 OLEDB drivers like so:

Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\temp\FileName.xlsx;Extended Properties="Excel 12.0";Persist Security Info=False

If you then attempt to do a test run with the same connection string, it will also fail with the following error:

Test connection failed because of an error in initializing provider. Could not find installable ISAM.

The cleanest and most reliable way to fix this issue when using the 2007 drivers (rather than attempting to re-register the necessary DLLs, or if there are other issues) is to simply install or re-install the "2007 Office System Driver: Data Connectivity Components" - available at the following location:

Alternatively you could also just try re-registering the dlls as described here:

Wednesday, 4 August 2010

Fix - ASP.NET ReportViewer for SSRS not Rendering in IIS 7.0 or 7.5

Fix - If you are unable to render the ReportViewer control on your ASP.NET web pages while running on IIS7, the typical causes of this problem are:
  1. When the ReportViewer control is added to a Web Form (.aspx), the Reserved.ReportViewerWebControl.axd httpHandler is added to the system.web section of the Web.config file. In IIS7, it should be added under the system.webServer section.
  2. The IIS7 Handler Mappings do not contain the Reserved.ReportViewerWebControl.axd httpHandler, and IIS is therefore unable to render the ReportViewer elements needed by the JavaScript.
See  for a fix.
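For reference, the handler registration that typically needs to be duplicated under system.webServer looks like the following. This is a sketch for the VS2008-era ReportViewer; the Version attribute is an assumption - match it to your own Microsoft.ReportViewer.WebForms assembly:

```xml
<!-- IIS7 Integrated Pipeline: register the handler under system.webServer,
     not just under system.web -->
<system.webServer>
  <handlers>
    <add name="ReportViewerWebControlHandler"
         preCondition="integratedMode"
         verb="*"
         path="Reserved.ReportViewerWebControl.axd"
         type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
  </handlers>
</system.webServer>
```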


Fix - IIS 7.0 and 7.5 Not Rendering CSS Files or other static content with Error 500 or Blank Pages

During a deployment for a client today of a custom ASP.NET application, IIS 7 refused to serve external CSS files correctly - just spitting out Error 500 or blank content. This happens because IIS 7 and 7.5 do NOT serve static content by default - which is a little surprising the first time you hit it.

To fix for Windows Server 2008:
  1. Open up Start - Administrative Tools - Server Manager on the Front End web server in question.
  2. Select Web Server (IIS) under Roles
  3. Click on "Add Role Services"
  4. Enable the "Static Content" checkbox.
  5. In IIS 7, Click on the Website and double click Handler Mappings
  6. Right click on "StaticFile" and click "Edit" .
  7. In the Module Field add "StaticFileModule,DefaultDocumentModule" and click OK
  8. DONE - FIXED!
The official support document for this on the MS site can be found here:


Installing the Crystal Reports Runtime on a Web Server when trying to deploy Crystal Reports Developed in Visual Studio 2008

My preference is to use SQL Server Reporting Services (SSRS) as an application reporting engine where possible - but one of my clients had a requirement to create reports in Crystal Reports. The reports were designed in Visual Studio 2008 and consequently use the Crystal Reports 2008 Basic engine. When you deploy such an application onto a server without the Crystal runtime, you will get an error in the event log and an image placeholder where the report should be.

If you want to deploy your app on a server, you'll also need to deploy the Crystal Reports runtime for your reports to render correctly. The official way to do this (rather than trying to install every DLL you can find into the GAC) is to use the installer. The easiest way to get this is from your local drive (where you installed Visual Studio). It is typically located here:

%ProgramFiles%\Microsoft SDKs\Windows\v6.0A\Bootstrapper\Packages\CrystalReports10_5


Wednesday, 28 July 2010

Using SharePoint 2007 and SharePoint 2010 with Encrypted Databases

With SQL Server 2008, a new facility is provided called "TDE" or "Transparent Data Encryption". This is sometimes required by clients whose corporate governance rules require files on the filesystem to be encrypted. What do you have to do to get this working with SharePoint 2007 or SharePoint 2010?


As the name of the feature suggests, you simply have to set it up on the SQL Server side, and your underlying database files (and SharePoint content), and any backups thereof, will be encrypted without any extra effort on your part.
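The SQL Server side setup is roughly the following T-SQL. This is a sketch: the certificate name, password placeholder and the WSS_Content database name are my examples - substitute your own content database names:

```sql
USE master;
GO
-- One-off server setup: a master key and a certificate to protect the database encryption key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPasswordHere>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE Certificate';
GO
-- Per content database: create a database encryption key and switch encryption on
USE WSS_Content;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE WSS_Content SET ENCRYPTION ON;
-- Back up the certificate and its private key - without them you cannot restore the database elsewhere
```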


Friday, 23 July 2010

LINQ to Objects - Performing a wildcard (LIKE in SQL) match between 2 different lists (aka Converting For Loops to LINQ queries or a Cool Feature of Resharper)

We'll start with an example. How would I get a list of any items in the "letterList" List below that matches (ie Contains) any of the numbers in the "numbersList" List below?

var letterList = new List<string>() { "A1", "A2", "A3", "A4", "B1", "B2", "B3", "B4", "C1", "C2", "C3", "C4"};

var numberList = new List<string>() { "1", "2", "3" }; 

We could do this in a looping fashion, or we could use LINQ to perform the query in a more declarative fashion.

For loop solution:
public void TestForEach()
{
    //We want all items in the letterList that wildcard 
    //match numbers in the numberList. The output for this example should
    //not include any items in the letterList with "4", as "4" is not in the numberList.
    var letterList = new List<string>() { "A1", "A2", "A3", "A4", 
        "B1", "B2", "B3", "B4", "C1", "C2", "C3", "C4"};
    var numberList = new List<string>() { "1", "2", "3" };
    var outputList = new List<string>();

    foreach (var letter in letterList)
    {
        foreach (var number in numberList)
        {
            if (letter.Contains(number))
            {
                outputList.Add(letter);
            }
        }
    }
}
How would we do this in LINQ?
One of the problems is that the LINQ Contains method only matches one value at a time (not Numbers 1,2,3 at the same time). We also can't use a normal LINQ equijoin as the LINQ join syntax doesn't support wildcard matches.

The answer is to do the below:
public void TestForEachLINQ()
{
    //We want all items in the letterList that wildcard 
    //match numbers in the numberList. The output for this example should
    //not include any items in the letterList with "4", as "4" is not in the numberList.
    var letterList = new List<string>() { "A1", "A2", "A3", "A4", 
        "B1", "B2", "B3", "B4", "C1", "C2", "C3", "C4"};
    var numberList = new List<string>() { "1", "2", "3" };
    var outputList = (
        from letter in letterList 
        from number in numberList 
        where letter.Contains(number) 
        select letter).ToList();
}

This effectively does a wildcard match between 2 different lists. When you look at it, it really is very similar to a SQL Server wildcard join - just using a WHERE clause.
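The same result can also be written in method syntax with Any() - arguably closer to the intent ("keep each letter that matches any number"), and it returns each letter at most once even if a letter happened to contain more than one of the numbers (the double-from query would emit it once per match):

```csharp
using System.Collections.Generic;
using System.Linq;

var letterList = new List<string> { "A1", "A2", "A3", "A4",
    "B1", "B2", "B3", "B4", "C1", "C2", "C3", "C4" };
var numberList = new List<string> { "1", "2", "3" };

// Keep each letter that wildcard-matches ANY of the numbers
var outputList = letterList
    .Where(letter => numberList.Any(number => letter.Contains(number)))
    .ToList();
// outputList contains everything except the "4" entries (9 items)
```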

The simplest way to make a conversion like this is to use one of the new features of ReSharper 5 - the "Convert Part of body into LINQ-expression" refactoring. This will automatically convert the foreach syntax to the declarative LINQ syntax. EASY!


Tuesday, 13 July 2010

Anatomy of an IT Disaster - How the IBM/SAP/Workbrain Queensland Health Payroll System Project Failed

There has been a lot of media interest in the failed SAP payroll project at Queensland Health recently. It has been termed an "unprecedented failure of public administration". Just today in the Australian Financial Review, it was stated that even the superannuation calculations have become a tangled web of manual overrides and inconsistency (due to the original payroll amounts being incorrectly calculated). There is also going to be an internal government hearing today to work out how this failure happened. Surprisingly though, the Queensland Minister for Health will apparently keep his job (as per the following news article in The Australian newspaper). A disaster on such a large scale (like a large train crash) drew my curiosity, and I just had to ask:

How did this massive project failure happen, and how did it go so wrong, so far,  for so long?

This blog article is something akin to "Air Crash Investigation" on TV - but from an IT software perspective. As the US philosopher George Santayana (1905) said - "Those who cannot remember the past are condemned to repeat it." - and I'd like to learn from such a systemic failure in the Australian IT context.

Project Statistics:
The project was large by anyone's measure:

More recently, blame has been levelled at problems stemming from the management of the project by CorpTech Shared Services - as per this Computerworld article:
I know some SAP developers who worked on the project, and they had some explanations as to what the main reasons for failure were. They themselves bailed out, as they could see the trainwreck that would happen down the line. They identified that IBM wasn't the sole point of failure - it was simply the last company to try to come in and fix the mess.

The Queensland Government is now attempting to sue IBM, even though it had signed the application off as satisfactory. In terms of fallout from the disaster, the 2 top people in Queensland IT have been sacked, and it is likely that the CorpTech middle management involved will be disbanded.

Problems with the Queensland Health Project (aka Project Post-Mortem):
  1. [Project Management Issue] There was NO contingency plan (aka "Plan B") in place in case the payroll system went belly up (and it did). Way too much trust was put into the contractors to deliver a perfect, bug free result (no real-world software is 100% bug free) and not enough common sense was used to mitigate risks. 
  2. [Project Management Issue/Testing and Reporting Issues] - The testing plan and numbers were fiddled (!) so the application passed testing criteria. According to the Courier Mail newspaper article, they (quite heinously) fiddled the numbers - "Instead of slowing the process, the project board agreed to revise the definition of Severity 1 and 2 defects – effectively shifting the goalposts so the project passed exit criteria."
  3. [Project Management Issue] - There was no parallel run for the payroll between the WorkBrain System and SAP Payroll. This is what was recommended by SAP itself. I've had the SAP QA team come out to my clients and they do a pretty thorough job.
  4. [Project Management Issue] - There should have been a Gradual Rollout (you can't do ANY large payroll system in one hit/using a "big-bang" approach).
  5. [Architecture Issue] - The Architectural design is questionable. The Integration between the 2 systems is wrong - as WorkBrain rostering is writing directly to SAP (using flat files to pump data into SAP) rather than using the timesheets as the intermediary entry mechanism first. SAP Payroll systems are effectively bypassed by using WorkBrain and a bespoke system for payroll calculation and generation.
  6. [Testing Issue - Government Due Diligence Issue]  - The system had been signed off by the Queensland Government without proper checking on their part (they are subsequently trying to absolve themselves of this responsibility, though the end decision to go live was theirs and was made through their project board).
  7. [Architecture and Project Management Issue] - It is questionable whether WorkBrain should have been used at all, as it is a rostering application. Other states have SAP-only systems and they operate acceptably.
  8. [Project Management/Procedural Issue] A failure of a contractor [IBM] and CorpTech to Follow SAP's recommendations.
  9. [Change Management Issues/Lack of training] - The training plans for this project were very limited and didn't take account of the difficulty in operating a new payroll system. 
[NOTE: I have no affiliations to IBM/Queensland Government/SAP]

Fix for WCF Client Proxy deserialization issue (related to svcutil.exe) when referencing Non-Microsoft Services (e.g. SAP services from SharePoint) - "Unable to generate a temporary class (result=1)."

When creating a client proxy for the SAP Service Registry (so I could dynamically set endpoints for my other WCF client calls), I had the following issue today when running a unit test:

Test method DDK.UnitTest.UDDIProxyTest.GetEndPointBasicTest threw exception: System.ServiceModel.CommunicationException: There was an error in serializing body of message findServiceDefinitionsRequest: 'Unable to generate a temporary class (result=1).

error CS0030: Cannot convert type 'DDK.BusinessService.UDDIRegistrySearchProxy.classificationPair[]' to 'DDK.BusinessService.UDDIRegistrySearchProxy.classificationPair'

This error is a result of the .NET command-line tools wsdl.exe or svcutil.exe incorrectly creating multidimensional arrays in the strongly typed proxy class (Reference.cs), as per the screenshot below:

This problem occurs when the svcutil.exe or the Web Services Description Language Tool (Wsdl.exe) are used to generate the client information. When you publish a schema that contains nested nodes that have the maxOccurs attribute set to the "unbounded" value, these tools create multidimensional arrays in the generated datatypes.cs file. Therefore, the generated Reference.cs file contains incorrect types for the nested nodes.

The problem and fix is described in the following kb articles: and

The fix is to basically change the multi-dimensional array in the Reference.cs file related to your service reference to a single dimension.

classificationPair[][]

instead becomes

classificationPair[]
Note that you will of course need to update all parameter references in the Reference.cs file to the single-dimensional array, not just the original declarations.

Monday, 21 June 2010

Error when Deploying Solutions in SharePoint using stsadm - "The farm is unavailable" and "Object reference not set to an instance of an object."

If you receive errors when Deploying Solutions in SharePoint using stsadm - such as "The farm is unavailable" and "Object reference not set to an instance of an object.", then you have a permissions issue.

You will typically get errors like this when running stsadm commands such as those found in this PowerShell script snippet below:
if ($isValidConfig -eq "true")
{
 Write-Host "Retracting Solution -  SERVER:$computer, SITE:$siteUrl" -Fore DarkGreen
 stsadm -o retractsolution -name SolutionName.wsp -immediate -url $siteUrl
 stsadm -o execadmsvcjobs
 Write-Host "Deleting Solution -  SERVER:$computer, SITE:$siteUrl" -Fore DarkGreen
 stsadm -o deletesolution -name SolutionName.wsp -override
 stsadm -o execadmsvcjobs
 Write-Host "Adding Solution -  SERVER:$computer, SITE:$siteUrl" -Fore DarkGreen
 stsadm -o addsolution -filename SolutionName.wsp 
 stsadm -o execadmsvcjobs
 Write-Host "Deploying Solution -  SERVER:$computer, SITE:$siteUrl" -Fore DarkGreen
 stsadm -o deploysolution -name SolutionName.wsp -url $siteUrl -immediate -allowgacdeployment -force
 stsadm -o execadmsvcjobs
 Write-Host "Activating Feature - SERVER:$computer, SITE:$siteUrl" -Fore DarkGreen
 stsadm -o activatefeature -name FeatureName -url $siteUrl -force
 stsadm -o execadmsvcjobs
 Write-Host "OPERATION COMPLETE - SERVER:$computer, SITE:$siteUrl" -Fore DarkGreen
 stsadm -o execadmsvcjobs
 Write-Host "Resetting IIS so we avoid 'Unknown Error' or 'File Not Found' errors - SERVER:$computer, SITE:$siteUrl" -Fore DarkGreen
 iisreset
}

Errors like those above occur with the script if you don't have the correct permissions on the SharePoint configuration database.

You should have dbo permissions to the Configuration database for your farm. See my related article for details on the permissions you need for solution deployment -


How to change the Read Only Attribute of Files in Powershell using a Visual Studio Pre-Build command (ie not using the DOS attrib command)

When using Microsoft PowerShell 2.0, you can just put this in your Visual Studio project pre-build event to remove the read-only attribute on binary files:

$(ProjectDir)FixTemplateFolderAttributes.cmd $(ProjectDir)
This points to a command file in your project directory called "FixTemplateFolderAttributes.cmd" like so:

:: Changes file attributes as needed.
cd %1
powershell Set-ExecutionPolicy RemoteSigned
powershell ../Build/Scripts/FixTemplateFolderAttributes.ps1

This calls the following powershell commands to make files writable:

$computer = gc env:computername

$fileList = Get-ChildItem ".\InfoPath Form Template" | Where-Object {$_.Name -like "*.dll" -or $_.Name -like "*.pdb" -or $_.Name -like "*.xsf"}

foreach ($fileItem in $fileList) 
{
 $fileItem.set_IsReadOnly($false) # Remove readonly flag
}

$fileList = Get-ChildItem ".\obj\Debug\" | Where-Object {$_.Name -like "*.dll" -or $_.Name -like "*.pdb" -or $_.Name -like "*.txt"}

foreach ($fileItem in $fileList) 
{
 $fileItem.set_IsReadOnly($false) # Remove readonly flag
}

$fileList = Get-ChildItem ".\bin\Debug\" | Where-Object {$_.Name -like "*.dll" -or $_.Name -like "*.pdb" -or $_.Name -like "*.txt"}

foreach ($fileItem in $fileList) 
{
 $fileItem.set_IsReadOnly($false) # Remove readonly flag
}
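As an aside, the three near-identical blocks above could be collapsed into a single recursive pass. A sketch (PowerShell 2.0 compatible - adjust the paths and extensions to taste):

```powershell
# Recursively clear the read-only flag on matching build outputs in one pass
Get-ChildItem ".\InfoPath Form Template", ".\obj\Debug", ".\bin\Debug" -Recurse |
    Where-Object { $_.Name -match '\.(dll|pdb|txt|xsf)$' } |
    ForEach-Object { $_.IsReadOnly = $false }
```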


Monday, 31 May 2010

Fix - SharePoint Very slow to start after an IISRESET or Recycle of App Pool (30-130 seconds)

I was asked by another team at my current client to look at a performance issue that they'd been having major issues with. There were no obvious errors in the Windows Event Log or SharePoint logs related to the issue. The problem was that:
  1. If the application pool is recycled, it would take around 90-120 seconds for the first page to be served. This would be unacceptable to the client in case the App pool was recycled in the middle of the day - it would mean 2 minutes of downtime for all employees.
  2. A similar issue occurred after an IIS reset was performed - and it happened with ALL sites, not just one or two.
To diagnose the issue, I did the following:
  1. ANY performance improvement should be measurable. So I used the Fiddler Web Debugger (  to measure the total request time. Time was 84 seconds on this particular test server.
  2. Used Sysinternals Process Explorer to see what the threads were doing. This revealed little - but it was clear that the process wasn't at 100% CPU the whole time, so it wasn't a problem related to intensive CPU processing.
  3. I enabled ASP.NET tracing at the application level and viewed the trace log through http://servername/Pages/Trace.axd. However, looking at the load of the control tree, nothing was taking a particularly long time. Even when trace.axd was loading up, the server would take an inordinately long time to start up and serve the first requested page. This ruled out the possibility of a slow control being rendered.
  4. I created a completely new web application in SharePoint and it exhibited the same problem. I began to suspect machine-level config settings.
  5. I found and fixed several errors in the Windows Event Log and Sharepoint Log but they made no difference.
  6. I began to look at the Fiddler trace while testing again and by chance noticed that requests were also being made to an external address at Microsoft for code signing certificates. I thought this was unusual - so did a bit of research and found that it was checking for a revoked certificates list on a Microsoft web server. This is done when any of the cryptography calls are performed. Some details about this can be found here - but the article is related to Exchange specifically:  
  7. To work around the issue, I first tried the suggested registry entries, but they didn't seem to work. What DID work was adding a hosts file entry so that the CRL host resolves to the local machine. This meant that the call would fail much more quickly when it tried to access the certificate revocation list, and so not hold up the loading of applications on the SharePoint server.
  8. After the HOSTs file change, recycle time (and reset time) went from 84 seconds to 20 seconds.
Hopefully this blog entry helps someone else with diagnosing this slowdown problem. Note that this fix only applies if your server doesn't have access to the internet - it is a problem specific to offline or intranet servers.
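For the record, the hosts file entry takes this shape. The host name shown is an assumption on my part - use whichever CRL host appears in your own Fiddler trace (crl.microsoft.com is the usual culprit for code-signing certificate checks):

```
# %SystemRoot%\System32\drivers\etc\hosts
# Make the certificate revocation list lookup fail fast on an offline/intranet server
127.0.0.1    crl.microsoft.com
```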

[UPDATE] - Found that someone else encountered this same issue as per and

The first article suggests the use of an XML file in each config - but I've not tested this out:

<?xml version="1.0" encoding="utf-8"?> 
<configuration> 
  <runtime> 
    <generatePublisherEvidence enabled="false"/> 
  </runtime> 
</configuration> 

[UPDATE - 11 October 2010]
One of my colleagues from Oakton had a similar issue and the above fix (using the hosts file) didn't work for them.

One of the fixes that did work was to do the following:
"Disable the CRL check by modifying the registry for all user accounts that use STSADM and all service accounts used by SharePoint. Find yourself a group policy wizard or run the vbscript at the end of this posting to help you out. Alternatively you can manually modify the registry for each account:

[HKEY_USERS\<SID>\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing]
"State"=dword:00023e00 "

The following script applies the registry change to all users on a server. This will solve the spin-up time for the service accounts, interactive users and new users.

const HKEY_USERS = &H80000003
strComputer = "."
Set objReg=GetObject("winmgmts:{impersonationLevel=impersonate}!\\" _
& strComputer & "\root\default:StdRegProv")
strKeyPath = ""
objReg.EnumKey HKEY_USERS, strKeyPath, arrSubKeys
strKeyPath = "\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing"
For Each subkey In arrSubKeys
  objReg.SetDWORDValue HKEY_USERS, subkey & strKeyPath, "State", 146944
Next


Wednesday, 26 May 2010

Warning - BizTalk Server 2009 and SQL Server 2008 R2 are incompatible - wait for BizTalk 2010 (aka BizTalk 2009 R2) for "realignment" of compatibility

During installation of BizTalk 2009 tonight, I found that SQL Server 2008 R2 and BizTalk 2009 are in fact incompatible - I could never get the BizTalk group to install, as it was giving errors in the log like so:

2010-05-26 01:30:09:0039 [WARN] AdminLib GetBTSMessage: hrErr=c0c02524; Msg=Failed to create Management database "BizTalkMgmtDb" on server "SERVER01".

You will also get a message box with just a hex code of "0xC0C02524" as below:

I tried manually creating the database - but then it started to give errors with the stored procedure creation.

The below blog matches what I experienced during BizTalk 2009 Group Configuration:

Tuesday, 25 May 2010

Fix - When Configuring BizTalk 2009 - "Failed to connect to the SQL database SSODB on SQL Server SERVERNAME"

If you get the error "Failed to connect to the SQL database SSODB on SQL Server SERVERNAME" in the BizTalk 2009 Configuration Wizard, or the Enterprise SSO service doesn't start, then you may not have the SSO assemblies registered. To fix this, do the following:
  1. Open a Visual Studio Command Prompt (so regasm.exe is in the path)
  2. Go to the directory C:\Program Files\Common Files\Enterprise Single Sign-On
  3. Register the assemblies with the command regasm ssosql.dll.
  4. Reboot.
  5. DONE!
Additional Note:
If you are doing this in a 64-bit environment, you MUST use the 64-bit version of regasm to register the assemblies. Otherwise the regasm command will succeed, but the BizTalk 2009 Configuration Wizard will keep failing. The 64-bit version of regasm is here: C:\Windows\Microsoft.NET\Framework64\v2.0.50727 
[As per

SharePoint Installation/Deployment Best Practice - Using SQL Server Aliases to avoid issues if the SQL Database Server is Renamed, a SQL Instance is added or the SQL Port is changed

One of the problems with SharePoint 2007 is that the database name and server name are held in several tables within the configuration database. When you need to change the name of the database server from Central Administration, several references in the configuration tables need to change. There is a hard way and an easy way of doing this, as below:
Changing SharePoint Database Server Name - the HARD/WRONG way
Here is the typical process to change the database Server that SharePoint is using:
  1. Move or attach all the sharepoint databases to the new server.
  2. Change the location of the config database with
    stsadm.exe -o setconfigdb -databaseserver ServerName -farmuser MyUserName -farmpassword MyPassword
  3. Delete the Central Administration Site in IIS
  4. Re-run the SharePoint Products and Technologies Configuration Wizard.
  5. DONE
Changing SharePoint Database Server Name - the EASY/RIGHT way
The above process is a bit painful - and I never like deleting a core component of SharePoint like Central Admin. 
To greatly simplify migration if you should need to change the port or the server name of your SharePoint database server, you should instead use a SQL Server Alias for the connection from SharePoint to SQL. To do this, go to a command prompt on all servers (as they all connect to the database) and enter the following to open the SQL Server Client Network Utility (note that it doesn't have an i between the n and the f):

cliconfg
Once this is open, you can add a TCP/IP alias as necessary, which points to your physical server name, port or instance. Use this alias name when entering the database server in SharePoint - and you won't look back! The server name change process is then as follows:
  1. Move or attach all the sharepoint databases to the new server.
  2. Change the Alias
  3. DONE!

Friday, 21 May 2010

Demo Virtual Machine with Office 2010, SharePoint 2010, Visual Studio 2010, Project 2010, Visio 2010 - Hyper-V image now RTM

The "2010 Information Worker Demonstration and Evaluation Virtual Machine (RTM)" image is finally available on Microsoft Downloads here:

The beta has been around for ages, so I'm looking forward to getting this working. You can even convert the Hyper-V image to VMware with the VMware vCenter Converter.

It includes the following:
  1. Windows Server 2008 R2 Standard Evaluation Edition x64, running as an Active Directory Domain Controller for the “CONTOSO.COM” domain with DNS and WINS
  2. Microsoft SQL Server 2008 R2 Enterprise Edition with Analysis, Notification, and Reporting Services
  3. Microsoft Office Communication Server 2007 R2
  4. Microsoft Visual Studio 2010
  5. Microsoft SharePoint Server 2010 Enterprise Edition
  6. Microsoft Office Web Applications
  7. Microsoft FAST Search for SharePoint 2010
  8. Microsoft Project Server 2010
  9. Microsoft Office Professional Plus 2010
  10. Microsoft Visio 2010
  11. Microsoft Project 2010
  12. Microsoft Office Communicator 2007 R2  

Fix for SharePoint 2007 Deployment Issue - Solutions in Permanent state of "Deploying", Application Server Administration Service Timer Job in permanent State of 'Initialized'

If you encounter the following issues when attempting to deploy a SharePoint solution (wsp):

  1. It is permanently/constantly in a status of "Deploying" (as seen in Central Administration - Operations - Solution Management)
  2. You cannot cancel the Deployment (no matter how many times you click on 'Cancel')
  3. The SharePoint Timer job definitions are permanently in a status of "Initialized" (as seen in Central Administration - Operations - Timer Job Status).
  4. Your SharePoint Log files are being flooded with the following 5uuf error and growing extremely large (one of ours got to 3GB!):

    05/21/2010 11:41:44.87 OWSTIMER.EXE (0x08E0) 0x08E8 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Config Refresh', id '{604B2E6E-5850-4C95-8015-D49A61449456}' for service '{681C12E2-4C2C-4BB5-9C9C-BCCF5B4FF5BE}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.
This is due to an invalid configuration cache on one or all of the servers. What was happening in my situation was that:
  1. The SharePoint configuration database went down.
  2. The XML configuration files on SERVER02 were updated by SharePoint, but not on SERVER01.
  3. In fact, there were 500 XML files on one server and 520 on the other, whereas they should always be in sync.
However, it is not sufficient to JUST clear the cache on the Central Administration server. You have to follow this blog entry ( or ( and perform a similar process on each server in the farm.

You must do the following:

  1. Stop the OWSTIMER service on the Index Server and then on ALL of the MOSS servers in the farm (for me it was the 2 servers mentioned above, e.g. SERVER01 and SERVER02). Just use the following at a command prompt to do this on EVERY WFE and INDEX SERVER:
    net stop "Windows SharePoint Services Timer"
  2. Go to C:\Documents and Settings\All Users\Application Data\Microsoft\SharePoint\Config\
  3. Move (not just copy) all the xml config files (don't delete the cache.ini file or the folder itself, just all the xml files) to another location (e.g. a "zz" folder) in a temporary directory as a backup.
  4. Open the cache.ini with Notepad and reset the number to 1. Save and close the file.
  5. Once all XML files are removed and the cache.ini files are reset to 1 for ALL SERVERS, run
    net start "Windows SharePoint Services Timer"
    on the Index Server first.
  6. Once all the XML files are regenerated, run
    net start "Windows SharePoint Services Timer"
    on the Query Servers and Web Front End Servers in turn.

Additional Blog Note - how to remove unwanted solutions that are stuck in a "Deploying" state (without fixing the underlying config problems as detailed above)

  1. Reboot Server (in line with my "If in doubt, restart it!" motto). This in effect will restart all the services anyway.
  2. Use the following command to get a list of all solutions that are being deployed.
    stsadm -o enumdeployments
  3. Take the JobId GUID that comes back from this, e.g. Deployment JobId="e99b7304-cfc0-419a-a3f2-18ca5193c838"
  4. Cancel the "stuck" deployment (in "Deploying" status) with the following command:
    stsadm -o canceldeployment -id e99b7304-cfc0-419a-a3f2-18ca5193c838
  5. Delete the stuck solution once and for all with:
    stsadm -o deletesolution -name mysolution.wsp -override
  6. Redeploy your solution.

Wednesday, 5 May 2010

How to Recursively Get the Group Membership of a User in Active Directory using .NET/C# and LDAP (without just 2 hits to Active Directory)

What should you do if you need to find all the indirect group memberships that a user has in Active Directory? While it is possible to recursively navigate (ie traverse) through all the group structures that your user is a member of, this can be a very intensive process and can potentially involve 100s of calls to the LDAP server (a performance hit, to say the least. In sum: BAD!)

In the example below, how do we determine that a user is a member of a top level group without making intensive, recursive calls to Active Directory/LDAP?

A better option is to use the power of [Microsoft's Implementation] of LDAP to get the results in only 2 hits to the server.
  1. We start off with the user's login name (e.g. david.klein)
  2. We query ldap to get their Container Name (CN) e.g. CN=David Klein
  3. We use the special query syntax provided by Microsoft LDAP in the Directory Searcher Filter to recursively get a list of all groups that the user is directly AND indirectly a member of.

See source code below for 2 helper methods you can use to recursively determine if the designated user is directly or indirectly a member of a particular group. Note that we use the special filter syntax using a specific member flag that will get all indirect memberships automatically for us:

"(member:1.2.840.113556.1.4.1941:=CN=My User Name,OU=Users,OU=NSW,OU=DDKONLINE,DC=DDKONLINE,DC=int)"

        /// <summary>Recursively gets ALL nested group memberships of a user and checks whether the input group is among them.</summary>
        /// <param name="username">User login name, e.g. david.klein or kled123</param>
        /// <param name="groupname">Container Name of the group, e.g. "SP_DEV_HR"</param>
        /// <remarks>Uses the following config entries: ConfigHelper.LDAPRoot, ConfigHelper.LDAPGroupMemberFilterRecursive</remarks>
        public static bool IsUserMemberOfGroup(string username, string groupname)
        {
            //ConfigHelper.LDAPRoot is "LDAP://DC=DDKONLINE,DC=int"
            DirectoryEntry entry = new DirectoryEntry(ConfigHelper.LDAPRoot);
            // Create a DirectorySearcher object.
            DirectorySearcher mySearcher = new DirectorySearcher(entry);
            //Filter by the special recursive LDAP matching rule, using the user's Container Name
            mySearcher.Filter = string.Format(ConfigHelper.LDAPGroupMemberFilterRecursive,
                GetUserContainerName(username));
            mySearcher.SearchScope = SearchScope.Subtree; //Search from base down to ALL children. 
            SearchResultCollection result = mySearcher.FindAll();

            for (int i = 0; i < result.Count; i++) //Check every result (a "result.Count - 1" bound would skip the last one)
            {
                if (result[i].Path.ToUpper().Contains(string.Format("CN={0}", groupname.ToUpper())))
                {
                    return true; //Success - group found
                }
            }
            //No match found
            return false;
        }

        /// <summary>
        /// Gets the Container Name (CN) of the input user.
        /// </summary>
        public static string GetUserContainerName(string userName)
        {
            DirectoryEntry entry = new DirectoryEntry(ConfigHelper.LDAPRoot);
            // Create a DirectorySearcher object.
            DirectorySearcher mySearcher = new DirectorySearcher(entry);
            mySearcher.Filter = string.Format("(&(sAMAccountName={0}))", userName);
            mySearcher.SearchScope = SearchScope.Subtree; //Search from base down to ALL children.
            SearchResultCollection result = mySearcher.FindAll();
            if (result.Count == 0)
            {
                throw new ApplicationException(string.Format("User '{0}' Not Found in Active Directory.", userName));
            }
            return result[0].GetDirectoryEntry().Name.Replace("CN=", string.Empty);
        }

Example Unit Test Methods
        /// <summary>
        /// This Test checks that the membership search works correctly against Active Directory
        /// i.e. that it picks up direct membership.
        /// Uses the same config entries as the helper methods above.
        /// </summary>
        [TestMethod]
        public void IsUserMemberOfGroup_DirectMembership_Positive_Test()
        {
            string username = "sp_dev_pdmtest1";
            string groupname = "SP_DEV_HR";
            bool expected = true;
            bool actual = ADHelper.IsUserMemberOfGroup(username, groupname);
            Assert.AreEqual(expected, actual);
        }

        /// <summary>
        /// This Test checks that the recursive search works correctly against Active Directory
        /// i.e. that it picks up indirect membership.
        /// </summary>
        [TestMethod]
        public void IsUserMemberOfGroup_IndirectMembership_Positive_Test()
        {
            string username = "sp_dev_pdmtest1";
            //Naming Convention for Groups is Environment_AppDomain_FunctionalArea_ObjectType (e.g. Form)_Role
            string groupname = "SP_DEV_Onlineforms_Peoplemgmt_Termination_F_Contributors";
            bool expected = true;
            bool actual = ADHelper.IsUserMemberOfGroup(username, groupname);
            Assert.AreEqual(expected, actual);
        }

        /// <summary>
        /// This Test checks that the container name is resolved.
        /// The container name is used by the recursive group search.
        /// </summary>
        [TestMethod]
        public void GetUserContainerNameTest()
        {
            string username = "david.klein";
            string expected = "David Klein";
            string actual = ADHelper.GetUserContainerName(username);
            Assert.AreEqual(expected, actual);
        }