Wednesday, 28 July 2010

Using SharePoint 2007 and SharePoint 2010 with Encrypted Databases

With SQL Server 2008, a new feature is provided called "TDE" or "Transparent Data Encryption". This is sometimes required by clients whose corporate governance rules require files in a filesystem to be encrypted. What do you have to do to get this working with SharePoint 2007 or SharePoint 2010?

Nothing!

As the name of the feature suggests, you simply have to set it up on the SQL Server side (as per http://technet.microsoft.com/en-us/library/bb934049(SQL.100).aspx), and your underlying database files (and hence your SharePoint content), along with any backups thereof, will be encrypted without any extra effort on your part.
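For reference, the SQL Server side setup follows the pattern below. This is only a sketch based on the TechNet article above - the certificate name and content database name (TDECert, WSS_Content) are placeholders for your own, and you should back up the certificate and private key as soon as you create them:

```sql
USE master;
GO
-- 1. Create a database master key in the master database
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<UseAStrongPasswordHere>';
GO
-- 2. Create a certificate to protect the database encryption key
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate for SharePoint content';
GO
-- 3. Create a database encryption key in the content database
USE WSS_Content;  -- placeholder: your SharePoint content database
GO
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_128
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
-- 4. Turn encryption on - existing pages are encrypted in the background
ALTER DATABASE WSS_Content SET ENCRYPTION ON;
GO
```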



DDK

Friday, 23 July 2010

LINQ to Objects - Performing a wildcard (LIKE in SQL) match between 2 different lists (aka Converting For Loops to LINQ queries or a Cool Feature of Resharper)

We'll start with an example. How would I get a list of all items in the "letterList" List below that match (i.e. Contain) any of the numbers in the "numberList" List below?

var letterList = new List<string>() { "A1", "A2", "A3", "A4", "B1", "B2", "B3", "B4", "C1", "C2", "C3", "C4"};

var numberList = new List<string>() { "1", "2", "3" }; 

We could do this in a looping fashion, or we could use LINQ to perform the query in a more declarative fashion.

For loop solution:
[TestMethod]
public void TestForEach()
{
    //We want all items in the letterList that wildcard 
    //match numbers in the numberList. The output for this example should
    //not include any items in the letterList with "4", as "4" is not in the numberList.
    var letterList = new List<string>() { "A1", "A2", "A3", "A4", 
        "B1", "B2", "B3", "B4", "C1", "C2", "C3", "C4"};
    var numberList = new List<string>() { "1", "2", "3" };
    var outputList = new List<string>();

    foreach (var letter in letterList)
    {
        foreach (var number in numberList)
        {
            if (letter.Contains(number))
            {
                outputList.Add(letter);
            }
        }
    }

    Assert.AreEqual(9, outputList.Count); //A1-A3, B1-B3, C1-C3
}

How would we do this in LINQ?
One of the problems is that the LINQ Contains method only matches one value at a time (not numbers 1, 2 and 3 at the same time). We also can't use a normal LINQ equijoin, as the LINQ join syntax doesn't support wildcard matches.

The answer is to do the below:
[TestMethod]
public void TestForEachLINQ()
{
    //We want all items in the letterList that wildcard 
    //match numbers in the numberList. The output for this example should
    //not include any items in the letterList with "4", as "4" is not in the numberList.
    var letterList = new List<string>() { "A1", "A2", "A3", "A4", 
        "B1", "B2", "B3", "B4", "C1", "C2", "C3", "C4"};
    var numberList = new List<string>() { "1", "2", "3" };
    var outputList = (
        from letter in letterList
        from number in numberList
        where letter.Contains(number)
        select letter).ToList();

    Assert.AreEqual(9, outputList.Count); //A1-A3, B1-B3, C1-C3
}

This effectively does a wildcard match between the 2 lists. When you look at it, it really is very similar to a SQL Server wildcard join - just using a WHERE clause instead of a JOIN.

The simplest way to make a conversion like this is to use one of the new features of ReSharper 5 - the "Convert Part of body into LINQ-expression" refactoring. This will automatically convert the foreach syntax to the declarative LINQ syntax. EASY!
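As an aside, the same query can be expressed in LINQ method syntax. This is just a sketch of the equivalent call chain - note that using Any means a letter is only added once even if it matched several numbers, whereas the cross join in the query syntax above would add it once per matching number (with this sample data each letter contains only one digit, so the results are identical):

```csharp
var outputList = letterList
    .Where(letter => numberList.Any(number => letter.Contains(number)))
    .ToList();
```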


DDK

Tuesday, 13 July 2010

Anatomy of an IT Disaster - How the IBM/SAP/Workbrain Queensland Health Payroll System Project Failed

There has been a lot of media interest in the failed SAP payroll project at Queensland Health recently. It has been termed an "unprecedented failure of public administration". Just today in the Australian Financial Review, it was stated that even the superannuation calculations have become a tangled web of manual overrides and inconsistency (due to the original payroll amounts being incorrectly calculated). There is also going to be an internal government hearing today to work out how this failure happened. Surprisingly though, the Queensland Minister for Health will apparently keep his job (as per the following news article in The Australian newspaper: http://www.theaustralian.com.au/australian-it/minister-keeps-job-despite-queensland-health-payroll-debacle/story-e6frgakx-1225886060838). A disaster on such a large scale (like a large train crash) drew my curiosity and I just had to ask:

How did this massive project failure happen, and how did it go so wrong, so far, for so long?

This blog article is something akin to "Air Crash Investigation" on TV - but from an IT software perspective. As the US philosopher George Santayana said in 1905, "Those who cannot remember the past are condemned to repeat it" - and I'd like to learn from such a systemic failure in the Australian IT context.


Project Statistics:
The project was large by anyone's measure:


More recently, blame has been levelled at problems stemming from the management of the project by CorpTech Shared Services - as per this Computerworld article: http://www.computerworld.com.au/article/352346/corptech_called_account_shared_services_failing/
I know some SAP developers who worked on the project and they had some explanations as to what the main reasons for the failure were. They themselves bailed out, as they could see the trainwreck that would happen down the line. They identified that IBM wasn't the sole point of failure - it was simply the last company to try to come in and fix the mess.

The Queensland Government is now attempting to sue IBM, even though it had signed the application off as satisfactory. In terms of fallout from the disaster, the 2 top people in Queensland IT have been sacked, and it is likely that the CorpTech middle management involved will be disbanded.

Problems with the Queensland Health Project (aka Project Post-Mortem):
  1. [Project Management Issue] There was NO contingency plan (aka "Plan B") in place in case the payroll system went belly up (and it did). Way too much trust was put in the contractors to deliver a perfect, bug-free result (no real-world software is 100% bug free), and not enough common sense was used to mitigate risks. 
  2. [Project Management Issue/Testing and Reporting Issues] - The testing plan and numbers were fiddled (!) so that the application passed its testing criteria. According to this Courier Mail newspaper article (http://www.couriermail.com.au/news/queensland/queensland-health-payroll-fallout-to-reshape-awards/story-e6freoof-1225885400871), they (quite heinously) fiddled the numbers - "Instead of slowing the process, the project board agreed to revise the definition of Severity 1 and 2 defects – effectively shifting the goalposts so the project passed exit criteria."
  3. [Project Management Issue] - There was no parallel run for the payroll between the WorkBrain System and SAP Payroll. This is what was recommended by SAP itself. I've had the SAP QA team come out to my clients and they do a pretty thorough job.
  4. [Project Management Issue] - There should have been a Gradual Rollout (you can't do ANY large payroll system in one hit/using a "big-bang" approach).
  5. [Architecture Issue] - The architectural design is questionable. The integration between the 2 systems is wrong - WorkBrain rostering writes directly to SAP (using flat files to pump data into SAP) rather than using timesheets as the intermediary entry mechanism first. The SAP payroll system is effectively bypassed, with WorkBrain and a bespoke system used for payroll calculation and generation.
  6. [Testing Issue - Government Due Diligence Issue] - The system had been signed off by the Queensland Government without proper checking on their part (they are subsequently trying to absolve themselves of this responsibility, though the end decision to go live was theirs and was made through their project board).
  7. [Architecture and Project Management Issue] - It is questionable whether WorkBrain should have been used at all, as it is a rostering application. Other states have SAP-only systems and they operate acceptably.
  8. [Project Management/Procedural Issue] - A failure of the contractor [IBM] and CorpTech to follow SAP's recommendations.
  9. [Change Management Issues/Lack of training] - The training plans for this project were very limited and didn't take account of the difficulty in operating a new payroll system. 
DDK
[NOTE: I have no affiliations to IBM/Queensland Government/SAP]

Fix for WCF Client Proxy deserialization issue (related to svcutil.exe) when referencing Non-Microsoft Services (e.g. SAP services from SharePoint) - "Unable to generate a temporary class (result=1)."

When creating a client proxy for the SAP Service Registry (so I could dynamically set endpoints for my other WCF client calls), I had the following issue today when running a unit test:

Test method DDK.UnitTest.UDDIProxyTest.GetEndPointBasicTest threw exception: System.ServiceModel.CommunicationException: There was an error in serializing body of message findServiceDefinitionsRequest: 'Unable to generate a temporary class (result=1).

error CS0030: Cannot convert type 'DDK.BusinessService.UDDIRegistrySearchProxy.classificationPair[]' to 'DDK.BusinessService.UDDIRegistrySearchProxy.classificationPair'

This error is a result of the .NET command-line tools wsdl.exe or svcutil.exe incorrectly creating multidimensional arrays in the strongly typed proxy class (Reference.cs), as per the screenshot below:


Cause:
This problem occurs when the svcutil.exe or the Web Services Description Language Tool (Wsdl.exe) are used to generate the client information. When you publish a schema that contains nested nodes that have the maxOccurs attribute set to the "unbounded" value, these tools create multidimensional arrays in the generated datatypes.cs file. Therefore, the generated Reference.cs file contains incorrect types for the nested nodes.
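As a hypothetical illustration (the element names here are made up for this example), a schema shape like the following - where nested nodes are each allowed to occur an unbounded number of times - is enough to trigger the jagged-array generation:

```xml
<!-- Both levels are maxOccurs="unbounded", so the generated proxy
     declares classificationPair[][] instead of classificationPair[] -->
<xs:element name="classifications">
  <xs:complexType>
    <xs:sequence maxOccurs="unbounded">
      <xs:element name="classificationPair" type="tns:classificationPair"
                  maxOccurs="unbounded" />
    </xs:sequence>
  </xs:complexType>
</xs:element>
```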

The problem and fix are described in the following KB articles:
http://support.microsoft.com/kb/891386 and 
http://support.microsoft.com/kb/326790/en-us

The fix is basically to change the multi-dimensional array in the Reference.cs file related to your service reference to a single dimension.
e.g.

classificationPair[][]
instead becomes
classificationPair[]

Note that you will of course need to update all parameter references to this type throughout the Reference.cs file, not just the original declarations.
DDK

Monday, 21 June 2010

Error when Deploying Solutions in SharePoint using stsadm - "The farm is unavailable" and "Object reference not set to an instance of an object."

If you receive errors when deploying solutions in SharePoint using stsadm - such as "The farm is unavailable" and "Object reference not set to an instance of an object." - then you have a permissions issue.

You will typically get errors like this when running stsadm commands such as those found in this PowerShell script snippet below:
if ($isValidConfig -eq "true")
{
 Write-Host "Retracting Solution -  SERVER:$computer, SITE:$siteUrl" -Fore DarkGreen
 stsadm -o retractsolution -name SolutionName.wsp -immediate -url $siteUrl
 stsadm -o execadmsvcjobs
 Write-Host "Deleting Solution -  SERVER:$computer, SITE:$siteUrl" -Fore DarkGreen
 stsadm -o deletesolution -name SolutionName.wsp -override
 stsadm -o execadmsvcjobs
 Write-Host "Adding Solution -  SERVER:$computer, SITE:$siteUrl" -Fore DarkGreen
 stsadm -o addsolution -filename SolutionName.wsp 
 stsadm -o execadmsvcjobs
 Write-Host "Deploying Solution -  SERVER:$computer, SITE:$siteUrl" -Fore DarkGreen
 stsadm -o deploysolution -name SolutionName.wsp -url $siteUrl -immediate -allowgacdeployment -force
 stsadm -o execadmsvcjobs
 Write-Host "Activating Feature - SERVER:$computer, SITE:$siteUrl" -Fore DarkGreen
 stsadm -o activatefeature -name FeatureName -url $siteUrl -force
 stsadm -o execadmsvcjobs
 Write-Host "OPERATION COMPLETE - SERVER:$computer, SITE:$siteUrl" -Fore DarkGreen
 stsadm -o execadmsvcjobs
 Write-Host "Resetting IIS so we avoid 'Unknown Error' or 'File Not Found' errors - SERVER:$computer, SITE:$siteUrl" -Fore DarkGreen
 iisreset
 stsadm -o execadmsvcjobs
}

Errors that occur with the script if you don't have the correct permissions on the SharePoint configuration database:



You should have dbo permissions to the Configuration database for your farm. See my related article for details on the permissions you need for solution deployment - http://ddkonline.blogspot.com/2010/03/list-of-permissions-required-for.html
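As a rough sketch only (the configuration database and deployment account names below are placeholders - see the linked article for the full list of required permissions), granting dbo rights looks like this:

```sql
USE SharePoint_Config;  -- placeholder: your farm's configuration database
GO
CREATE USER [DOMAIN\spdeploy] FOR LOGIN [DOMAIN\spdeploy];
GO
EXEC sp_addrolemember N'db_owner', N'DOMAIN\spdeploy';
GO
```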

DDK

How to change the Read Only Attribute of Files in PowerShell using a Visual Studio Pre-Build command (i.e. not using the DOS attrib command)

When using Microsoft PowerShell 2.0, you can just put this in your Visual Studio project pre-build event to remove the read-only attribute on binary files:

$(ProjectDir)FixTemplateFolderAttributes.cmd $(ProjectDir)
This points to a command file in your project directory called "FixTemplateFolderAttributes.cmd" like so:

:: Changes file attributes as needed.
cd %1
powershell Set-ExecutionPolicy RemoteSigned
powershell ../Build/Scripts/FixTemplateFolderAttributes.ps1

This calls the following powershell commands to make files writable:

$computer = gc env:computername

# Make the InfoPath form template files writable
$fileList = Get-ChildItem ".\InfoPath Form Template" | Where-Object {$_.Name -like "*.dll" -or $_.Name -like "*.pdb" -or $_.Name -like "*.xsf"}
foreach ($fileItem in $fileList) 
{
 $fileItem.IsReadOnly = $false # Remove read-only flag
}

# Make the obj\Debug build outputs writable
$fileList = Get-ChildItem ".\obj\Debug\" | Where-Object {$_.Name -like "*.dll" -or $_.Name -like "*.pdb" -or $_.Name -like "*.txt"}
foreach ($fileItem in $fileList) 
{
 $fileItem.IsReadOnly = $false # Remove read-only flag
}

# Make the bin\Debug build outputs writable
$fileList = Get-ChildItem ".\bin\Debug\" | Where-Object {$_.Name -like "*.dll" -or $_.Name -like "*.pdb" -or $_.Name -like "*.txt"}
foreach ($fileItem in $fileList) 
{
 $fileItem.IsReadOnly = $false # Remove read-only flag
}


DDK

Monday, 31 May 2010

Fix - SharePoint Very slow to start after an IISRESET or Recycle of App Pool (30-130 seconds)

I was asked by another team at my current client to look at a performance issue that they'd been having major issues with. There were no obvious errors in the Windows Event Log or SharePoint logs related to the issue. The problem was that:
  1. If the application pool was recycled, it would take around 90-120 seconds for the first page to be served. This would be unacceptable to the client if the app pool were recycled in the middle of the day - it would mean 2 minutes of downtime for all employees.
  2. A similar issue occurred after an IIS reset was performed - and it happened with ALL sites, not just one or two.
To diagnose the issue, I did the following:
  1. ANY performance improvement should be measurable, so I used the Fiddler Web Debugger (http://www.fiddler2.com/fiddler2/) to measure the total request time. The time was 84 seconds on this particular test server.
  2. Used Sysinternals Process Explorer to see what the threads were doing. This revealed little - but it was clear that the process wasn't at 100% CPU the whole time, so it wasn't a problem related to intensive CPU processing.
  3. I enabled ASP.NET tracing at the application level as per http://msdn.microsoft.com/en-us/library/1y89ed7z(VS.71).aspx and viewed the trace log through http://servername/Pages/Trace.axd. However, looking at the load of the control tree, nothing was taking a particularly long time. Even when trace.axd was loading up, it would take an inordinately long time to start up and serve the first requested page. This ruled out the possibility of a slow control being rendered.
  4. I created a completely new web application in SharePoint and it exhibited the same problem. I began to suspect machine-level config settings.
  5. I found and fixed several errors in the Windows Event Log and SharePoint log, but they made no difference.
  6. I began to look at the Fiddler trace while testing again, and by chance noticed that requests were also being made to an external Microsoft address for code-signing certificates. I thought this was unusual - so I did a bit of research and found that it was checking a certificate revocation list (CRL) on a Microsoft web server. This check is performed whenever any of the cryptography calls are made. Some details about this can be found here - but the article is related to Exchange specifically:
    http://msexchangeteam.com/archive/2010/05/14/454877.aspx  
  7. To work around the issue, I tried the registry entries suggested by http://msexchangeteam.com/archive/2010/05/14/454877.aspx, but they didn't seem to work. What DID work was pointing the hosts file so that crl.microsoft.com resolves to the local host (127.0.0.1). This means the call fails much more quickly when it tries to access the certificate revocation lists at http://crl.microsoft.com/pki/crl/products/CSPCA.crl and http://crl.microsoft.com/pki/crl/products/CodeSignPCA2.crl, and does not hold up the loading of applications on the SharePoint server.
  8. After the HOSTs file change, recycle time (and reset time) went from 84 seconds to 20 seconds.
Hopefully this blog entry helps someone else with diagnosing this slowdown problem. Note that this fix only applies if your server doesn't have access to the internet - it is a problem specific to offline or intranet servers.
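For reference, the hosts file change from step 7 above is a single entry in %SystemRoot%\System32\drivers\etc\hosts:

```
# Short-circuit CRL lookups on servers with no internet access
127.0.0.1    crl.microsoft.com
```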

[UPDATE] - Found that someone else encountered this same issue as per
http://blogs.technet.com/b/markrussinovich/archive/2009/05/26/3244913.aspx and
http://www.muhimbi.com/blog/2009/04/new-approach-to-solve-sharepoints.html


The first article suggests disabling publisher evidence generation via the .config file of each affected application - but I've not tested this out:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <generatePublisherEvidence enabled="false"/>
  </runtime>
</configuration>

[UPDATE - 11 October 2010]
One of my colleagues from Oakton had a similar issue and the above fix (using the hosts file) didn't work for them.

One of the fixes that did work was to do the following:
"Disable the CRL check by modifying the registry for all user accounts that use STSADM and all service accounts used by SharePoint. Find yourself a group policy wizard or run the vbscript at the end of this posting to help you out. Alternatively you can manually modify the registry for each account:


[HKEY_USERS\\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing]
"State"=dword:00023e00 "

The following script applies the registry change to all users on a server. This will solve the spin-up time for the service accounts, interactive users and new users.


const HKEY_USERS = &H80000003
strComputer = "."

' Connect to the registry provider via WMI
Set objReg = GetObject("winmgmts:{impersonationLevel=impersonate}!\\" _
    & strComputer & "\root\default:StdRegProv")

' Enumerate all user hives under HKEY_USERS
strKeyPath = ""
objReg.EnumKey HKEY_USERS, strKeyPath, arrSubKeys

' Disable the CRL check ("State" = 146944, i.e. 0x23e00) for each user
strKeyPath = "\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing"
For Each subkey In arrSubKeys
    objReg.SetDWORDValue HKEY_USERS, subkey & strKeyPath, "State", 146944
Next



DDK

Wednesday, 26 May 2010

Warning - BizTalk Server 2009 and SQL Server 2008 R2 are incompatible - wait for BizTalk 2010 (aka BizTalk 2009 R2) for "realignment" of compatibility

During installation of BizTalk 2009 tonight, I found that SQL Server 2008 R2 and BizTalk 2009 are in fact incompatible - I couldn't ever get the BizTalk group to install, as it was giving errors in the log like so:

2010-05-26 01:30:09:0039 [WARN] AdminLib GetBTSMessage: hrErr=c0c02524; Msg=Failed to create Management database "BizTalkMgmtDb" on server "SERVER01".

You will also get a message box with just a hex code of "0xC0C02524" as below:

I tried manually creating the database - but then it started to give errors during the stored procedure creation.

The below blog matches what I experienced during BizTalk 2009 Group Configuration:

http://blogs.msdn.com/b/biztalkcpr/archive/2009/11/09/biztalk-09-and-sql-r2-not-supported-biztalk-09-and-project-references.aspx