Thursday 18 October 2012

Hiking in New Zealand (Milford Track) - The best way to get and consume Free Offline Topographic Maps for use on Android Phones

I'm about to go hiking on the Milford Track in New Zealand (one of the "Great Walks"), which holds the debatable title of "finest track in the world". I booked the huts through the New Zealand Department of Conservation (DOC - http://www.doc.govt.nz/) - with much of the season well into 2013 already sold out, I was lucky to nab some of the last cabin places.

Looking at the map images online and Google Maps itself, it got me thinking that "fine" as the track may be, I was still keen on having a GPS with full topographic maps along for the ride. I wasn't so keen on forking out for a dedicated GPS device, so I naturally looked to my Samsung SII Android phone as a solution.

Even though Google Maps added full offline capabilities in July 2012, there are many limitations to both the maps and the offline implementation, which led me to search for better solutions for a hiking GPS. Google Maps offline is basically an "alpha" product which:
  1. Only allows 6 offline mapping regions (not sure how they arrived at 6), each of which is limited to ~90MB.
  2. Doesn't currently allow layers (like topo or satellite images).
  3. Doesn't provide any kind of offline search capability, even within those offline maps (search requires a full data connection).
Google Maps offline was workable (though not slick) for guiding us on the drive to Te Anau Downs (where the boat picks up Milford Track walkers). However, Google Maps only has the track at its most basic level - it doesn't have enough detail to indicate how far away we are from the next cabin.

I found the following to be a good solution for full offline maps of the Milford Track:
  1. Download and install Mobile Atlas Creator (MOBAC) from http://mobac.sourceforge.net/ on your PC (Windows/Linux/OSX). You may also need to download and install the Java Runtime Environment (JRE) 1.6 or above. Even if you have a GeoTIFF available (e.g. the freely available GeoTIFFs from the NZ topographic authority), you cannot just upload it to your phone, as the images are typically massive and would take an aeon to load and render.
  2. Use the default world map source in MOBAC to scroll to New Zealand (if you start with the NZ Topo Maps source, you will just see a large group of arrows with no way to navigate to New Zealand).
  3. Switch to the NZ Topo Maps Datasource in MOBAC (see http://mobac.sourceforge.net/quickstart/using3_1.htm for details). You should now see the full New Zealand topographic map.
  4. Zoom into the area you want and mark it as a selection, as per http://mobac.sourceforge.net/quickstart/using3_3.htm
  5. Click the settings button and change the output folder to the location you want.
  6. Click Create Atlas and select your desired output mapping format (I use AlpineQuest Lite, which is available for free on Google Play). All the necessary images for your selection (at the required zoom levels) will be downloaded and aggregated into the requested file format.
You now have an exported map consumable by your offline GPS application.
To consume your newly created atlas in Alpine Quest:
  1. Copy the output file (e.g. the AlpineQuest map (.AQM) file) to the AlpineQuest /maps directory on your phone's SD card.
  2. In AlpineQuest, click on Maps and then Open Offline Map.
  3. You can now view your map and add waypoints as needed on your fully detailed topographical map!
GPS functions on phones are also notoriously power-hungry. Just in case, I've also purchased a couple of spare batteries and a solar charger as backup.

Auf Wiedersehen!
DDK

Monday 8 October 2012

WCF 4.5 - Host Unreachable when calling a WCF service from soapUI

The WCF Test Client (WCFTestClient.exe) is Visual Studio's disappointingly basic tool for testing your WCF services. If you have your hopes up that the Visual Studio 2012 WCF Test Client for .NET 4.5 is any better, forget about it. It is still so simple that it doesn't support client X509 certificates or even usernames and passwords. Other tools like Fiddler and Wireshark are also compulsory items on the tool belt (along with coded .NET integration tests). Fiddler supports client certificates out of the box and works a charm (especially for REST services).
I have used soapUI (http://www.soapui.org/) extensively over the past few years for basic integration testing of WCF (with basic endpoint bindings) and SAP web services. I've also been using it to test connections to my current client's back-end banking systems.

soapUI isn't perfect though - there are differences in interpretation of the WS-* standards between the Java and .NET worlds, and soapUI is built on a Java stack. One such problem you'll find when calling Microsoft WCF services that use wsHttpBinding from soapUI is the following fault when you use the default settings:

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing">
   <s:Header>
      <a:Action s:mustUnderstand="1">http://www.w3.org/2005/08/addressing/soap/fault</a:Action>
   </s:Header>
   <s:Body>
      <s:Fault>
         <s:Code>
            <s:Value>s:Sender</s:Value>
            <s:Subcode>
               <s:Value>a:DestinationUnreachable</s:Value>
            </s:Subcode>
         </s:Code>
         <s:Reason>
            <s:Text xml:lang="en-US">The message with To '' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree.</s:Text>
         </s:Reason>
      </s:Fault>
   </s:Body>
</s:Envelope>
This fault occurs because there is no WS-Addressing "To" entry in the header of the message. The fix is to go to the request's WS-A tab in soapUI and ensure that the "Add default wsa:To" option is checked.
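
Once that option is on, soapUI adds the WS-Addressing "To" header that WCF's EndpointDispatcher expects. As a rough sketch (the service URL and action below are hypothetical examples), the outgoing header then looks something like this:

<soap:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
   <wsa:To>http://servername/MyService.svc</wsa:To>
   <wsa:Action>http://tempuri.org/IMyService/MyOperation</wsa:Action>
</soap:Header>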

Friday 7 September 2012

Neutral Bay Public School - Spring Carnival - Mini Golf 2012 a big success!

Some photos of the Neutral Bay Mini-Golf course that we constructed to raise funds for Neutral Bay Public School. It was such a hit that we even had queues of eager golfers going around the corner!

Thanks again to all those who helped with drills, equipment, blood, sweat and tears. It was a real adventure designing, planning, constructing and dismantling the course.

Till Next Time,
DDK

My Alienware 18x R2 Arrives!

It has been more than 4 years since my massive Dell M1730 arrived - now the replacement has finally landed, and it's a phenomenal piece of kit: the Alienware 18x R2.

Contrary to the popular trend of making everything smaller and lighter, I've gone all-out with a bigger, heavier and beefier laptop - 18.4 inches of goodness!

CORE SPECS:
2x NVIDIA GTX 680M (in SLI)
3x 512GB SSDs
32GB RAM (1600MHz)

With those specs, even Heidi and Zach are all over it!

Friday 10 August 2012

Using Reflection to Force Evaluation of Parameterized Static Methods on Controls without requiring an Instance to be Loaded

At my current Telecommunications client, we have been using EpiServer CMS and associated frameworks (essentially ASP.NET webforms) to develop their website.

We have the concept of "Accordion" controls which are essentially a group of user controls sitting inside an ASP.NET repeater that are loaded based on CMS configuration settings. The user controls are only loaded when the user clicks to expand one of these accordions.

This presented a problem because we wanted to access settings and properties of those user controls to determine whether to display the title outside the user control or not (before any of those user controls were loaded).

I initially thought of using events and delegates - but that wouldn't work because it would require us to spin up instances of the controls to get those events to fire (not ideal).

Instead, we used a nice trick with reflection (thanks Marcus for the idea) which calls a static method on each of those user controls. Any instance objects we needed inside the static method, we just passed in as parameters.

This meant we could use reflection to call the static method on each of our user controls - and it would return to us whether we should hide or show the title for that control. We put the type name in the CMS so the page would know which user control type and method to invoke (without even loading the control).

The meat of the method is below:

        public static bool GetIsAccordionAndTitleVisible(string typeName, IServiceController serviceController)
        {
            var type = Type.GetType(typeName);

            if (type != null)
            {
                var result = true;
                System.Reflection.MethodInfo mi = type.GetMethod("GetIsControlTitleVisible");
                if (mi != null)
                {
                    var p = new object[1]; //Invoke requires object array for parameters
                    p[0] = serviceController;
                    result = (bool) mi.Invoke(null, p);
                }
                return result;
            }
            return true; //Default=true.
        }

In this way, we can defer logic to user controls that haven't even been loaded yet. As an alternative to the IServiceController parameter, we could also have used our Registry to access the dependency-injected service controller from within GetIsControlTitleVisible().
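
For completeness, here is a minimal sketch of what the static method on one of those user controls might look like (the PlanDetailsControl name and the HasVisibleContent property are hypothetical examples, not our actual code):

        public partial class PlanDetailsControl : System.Web.UI.UserControl
        {
            //Called via reflection before any instance of this control exists.
            //The method name must match the string passed to type.GetMethod() above.
            public static bool GetIsControlTitleVisible(IServiceController serviceController)
            {
                //Example rule only: show the accordion title when the injected
                //controller reports there is content worth displaying.
                return serviceController != null && serviceController.HasVisibleContent;
            }
        }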

Friday 3 August 2012

Preferred Microsoft technologies for creating a Lightweight Service Bus - creating RESTful Services with the ASP.NET Web API


I'm starting with a new banking client next week and I thought it was time to make sure all my preferred technology choices are still the latest and greatest on the block.

This ongoing technology revalidation should be par for the course for any technology-based consultant, especially now that the framework release cycles of players like Microsoft have shortened from years to months.

One of the core components of this work for my new client is the development of a lightweight ESB as a "Clayton's" SOA layer (i.e. with limited discoverability). One of the key aims is optimising performance.

Obviously REST is the flavour of the moment as all the largest sites in the world (e.g. Twitter and Facebook) are fully on board with it. It is also the most performant and platform-agnostic approach to exposing services. I had to interact with Twitter heavily via REST for my previous client (a large media company).

Naturally, I would prefer to be creating RESTful services and returning JSON (or optionally XML) just like the largest sites in the world are doing in their APIs.

Windows Communication Foundation (WCF) is the preeminent technology in Microsoft .NET for producing web services. However, I found out that a few months ago, there was a complete consolidation of the leading frameworks for RESTful service creation in .NET:

"For several years now the WCF team has been working on adding support for REST. This resulted in several flavors of REST support in WCF: WCF WebHTTP, WCF REST Starter Kit, and then finally WCF Web API. In parallel the ASP.NET MVC team shipped support for building basic web APIs by returning JSON data from a controller. Having multiple ways to do REST at Microsoft was confusing and forced our customers to choose between two partial solutions. So, several months ago the WCF and ASP.NET teams were merged together and tasked with creating a single integrated web API framework. The result is ASP.NET Web API."

To this end, you should consider the ASP.NET Web API first and foremost for your RESTful service creation needs. For more details, see http://www.asp.net/web-api
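
As a quick illustration, here is a minimal sketch of a Web API controller (the AccountsController and Account types are hypothetical examples, assuming a standard ASP.NET Web API project with the default api/{controller} routing):

using System.Collections.Generic;
using System.Web.Http;

public class Account
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AccountsController : ApiController
{
    //Handles GET api/accounts. Content negotiation returns JSON by default,
    //or XML if the client's Accept header asks for it.
    public IEnumerable<Account> GetAllAccounts()
    {
        return new List<Account>
        {
            new Account { Id = 1, Name = "Cheque" },
            new Account { Id = 2, Name = "Savings" }
        };
    }
}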

For consumption of those RESTful services via the available client-side frameworks? Well that's another blog post altogether!

Till next time,
DDK

Thursday 24 May 2012

SharePoint 2010 - An easier way to hide Ribbon Items and Ribbon Dropdown List Items without Code

My client recently had a requirement to lock down the set of styles available to users whilst editing content (using the standard SharePoint 2010 rich text editor control). The rationale was that if you give users free rein ("enough rope to hang themselves") and absolute freedom on a site, there is a serious risk that it will end up like something Pro Hart cobbled together with a paintball gun. Needless to say, that is not a mess you want to be in (depending on your artistic viewpoint)!

Unfortunately, SharePoint 2010 provides the ability to add new styles to the rich text editor - but NOT to remove those that come with SharePoint out of the box.

SharePoint does provide the ability to hide controls through the SPRibbon object model, like so:
            SPRibbon ribbon = SPRibbon.GetCurrent(this.Page);
            ribbon.TrimById("ribbon.editingtools.cpedittab.styles.styles");
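
For context, here is a minimal sketch of where such trimming code might live - e.g. in a control or delegate control added to the page (the RibbonTrimmerControl class name is a hypothetical example; assumes a using for Microsoft.SharePoint.WebControls):

            public class RibbonTrimmerControl : System.Web.UI.Control
            {
                protected override void OnPreRender(System.EventArgs e)
                {
                    base.OnPreRender(e);
                    //Hide the Styles group on the ribbon's content editing tab.
                    SPRibbon ribbon = SPRibbon.GetCurrent(this.Page);
                    if (ribbon != null)
                    {
                        ribbon.TrimById("ribbon.editingtools.cpedittab.styles.styles");
                    }
                }
            }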

You can also do this declaratively via a Custom Action as described on MSDN here - http://msdn.microsoft.com/en-us/library/ff408060.aspx

Unfortunately, it doesn't get any more granular than that - you can hide or show controls, but not their content. I suppose you could do a recursive FindControl() to get at the rendered components - but that would be a performance killer if it had to run on every page load. It is also painful to hook into the list with jQuery, as the HTML list is rendered dynamically as the ribbon tabs change and different content is selected on the page - so a PageComponent would need to be created, as described at http://msdn.microsoft.com/en-us/library/gg552606.aspx.

Yet another alternative approach is SharePoint's PrefixStyleSheet property on the SharePoint 2010 PublishingWebControls:RichHtmlField - but this would only work for page content fields on page layouts. It wouldn't cater for Content Editor Web Parts (CEWPs), which also allow entry of rich text. To control those, you would have to create a custom web part to replace the CEWP and hide the out-of-the-box one - also a fair bit of work.
Based on the KISS principle, there has to be an easier way - and there is.
As an example, if you want to hide Heading 4, you can just use the following CSS:
#Ribbon\.EditingTools\.CPEditTab\.Paragraph\.ElementWithStyle\.Menu\.Styles a[title="Heading 4"]
{
 display:none;
}

This will hide the "Heading 4" item in the Styles menu. Note that because the control IDs use full stops to fully qualify the element name, you must escape the full stops in your CSS selector with a backslash ("\").

To me, the CSS solution is the lesser of three performance evils (CSS vs jQuery vs server-side recursion), and it avoids several higher-maintenance evils (code complexity and additional solution/feature elements that need to be deployed in your WSP).

Too easy!
DDK

Wednesday 23 May 2012

SharePoint 2010 - Content Editor Web Part (CEWP) Versioning

I've been seeing (and hearing) a lot of misinformation about the SharePoint 2010 Content Editor Web Part (CEWP) from people who assume it behaves exactly as it did in SharePoint 2007. This is then used as a rationale for why you should use page fields as part of page layouts (i.e. the "Page Content" field) to store content - "if you want versioning, you have to use page fields and page layouts. Period! CEWPs just don't support versioning". This is incorrect!

On the contrary, the SharePoint 2010 CEWP does support versioning and you can easily roll back to a previous version of a page with the content of all your Content Editor Web Parts still intact from that old version. It would appear that this is one of the less-well publicised "new" features of SP 2010.

While "page content" fields within page layouts (and their supporting content types) still have a role (when you want more control over layout or don't want javascript put into your pages), a more flexible combination of basic page layouts (e.g. for a column-based structure), with the CEWP provides for more options to end users - particularly when requirements are not set in stone (are they ever?).

Some entries in the blogosphere which indicate (erroneously) that content isn't versioned within the CEWP in SharePoint 2010:
http://geekswithblogs.net/SoYouKnow/archive/2011/07/28/a-dummies-guide-to-sharepoint-and-jqueryndashgetting-started.aspx

http://dsgeorge2976.wordpress.com/2011/04/29/versioning-sharepoint-content-editor-webpart-content/

DDK

Wednesday 21 March 2012

Consuming Twitter Feeds in .NET using the Twitter REST API and JSON

My current client had a requirement to provide a Twitter feed as part of a SharePoint 2010 intranet site. Unfortunately, RSS is no longer an actively supported mechanism for consuming Twitter feeds - instead, the recommended way to get tweets is via the REST APIs. The Twitter widgets were also not an option, as I had to cache the results locally to reduce the bandwidth consumed by the web part. I tried using LINQ2Twitter - and it worked well until I ran into problems with it not supporting an authenticated HTTP proxy (ISA Server).

To get around this, I did the following:
  1. Performed a search via http://search.twitter.com using the query syntax, e.g. from:news_com_au OR from:time, to get tweets from time and news_com_au.
  2. Grabbed the query string to use against the REST API, e.g. http://twitter.com/#!/search/from%3Anews_com_au%20OR%20from%3Atime%20
  3. Used that search string in a REST query, specifying the JSON result format like so: http://search.twitter.com/search.json?q=from%3Anews_com_au%20OR%20from%3Atime%20
  4. From the JSON that came back, I created a JSON serialization class in C# using the great tool json2csharp - http://json2csharp.com/. This inferred the class structure from the JSON results that I provided from Twitter.
The class generated was as follows:
//Twitter JSON result
namespace CompanyName.HRPortal.Repository.Dto.TwitterJsonResultDto
{
    public class Metadata
    {
        public string result_type { get; set; }
    }

    public class Result
    {
        public string created_at { get; set; }
        public string from_user { get; set; }
        public int from_user_id { get; set; }
        public string from_user_id_str { get; set; }
        public string from_user_name { get; set; }
        public object geo { get; set; }
        public object id { get; set; }
        public string id_str { get; set; }
        public string iso_language_code { get; set; }
        public Metadata metadata { get; set; }
        public string profile_image_url { get; set; }
        public string profile_image_url_https { get; set; }
        public string source { get; set; }
        public string text { get; set; }
        public object to_user { get; set; }
        public object to_user_id { get; set; }
        public object to_user_id_str { get; set; }
        public object to_user_name { get; set; }
    }

    public class RootObject
    {
        public double completed_in { get; set; }
        public long max_id { get; set; }
        public string max_id_str { get; set; }
        public int page { get; set; }
        public string query { get; set; }
        public string refresh_url { get; set; }
        public List<Result> results { get; set; }
        public int results_per_page { get; set; }
        public int since_id { get; set; }
        public string since_id_str { get; set; }
    }
}

I could then use the standard .NET System.Runtime.Serialization.Json.DataContractJsonSerializer to parse the results and expose them locally as a WCF service (using ASP.NET caching).

Here's the sample code to get the Twitter feed from behind a proxy using REST and JSON:

        [TestMethod]
        public void TestGetPublicTweetsJson()
        {
            string feedUrl =
                "http://search.twitter.com/search.json?q=%09from%3Anews_com_au%20OR%20from%3Atime%20since%3A2012-03-20";
            HttpWebRequest httpWebRequest = (HttpWebRequest) HttpWebRequest.Create(feedUrl);
            httpWebRequest.Proxy = WebRequest.DefaultWebProxy;

            //Use The Thread's Credentials (Logged-In User's Credentials)
            if (httpWebRequest.Proxy != null)
                httpWebRequest.Proxy.Credentials = CredentialCache.DefaultCredentials;

            using (var httpWebResponse = (HttpWebResponse) httpWebRequest.GetResponse())
            {
                using (var responseStream = httpWebResponse.GetResponseStream())
                {
                    if (responseStream == null) return;
                    DataContractJsonSerializer jsonSerializer =
                        new DataContractJsonSerializer(typeof (RootObject));
                    RootObject root =
                        (RootObject) jsonSerializer.ReadObject(responseStream);
                }
            }
        }
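
Since the web part needed to cache results locally, here is a minimal sketch of wrapping the parsed feed (the RootObject class generated above) in the ASP.NET cache via HttpRuntime.Cache - the cache key and 15-minute expiry are arbitrary example values:

        public static class TweetCache
        {
            private const string CacheKey = "TwitterFeed";

            //Returns the cached feed if present; otherwise fetches, caches and returns it.
            public static RootObject GetOrAdd(System.Func<RootObject> fetch)
            {
                var cached = System.Web.HttpRuntime.Cache[CacheKey] as RootObject;
                if (cached != null) return cached;

                var fresh = fetch();
                System.Web.HttpRuntime.Cache.Insert(CacheKey, fresh, null,
                    System.DateTime.UtcNow.AddMinutes(15),
                    System.Web.Caching.Cache.NoSlidingExpiration);
                return fresh;
            }
        }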

That's it!
DDK

Monday 13 February 2012

SQL Server Reporting Services (SSRS) - Options for Dynamically Setting Report Data Source

I received a call today from one of my colleagues in the Oakton Canberra office, who was working at a Federal Government department using SSRS in SharePoint-integrated mode.
He wanted deployed reports to point to the correct data source without having to manually update each report's DataSource whenever a new SharePoint site was provisioned (based on a particular site template that included reports) - especially once he moved back off site.

I suggested a few ways forward - each with its own advantages and disadvantages:
1) Create a custom SQL Server Reporting Services data processing extension that can source data dynamically (as outlined here http://msdn.microsoft.com/en-us/library/microsoft.reportingservices.dataprocessing.aspx and here http://msdn.microsoft.com/en-us/library/bb283184.aspx)
2) Use expression-based connection strings (see the example below)
3) Set the data source at deploy time via PowerShell or another scripting technology (or as part of a feature event receiver in SharePoint)
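
As a quick illustration of option 2, an expression-based connection string builds the connection at run time from report parameters (ServerName and DatabaseName here are hypothetical report parameters):

="Data Source=" & Parameters!ServerName.Value & ";Initial Catalog=" & Parameters!DatabaseName.Value

One caveat: expression-based connection strings only work with embedded (report-specific) data sources, not shared data sources.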

The deploy-time option was the simplest and cleanest (though it has the limitation of breaking if site locations are moved - a script would need to update the data sources as part of any site migration process). This was deemed an acceptable tradeoff.
DDK

SharePoint 2010 - When Hosting Custom WCF Services in SharePoint, SPContext is Null due to MultipleBaseAddressBasicHttpBindingServiceHostFactory Bug

The recommended way to expose a WCF service through SharePoint 2010 is NOT to manipulate the web.config manually (or through code) to set your WCF bindings.

Instead, it is recommended that you use one of the service host factories provided in the Microsoft.SharePoint.Client.Services namespace. These generate the necessary bindings entries for you.
The 3 types of binding service host factories are listed below (a sample .svc registration follows the list):
  • SOAP = MultipleBaseAddressBasicHttpBindingServiceHostFactory (WARNING: this has a bug if the site collection is not at the root)
  • REST = MultipleBaseAddressWebServiceHostFactory
  • Data Service = MultipleBaseAddressDataServiceHostFactory 
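For reference, a typical .svc file wires up one of these factories like so (a sketch only - the service class and its assembly details are hypothetical examples, and [YourToken] is a placeholder for your own assembly's public key token):

<%@ ServiceHost Language="C#" Debug="true"
    Service="MyCompany.Services.CommonService, MyCompany.Services, Version=1.0.0.0, Culture=neutral, PublicKeyToken=[YourToken]"
    Factory="Microsoft.SharePoint.Client.Services.MultipleBaseAddressBasicHttpBindingServiceHostFactory, Microsoft.SharePoint.Client.ServerRuntime, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
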
There appears to be a bug in MultipleBaseAddressBasicHttpBindingServiceHostFactory which means the endpoints generated by that factory are only correct if the site is hosted at the root. If your site collection is hosted at, say, /sites/myportal and you deploy your WCF service there, then the endpoint will be:
http://servername/mycompanyname/_vti_bin/myportal/CommonService.svc when it should really be
http://servername/mycompanyname/sites/myportal/_vti_bin/CommonService.svc

Consequently, if you try to query the service with the default bindings, the requests will work, but they won't have a valid SPContext - even though the HttpContext is valid and has a valid authenticated user (e.g. when calling through soapUI, the WCF Test Client, or jQuery).

I wondered why the examples Microsoft provides do work (see the "Revert" WCF sample here: http://msdn.microsoft.com/en-us/library/ff521582.aspx) - it is because the Microsoft samples override the bindings in the client-side code, using a constructor overload of the WCF client classes that accepts the bindings and endpoint as parameters. This is clearly not an option in, say, jQuery - so it is not an ideal example at all. Take away the endpoints set in code, and calls to SPContext will fail.

I confirmed this behaviour by turning off the loopback exclusion in Fiddler and executing the same calls - and saw that the calls were indeed going to an invalid location at the root of my server. This is why SPContext was null when calling my custom SharePoint WCF services.

Workarounds:
1) Don't use the SOAP binding factory if you need to deploy your WCF services outside the root. You can manually change/script the web.config changes to add the correct bindings there.
2) Use the REST factory as it doesn't suffer from the same issue.

DDK

Monday 6 February 2012

SharePoint 2010 - People Picker works in Central Admin (for adding Farm Administrators) - but it doesn't recognize them in the Site Collection Administrator People Picker

I had a question from a colleague today as to why users from our main corporate domain were not showing up in the People Picker in SharePoint 2010. This has come up 3 times in the last week, so it warranted a blog entry.

In this situation, the SharePoint 2010 instance was installed in my company's corporate development domain. No two-way trust relationship exists between the main and development domains. Users from our main domain were correctly recognized in Central Admin (when adding farm administrators) - but were not being recognized in the people pickers in site collections.

Why does this happen? By default, the SharePoint 2010 people picker control (at the site collection level) will not search domains other than the one you used to install SharePoint. The one in Central Administration does work as it has the correct properties set by default.

To correct this situation, you need to run the "peoplepicker-searchadforests" command against the site collections for which you want the people picker control to search additional domains.

A sample of this command can be found below. So that all paths resolve correctly for stsadm, you should run the following commands from the "SharePoint 2010 Management Shell":
stsadm.exe -o setapppassword -password [AppPassword]

stsadm.exe -o setproperty -pn "peoplepicker-searchadforests" -pv "forest:ddkonline.com.au,ddkonline\trusteduserinotherdomain,[password];forest:ddkonline.dev.local,[DevDomainAccount],[DevDomainAccountPassword]" -url https://sitename.com.au

The above command adds 2 forests to be queried when using the people picker - a development domain (ddkonline.dev.local) and the main corporate domain (ddkonline.com.au).
DDK

Friday 20 January 2012

Just passed the TOGAF 9 Certified Exam (Level 1 and 2) with 90% pass mark!

Just passed the TOGAF 9 Level 1 and 2 combined exam with a mark of 90/100. Part 2 was particularly difficult, as expected, and really tests your knowledge well. In particular, it tested my knowledge of the different architecture viewpoints that are tailored to match stakeholder concerns.

I won't give too much away (lest I violate the non-disclosure agreement) - but the official Study Guide for Part 2 (https://www2.opengroup.org/ogsys/jsp/publications/PublicationDetails.jsp?catalogno=b096) absolutely has to be known back-to-front to be able to answer the Part 2 questions. It contains content that isn't obvious from the publicly available TOGAF 9.1 document (http://pubs.opengroup.org/architecture/togaf9-doc/arch/).
I was a bit worried going into the exam, as I had just discovered that one of my colleagues had failed the TOGAF 8 bridging exam. Luckily I survived the gauntlet (after several late nights of study) and made it through.

DDK

Tuesday 10 January 2012

SonicWall VPN - "The peer does not allow saving of username and password" - How to Automatically Log in without entering username and pass every time

There is a server-side flag in the SonicWall firewall administration tool which prevents you from saving your username and password. By default this flag is on - and if you go to the settings for your VPN connection, you cannot enter them. The text boxes are disabled, and you are shown the following message:

The peer does not allow saving of username and password

If your connection is poor, you will have to enter your username and password several times a day - and this can be very frustrating. To work around it, you can use the following command line for the SonicWall Global VPN Client if you don't want to enter your username and password every time you log into the VPN:

"C:\Program Files\SonicWALL\SonicWALL Global VPN Client\SWGVC.exe"/e "VPNName" /u "username" /p "password"

WARNING:
Note that if you save this command to a batch file, the password will not be encrypted - so your system is inherently less secure if your machine gets stolen. Naturally, that's the main reason the "save username and password" functionality is disabled by default for all users.

DDK