Monday, 9 March 2020

NIST 800-207 - What is Zero Trust Architecture (ZTA) and Why Has It Become Important? (aka the X-Files - Trust No One)

One of the primary concerns when operating in cloud environments and accessing resources over the internet is cybersecurity. Traditional firewalls and perimeter-based approaches to security no longer align with how we use technology.

This has given rise to the recent release of the National Institute of Standards and Technology (NIST) 800-207 security draft (https://csrc.nist.gov/publications/detail/sp/800-207/draft). The release of this document highlights the prominence that the Zero Trust approach to network security has gained. Zero Trust is a necessary security model that has arisen from evolving user and mobility expectations and the rise of different software and infrastructure delivery models such as the cloud.

Bodies of knowledge such as NIST and CISSP recommend a layered approach to security (also known as "defence in depth" and "segmentation/micro-segmentation"). Zero Trust Architecture is a type of layered approach which will protect the confidentiality, integrity and availability of your information. This includes not just servers and devices but also protection at the application/microservice level (e.g. with JSON Web Tokens) and the user level.
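As one concrete example of application/microservice-level protection, a service can verify a JSON Web Token on every request rather than trusting the caller's network location. Below is a minimal HS256 (HMAC-SHA256) verification sketch in Python using only the standard library - it is an illustration only, and a real service should use a maintained JWT library that also validates the header, expiry and audience claims:

```python
import base64
import hashlib
import hmac
import json


def b64url_decode(data: str) -> bytes:
    # JWTs use unpadded base64url; restore the padding before decoding
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))


def verify_jwt_hs256(token: str, secret: bytes):
    """Verify an HS256 JWT signature; return the claims dict, or None if invalid."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None  # not a three-part JWT
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        return None  # signature mismatch - do not trust the claims
    return json.loads(b64url_decode(payload_b64))
```

The key Zero Trust point is that the signature is checked on every call, so a request is never trusted simply because it arrived from "inside" the network.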

What is Zero Trust Security?


  • Zero Trust follows the motto of the X-Files - "Trust No One". Regardless of whether the traffic is from internal or external sources - access is regularly scrutinized, verified, validated and processed in the same way. 
  • Zero Trust assumes that there is no implicit trust based on a user's or resource's location (e.g. intranet or internet). Traditional perimeter or edge-based security approaches segment the network statically based on location, subnets and IP ranges.
  • A useful analogy that is often used is the Castle versus the Hotel Model. Once inside a castle, a device or user has great lateral freedom. In a hotel, each room requires a key and is checked on entry to different rooms (representing applications and/or systems). 
  • Zero trust security focuses more on protecting the resources and users both inside and outside those network boundaries. It includes Establishing Trust (e.g. do I trust a jail-broken/unpatched/unencrypted/unsecured/unrecognized device with all of its ports open?), Enforcing Access and Continuously verifying the trust. It also includes continuous monitoring to detect anomalies. It is a combination of technologies and methods of protection.

  • Zero Trust is a more granular and flexible approach to securing resources reflective of the reality of modern workplaces. 
  • Zero Trust typically uses the following parameters and checks in combination to determine policy-based access to resources:
    • User Identity
    • Device (including assurance services, Mobile Device Management Flags - identifying patch levels to establish device-level trust or vulnerabilities)
    • Location
    • Session Risk (such as anomalous/unusual access behaviors or times)
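As a rough illustration, the checks above can be combined into a single policy-based access decision. Here is a minimal sketch in Python - the attribute names, trusted locations and risk threshold are all hypothetical, not from NIST 800-207:

```python
def evaluate_access(request: dict):
    """Return (allow, reasons) for an access request.

    Combines the four checks above: user identity, device trust,
    location and session risk. All rules here are illustrative only.
    """
    reasons = []

    # User identity: credentials are validated on every request
    if not request.get("user_authenticated"):
        reasons.append("user not authenticated")

    # Device trust: e.g. MDM flags identifying patch level or jailbreak status
    device = request.get("device", {})
    if device.get("jailbroken") or not device.get("patched"):
        reasons.append("device not trusted")

    # Location: informs risk but never grants implicit trust on its own
    if request.get("location") not in ("office", "home", "known"):
        reasons.append("unrecognised location")

    # Session risk: anomalous behaviour or unusual access times
    if request.get("session_risk_score", 0) > 0.7:
        reasons.append("session risk too high")

    return (len(reasons) == 0, reasons)
```

Note that no single check (such as being on the "office" network) is sufficient on its own - every parameter is evaluated for every request.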


Why has it become important?

  • The rise of working from home, remote users, Bring Your Own Device (BYOD) and cloud-based services (e.g. Salesforce, Office 365, Microsoft Teams and other AWS, Azure and GCP-based applications) has led to resources and users being located outside traditional network boundaries.
  • Consequently, authentication and authorization cannot be assumed to be valid just because of the source location of a request - credentials and associated tokens need to be validated independently of location. 
  • Zero Trust is also required because of greater awareness of the "Insider Threat" from contractors and employees - through negligence or malicious intent.
  • As part of the Zero Trust mindset - there are also greater requirements around monitoring, logging and auditing activities as part of due diligence when complying with legal obligations (e.g. Australian Prudential Regulation Laws such as APRA Prudential Standard CPS 234). It is not good enough just to log external activities - internal activities need to be monitored as well. 

Why is it difficult?

  • Zero Trust requires a much better understanding of the assets and resources that need protection and the behavior of the users consuming and accessing those resources. 
  • Phenomena such as "Shadow IT" also introduce problems because they are not visible and so Zero Trust approaches may actually exclude previously functioning devices from resource access. 
  • Zero Trust requires the creation of more refined corporate and technical policies to handle the more granular resource-based approach to accessing your critical corporate systems.
  • Zero Trust requires much more intensive logging and scrutiny of user activity. This typically necessitates AI or other anomaly detection mechanisms (e.g. out-of-hours access alerts).
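Even a crude version of the out-of-hours access alert mentioned in the last point is simple to express. A sketch in Python - the 7am-7pm weekday window is an assumed example, not a standard:

```python
from datetime import datetime

# Assumed business hours for this example: 07:00-19:00, Monday to Friday
def is_out_of_hours(access_time: datetime) -> bool:
    """Flag accesses that fall outside the assumed business-hours window."""
    if access_time.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        return True
    return not (7 <= access_time.hour < 19)
```

Real anomaly detection would baseline each user's normal behaviour rather than using a fixed window, but even a static rule like this surfaces the obvious cases.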


Saturday, 29 February 2020

Basic Guidelines for Product Offering Go/No-Go Decisions (Including Product Fit/Market Fit)

I've worked for software/IT consulting companies, product development companies and system/service integration companies along my career path. Most recently I've noticed that some of the basic decision making around which products and service offerings should be developed has missed some critical gateways, resulting in full or partial product failure (i.e. the product doesn't make a good return on investment or ever turn a profit).

Often, component licensing costs are ignored or forgotten, or the actual pricing is something that the market cannot bear. Sometimes this is due to a lack of multi-tenancy support, so the product offering economics are not scalable.

Licensing and subscription costs may go down over time (especially with AWS and Azure services becoming gradually cheaper as they reach greater economies of scale and proportional levels of competition). However, this may not happen quickly enough over the product lifetime to deliver profitability. In this case service offerings/products need to be "end-of-lifed" ("EOL'd") or migrated to new platforms and components with lower cost structures.
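The licensing and multi-tenancy point can be reduced to simple unit economics. A sketch in Python with purely hypothetical figures, comparing per-customer margin with and without multi-tenancy:

```python
def monthly_margin(price, infra_cost, license_cost, customers_per_instance=1):
    """Per-customer monthly margin.

    With multi-tenancy, the fixed infrastructure and component licensing
    costs of one instance are shared across all customers on it.
    """
    shared_cost = (infra_cost + license_cost) / customers_per_instance
    return price - shared_cost

# Hypothetical figures: $100/month price, $60 infrastructure, $80 licensing
single_tenant = monthly_margin(100, 60, 80)                             # loses money
multi_tenant = monthly_margin(100, 60, 80, customers_per_instance=10)   # profitable
```

The point of the sketch is the gateway check: if the single-tenant number is negative and the architecture cannot share instances, no amount of sales volume fixes the offering.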

Going back to basics, I put this diagram together to outline some key principles of product offer development which should be considered as gateways when deciding to bring a product to market.

What makes a product worthwhile? It starts with being something customers want to buy (and buy enough of). If you find this sweet spot, then you have product/market fit - which means you're no longer pushing your product onto customers. You also need to have a clear vision - otherwise delivery will be problematic when building your product out. There are many articles on product and market fit available - these are just some of my ideas that resonate based on recent experience.

In particular, critical profitability constraints can be forgotten when that "cool new tech" comes out or "everyone else is doing this in the market":


What is clear is that product-market fit is an ideal - but not a sufficient indicator of whether a product should go to market. There are other factors to be considered, including cost structures and the viability of ongoing product development and marketing to maintain that fit, customer value and (hopefully) margins.


Friday, 7 June 2019

List of Azure Region Codes for Azure 2019 DevOps Migration Tool (and TFSMigrator Tool)

Whilst using the Azure DevOps 2019 migration tool to move from an on-premises DevOps server to the cloud, you will be required to enter the desired destination region. Below is a list of all the valid entries as at June 2019:

CC = Central Canada
WEU = Western Europe
EA = East Asia
EAU = East Australia
CUS = Central US
MA = South India
SBR = South Brazil
WCUS = West Central US
UKS = UK South
EUS = East US
NCUS = North Central US
SCUS = South Central US
WUS2 = West US 2
GH = ?
EUS2 = East US 2

These values appear to come from the server and are not embedded in the tool - otherwise I'd be able to use Reflection to get more information! These region codes seem to be undocumented by Microsoft at present.

[Update - documentation has some more details - but doesn't cover off all available region options - https://docs.microsoft.com/en-us/azure/devops/migrate/migration-import?view=azure-devops#supported-azure-regions-for-import]

Deleting Azure Active Directory Tenant – Unable to delete all Enterprise Applications - Can't Delete Azure DevOps from within User Interface

Encountered an issue today with removal of an Azure AD Tenant that is no longer used. When attempting to delete the Azure AD Directory - I simply received warnings that I had to "Delete All Enterprise Applications" - with a warning status indicator.

When I tried to remove the single Azure Enterprise Application (Azure DevOps) - the Delete button was disabled. As you could imagine - this put me in a bit of a pickle!

The fix that worked for me is as follows:

1. Create a new Global Admin account in the Azure Active Directory you are trying to delete. Make sure you copy the temporary password as you'll need to log in with it.

2. To ensure you have the Azure AD PowerShell module, start PowerShell and run:
Install-Module -Name AzureAD

3. Once done, run Connect-AzureAD. You will be prompted to log in. Log in with the user you created; you will be asked to change your password.


4. Run
Get-AzureADServicePrincipal
to retrieve the Object Id of the Enterprise Application that you can't delete.

5. Run
Remove-AzureADServicePrincipal -ObjectId [enter objectid here]
directly.

6. Remove your temporary user.

You should now be able to delete your Azure Active Directory (Azure AD) Tenant instance.

Source: https://blogs.msdn.microsoft.com/kennethteo/2017/09/19/deleting-azure-ad-tenant/

Wednesday, 13 June 2018

Forcing Synchronization of Display Name and Email from Active Directory without User Profile Synchronization - PowerShell Script

Just made this script to synchronize the Display Name and Email for all users in a root web when they have been updated in AD but aren't flowing through to SharePoint. This may be required if the user profile service is not set up or is failing. This problem is discussed in more detail at https://gallery.technet.microsoft.com/office/User-Information-List-in-8b420e8c

Add-PSSnapin "Microsoft.SharePoint.PowerShell"
#As discussed in https://gallery.technet.microsoft.com/office/User-Information-List-in-8b420e8c 

Write-Host  -ForegroundColor Yellow "-------------------Process Start---------------------------------------------------------------"
Write-Host  -ForegroundColor Yellow "This script will sync the AD display name and email from AD without running a user profile sync"
Write-Host  -ForegroundColor Yellow "As discussed in https://gallery.technet.microsoft.com/office/User-Information-List-in-8b420e8c" 
Write-Host  -ForegroundColor Yellow "-------------------Process Start---------------------------------------------------------------"

$stopWatch = [Diagnostics.Stopwatch]::StartNew()

$rootWeb = Get-SPWeb "https://demo01.berkeleyit.com/"
ForEach ($user in $rootWeb.AllUsers)
{
    Write-Host  -ForegroundColor Green "Syncing Email and DisplayName with Active Directory... for $user" 
    Set-SPUser -Web $rootWeb -identity $user.UserLogin -SyncFromAD
}

$stopWatch.Stop()

$timeTaken = $stopWatch.Elapsed

Write-Host  -ForegroundColor Yellow "-------------------Process Completed in $timeTaken------------------"

Wednesday, 28 March 2018

TypeScript - Importing jQuery TypeScript Definitions (d.ts) for Visual Studio 2017

TypeScript is building in popularity and jQuery remains one of the most popular JavaScript frameworks. Consequently you will typically want to use them together in the same project at one stage or another.
If you do want to use jQuery within your TypeScript files in Visual Studio 2017, you need a "DefinitelyTyped" definition specifically for jQuery. This will allow Visual Studio to correctly recognise jQuery objects when validating and compiling (or transpiling, based on your preferred terminology).


To do this, just download the TypeScript definitions (d.ts) for jQuery through the NuGet Package Manager UI, or use the following command in the NuGet Package Manager Console:

Install-Package jquery.TypeScript.DefinitelyTyped

Now the TypeScript compiler will recognise your jQuery calls:


DDK

Friday, 1 September 2017

BOSE QuietComfort 35 - How to Tell You Have a Fake set of BOSE Headphones

I've had fake MicroSD cards sent to me previously when buying online - but the attention to detail in the fake BOSE QC35 headphones I recently received was amazing.

After ordering my Bose QuietComfort 35 noise-cancelling headphones on eBay, I was surprised how slick they looked - but the positive impression did not last. Once I plugged them in and charged them up, it became painfully apparent that the electronics inside them were not up to scratch:

1) They would cut out intermittently from the Bluetooth connection.
2) The noise cancellation was ineffective. I've used the QC25 headphones before and it is clear when the noise cancellation is turned on and off (think night and day). When noise cancellation is on, it reminds me of when people go deaf (in movies like "Saving Private Ryan" or "Dunkirk") from the concussive effects of a grenade or bomb (yes, they're that good!).

I contacted Bose (Report_Counterfeits@bose.com) and they confirmed that they were in fact very well done fakes that didn't exhibit most of the superficial faults fakes usually have. In particular, the "BOSE" writing on the headphones was embossed perfectly and there were no clear marks where the ear cups had been glued together. All the serial numbers on the headphones and on the box also matched. The box, plastic and packaging were also hard to fault.

The only issue was that the serial number was invalid - the number after the Z is meant to be a date.

S/N:072536Z08231568AE

So it seems that the most important (and most expensive) components - the internal chips and electronics - are the part you are actually paying for, and the part that is hardest for the guys making the yum-cha/knock-off copies to reproduce accurately.

So the simplest way to check for a fake is to attempt to register your headphones online at:
https://www.bose.com.au/en_au/support/product_registration.html


The guy I was dealing with on eBay even tried to negotiate the refund down to 30% of the original price before I told him how it is meant to work. Make sure you demand a 100% refund, including the return postage cost, on a fake substandard item like this. The guys at BOSE will also help you to get a refund if needs be.

Hope this helps!
DDK

Monday, 13 March 2017

Fix - Error in lc.exe when Compiling Solution upgraded to Visual Studio 2017 RTM from Visual Studio 2015

Upgraded our product solution today to the latest Visual Studio 2017 RTM and everything seemed to work fine - until I started getting the following error in the build:

"The specified task executable "lc.exe" could not be run. The filename or extension is too long"

What is this lc.exe command and why is it running? lc.exe is the .NET License Compiler - it is used by the standard .NET licensing mechanism, and Visual Studio maintains the licenses.licx file it consumes with information about all licensed components.

In my case, the error was occurring in "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets" Line 2975 according to my error log.

Clearly this problem was related to the fact we are using Telerik Controls which require a licx file to compile (or so I thought).

I turned on full diagnostics in Visual Studio 2017 to help get to the bottom of the issue:

This showed that the full command line being passed to lc.exe was over 42,000 characters long (!):
1> C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6.1 Tools\lc.exe /target:ESSP.ApplicationPages.dll /complist:Properties\licenses.licx /outdir:obj\Debug_SP2013\ ....[SNIP]
 


Several places - such as Microsoft Connect - suggested that the "solution" (pardon the pun) is just to delete the licx file (also as per http://docs.telerik.com/devtools/aspnet-ajax/licensing/license-file). I could then recompile without any build exceptions.

This issue comes about because the lc.exe executable can only handle a parameter length of 32000 characters or less - and the full path is used for all references. Needless to say this is a restrictive limitation in the licensing mechanism!

So the possible alternatives to fix this issue:
1) Remove the licx file if possible when you don't need the full licensed functionality (in my case this was fine - as we don't need design mode for the Telerik controls).
2) Reduce the length of your references by adding a shared drive or logical redirect to a shorter path (e.g. c:\references instead of c:\src\DDK\product name\releases\, etc.)
3) Reduce the number of references that you have in the project that has issues with lc.exe
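To gauge whether options 2 or 3 will get you under the limit before rebuilding, you can estimate the command-line length from your reference paths. A rough sketch in Python - the /i: argument form is how lc.exe receives referenced assemblies, and the 32,000-character figure is the approximate limit noted above; the paths in any real check would be your own:

```python
# Rough estimate of the lc.exe command-line length: each referenced
# assembly is passed as an /i:<full path> argument, so long source
# tree paths add up quickly across many references.
LC_EXE_LIMIT = 32000  # approximate parameter-length limit noted above

def estimated_length(base_command: str, reference_paths) -> int:
    """Length of the base command plus one /i:"<path>" argument per reference."""
    args = " ".join(f'/i:"{p}"' for p in reference_paths)
    return len(base_command) + 1 + len(args)
```

Comparing the estimate for your current paths against a shortened root (e.g. c:\references) shows how much headroom a redirect actually buys you.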

DDK