Thursday 17 June 2021

React - Error with create-react-app in Windows Environments - "Error: EEXIST: file already exists, mkdir 'C:\Users\XXXXX" - Fix/Solution

I've been asked about this more than once - so I thought it prudent to document the current recommended workaround for this issue. While it is possible to install create-react-app globally to work around the error, this is not recommended per the create-react-app documentation:

"If you've previously installed create-react-app globally via npm install -g create-react-app, we recommend you uninstall the package using npm uninstall -g create-react-app or yarn global remove create-react-app to ensure that npx always uses the latest version."


The cause of the issue:

  • The Windows user name has spaces in it (e.g. C:\Users\David Klein)

Steps to reproduce error/issue:

  1. User attempts to create a react app with npx in a Windows environment:
    npx create-react-app my-app
  2. An error is generated:
    Error: EEXIST: file already exists, mkdir 'C:\Users\XXXXX
  3. It seems that the npm cache path will consequently have a space in it, and npm (or more likely create-react-app) can't handle this.

Solution (until the bug is fixed):
  1. Get the current cache path with "npm config get cache"
  2. cd to the C:\Users\ directory and run dir /x to get the short name of that folder (e.g. DAVIDK~1)
  3. Once you have the short path, set the npm cache to use that path with the following:
    npm config set cache c:\users\davidk~1\AppData\Roaming\npm-cache --global
You should now be free to continue with your React app goodness in Windows.
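As a quick sanity check, the diagnosis above boils down to a one-line test (a sketch only; the path shown is a hypothetical example - in practice, substitute the output of "npm config get cache"):

```python
# Sketch: check whether an npm cache path is likely to trip create-react-app.
# The path below is a hypothetical example - replace it with the output of
# `npm config get cache` on your machine.
cache_path = r"C:\Users\David Klein\AppData\Roaming\npm-cache"

def needs_short_name_workaround(path: str) -> bool:
    # A space anywhere in the path is what create-react-app can't handle
    return " " in path

if needs_short_name_workaround(cache_path):
    print("Cache path contains spaces - apply the short-name workaround")
```

If the check fires, apply the short-name (8.3) cache path as per the steps above.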

Monday 22 March 2021

Best Practices for Azure Multifactor Authentication (MFA)

When configuring Azure MFA and Conditional Access there is the potential to lock out all users from the system including the Azure Portal. As with any security control/mechanism, the costs of implementation and maintenance always need to be commensurate with the risks and costs of not implementing the control (e.g. assets at risk, reputational risk).

With this in mind, here are some key best practices you should follow when enabling MFA:

  1. Ensure that end users are adequately informed that MFA is coming, as it can negatively affect the user experience and cause confusion. Microsoft provides communication templates and end-user documentation for this purpose.
  2. Always grant exclusions for every MFA policy - this will ensure there is always an MFA backdoor so you don't completely lock yourself out (especially if conditional access rules apply to all apps or the Azure portal). When enabling conditional access, make sure exclusions are made for:
    1. Administrators.
    2. Support staff.
    3. Any trusted IPs and known IP addresses/named locations.

  3. Testing - Use what-if policies to test effective permissions when making changes.
  4. Pilot changes using select groups to apply and test MFA policies.
  5. Don't automatically block users who report fraud - users can lock themselves out. Though not blocking is less secure, there is a real danger of false positives.
  6. Don't use the MFA portal and Conditional Access at the same time - otherwise you'll have two competing rulesets. If you are using Conditional Access, disable per-user MFA management in the MFA portal first.
  7. Use Azure Identity Protection - a good way to ensure users are forced to register for MFA (MFA needs to be configured first) and to ensure MFA coverage. It also allows you to notify, block, or require MFA when administrative accounts are signed into during high sign-in-risk activity, such as anomalous travel between sign-ins.

Monday 1 March 2021

CalDigit TS3 Plus Thunderbolt 3 Docking Station - Issues with Windows 10 USB Devices

I've been having a few USB connection and power issues with the CalDigit TS3 Plus Docking Station (even after the January 2021 version 44.1 firmware update from CalDigit themselves). This is especially the case when I power up the laptop separately from the dock and then plug it in whilst still on.

The Problem:
The display adapters would work - but USB connectivity and audio were failing, even after plugging and unplugging USB and associated devices and powering down the hub. No USB devices would even power up while the issue was in effect.

Discovery/Resolution Steps: 
The only thing that would fix it (most of the time) was a full power down restart.

Looking at Device Manager, I was getting a Code 31 error saying "Object Name Already Exists". In the device event history, the following error kept appearing:

Device PCI\VEN_1B73&DEV_1100&SUBSYS_11061AB6&REV_10\8&1b6ac812&0&0000000800E0 was not migrated due to partial or ambiguous match.

Uninstalling and reinstalling the generic "USB xHCI Compliant Host Controller" host controllers didn't work.

There is a teardown video of this dock with details of all the chips/controllers inside that gave me an idea. Looking up the Vendor and Device details in the error above, it seems that the USB Controller Chip used in the CalDigit docking station is the Fresco Logic xHCI 1100 (USB3) Controller.

After a short search, I found Fresco Logic's device driver page and tried their driver (rather than the generic Microsoft xHCI Host Controller driver).

Once I installed this, the device was correctly recognised in Windows, with no reboot required. It has worked without issue since installation of the Fresco driver (fingers crossed!). I believe this Fresco driver installation should go on the CalDigit support page to resolve this issue, as the default generic drivers seem to have problems.

Thursday 11 June 2020

APRA CPS 234 - Summary of Security Compliance Requirements

In my work with NTT, I've recently been dealing with several FSI-based (Financial Services Industry) organisations who have to comply with the
Australian Prudential Regulation Authority (APRA) Standard CPS 234 July 2019. Here's a brief overview of what that compliance with CPS 234 entails:
  1. APRA CPS 234 is Cybersecurity 101 for Banks, Insurers and related institutions.
  2. As with standards like ISO27001:2013, it is a risk-based approach about ensuring that adequate CIA (Confidentiality, Integrity and Availability) is maintained for information assets.
  3. The Board is ultimately responsible for ensuring appropriately robust policies and controls are in place for both the organisation and 3rd party contractors.
  4. Per basic concepts in CISSP (Certified Information Systems Security Professional), controls should really only be implemented if the cost of control implementation is less than the costs of the data being lost/breached.
  5. To this effect, the information security capability basically has to pass the "reasonableness" test:
    1. The security capability should match the size and extent of threats
    2. The controls should match criticality and sensitivity of the assets
  6. CPS 234 aims to ensure that an APRA-regulated entity takes measures to be resilient against information security incidents (including cyberattacks) by maintaining an information security capability commensurate with information security vulnerabilities and threats:
    • Know your responsibilities (the Board is ultimately responsible).
    • Know what you have and protect it appropriately. An APRA-regulated entity must classify its information assets, including those managed by related parties and third parties, by criticality and sensitivity.
      • Ideally, you should be using Azure Information Protection or similar to apply security labels (e.g. classification, sensitivity or dissemination limiting markers) to drive preventative and detective controls.
    • Detect and react appropriately:
      • Have incident plans and RACIs (Responsible, Accountable, Consulted, Informed) in terms of response.
      • Have appropriately skilled people to detect incidents. This requires user awareness and good security practices.
      • Notify APRA of a breach within 72 hours.
        • This implies that proper (pro-active) threat detection and monitoring systems should be in place. If you don't know it's happening, then you can't comply.
    • Test and audit regularly. You must test the effectiveness of controls with a systematic testing program that is run at least annually.
      • This lends itself to regular, automated (static/dynamic) testing.

    It is always critical to keep in mind that threats come from both threat actors inside (insider threat) and outside the organisation (organised or individual actors) - which lends itself to zero trust approaches to cybersecurity.

    Monday 9 March 2020

    NIST 800-207 - What is Zero Trust Architecture (ZTA) and Why Has It Become Important? (aka the X-Files - Trust No One)

    One of the primary concerns, when operating in cloud environments and accessing resources over the internet, is cybersecurity. Traditional firewalls and edge-approaches to security no longer align with how we use technology.

    This has given rise to the recent release of the National Institute of Standards and Technology (NIST) 800-207 draft on Zero Trust Architecture. The release of this document highlights the prominence of the Zero Trust approach to network security. Zero trust is a necessary security model that has arisen due to evolving user and mobility expectations and the rise of different software and infrastructure delivery models such as the cloud.

    Bodies of knowledge such as NIST and CISSP recommend a layered approach to security (also known as "defence in depth" and "Segmentation/Micro-segmentation") - Zero Trust Architecture is a type of layered approach which will protect the confidentiality, integrity and availability of your information. This includes not just servers and devices but also protecting at the application/microservice (e.g. with JSON Web Tokens) and user levels.

    What is Zero Trust Security?

    • Zero Trust follows the motto of the X-Files - "Trust No One". Regardless of whether the traffic is from internal or external sources - access is regularly scrutinized, verified, validated and processed in the same way. 
    • Zero Trust assumes that there is no implicit trust based on a user's or resource's location (e.g. intranet or internet). Normal perimeter or edge-based security approaches statically segment the network this way based on location, subnets and IP ranges.
    • A useful analogy that is often used is the Castle versus the Hotel Model. Once inside a castle, a device or user has great lateral freedom. In a hotel, each room requires a key and is checked on entry to different rooms (representing applications and/or systems). 
    • Zero trust security focuses more on protecting the resources and users both inside and outside those network boundaries. It includes Establishing Trust (e.g. do I trust a jail-broken/unpatched/unencrypted/unsecured/unrecognized device with all of its ports open?), Enforcing Access and Continuously verifying the trust. It also includes continuous monitoring to detect anomalies. It is a combination of technologies and methods of protection.

    • Zero Trust is a more granular and flexible approach to securing resources reflective of the reality of modern workplaces. 
    • Zero Trust typically uses the following parameters and checks in combination to determine policy-based access to resources:
      • User Identity
      • Device (including assurance services, Mobile Device Management Flags - identifying patch levels to establish device-level trust or vulnerabilities)
      • Location
      • Session Risk (such as anomalous/unusual access behaviors or times)
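    The policy-based access decision described in these bullet points can be illustrated with a toy evaluation function (purely illustrative: real engines such as Azure AD Conditional Access are far richer, and all the names and rules below are my own assumptions):

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool  # user identity verified (e.g. via MFA)
    device_compliant: bool    # MDM reports patched/encrypted/not jailbroken
    location_trusted: bool    # known/named location vs anomalous geography
    session_risk: str         # "low", "medium", or "high"

def evaluate(request: AccessRequest) -> str:
    """Combine the signals; never grant implicit trust based on location alone."""
    if not request.user_authenticated or request.session_risk == "high":
        return "deny"
    if not request.device_compliant:
        return "deny"
    # Untrusted location or elevated risk: allow, but step up verification
    if not request.location_trusted or request.session_risk == "medium":
        return "require_mfa"
    return "allow"

# An on-network user on a non-compliant device is still denied
print(evaluate(AccessRequest(True, False, True, "low")))  # deny
```

    Note how a request from a trusted location is still denied if the device is non-compliant - the hotel model, not the castle model.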

    Why has it become important?

    • The rise of working from home, remote users, Bring Your Own Devices (BYOD) and cloud-based services (e.g. Salesforce, Office 365, Microsoft Teams and other AWS, Azure and GCP-based applications) has led to resources and users being located outside traditional network boundaries. 
    • Consequently, authentication and authorization cannot be assumed to be valid just because of the source location of a request - credentials and associated tokens need to be validated independently of location. 
    • Zero Trust is also required because of greater awareness of the "Insider Threat" from contractors and employees - through negligence or malicious intent.
    • As part of the Zero Trust mindset - there are also greater requirements around monitoring, logging and auditing activities as part of due diligence when complying with legal obligations (e.g. Australian Prudential Regulation Laws such as APRA Prudential Standard CPS 234). It is not good enough just to log external activities - internal activities need to be monitored as well. 

    Why is it difficult?

    • Zero Trust requires a much better understanding of the assets and resources that need protection and the behavior of the users consuming and accessing those resources. 
    • Phenomena such as "Shadow IT" also introduce problems because they are not visible and so Zero Trust approaches may actually exclude previously functioning devices from resource access. 
    • Zero Trust requires the creation of more refined corporate and technical policies to handle the more granular resource-based approach to accessing your critical corporate systems.
    • Zero Trust requires much more intensive logging and scrutiny of user activity. This typically necessitates AI or other anomaly detection mechanisms (e.g. out-of-hours access alerts).

    Saturday 29 February 2020

    Basic Guidelines for Product Offering Go/No-Go Decisions (Including Product Fit/Market Fit)

    I've worked for Software/IT Consulting companies, Product Development companies and System/Service Integration companies along my career path. Most recently, I've noticed that some of the basic decision making around which products and service offerings should be developed has missed some critical gateways, resulting in full or partial product failure (i.e. the product doesn't make a good return on investment or ever turn a profit).

    Often, component licensing costs are ignored or forgotten, or the actual pricing is something that the market cannot bear. Sometimes this is due to a lack of multi-tenancy support, so the product offering's economics are not scalable.

    Licensing and subscription costs may go down over time (especially with AWS and Azure services becoming gradually cheaper as they reach greater economies of scale and proportional levels of competition). However, this may not happen quickly enough over the product lifetime to deliver profitability. In this case, service offerings/products need to be "end-of-lifed" ("EOL'd") or migrated to new platforms using components with lower cost structures.

    Going back to basics, I put this diagram together to outline some key principles of product offer development which should be considered as gateways when deciding to bring a product to market.

    What makes a product worthwhile? It starts with being something customers want to buy (and buy enough of). If you find this sweet spot, then you have product/market fit - which means you're no longer pushing your product onto customers. You also need to have a clear vision - otherwise delivery will be problematic when building your product out. There are many articles on product and market fit available - these are just some of my ideas that resonate based on recent experience.

    In particular, critical profitability constraints can be forgotten when that "cool new tech" comes out or "everyone else is doing this in the market":

    What is clear is that product-market fit is an ideal - but not a sufficient indicator of whether a product should go to market. There are other factors to be considered including cost structures and viability of ongoing product development and marketing to maintain that fit, customer value and (hopefully) margins.  

    Friday 7 June 2019

    List of Azure Region Codes for Azure 2019 DevOps Migration Tool (and TFSMigrator Tool)

    Whilst using the Azure DevOps 2019 migration tool to move from an on-premises DevOps server to the cloud, you will be required to enter the desired destination region. Below is a list of all the valid entries as at June 2019:

    CC = Central Canada
    WEU = Western Europe
    EA = East Asia
    EAU = East Australia
    CUS = Central US
    MA = South India
    SBR = South Brazil
    WCUS = West Central US
    UKS = UK South
    EUS = East US
    NCUS = North Central US
    SCUS = South Central US
    WUS2 = West US 2
    GH = ?
    EUS2 = East US 2

    These values appear to come from the server and are not embedded in the tool - otherwise I'd be able to use Reflection to get more information! These region codes seem to be undocumented by Microsoft at present.

    [Update - Microsoft's documentation now has some more details, but doesn't cover all available Region options.]

    Deleting Azure Active Directory Tenant – Unable to delete all Enterprise Applications - Can't Delete Azure DevOps from within User Interface

    Encountered an issue today with removal of an Azure AD Tenant that is no longer used. When attempting to delete the Azure AD Directory - I simply received warnings that I had to "Delete All Enterprise Applications" - with a warning status indicator.

    When I tried to remove the single Azure Enterprise Application (Azure DevOps) - the Delete button was disabled. As you could imagine - this put me in a bit of a pickle!

    The fix that worked for me is as follows:

    1. Create a new Global Admin account in the Azure Active Directory you are trying to delete. Make sure you copy the temporary password as you'll need to log in with it.

    2. To ensure you have the Azure AD Powershell extensions, Start Powershell and run:
    Install-Module -Name AzureAD

    3. Once done, run Connect-AzureAD. You will be prompted to log in. Log in with the user you created; you will be asked to change your password.

    4. Run Get-AzureADServicePrincipal to retrieve the ObjectId of the Enterprise Application that you can't delete.

    5. Run Remove-AzureADServicePrincipal -ObjectId [enter ObjectId here] directly.

    6. Remove your temporary user.

    You should now be able to delete your Azure Active Directory (Azure AD) Tenant instance.