Thursday, 11 June 2020

APRA CPS 234 - Summary of Security Compliance Requirements


In my work with NTT, I've recently been dealing with several FSI (Financial Services Industry) organisations that must comply with the
Australian Prudential Regulation Authority (APRA) Standard CPS 234 (July 2019). Here's a brief overview of what compliance with CPS 234 entails:
  1. APRA CPS 234 is Cybersecurity 101 for Banks, Insurers and related institutions.
  2. As with standards like ISO 27001:2013, it takes a risk-based approach to ensuring that adequate CIA (Confidentiality, Integrity and Availability) is maintained for information assets.
  3. The Board is ultimately responsible for ensuring appropriately robust policies and controls are in place for both the organisation and third-party contractors.
  4. Per basic concepts in CISSP (Certified Information Systems Security Professional), controls should generally only be implemented if the cost of implementing the control is less than the expected cost of the data being lost or breached.
  5. To this end, the information security capability essentially has to pass the "reasonableness" test:
    1. The security capability should match the size and extent of threats
    2. The controls should match criticality and sensitivity of the assets
  6. CPS 234 aims to ensure that an APRA-regulated entity takes measures to be resilient against information security incidents (including cyberattacks) by maintaining an information security capability commensurate with information security vulnerabilities and threats:
    • Know your responsibilities (The board is ultimately responsible).
    • Know what you have and protect it appropriately. An APRA-regulated entity must classify its information assets, including those managed by related parties and third parties, by criticality and sensitivity.
      • Ideally, you should be using Azure Information Protection or similar to apply security labels (e.g. classification, sensitivity or dissemination-limiting markers) that drive preventative and detective controls.
    • Detect and react appropriately:
      • Have incident response plans and RACIs (Responsible, Accountable, Consulted, Informed) defined for response.
      • Have appropriately skilled people to detect incidents. This requires user awareness and sound security practices.
      • Notify APRA of material incidents within 72 hours of becoming aware of them.
        • This implies that proactive threat detection and monitoring systems should be in place. If you don't know it's happening, you can't comply.
    • Test and audit regularly. The effectiveness of controls must be tested through a systematic testing program that is run at least annually.
      • This lends itself to regular, automated (static/dynamic) testing.

    It is always critical to keep in mind that threats come from threat actors both inside (insider threat) and outside the organisation (organised or individual actors) - which lends itself to zero trust approaches to cybersecurity.
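The classification requirement above can be sketched as a simple label-to-controls mapping that drives preventative controls and surfaces gaps. This is a minimal illustration only - the label names and control baselines are my own assumptions, not anything prescribed by CPS 234:

```python
# Hypothetical sketch: map asset classification labels to a minimum control
# baseline, then report which baseline controls an asset is still missing.
# Labels and control names here are illustrative assumptions.

CONTROL_BASELINES = {
    "OFFICIAL": {"encryption_at_rest"},
    "SENSITIVE": {"encryption_at_rest", "mfa", "access_logging"},
    "HIGHLY_SENSITIVE": {"encryption_at_rest", "mfa", "access_logging",
                         "dlp_scanning", "quarterly_access_review"},
}

def missing_controls(classification: str, implemented: set) -> set:
    """Return the baseline controls not yet implemented for an asset."""
    baseline = CONTROL_BASELINES.get(classification, set())
    return baseline - implemented
```

A gap report like this feeds naturally into the annual control-testing program: any non-empty result is a finding to remediate before the next audit cycle.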


    Monday, 9 March 2020

    NIST 800-207 - What is Zero Trust Architecture (ZTA) and Why Has It Become Important? (aka the X-Files - Trust No One)

    One of the primary concerns when operating in cloud environments and accessing resources over the internet is cybersecurity. Traditional firewalls and edge approaches to security no longer align with how we use technology.

    This has given rise to the recent release of the National Institute of Standards and Technology (NIST) 800-207 security draft https://csrc.nist.gov/publications/detail/sp/800-207/draft. The release of this document highlights the growing prominence of the Zero Trust approach to network security. Zero trust is a necessary security model that has arisen from evolving user and mobility expectations and the rise of different software and infrastructure delivery models such as the cloud.

    Bodies of knowledge such as NIST and CISSP recommend a layered approach to security (also known as "defence in depth" and "Segmentation/Micro-segmentation") - Zero Trust Architecture is a type of layered approach which will protect the confidentiality, integrity and availability of your information. This includes not just servers and devices but also protecting at the application/microservice (e.g. with JSON Web Tokens) and user levels.
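As a concrete illustration of protection at the application/microservice level with JSON Web Tokens, here is a minimal, stdlib-only sketch of HS256 signing and verification. A production service should use a maintained library (e.g. PyJWT) and also validate issuer/audience claims - this is just to show the mechanics:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url_encode(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    # Restore the stripped padding before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def sign_hs256_jwt(claims: dict, secret: bytes) -> str:
    """Build a compact HS256-signed JWT from a claims dict."""
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url_encode(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url_encode(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_hs256_jwt(token: str, secret: bytes) -> dict:
    """Verify signature and expiry; return the claims or raise ValueError."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload_b64))
    if claims.get("exp", float("inf")) < time.time():
        raise ValueError("token expired")
    return claims
```

The key Zero Trust point: every service validates the token itself on every request, rather than trusting traffic because it originated inside the network perimeter.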

    What is Zero Trust Security?


    • Zero Trust follows the motto of the X-Files - "Trust No One". Regardless of whether the traffic is from internal or external sources - access is regularly scrutinized, verified, validated and processed in the same way. 
    • Zero Trust assumes that there is no implicit trust based on a user's or resource's location (e.g. intranet or internet). Normal perimeter or edge-based security approaches segment the network statically based on location, subnets and IP ranges.
    • A useful analogy that is often used is the Castle versus the Hotel model. Once inside a castle, a device or user has great lateral freedom. In a hotel, each room requires a key, and access is checked on entry to each room (rooms representing applications and/or systems).
    • Zero trust security focuses more on protecting the resources and users both inside and outside those network boundaries. It includes Establishing Trust (e.g. do I trust a jail-broken/unpatched/unencrypted/unsecured/unrecognized device with all of its ports open?), Enforcing Access and Continuously verifying the trust. It also includes continuous monitoring to detect anomalies. It is a combination of technologies and methods of protection.

    • Zero Trust is a more granular and flexible approach to securing resources reflective of the reality of modern workplaces. 
    • Zero Trust typically uses the following parameters and checks in combination to determine policy-based access to resources:
      • User Identity
      • Device (including assurance services, Mobile Device Management Flags - identifying patch levels to establish device-level trust or vulnerabilities)
      • Location
      • Session Risk (such as anomalous/unusual access behaviors or times)
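The policy-based combination of those signals can be sketched as a small decision function. The thresholds, signal names and three-way outcome (allow / step-up authentication / deny) are assumptions of mine for illustration - real products (e.g. Azure AD Conditional Access) express this as configurable policy:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool  # identity verified (e.g. MFA passed)
    device_compliant: bool    # MDM reports patched, encrypted, not jailbroken
    location_trusted: bool    # expected geography, not an anonymising proxy
    session_risk: float       # 0.0 (normal) .. 1.0 (highly anomalous)

def access_decision(req: AccessRequest) -> str:
    """Return 'allow', 'step_up' (re-challenge the user) or 'deny'.
    Illustrative thresholds only - trust is never implicit."""
    if not req.user_authenticated or req.session_risk >= 0.8:
        return "deny"
    if (not req.device_compliant or not req.location_trusted
            or req.session_risk >= 0.4):
        return "step_up"
    return "allow"
```

Note that the check is evaluated per request, not once at the perimeter - that is the "continuously verifying trust" part of the model.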


    Why has it become important?

    • The rise of working from home, remote users, Bring Your Own Device (BYOD) and cloud-based services (e.g. Salesforce, Office 365, Microsoft Teams and other applications hosted on AWS, Azure and GCP) has led to resources and users being located outside traditional network boundaries.
    • Consequently, authentication and authorization cannot be assumed to be valid just because of the source location of a request - credentials and associated tokens need to be validated independently of location. 
    • Zero Trust is also required because of greater awareness of the "Insider Threat" from contractors and employees - through negligence or malicious intent.
    • As part of the Zero Trust mindset - there are also greater requirements around monitoring, logging and auditing activities as part of due diligence when complying with legal and regulatory obligations (e.g. APRA Prudential Standard CPS 234). It is not good enough just to log external activities - internal activities need to be monitored as well.

    Why is it difficult?

    • Zero Trust requires a much better understanding of the assets and resources that need protection and the behavior of the users consuming and accessing those resources. 
    • Phenomena such as "Shadow IT" also introduce problems because those devices and services are not visible to IT, so Zero Trust approaches may actually exclude previously functioning devices from resource access.
    • Zero Trust requires the creation of more refined corporate and technical policies to handle the more granular resource-based approach to accessing your critical corporate systems.
    • Zero Trust requires much more intensive logging and scrutiny of user activity. This typically necessitates AI or other anomaly detection mechanisms (e.g. out-of-hours access alerts).
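The out-of-hours alert mentioned above is the simplest kind of anomaly rule. A minimal sketch, assuming a hypothetical log format of (user, ISO-8601 timestamp) pairs and an arbitrary 08:00-18:00 weekday business window:

```python
from datetime import datetime

def out_of_hours_events(events):
    """Flag access events outside weekday business hours (08:00-18:00).
    `events` is a list of (user, iso_timestamp) tuples - an assumed format."""
    flagged = []
    for user, ts in events:
        when = datetime.fromisoformat(ts)
        # weekday() >= 5 means Saturday or Sunday
        if when.weekday() >= 5 or not (8 <= when.hour < 18):
            flagged.append((user, ts))
    return flagged
```

In practice this would be a detection rule in a SIEM with per-user baselines rather than a fixed window, but the principle - scrutinising internal activity, not just external - is the same.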


    Saturday, 29 February 2020

    Basic Guidelines for Product Offering Go/No-Go Decisions (Including Product Fit/Market Fit)

    I've worked for Software/IT Consulting companies, Product Development companies and System/Service Integration companies over my career. Most recently, I've noticed that some of the basic decision making around which products and service offerings should be developed has missed some critical gateways, resulting in full or partial product failure (i.e. the product doesn't make a good return on investment or ever turn a profit).

    Often, component licensing costs are ignored or forgotten, or the resulting price is more than the market can bear. Sometimes this is due to a lack of multi-tenancy support, so the product offering's economics don't scale.

    Licensing and subscription costs may go down over time (especially with AWS and Azure services becoming gradually cheaper as they reach greater economies of scale and proportional levels of competition). However, this may not happen quickly enough over the product lifetime to deliver profitability. In that case, service offerings/products need to be "end-of-lifed" ("EOL'd") or migrated to new platforms and components with lower cost structures.
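The licensing-cost trap above can be made concrete with a simple unit-economics sketch. The per-seat pricing model and all figures are illustrative assumptions, not data from any real product:

```python
from math import ceil

def monthly_margin(subscribers: int, price: float, fixed_costs: float,
                   license_cost_per_user: float) -> float:
    """Monthly gross margin for a simple per-seat subscription offering."""
    return subscribers * (price - license_cost_per_user) - fixed_costs

def break_even_subscribers(price: float, fixed_costs: float,
                           license_cost_per_user: float):
    """Smallest subscriber count at which the offering stops losing money,
    or None when per-user licensing already exceeds the market price."""
    contribution = price - license_cost_per_user
    if contribution <= 0:
        return None  # no volume can rescue negative per-user economics
    return ceil(fixed_costs / contribution)
```

The `None` branch is the gateway that often gets skipped: if the component licence per user costs more than the price the market will bear, scaling makes losses bigger, not smaller - exactly the multi-tenancy/economics failure described above.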

    Going back to basics, I put this diagram together to outline some key principles of product offer development which should be considered as gateways when deciding to bring a product to market.

    What makes a product worthwhile? It starts with being something customers want to buy (and buy enough of). If you find this sweet spot, then you have product/market fit - which means you're no longer pushing your product onto customers. You also need to have a clear vision - otherwise delivery will be problematic when building your product out. There are many articles on product and market fit available - these are just some of my ideas that resonate based on recent experience.

    In particular, critical profitability constraints can be forgotten when that "cool new tech" comes out or "everyone else is doing this in the market".


    What is clear is that product-market fit is an ideal - but not a sufficient indicator of whether a product should go to market. There are other factors to be considered including cost structures and viability of ongoing product development and marketing to maintain that fit, customer value and (hopefully) margins.