Next-Gen. AI. Attack Vector. CIA Triad. Deep Learning. Each of these has one thing in common: they are cybersecurity buzzwords. The Merriam-Webster dictionary aptly defines ‘buzzword’ as “an important-sounding usually technical word or phrase often of little meaning used chiefly to impress laymen.” Never, in my 30 years of running IT and cyber teams in over 80 countries, have I heard someone refer to the 7-layer OSI model. We have become inundated with words that do not express what they mean. Because of this, my team plays cyber bingo to build the habit of using practical terms that aid understanding. Our language must mirror the motto of our state: Esse Quam Videri, or ‘to be, rather than to seem.’
Despite its recent popularity, my favorite buzzword entered our lexicon over a decade ago: zero trust. In their recently published book Zero Trust Security, Jason Garbis and Jerry W. Chapman themselves note how misleading the term can be. To combat confusion, I prefer the mental model of “intentional trust” over “zero trust.” Laziness in communication and terminology leads to laziness in thinking, a luxury cybersecurity professionals cannot afford.
Internet development was based on the foundation of trusted communications, or inherent trust, between all parties. Although we now understand that the internet is, by definition, insecure, we need to fight against allowing the pendulum to swing from inherent trust to “zero trust” (making buzzword-centric vendors rich in the process). In theory, zero trust means continuous authentication, authorization, and configuration validation. Although CrowdStrike and others emphasize users, this mental model should apply to any entity (user, device, system, software) with any relationship to your analog and digital signature. This means communications, access, number of devices…anything. We are in a world with no network edge and must think accordingly. If we start with a faulty mental model, we inevitably end up with poorly designed solutions and, ultimately, poorly run operations.
Our development and use of information systems should hinge on the mental model of intentional trust. The best way to accomplish this is to architect information systems that validate trust among at least the three primary “actors” in a transaction. In a typical organization, a development and test team writes code and validates that it meets functional system requirements, with a separate team defining those requirements. It falls to the system development team to ensure that the ‘ilities’ (reliability, maintainability, supportability, and scalability) are everything they ought to be. Once finished, the product is shipped. This process, as SolarWinds and so many others have aptly demonstrated, is inherently flawed. However, it doesn’t have to stay this way.
To maintain distance from the financial and social pressures that disincentivize quality, we must separate duties across three points of trust. The first is the product management team, which independently sets the requirements, including how each will be validated. This team owns the definition of the functional, systems, security, reliability, and documentation requirements. The second point of trust is an independent development team, which builds the product to meet those defined requirements.
The third and final point of trust should be established between the release team and the primary system requirements team. Because the development team has little to do with the creation of system requirements (though, of course, a great deal to do with their prioritization, scoping, and technical design), the release team can have confidence in the product based on the trustworthiness of the requirements team.
Through this separation of duties among the three roles, each team has one primary goal: the secure completion of its own tasks, with checks and balances that create intentionally trusted relationships. The team documenting requirements has duties clearly separated from those of the development team. The team developing the system has no responsibility for code and system validation, delivery, or implementation (technical debt discovery, change control validation, and so on). Finally, the release team implements only after validating the code against the documented requirements for success.
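The release gate described above can be sketched in code. This is a minimal, hypothetical illustration (the names `Requirement` and `release_gate` are mine, not a real framework): the requirements team owns both the requirements and their validation checks, the development team produces the build, and the release team ships only after every documented check passes.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of intentional trust via separation of duties.
# All names here are illustrative assumptions, not a real tool or API.

@dataclass(frozen=True)
class Requirement:
    req_id: str
    description: str
    validation: Callable[[dict], bool]  # check owned by the requirements team

def release_gate(build: dict, requirements: list[Requirement]) -> bool:
    """Release team: ship only if every documented requirement validates."""
    failures = [r.req_id for r in requirements if not r.validation(build)]
    if failures:
        raise RuntimeError(f"Release blocked; failed requirements: {failures}")
    return True

# The development team produces the build; it never defines the requirements.
build = {"artifact_signed": True, "tls_min_version": 1.2}

# The requirements team independently defines what "done" means.
reqs = [
    Requirement("SEC-01", "Artifacts must be signed",
                lambda b: b.get("artifact_signed", False)),
    Requirement("SEC-02", "TLS 1.2 or higher required",
                lambda b: b.get("tls_min_version", 0) >= 1.2),
]

assert release_gate(build, reqs)
```

The point of the design is that no single team can both define success and declare it achieved: the gate fails closed whenever the build and the independently owned requirements disagree.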
In our example, we have shown how three independent points of trust can continuously and independently authenticate, authorize, and validate configurations across a digital signature. Now apply the same method operationally: Kerberos (first point of trust), access scans (second point of trust), and configuration scans (third point of trust). Each of these three “systems” is independently controlled, yet all three require mutual authentication and validation of trust (versus the two parties in Kerberos alone). Cybersecurity is not about ‘arriving’; opportunities like these make it a journey, not a destination.
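The three operational checks above can be expressed as a simple policy: access is granted only when all three independently controlled systems agree, every time. The check functions below are hypothetical stand-ins for Kerberos authentication, an access scan, and a configuration scan; a real deployment would call those systems rather than inspect a dictionary.

```python
# Hedged sketch: intentional trust as the conjunction of three independent
# checks. Each function stands in for a separately controlled system.

def check_identity(entity: dict) -> bool:
    """First point of trust, e.g., a valid Kerberos ticket."""
    return entity.get("ticket_valid", False)

def check_access(entity: dict) -> bool:
    """Second point of trust, e.g., an access scan confirming least privilege."""
    return entity.get("least_privilege", False)

def check_configuration(entity: dict) -> bool:
    """Third point of trust, e.g., a scan against the configuration baseline."""
    return entity.get("baseline_compliant", False)

def intentional_trust(entity: dict) -> bool:
    """No single check is sufficient; trust requires all three, on every request."""
    checks = (check_identity, check_access, check_configuration)
    return all(check(entity) for check in checks)

device = {"ticket_valid": True, "least_privilege": True, "baseline_compliant": True}
assert intentional_trust(device)
assert not intentional_trust({"ticket_valid": True})  # one check alone never suffices
```

Because the checks are owned by different systems, compromising any one of them is not enough to earn trust, which is the mutual-validation property the article argues for.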
If we believe, as Stephen M. R. Covey says, that “business moves at the speed of trust,” then zero trust is not an option. While it may be quotable, communicating that correct security means zero trust damages the quality of our critical thinking. Ultimately, it undermines the checks and balances necessary to security even as it promises enhanced security. Instead of communicating zero trust, let us commit to intentional trust.
At Carolina Cyber Center, we seek to create cyber professionals of character, whether they are amateurs fresh out of high school, mid-life career changers, or cyber professionals continuing their education. To learn more about what the Carolina Cyber Center offers, visit our website or call us at 828.419.0737.