Software Architecture with Shortest Time-to-Market Consideration

Survival of the Fastest

Today, everything is getting faster. With social media and our smartphones, we expect immediate responses to our messages. When searching for the answer to a question, the internet can deliver it in seconds. Even Amazon’s one- or two-day delivery is no longer fast enough, and we can now get what we want delivered the same day. It is only natural that we also expect to get new features and functionality fast for the applications we use, and we often choose software that delivers on time, meeting our expectations.

Recently, when the product team at Checkmarx asked engineering to develop a set of new, product-complementary features, they made it clear that time-to-market was critical. When time-to-market is one of the major requirements, it has a significant impact on the application architecture. As a result, I’ve gathered some of the architectural considerations that enabled our project to come to fruition in the shortest time possible.

Aligning with Stakeholders on What to Optimize

Make sure the stakeholders (e.g., product management in this case) understand the diagram below and know exactly what they want to optimize in the context of quality, cost, and time.

For example, if stakeholders want:

  • Good Quality but Cheap Cost, this means the work will be Slow.
  • Good Quality but Fast Time, this means the work will be Expensive.
  • Cheap Cost but Fast Time, this means the work will be Inferior.

Understanding the relationship of the three principles in the diagram above makes some architectural decisions easier—simply choose what will deliver a high-quality outcome while saving development time, and not necessarily what is cheapest. Concerning the request from product management, the application improvements had to be made in the shortest time possible with the highest quality. Therefore, cost was not necessarily a consideration in this case.

Shorten the Design Cycle by Using Co-Evolving Requirements and a Rough Design

Creating a rough design is an evolving process. Both requirements and design are developed in parallel, and each requires negotiation between the team doing the work and the stakeholders requesting the improvements. The design artifacts embody the agreements and decisions that the group has reached.

The process of requirements refinement, and design or redesign, continues throughout the life of the project. Knowing when to transition the primary focus from design to construction requires balancing several factors:

  • The maturity of the team
  • Their familiarity with the problem domain
  • The size and complexity of the project
  • The extent to which the requirements can be pre-stated, or if they must be discovered

These variables are usually principal factors in deciding when to begin construction.

Before moving on, let’s look at a few quotes from Jeff Atwood’s blog post about using the “Last Responsible Moment” technique for making software design decisions.

  • “Paradoxically, it’s possible to make better decisions by not deciding.”
  • “Deciding too late is dangerous, but deciding too early in the rapidly changing world of software development is arguably even more dangerous. Let the principle of Last Responsible Moment be your guide.”

This concept is further described in the book Lean Software Development: An Agile Toolkit, where it is called the Concurrent Software Development technique. In the graphic below, we can see the effects of early decisions vs. late decisions, along with the cost of deferring, the cost of deciding, and where they intersect, which marks the last responsible moment of decision. Certainly, there is a balancing act happening in the context of software development.
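To make the balancing act concrete, here is a minimal sketch that locates the last responsible moment as the point where the cost of deferring overtakes the cost of deciding. The linear cost functions and their coefficients are illustrative assumptions for demonstration, not data from the book or the graphic:

```python
# Illustrative sketch: the "last responsible moment" is where the rising
# cost of deferring overtakes the falling cost of deciding early.
# Both cost curves below are made-up assumptions for demonstration only.

def cost_of_deciding(week: int) -> float:
    # Deciding early is expensive (little information); cost falls over time.
    return max(100.0 - 10.0 * week, 0.0)

def cost_of_deferring(week: int) -> float:
    # Deferring is cheap at first, but rework risk grows over time.
    return 5.0 * week

def last_responsible_moment(horizon: int = 20) -> int:
    """Return the first week where deferring costs more than deciding."""
    for week in range(horizon):
        if cost_of_deferring(week) >= cost_of_deciding(week):
            return week
    return horizon

print(last_responsible_moment())  # with these assumed curves, week 7
```

Deciding earlier than that crossover wastes the chance to learn more; deciding later starts to cost more than it saves.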

Focus on the Business Logic, and Minimize Effort Spent on Infrastructure Design and Development by using Managed Services

A managed service is a cloud offering that you can use without having to take care of the underlying hardware and its administration. For instance, in the Amazon ecosystem you’ll find services like Lambda, Aurora, DynamoDB, and others. What do all of these services have in common? The service provider, not your organization, is responsible for getting deployments up and running on these platforms. In practice, it is almost always easier to consume a managed service than to deploy and operate your own infrastructure.

Cloud managed services allow teams to focus more on code and business logic than on infrastructure. And by implementing external API integrations, you can avoid having to reinvent the wheel, and instead, be able to react faster to market demands. With the various options available, your task is to analyze them and see how they fit within the cost/benefit parameters your business demands.

There is little doubt that interest in cloud managed services continues to grow. For example, searching Google Trends for “cloud managed services” over the past 12 years shows a significant uptick in interest, as seen below.

Another benefit of managed services is the pay-as-you-go payment model, which charges clients based on usage. When you introduce a new product or set of features and plan for a gradual adoption rate by clients, the cost of the managed services scales with that usage. You don’t have to spend a lot of money on infrastructure in advance for the new product or feature set you’re delivering.
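As a rough illustration of why this matters during a gradual rollout, the sketch below compares a pay-as-you-go model against an upfront infrastructure purchase. The prices and the adoption ramp are hypothetical numbers, not actual AWS rates:

```python
# Hypothetical numbers for illustration only -- not real AWS pricing.
UPFRONT_INFRA_COST = 12_000.0   # buy and operate your own servers for a year
PRICE_PER_REQUEST = 0.0002      # assumed pay-as-you-go unit price

def pay_as_you_go_cost(monthly_requests: list[int]) -> float:
    """Total cost when you pay only for what clients actually use."""
    return sum(m * PRICE_PER_REQUEST for m in monthly_requests)

# Gradual adoption: usage doubles every month from a small starting point.
ramp = [10_000 * 2**month for month in range(12)]

usage_cost = pay_as_you_go_cost(ramp)
print(f"pay-as-you-go: ${usage_cost:,.2f} vs upfront: ${UPFRONT_INFRA_COST:,.2f}")
```

With these assumed numbers, the usage-based bill stays below the upfront spend for the whole first year, and nothing is paid for idle capacity while adoption is still low.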

In our case, when it was time to decide where to store data for our project, we chose Athena, an AWS managed service that provides SQL capabilities over files stored in S3. This decision saved us the time of setting up the right database, creating the correct schemas, and building an ETL (extract, transform, and load) process for the files, which initially arrive in JSON format and are stored in S3.
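To give a feel for how little setup this takes, here is a hedged sketch that builds the kind of Athena DDL used to expose JSON files in S3 as a queryable table. The bucket, table, and column names are hypothetical placeholders, not our actual schema; in practice the statement would be submitted through the Athena console or an API client:

```python
# Sketch: generate an Athena "CREATE EXTERNAL TABLE" statement over JSON in S3.
# Table, column, and bucket names below are hypothetical placeholders.

def athena_json_table_ddl(table: str, columns: dict[str, str], s3_location: str) -> str:
    """Build a DDL string mapping JSON files in S3 to a SQL-queryable table."""
    cols = ",\n  ".join(f"{name} {sql_type}" for name, sql_type in columns.items())
    return (
        f"CREATE EXTERNAL TABLE IF NOT EXISTS {table} (\n  {cols}\n)\n"
        # Athena's OpenX JSON SerDe parses one JSON object per line.
        "ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'\n"
        f"LOCATION '{s3_location}'"
    )

ddl = athena_json_table_ddl(
    "scan_results",
    {"scan_id": "string", "severity": "string", "found_at": "timestamp"},
    "s3://example-bucket/scan-results/",
)
print(ddl)
```

Once the table exists, plain SQL over the S3 files works immediately, with no database servers, schemas migrations, or ETL jobs to build first.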

Clearly, this is not a long-term solution, since it cannot be tuned for performance the way a dedicated database can. But for the first phase, it provided all the necessary functionality with minimum effort. We will likely not stay with this solution forever, and at some point it will need to be replaced with an actual database. Therefore, the design must allow specific parts to be replaced when needed.

Flexibility and Scalability: Think about the Big Picture and Plan the Next Steps

Today’s requirements are good for today. But tomorrow, the application will likely have to be better and faster, offer more features, and serve more clients. With this in mind, the architecture we chose had to be flexible enough to support improving and optimizing the necessary parts at a later stage of the application lifecycle, without the need to redesign and rewrite the entire thing. We wanted the ability to modify small parts of the software independently while keeping the entire product in a stable state. So, it was an obvious choice to go with a microservices architecture.

In a microservices architecture, applications are built and deployed as highly decoupled, focused services. Think of it like building a structure out of Legos. With Legos, the individual building blocks are already there for you to build whatever you want. Microservices, like Legos, are specialized building blocks—individual pieces with individual functionalities. And, like Legos, microservices can be combined to build something bigger. Additionally, each single microservice component can be scaled independently.

A decoupled application architecture allows each component to perform its tasks independently. It also allows components to remain autonomous and unaware of each other. A change to one service shouldn’t require a change to the other services. This speeds up development by allowing independent work on each microservice we want to modify or enhance. In addition, we benefit from the stability of the other parts of the system.
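The decoupling principle can be sketched in a few lines: services publish and subscribe to events through a broker and never call each other directly, so replacing one service does not touch the others. The service and event names here are invented for illustration, and the in-process broker stands in for what would be SQS, Kafka, or similar in a real deployment:

```python
# Minimal in-process sketch of decoupled services talking through a broker.
# Service and event names are invented; a real system would use SQS/Kafka/etc.
from collections import defaultdict
from typing import Callable

class Broker:
    """Tiny publish/subscribe hub: services know event names, not each other."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event].append(handler)

    def publish(self, event: str, payload: dict) -> None:
        for handler in self._subscribers[event]:
            handler(payload)

broker = Broker()
notifications: list[str] = []

# "Notification service": reacts to scan events without knowing who emits them.
broker.subscribe("scan_finished", lambda p: notifications.append(p["scan_id"]))

# "Scan service": publishes an event without knowing who is listening.
broker.publish("scan_finished", {"scan_id": "abc-123"})

print(notifications)  # ['abc-123']
```

Either side can be rewritten, redeployed, or scaled on its own, as long as the event contract stays the same.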

Wrapping it all up

If you and your development team want to deliver a high-quality solution in the shortest time possible, align with the stakeholders in product management early in the process, take advantage of cloud infrastructure and managed services, let requirements and design co-evolve, and make sure the solution can be scaled and easily modified along the way. You have to move forward every day, and these techniques will help you keep time on your side while building better software.

Questions about which additional approaches we found useful and which we didn’t? For more information, please feel free to connect with me on LinkedIn.
