
2.3 Summarize secure application development, deployment, and automation concepts.

Last edited 965 days ago by Makiel [Muh-Keel].

Software Development Life Cycle

The Software Development Life Cycle (SDLC) is the template used in the planning and creation of software.
Building, deploying, and maintaining software requires security involvement throughout the software’s life cycle. Secure software development life cycles include incorporating security concerns at every stage of the software development process.

Software Development Phases

Regardless of which SDLC or process is chosen by your organization, a few phases appear in most SDLC models:
Feasibility: The Feasibility phase is where initial investigations into whether the effort should occur are conducted. Feasibility also looks at alternative solutions and high-level costs for each solution proposed.
It results in a recommendation with a plan to move forward.
Analysis and Requirements: Once an effort has been deemed feasible, it will typically go through an Analysis and Requirements definition phase.
In this phase customer input is sought to determine what the desired functionality is, what the current system or application currently does and what it doesn’t do, and what improvements are desired. Requirements may be ranked to determine which are most critical to the success of the project.
Defining security requirements is an important part of the analysis and requirements definition phase. It ensures that the application is designed to be secure and that secure coding practices are used.
Design: The Design phase includes design for functionality, architecture, integration points and techniques, dataflows, business processes, and any other elements that require design consideration.
Development: The actual coding of the application occurs during the Development phase.
This phase may involve testing of parts of the software, including unit testing, the testing of small components individually to ensure they function properly.
Testing and Integration: Formal testing with customers or others outside of the development team occurs in the testing and integration phase.
Individual units or software components are integrated and then tested to ensure proper functionality. In addition, connections to outside services, data sources, and other integration may occur during this phase.
During this phase user acceptance testing (UAT) occurs to ensure that the users of the software are satisfied with its functionality.
Training and Transition: The important task of ensuring that the end users are trained on the software and that the software has entered general use occurs in the training and transition phase.
This phase is sometimes called the acceptance, installation, and deployment phase.
Ongoing Operations and Maintenance: Once a project reaches completion, the application or service will enter what is usually the longest phase: ongoing operations and maintenance.
This phase includes patching, updating, minor modifications, and other work that goes into daily support.
Disposition: The disposition phase occurs when a product or system reaches the end of its life.
Although disposition is often ignored in the excitement of developing new products, it is an important phase for a number of reasons: shutting down old products can produce cost savings, replacing existing tools may require specific knowledge or additional effort, and data and systems may need to be preserved or properly disposed of.

Code Deployment Environment

Many organizations use multiple environments for their software and systems development and testing. Below are the different types of environments:
The Development Environment is typically used for developers or other “builders” to do their work.
Some workflows provide each developer with their own development environment; others use a shared development environment.
The Test Environment is where the software or systems can be tested without impacting the production environment.
In some schemes, this is preproduction, whereas in others a separate preproduction staging environment is used.
Quality Assurance (QA) occurs in this phase. QA's objectives are to:
Verify that all features work as expected
Validate new functionality
Verify that old errors don't reappear
The Staging Environment is a transition environment for code that has successfully cleared testing and is waiting to be deployed into production.
The Production Environment is the live system. Software, patches, and other changes that have been tested and approved move to production.

Provisioning and Deprovisioning

Provisioning

Provisioning is the process of making something available. If you are provisioning an application, you will probably deploy a web server and a database server.
There may also be a middleware server, along with other configuration and settings needed to use this particular application.
When we describe the deployment of an application, we refer to it as an application instance, because it usually involves many different components all working together.
Application Software Security: The provisioning of this application instance also includes security components. We want to be sure that the operating system and the application are up to date with the latest security patches.
Network Security: We also want to be sure that our network design and architecture are secure.
So we might want to create a secure VLAN just for this application.
There might be one set of configurations and firewall rules for internal access and a completely different set of security requirements for external access.
Software Deployment to Workstations: You need to run scans on the software before deployment, and scans on the workstation before deployment as well.
Make sure that the workstation itself is up to date with the latest patches and is running all of the latest security configurations.

Deprovisioning

Deprovisioning occurs when you are dismantling and removing the applications created. This is not simply turning off the application. This is completely removing any remnants of that application from running in your environment.
If you’re deprovisioning a web server or database server with an application instance, then you also have to deprovision all of your security components as well.
Not only are you removing the firewalls and other security devices, you may have to remove individual rules in the existing firewalls that remain.
So make sure that any firewall rules you opened during provisioning are removed during the deprovisioning process.
What happens to the Data?
Decisions have to be made over the data that was created while this application was in place.
It may be that the data is copied off to a secure area or it may be that everything that was created during the use of that application is now removed during the deprovision process.
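The provisioning/deprovisioning pairing above can be sketched in code. This is a conceptual illustration only, not a real firewall API: the class, rule strings, and method names are all hypothetical. The point is that resources opened during provisioning are recorded, so deprovisioning removes exactly those and leaves nothing behind.

```python
# Sketch: track security resources created during provisioning so that
# deprovisioning removes exactly what was added. Names and rules are
# hypothetical, not a real firewall interface.

class ApplicationInstance:
    def __init__(self, name):
        self.name = name
        self.firewall_rules = []   # rules opened for this instance

    def provision(self, firewall):
        # Open only the rules this instance needs, and remember them.
        for rule in ("allow tcp/443 -> web", "allow tcp/5432 web -> db"):
            firewall.add(rule)
            self.firewall_rules.append(rule)

    def deprovision(self, firewall):
        # Remove every rule we opened; no orphaned rules remain.
        for rule in self.firewall_rules:
            firewall.discard(rule)
        self.firewall_rules.clear()

firewall = set()                       # stand-in for the real rule base
app = ApplicationInstance("payroll")
app.provision(firewall)                # firewall now holds 2 rules
app.deprovision(firewall)              # firewall is empty again
```

The same bookkeeping idea applies to VLANs, DNS entries, and service accounts: if provisioning records what it created, deprovisioning cannot miss a remnant.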

Elasticity and Scalability

When we design applications, we should create them in a manner that makes them resilient in the face of changing demand. We do this through the application of two related principles:
Scalability says that applications should be designed so that computing resources they require may be incrementally added to support increasing demand.
Elasticity goes a step further than scalability and says that applications should be able to automatically provision resources to scale when necessary and then automatically deprovision those resources to reduce capacity (and cost) when it is no longer needed.

Scalability

When we first build an application instance, we create it with a particular level of Scalability. Based on the web server, database server, and all of the other components that make up the instance, we know that we can support a certain load.
Maybe at a particular time of the year, an application gets busier, or maybe the popularity of an application has taken off.
And suddenly, a lot of people are wanting to use this application.
It’s important to be able to scale in proportion to meet the demands.
Ex. You might build an application instance that can handle 100,000 transactions per second; everything you put together can handle any load up to that point.
But if demand rises above 100,000 transactions per second, that single instance will no longer be enough.

Elasticity

We define Elasticity as the ability to increase or decrease the available resources for an application based on the workload; it describes how quickly an application can respond to a change in workload.
Ex. If we know that our application scalability can support 100,000 transactions per second and we suddenly need 500,000 transactions per second, then we can take advantage of this elasticity and create five separate application instances all running simultaneously.
Application becomes unpopular? Deprovision application instances to match the reduced workload.
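The scaling arithmetic in the example above is simple enough to sketch. This is an illustration of the elasticity calculation only; the capacity figure and function name are taken from the example, not from any real autoscaling product.

```python
import math

# Sketch of the elasticity math: each application instance handles a fixed
# capacity, and we provision or deprovision instances to match demand.
CAPACITY_PER_INSTANCE = 100_000  # transactions per second per instance

def instances_needed(demand_tps: int) -> int:
    """Return how many instances are required for the current demand."""
    return max(1, math.ceil(demand_tps / CAPACITY_PER_INSTANCE))

print(instances_needed(500_000))  # demand spikes -> 5 instances
print(instances_needed(40_000))   # demand falls  -> scale back to 1
```

Real autoscalers add hysteresis and cooldown timers so instances are not thrashed up and down, but the core provision/deprovision decision is this ratio.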

Secure Coding Techniques

The good news is that there are a number of tools available to assist in the development of a defense-in-depth approach to security.
Through a combination of secure coding practices and security infrastructure tools, cybersecurity professionals can build robust defenses against application exploits.

Input Validation

Applications that allow user input should perform Input Validation of that input to reduce the likelihood that it contains an attack.
Improper input handling practices can expose applications to injection attacks, cross-site scripting attacks, and other exploits.
Always ensure that validation occurs server-side rather than within the client’s browser.
Client-side validation is useful for providing users with feedback on their input, but it should never be relied on as a security control.
Input Whitelisting is when the developer describes the exact type of input that is expected from the user and then verifies that the input matches that specification before passing it to other processes or servers.
Input Blacklisting describes potentially malicious input that must be blocked.
Ex. Developers might restrict the use of HTML tags or SQL commands in user input. When performing input validation, developers must be mindful of the types of legitimate input that may appear in a field.
Otherwise, you may inadvertently block valid input. Ex. Blocking the apostrophe stops anybody from entering a name like O'Brian or O'Driscoll.

Normalization

Database Normalization is a set of design principles that database designers should follow when building and modifying databases.
Some of the advantages of implementing these principles:
Prevent data inconsistency
Prevent update anomalies
Reduce the need for restructuring existing databases
Make the database schema more informative
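The update-anomaly advantage above can be made concrete with a tiny example. This is a sketch using Python's built-in sqlite3 module; the table and column names are invented for illustration.

```python
import sqlite3

# Sketch of normalization: instead of repeating a customer's address in
# every order row (where one missed row during an update would create an
# inconsistency), the address lives in one place and orders reference it.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        address TEXT NOT NULL
    );
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        item        TEXT NOT NULL
    );
""")
db.execute("INSERT INTO customers VALUES (1, 'O''Brian', '12 Elm St')")
db.execute("INSERT INTO orders VALUES (1, 1, 'router'), (2, 1, 'switch')")

# One UPDATE fixes the address everywhere: no update anomaly is possible.
db.execute("UPDATE customers SET address = '99 Oak Ave' WHERE id = 1")
rows = db.execute("""
    SELECT o.item, c.address FROM orders o
    JOIN customers c ON c.id = o.customer_id
""").fetchall()
print(rows)  # both orders see the new address
```

In a denormalized design the address would appear in both order rows, and the single UPDATE above would have to touch every copy correctly or leave the data inconsistent.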

Stored Procedures

A Stored Procedure provides an important layer of security between the user interface and the database.
A client might issue a SELECT command, asking for data from a particular table in the database and a particular value within that table.
If adversaries realize the application is sending this query directly to the database, they may be able to modify the query and gain access to information that normally would not be available.
This is where a stored procedure comes in; Instead of having the client send this information to the database server to run that command, that query is instead stored on the database server, and the application just sends a message saying, CALL, and then the name of the stored procedure to run.
This stops users from being able to modify anything: they cannot change any of the parameters of the database call, and can only choose to either run the stored procedure or not run it.
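The CALL pattern above can be simulated in a few lines. Note this is a conceptual sketch, not a real stored procedure (sqlite3, used here for convenience, does not support them): the key idea is that the SQL text lives on the server side, and the client can only name a procedure and supply parameter values.

```python
import sqlite3

# Conceptual sketch of a stored procedure: the query text is registered on
# the "server" side; the client sends only CALL <name>(<params>) and can
# never alter the SQL itself.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT, role TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice', 'admin'), (2, 'bob', 'user')")

# Server-side registry of allowed, pre-written queries.
PROCEDURES = {
    "get_user_name": "SELECT name FROM users WHERE id = ?",
}

def call(procedure: str, *params):
    """All the client can do: name a procedure and pass parameter values."""
    sql = PROCEDURES[procedure]          # unknown names raise KeyError
    return db.execute(sql, params).fetchall()

print(call("get_user_name", 2))  # [('bob',)]; the client never sees the SQL
```

Because the parameters are bound by the database driver rather than concatenated into the query, even a malicious parameter value cannot change the query's structure.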

Obfuscation/Camouflage

Obfuscation/Camouflage is a way to take something that is normally easy to understand and make it very difficult to understand. From an application developer's perspective, this means taking application code they've written and changing it so that it is very difficult for other human beings to read.
Data Minimization reduces risk because you can’t lose control of information that you don’t have in the first place!
Data minimization is the best defense. Organizations should not collect sensitive information that they don’t need and should dispose of any sensitive information that they do collect as soon as it is no longer needed
Tokenization replaces personal identifiers that might directly reveal an individual’s identity with a unique identifier using a lookup table.
Ex. We might replace a widely known value, such as a student ID, with a randomly generated 10-digit number. We’d then maintain a lookup table that allows us to convert those back to student IDs if we need to determine someone’s identity
Hashing uses a cryptographic hash function to replace sensitive identifiers with an irreversible string of characters.
Salting these values with a random number prior to hashing them makes these hashed values resistant to a type of attack known as a rainbow table attack.
‘Hello World’ can be written simply or in an overly complicated way; that’s what obfuscation is!
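The hashing-and-salting idea described above can be sketched with Python's standard library. The identifier value and function name here are illustrative; real systems would typically use a purpose-built password/identifier hashing scheme rather than a single SHA-256 pass.

```python
import hashlib
import os

# Sketch of hashing plus salting: the output is an irreversible string,
# and the random per-record salt means identical identifiers produce
# different hashes, defeating precomputed rainbow tables.

def hash_identifier(identifier, salt=None):
    salt = salt if salt is not None else os.urandom(16)  # random salt
    digest = hashlib.sha256(salt + identifier.encode()).hexdigest()
    return salt, digest

salt1, h1 = hash_identifier("student-4421")
salt2, h2 = hash_identifier("student-4421")
print(h1 == h2)  # False: different salts, different hashes
```

To verify a value later, you store the salt alongside the hash and recompute with the same salt; you never need to (and cannot) reverse the digest itself.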

Code Reuse/ Dead Code

Many organizations Reuse Code not only internally but by making use of third-party software libraries and software development kits (SDKs). Third-party software libraries are a common way to share code among developers.

Use of 3rd Party libraries and software development kits (SDKs)

Organizations also introduce third-party code into their environments when they outsource code development to other organizations.
Security teams should ensure that outsourced code is subjected to the same level of testing as internally developed code.
It’s fairly common for security flaws to arise in shared code, making it extremely important to know these dependencies and remain vigilant about security updates.
Dead Code is code in the application that runs and processes some type of logic, but whose results aren’t used anywhere else within the app.
Developers should make sure that the code they put into the application actually performs a useful function.
Any code added to an application adds potential for a security issue.
So it’s important that all of our coding follows best practices and avoids risky code reuse and dead code.

Server-Side vs. Client-Side

Server-Side Validation is when data is sent to a server and validated by code on the same server.
Prevents the user from making changes to the data before it reaches the server.
Server-side validation is usually considered the safer alternative; if you had to choose only one of the two, you would probably choose server-side validation.

Client-Side Validation is when data is validated on the client’s machine before being sent off to the server.
End user’s application will validate the input on the client’s machine and then send it to the server for processing.
All input validation checks are being made on the local machine.
May speed things up a bit since you’re not waiting for the server to evaluate the input.

Memory Management

Developers have to be mindful of how memory is allocated during application use. Applications are often responsible for managing their own memory, and much of the time poor memory management undermines the overall security of the application.
We also have to think about this from a security perspective, because users will input different types of data into the application, and that information will be stored in memory.

Resource Exhaustion

Whether intentional or accidental, systems may consume all of the memory, storage, processing time, or other resources available to them, rendering the system disabled or crippled for other uses.
Memory Leaks are one example of resource exhaustion. A leak occurs when an application fails to return memory that it no longer needs, perhaps by simply losing track of an object that it has written to a reserved area of memory.
It can slowly consume all the memory available to the system, causing it to crash.
Rebooting the system often resets the problem, returning the memory to other uses, but if the memory leak isn’t corrected, the cycle simply begins anew.

Pointer De-Referencing

Memory pointers can also cause security issues. Pointers are a commonly used concept in application development. They are simply an area of memory that stores an address of another location in memory.
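The classic security issue with pointers is dereferencing one that doesn't point anywhere valid. Python has no raw pointers, so this sketch uses a None reference as the closest analogy to a null pointer; the Node class is invented for illustration.

```python
# A None reference behaves like a null pointer: dereferencing it without
# a check crashes the program, which an attacker can exploit for denial
# of service. Below: the unchecked version, then the guarded version.

class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node   # "pointer" to another object, or None

tail = Node("end")              # tail.next is None (a "null pointer")

try:
    print(tail.next.value)      # unchecked dereference: crashes
except AttributeError as exc:
    print("crashed:", exc)

# Defensive version: validate the reference before dereferencing it.
if tail.next is not None:
    print(tail.next.value)
else:
    print("no next node")
```

In C or C++ the unchecked version would be undefined behavior rather than a tidy exception, which is why null-pointer dereferences there can crash services or, in some cases, be leveraged into more serious exploits.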