Software Development Life Cycle
The Software Development Life Cycle (SDLC) is the template used in the planning and creation of software.
Building, deploying, and maintaining software requires security involvement throughout the software's life cycle. A secure software development life cycle incorporates security concerns at every stage of the software development process.
Software Development Phases
Regardless of which SDLC or process is chosen by your organization, a few phases appear in most SDLC models:
Feasibility: The feasibility phase is where initial investigations into whether the effort should occur are conducted. It also looks at alternative solutions and high-level costs for each solution proposed, and it results in a recommendation with a plan to move forward.
Analysis and Requirements: Once an effort has been deemed feasible, it typically goes through an analysis and requirements definition phase. Customer input is sought to determine what the desired functionality is, what the current system or application does and does not do, and what improvements are desired. Requirements may be ranked to determine which are most critical to the success of the project. Defining security requirements is an important part of this phase; it ensures that the application is designed to be secure and that secure coding practices are used.
Design: The design phase includes design for functionality, architecture, integration points and techniques, dataflows, business processes, and any other elements that require design consideration.
Development: The actual coding of the application occurs during the development phase. This phase may involve testing of parts of the software, including unit testing, the testing of small components individually to ensure they function properly.
Testing and Integration: Formal testing with customers or others outside of the development team occurs in the testing and integration phase. Individual units or software components are integrated and then tested to ensure proper functionality. Connections to outside services, data sources, and other integrations may also occur during this phase, and user acceptance testing (UAT) ensures that the users of the software are satisfied with its functionality.
Training and Transition: The important task of ensuring that end users are trained on the software and that the software has entered general use occurs in the training and transition phase. This phase is sometimes called the acceptance, installation, and deployment phase.
Ongoing Operations and Maintenance: Once a project reaches completion, the application or service enters what is usually the longest phase: ongoing operations and maintenance. This phase includes patching, updating, minor modifications, and other work that goes into daily support.
Disposition: The disposition phase occurs when a product or system reaches the end of its life. Although disposition is often ignored in the excitement of developing new products, it is an important phase for a number of reasons: shutting down old products can produce cost savings, replacing existing tools may require specific knowledge or additional effort, and data and systems may need to be preserved or properly disposed of.
Code Deployment Environment
Many organizations use multiple environments for their software and systems development and testing. Below are the different types of environments:
The Development Environment is typically used by developers or other "builders" to do their work. Some workflows provide each developer with their own development environment; others use a shared development environment.
The Test Environment is where the software or systems can be tested without impacting the production environment. In some schemes this is called preproduction, whereas in others a separate preproduction staging environment is used. Quality assurance (QA) occurs in this environment, with three main objectives: verify that all features are working as expected, validate new functionality, and verify that old errors don't reappear.
The Staging Environment is a transition environment for code that has successfully cleared testing and is waiting to be deployed into production.
The Production Environment is the live system. Software, patches, and other changes that have been tested and approved move to production.
Provisioning and Deprovisioning
Provisioning
Provisioning is the process of making something available. If you are provisioning an application, you will probably deploy a web server and a database server, and there may be a middleware server and other configuration and settings needed to use that particular application. When we describe the deployment of an application, we refer to it as an application instance, because it usually involves many different components all working together.
Application Software Security: The provisioning of an application instance also includes security components. We want to be sure that the operating system and the application are up to date with the latest security patches.
Network Security: We also want to be sure that our network design and architecture are secure. We might create a secure VLAN just for this application, with one set of configurations and firewall rules for internal access and a completely different set of security requirements for external access.
Software Deployment to Workstations: Run scans on the software before deployment, and run scans on the workstation before deployment as well. Make sure that the workstation itself is up to date with the latest patches and is running all of the latest security configurations.
Deprovisioning
Deprovisioning occurs when you dismantle and remove an application that was previously provisioned. This is not simply turning off the application; it is completely removing any remnants of that application from your environment.
If you're deprovisioning a web server or database server with an application instance, then you also have to deprovision all of your security components as well. Not only are you removing the firewalls and other security devices, you may have to remove individual rules in the existing firewalls that remain. So make sure that any firewall rules you opened during provisioning are removed during the deprovisioning process (a minimal sketch of this record-and-remove idea appears below).
What happens to the data? Decisions have to be made about the data that was created while the application was in place. It may be that the data is copied off to a secure area, or it may be that everything created during the use of the application is removed as part of the deprovisioning process.
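A minimal sketch of that record-and-remove idea (the rule strings and function names are hypothetical, not from any particular firewall API): whatever provisioning opens, deprovisioning records and closes.

```python
# Hypothetical sketch: record every firewall rule opened while
# provisioning an application instance so that deprovisioning can
# remove exactly those rules, leaving nothing behind.
provisioned_rules: list[str] = []

def provision_rule(rule: str) -> None:
    provisioned_rules.append(rule)  # a real script would call the firewall API here
    print(f"opened:  {rule}")

def deprovision_all() -> None:
    while provisioned_rules:
        rule = provisioned_rules.pop()
        print(f"removed: {rule}")   # close rules in reverse order

provision_rule("allow tcp/443 internet -> app-vlan")
provision_rule("allow tcp/3306 app-vlan -> db-vlan")
deprovision_all()
```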
Elasticity and Scalability
When we design applications, we should create them in a manner that makes them resilient in the face of changing demand. We do this through the application of two related principles:
Scalability says that applications should be designed so that the computing resources they require may be incrementally added to support increasing demand. Elasticity goes a step further than scalability and says that applications should be able to automatically provision resources to scale when necessary and then automatically deprovision those resources to reduce capacity (and cost) when it is no longer needed.
Scalability
When we first build an application instance, we create it with a particular level of scalability: based on the particular web server, database server, and all of the other components that make up the application instance, we know we can support a certain load.
Maybe at a particular time of the year an application gets busier, or maybe the popularity of an application has taken off and suddenly a lot of people want to use it. It's important to be able to scale in proportion to meet the demand. Ex. You might build an application instance that can handle 100,000 transactions per second; everything you put together would handle any load up to that point, but 100,000 transactions per second may not be enough if demand grows beyond it.
Elasticity
We define Elasticity as the ability to increase or decrease the available resources for an application based on the workload. Elasticity refers to how quickly an application can respond to a needed change in workload.
Ex. If we know that our application's scalability can support 100,000 transactions per second and we suddenly need 500,000 transactions per second, we can take advantage of elasticity and create five separate application instances all running simultaneously. If the application becomes unpopular, we deprovision instances to match the reduced workload.
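A minimal sketch of the arithmetic behind such a scaling decision (the capacity figure and function name are illustrative, not from any cloud provider's API):

```python
import math

# Assume each application instance can handle 100,000 transactions
# per second (tps), matching the example above.
INSTANCE_CAPACITY_TPS = 100_000
MIN_INSTANCES = 1

def instances_needed(current_load_tps: int) -> int:
    """Return how many instances to provision for the current load."""
    return max(MIN_INSTANCES, math.ceil(current_load_tps / INSTANCE_CAPACITY_TPS))

print(instances_needed(500_000))  # 5 instances, as in the example above
print(instances_needed(40_000))   # demand fell: scale back down to 1
```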
Secure Coding Techniques
The good news is that there are a number of tools available to assist in the development of a defense-in-depth approach to security.
Through a combination of secure coding practices and security infrastructure tools, cybersecurity professionals can build robust defenses against application exploits.
Input Validation
Applications that allow user input should perform Input Validation of that input to reduce the likelihood that it contains an attack.
Improper input handling practices can expose applications to injection attacks, cross-site scripting attacks, and other exploits. Always ensure that validation occurs server-side rather than within the client's browser; client-side validation is useful for providing users with feedback on their input, but it should never be relied on as a security control.
Input Whitelisting is when a developer describes the exact type of input that is expected from the user and then verifies that the input matches that specification before passing it to other processes or servers.
Input Blacklisting describes potentially malicious input that must be blocked. Ex. Developers might restrict the use of HTML tags or SQL commands in user input.
When performing input validation, developers must be mindful of the types of legitimate input that may appear in a field, or they may inadvertently block valid entries. Ex. Blocking the apostrophe stops anybody from entering a name like O'Brian or O'Driscoll.
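A minimal input whitelisting sketch in Python (the field and pattern are illustrative): the specification deliberately allows apostrophes so that names like O'Brian are not rejected.

```python
import re

# Whitelist: a letter, then up to 63 more letters, spaces, hyphens,
# or apostrophes. Anything else is rejected outright.
NAME_PATTERN = re.compile(r"[A-Za-z][A-Za-z' -]{0,63}")

def is_valid_name(value: str) -> bool:
    """Accept the input only if it matches the expected specification."""
    return NAME_PATTERN.fullmatch(value) is not None

print(is_valid_name("O'Brian"))                    # True
print(is_valid_name("<script>alert(1)</script>"))  # False
```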
Normalization
Database Normalization is a set of design principles that database designers should follow when building and modifying databases.
Some of the advantages of implementing these principles:
Prevent data inconsistency
Reduce the need for restructuring existing databases
Make the database schema more informative
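As an illustrative sketch (the tables and columns are hypothetical), a normalized design stores each fact once and links tables by keys instead of repeating data:

```python
import sqlite3

# Hypothetical normalized schema: each student and each course is
# stored exactly once; enrollments reference them by key rather than
# duplicating names and titles, preventing inconsistent copies.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE students (student_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE courses  (course_id  INTEGER PRIMARY KEY, title TEXT NOT NULL);
CREATE TABLE enrollments (
    student_id INTEGER NOT NULL REFERENCES students(student_id),
    course_id  INTEGER NOT NULL REFERENCES courses(course_id),
    PRIMARY KEY (student_id, course_id)
);
""")
```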
Stored Procedures
A Stored Procedure provides an important layer of security between the user interface and the database.
A client application might issue a SELECT statement that asks for data from a particular table in the database, and a particular value within that table. If adversaries realize this command is pulling data straight from the application to the database, they may be able to modify the query and gain access to information that normally would not be available. This is where a stored procedure comes in: instead of having the client send the full query to the database server, the query is stored on the database server itself, and the application just sends a message saying CALL, followed by the name of the stored procedure to run. This stops users from being able to modify anything. The user is not able to modify any of the parameters of that database call; they can only choose to either run the stored procedure or not run it.
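A minimal sketch of calling a stored procedure from Python, assuming a MySQL database, the mysql-connector-python driver, and a hypothetical procedure named get_account_balance:

```python
import mysql.connector  # assumes the mysql-connector-python package is installed

# The connection details and procedure name are hypothetical. Note that
# the client never sends raw SQL; it only names the stored procedure
# and supplies its parameters.
conn = mysql.connector.connect(host="db.example.com", user="app",
                               password="example-secret", database="bank")
cursor = conn.cursor()
cursor.callproc("get_account_balance", (12345,))  # CALL get_account_balance(12345)
for result in cursor.stored_results():
    print(result.fetchall())
conn.close()
```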
Obfuscation/Camouflage
Obfuscation/Camouflage is a way to take something that normally is very easy to understand and make it very difficult to understand. From an application developer's perspective, this means taking application code they've written and changing it so that it is very difficult for other human beings to read. 'Hello World' can be written simply or in an overly complicated way; that's what obfuscation is!
Data Minimization reduces risk because you can't lose control of information that you don't have in the first place! Data minimization is the best defense: organizations should not collect sensitive information that they don't need and should dispose of any sensitive information they do collect as soon as it is no longer needed.
Tokenization replaces personal identifiers that might directly reveal an individual's identity with a unique identifier, using a lookup table. Ex. We might replace a widely known value, such as a student ID, with a randomly generated 10-digit number. We'd then maintain a lookup table that allows us to convert those numbers back to student IDs if we need to determine someone's identity.
Hashing uses a cryptographic hash function to replace sensitive identifiers with an irreversible string of characters. Salting these values with a random number prior to hashing makes the hashed values resistant to a type of attack known as a rainbow table attack.
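A minimal sketch of salted hashing and tokenization in Python (the student ID value is illustrative):

```python
import hashlib
import secrets

student_id = "S1234567"

# Salted hash: irreversible, and the random salt defeats precomputed
# rainbow tables because identical IDs no longer hash identically.
salt = secrets.token_hex(16)
digest = hashlib.sha256((salt + student_id).encode()).hexdigest()
print(digest)

# Tokenization: a random 10-digit token stands in for the identifier,
# and a lookup table converts it back when identity must be recovered.
token = str(secrets.randbelow(10**10)).zfill(10)
lookup_table = {token: student_id}
print(token, "->", lookup_table[token])
```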
Code Reuse / Dead Code
Many organizations Reuse Code not only internally but also by making use of third-party software libraries and software development kits (SDKs). Third-party software libraries are a common way to share code among developers.
Use of 3rd Party libraries and software development kits (SDKs)
Organizations also introduce third-party code into their environments when they outsource code development to other organizations. Security teams should ensure that outsourced code is subjected to the same level of testing as internally developed code. It's fairly common for security flaws to arise in shared code, making it extremely important to know these dependencies and remain vigilant about security updates.
Dead Code is code that has been put into the application and runs through some type of logic, but the ultimate results of that logic aren't used anywhere else within the app.
Make sure that any code put into the application is actually performing a useful function. Any code added to an application adds the potential for a security issue, so it's important that all of our coding follows best practices and avoids insecure code reuse and dead code.
Server-Side vs. Client-Side
Server-Side Validation is when data is sent to a server and validated by code on the same server.
This prevents the user from making changes to the data before it gets to the server. Server-side validation is usually considered the safer alternative; if you had to choose only one of the two, you would probably choose server-side validation.
Client-Side Validation is when data is validated on the client’s machine before being sent off to the server.
The end user's application validates the input on the client's machine and then sends it to the server for processing. Because all input validation checks are made on the local machine, this may speed things up a bit, since you're not waiting for the server to evaluate the input.
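A minimal server-side validation sketch using Flask (the route and field names are hypothetical). Even if the browser already validated the input, the server re-checks it before trusting it:

```python
import re
from flask import Flask, request, abort

app = Flask(__name__)
NAME_PATTERN = re.compile(r"[A-Za-z][A-Za-z' -]{0,63}")

@app.route("/register", methods=["POST"])
def register():
    # Validate on the server, regardless of any client-side checks.
    name = request.form.get("name", "")
    if NAME_PATTERN.fullmatch(name) is None:
        abort(400)  # reject invalid input before it reaches any backend logic
    return f"Registered {name}"
```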
Memory Management
Developers have to be mindful of how memory is allocated during application usage. Applications are often responsible for managing their own use of memory, and poor memory management frequently undermines the overall security of the application.
We also have to think about this from a security perspective, because users will input different types of data into the application, and that information will be stored in memory.
Resource Exhaustion
Whether intentional or accidental, systems may consume all of the memory, storage, processing time, or other resources available to them, rendering the system disabled or crippled for other uses.
Memory Leaks are one example of resource exhaustion. A leak occurs when an application fails to return memory that it no longer needs, perhaps by simply losing track of an object it has written to a reserved area of memory. The leak can slowly consume all the memory available to the system, causing it to crash. Rebooting the system often resets the problem, returning the memory to other uses, but if the memory leak isn't corrected, the cycle simply begins anew.
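Python is garbage-collected, so a classic leak looks a little different there, but this illustrative sketch shows the same pattern: memory that is retained and never released.

```python
# A module-level cache that only ever grows. Every payload stays
# reachable forever, so the interpreter can never reclaim that memory.
_cache = []

def handle_request(payload: bytes) -> None:
    _cache.append(payload)  # reference kept forever: usage grows steadily
    # ... process the payload ...

for _ in range(10_000):
    handle_request(b"x" * 1024)  # roughly 10 MB retained and never freed
```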
Pointer De-Referencing
Memory pointers can also cause security issues. Pointers are a commonly used concept in application development: they are simply an area of memory that stores the address of another location in memory.
Ex. We have a pointer called 'photo' that contains the address of the location in memory where the actual photo is stored. When an application needs to access the actual photo, it performs an operation called Pointer De-Referencing: the application follows the pointer and accesses the memory referenced by the pointer address. Issues arise if the pointer is empty or contains a null value. If the application tries to de-reference this null pointer, it causes a condition known as a null pointer exception. Best Case: A null pointer exception causes the program to crash, providing an attacker with access to debugging information that may be used for reconnaissance of the application's security. Worst Case: A null pointer exception may allow an attacker to bypass security controls.
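A Python analogue of this failure (illustrative; Python raises an exception where a lower-level language might crash the process outright):

```python
# 'photo' should reference an image object but was never assigned one,
# so it holds None -- the equivalent of a null pointer.
photo = None

try:
    photo.load()  # de-referencing the null reference
except AttributeError as exc:
    print(f"null dereference: {exc}")
```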
Buffer Overflow
Buffer Overflow attacks occur when an attacker manipulates a program into placing more data into an area of memory than is allocated for that program’s use. The goal is to overwrite other information in memory with instructions that may be executed by a different process running on the system.
An attacker sends more data than was expected and overflows a section of memory. This might cause the application to crash, or it might cause the application to provide the user with additional access that they would not normally have. Integer Overflow is simply a variant of a buffer overflow where the result of an arithmetic operation attempts to store an integer that is too large to fit in the specified buffer.
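Python integers never overflow on their own, so this illustrative sketch emulates an 8-bit buffer to show the wraparound behavior:

```python
# Emulate storing an arithmetic result in an 8-bit buffer, as a C
# uint8_t would: only the low 8 bits survive.
def add_uint8(a: int, b: int) -> int:
    return (a + b) & 0xFF

print(add_uint8(250, 10))  # 4, not 260: the result wrapped around
```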
Data Exposure
Minimizing Data Exposure should always be a main priority. In many applications, we input a lot of sensitive data: name, SSN, address, birthdate, medical information, and much more.
Make sure any sensitive information being stored is encrypted. If we're displaying this information on the screen, we may want to filter out some of it or display it in a more secure way. Any time data goes into an application or is sent from that application, we need to check it and make sure nothing sensitive is being exposed. By doing this, we can be assured that all of the data will be protected from the time we put it into the application until the time we see it on our screen.
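A minimal sketch of encrypting a sensitive field at rest, assuming the third-party cryptography package (any vetted encryption library would serve the same purpose):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, keep this in a key vault
cipher = Fernet(key)

ssn = b"078-05-1120"               # illustrative sensitive value
encrypted = cipher.encrypt(ssn)    # this ciphertext is what gets stored
print(cipher.decrypt(encrypted))   # readable only by holders of the key
```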
Open Web Application Security Project (OWASP)
One of the best resources for secure coding practices is the Open Web Application Security Project (OWASP). OWASP is the home of a broad community of developers and security practitioners, and it hosts many community-developed standards, guides, and best practice documents, as well as a multitude of open source tools.
OWASP's Top 10 Proactive Controls describes the most important controls and control categories that every architect and developer should absolutely, 100% include in every project:
Define Security Requirements: Implement security throughout the development process.
Leverage Security Frameworks and Libraries: Preexisting security capabilities can make securing applications easier.
Secure Database Access: Prebuild SQL queries to prevent injection and configure databases for secure access (see the parameterized-query sketch after this list).
Encode and Escape Data: Remove special characters.
Validate All Inputs: Treat user input as untrusted and filter appropriately.
Implement Digital Identity: Use multifactor authentication, secure password storage and recovery, and session handling.
Enforce Access Controls: Require all requests to go through access control checks, deny by default, and apply the principle of least privilege.
Protect Data Everywhere: Use encryption in transit and at rest.
Implement Security Logging and Monitoring: This helps detect problems and allows investigation after the fact.
Handle All Errors and Exceptions: Errors should not provide sensitive data, and applications should be tested to ensure that they handle problems gracefully.
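A minimal sketch of the Secure Database Access control using Python's built-in sqlite3 module (table and data are hypothetical). The ? placeholder keeps user input as data, so it cannot rewrite the SQL statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("O'Brian",))

user_supplied = "O'Brian"  # even quote characters are handled safely
rows = conn.execute("SELECT id FROM users WHERE name = ?",
                    (user_supplied,)).fetchall()
print(rows)
```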
Software Diversity
Software Diversity is when we use different tricks in the compiler to change where the paths go during the compilation process. This means that the final binary file will be different every time you compile the application. This won’t change the functionality of the application or the way that it works. It only changes the final binary file itself.
If an attacker finds a vulnerability in this file on one person's machine and creates an exploit for it, they may find that the exploit doesn't work on a different person's machine, because that machine is running a different version of the binary. Diversity in the compiled applications across machines thus minimizes the attack surface if anyone does find an exploit for a particular application. More broadly, security professionals seek to avoid single points of failure in their environments to avoid availability risks if an issue arises with a single component.
Security professionals should watch for places in the organization that are dependent on a single piece of source code, binary executable files, or compilers. Though it may not be possible to eliminate all of these dependencies, tracking them is a critical part of maintaining a secure codebase.
Automation and Scripting
The application development process is constantly moving and changing, and it's important to keep up with those changes, not only so that the application can evolve, but also so that the security around the application stays up to date.
Automated courses of action:
We can also create automation around the deployment of these applications, and automation based on how we react to problems that might occur while an application is executing.
Continuous Monitoring:
An approach where an organization constantly monitors its IT systems and networks to detect security threats, performance issues, or non-compliance problems in an automated manner. The goal is to identify potential problems and threats in real time so they can be addressed quickly. You may need to continuously check for a particular event and then react accordingly. Ex. If the storage area for an application's log files were to fill up, it would cause the application to fail, so we might constantly monitor that drive and make sure it never gets too full or too highly utilized. If we notice the drive is running out of space, we can automatically work behind the scenes to delete older files or free up the needed disk space.
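A minimal sketch of that log-volume check (paths, threshold, and cleanup policy are all illustrative):

```python
import os
import shutil

LOG_DIR = "/var/log/myapp"   # hypothetical log location
THRESHOLD = 0.90             # react when the disk is 90% full

def check_disk_and_react() -> None:
    usage = shutil.disk_usage(LOG_DIR)
    if usage.used / usage.total >= THRESHOLD:
        # Free space automatically by deleting the oldest logs first.
        logs = sorted(
            (os.path.join(LOG_DIR, name) for name in os.listdir(LOG_DIR)),
            key=os.path.getmtime,
        )
        for path in logs[:10]:
            os.remove(path)

# In production this would run on a schedule (cron, a systemd timer, etc.).
check_disk_and_react()
```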
Continuous Validation
Checking critical IT systems to make sure their settings are validated before going live is a necessity. Continuous Validation is the act of making sure the provisioning and deprovisioning of cloud systems are correct (valid).
Continuous Integration
Continuous Integration (CI) is a development practice in which application developers constantly update an application and check code into a shared central repository on a consistent, ongoing basis, perhaps merging many times a day. In CI environments, this can range from a few times a day to a very frequent process of check-ins and automated builds. The main goal of this approach is to enable the use of automation and scripting to implement automated courses of action that result in continuous delivery of code.
Continuous Deployment
Continuous Deployment (CD), sometimes called continuous delivery, rolls out tested changes into production automatically as soon as they have been tested.
Continuous Deployment relies on the automated build process of Continuous Integration, so the two are often paired: the testing and the release of the application are automated, with automated security checks occurring during the testing process. (Strictly speaking, when the pipeline waits for someone to click a button before deploying to production, the practice is usually called continuous delivery rather than continuous deployment.)
Version Control
Version Control, also known as source control, is the practice of tracking and managing changes to software code. Version control systems are software tools that help software teams manage changes to source code over time.
Software developers check their code into a version control system, and the next time they're ready to update that code, they can track anything that may have changed between one version and another.