Security is about finding a balance: the right level of protection weighed against the value of the asset and its importance to the organization. All systems have limits, and no person or company has unlimited funds to secure everything, so the most secure approach is not always possible. This requires risk management: decision-based security built on risk acceptance, avoidance, and reduction. Security is a continually changing, multifaceted process that requires you to build a multilayered defense-in-depth model. The stack concept demonstrates that defense can be layered at every level of the process.
Countermeasures Found in Each Layer
Security countermeasures are the controls used to protect the confidentiality, integrity, and availability of data and information systems. There is a wide array of security controls available at every layer of the stack. Overall security can be greatly enhanced by adding additional security measures, removing unneeded services, hardening systems, and limiting access.
- Virus Scanners Antivirus programs can use one or more techniques to check files and applications for viruses. Although the computer virus did not exist even as a concept until 1984, viruses are now a persistent and perennial problem, which makes maintaining antivirus software a requirement. These programs use a variety of techniques to scan for and detect viruses, including signature scanning, heuristic scanning, integrity checks, and activity blocking.
- Pretty Good Privacy (PGP) Phil Zimmermann initially developed PGP in 1991 as a free e-mail security application, which also made it possible to encrypt files and folders. PGP works by using a public/private key system that employs the International Data Encryption Algorithm (IDEA) to encrypt files and e-mail messages.
- Secure Multipurpose Internet Mail Extensions (S/MIME) S/MIME secures e-mail by using X.509 certificates for authentication. The Public Key Cryptographic Standard is used to provide encryption, and it can work in one of two modes: signed and enveloped. Signing provides integrity and authentication; enveloping provides confidentiality, authentication, and integrity.
- Privacy Enhanced Mail (PEM) PEM is an older e-mail security standard that provides encryption, authentication, and X.509 certificate-based key management.
- Secure Shell (SSH) SSH is a secure application-layer program that offers the remote login and file-transfer capabilities of Telnet and FTP without their weaknesses. The design of SSH means that no cleartext usernames or passwords are sent across the wire: all of the information flowing between the client and the server is encrypted, which greatly enhances network security. Packets can still be sniffed, but the information within them is unreadable.
- Secure Electronic Transaction (SET) SET is a protocol standard that was developed by MasterCard, VISA, and others to allow users to make secure transactions over the Internet. It features digital certificates and digital signatures, and makes use of Secure Sockets Layer (SSL).
- Terminal Access Controller Access Control System (TACACS) Available in several variations, including TACACS, Extended TACACS (XTACACS), and TACACS+, TACACS is a centralized access control system that provides authentication, authorization, and auditing (AAA) functions.
- Kerberos Kerberos is a network authentication protocol created at the Massachusetts Institute of Technology (MIT) that uses secret-key cryptography and facilitates single sign-on. Kerberos has three parts: a client, a server, and a trusted third party (the Key Distribution Center [KDC]) that mediates between them.
- SSL Netscape Communications Corp. initially developed SSL to provide security and privacy between clients and servers over the Internet. It's application-independent and can be used with HTTP, FTP, and Telnet. SSL uses Rivest, Shamir, & Adleman (RSA) public key cryptography and is capable of client authentication, server authentication, and encrypted connections.
- Transport Layer Security (TLS) TLS is similar to SSL in that it is application-independent. It consists of two sublayers: the TLS record protocol and the TLS handshake protocol.
- SOCKS SOCKS is a security protocol defined by Internet standard RFC 1928. It allows client/server applications to work behind a firewall and utilize its security features by relaying their traffic through a proxy.
- Secure RPC (S/RPC) S/RPC adds an additional layer of security to the RPC process by adding Data Encryption Standard (DES) encryption.
- IPSec IPSec is the most widely used standard for protecting IP datagrams. Since IPSec can be applied below the application layer, it can be used by any or all applications and is transparent to end users. It can be used in tunnel mode or transport mode.
- Point-to-Point Tunneling Protocol (PPTP) Developed by a group of vendors including Microsoft, 3Com, and Ascend, PPTP comprises two components: the transport that maintains the virtual connection and the encryption that ensures confidentiality. PPTP is widely used for virtual private networks (VPNs).
- Challenge Handshake Authentication Protocol (CHAP) CHAP is an improvement over earlier authentication protocols, such as Password Authentication Protocol (PAP), in which passwords are sent in cleartext. CHAP uses a predefined shared secret and a pseudorandom challenge value that is used only once: a hash of the two is generated and transmitted from client to server. This improves security because the challenge value is never reused and the hash cannot be reverse-engineered to recover the secret.
- Wired Equivalent Privacy (WEP) While not perfect, WEP attempts to add some measure of security to wireless networking. It is based on the RC4 symmetric encryption algorithm and uses either 64-bit or 128-bit keys. A 24-bit Initialization Vector (IV) is used to provide randomness; therefore, the real secret key is only 40 or 104 bits long. There have been many proven attacks based on the weaknesses of WEP.
- Wi-Fi Protected Access (WPA) WPA was developed as a replacement for WEP. It delivers a more robust level of security. WPA uses Temporal Key Integrity Protocol (TKIP), which scrambles the keys using a hashing algorithm and adds an integrity-checking feature that verifies that the keys haven't been tampered with. Next, WPA improves on WEP by increasing the IV from 24 bits to 48 bits. WPA also prevents rollover (i.e., key reuse is less likely to occur). Finally, WPA uses a different secret key for each packet.
- Packet Filters Packet filtering is configured through access control lists (ACLs). ACLs allow rule sets to be built that will allow or block traffic based on header information. As traffic passes through the router, each packet is compared to the rule set and a decision is made whether the packet will be permitted or denied.
- Network Address Translation (NAT) Originally developed to address the growing shortage of IP addresses, NAT is discussed in RFC 1631. NAT can be used to translate between private and public addresses. Private IP addresses are those considered non-routable (i.e., public Internet routers will not route traffic to or from addresses in these ranges).
- Fiber Cable The type of transmission media used can make a difference in security. Fiber is much more secure than copper cabling or unsecured wireless transmission methods.
- Secure Coding It is more cost-effective to build secure code up front than to go back and fix it later. Even moving from C to a managed language such as C# can have a big security impact. However, the drive for profits and the additional time that security-focused QA would introduce cause many companies not to invest in secure code.
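The CHAP exchange described above, where only a one-time hash crosses the wire, can be sketched in a few lines. This is a simplified illustration of the RFC 1994 scheme, not a wire-compatible implementation; the function names are my own.

```python
import hashlib
import hmac
import os

def chap_challenge() -> bytes:
    """Server side: generate a one-time pseudorandom challenge value."""
    return os.urandom(16)

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """Client side: hash the identifier, shared secret, and challenge
    together (MD5, as in RFC 1994). The secret itself never crosses
    the wire, and the hash cannot be reversed to recover it."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

def chap_verify(identifier: int, secret: bytes, challenge: bytes,
                response: bytes) -> bool:
    """Server side: recompute the expected hash from its own copy of
    the secret and compare. A replayed response fails because each
    challenge value is used only once."""
    expected = chap_response(identifier, secret, challenge)
    return hmac.compare_digest(expected, response)
```

Because a fresh challenge is issued for every authentication attempt, capturing one response gives an eavesdropper nothing to replay.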
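The first-match-wins logic of the packet-filter ACLs described above can be modeled directly. The rule set below is hypothetical, chosen only to illustrate how a router walks its list and falls through to an implicit deny.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str               # "permit" or "deny"
    src: str                  # source network in CIDR notation
    dst_port: Optional[int]   # destination port; None matches any port

# A hypothetical rule set: allow web traffic from the LAN, block the rest.
ACL = [
    Rule("permit", "192.168.1.0/24", 80),
    Rule("deny", "0.0.0.0/0", None),
]

def filter_packet(src_ip: str, dst_port: int, acl=ACL) -> str:
    """Compare the packet header against each rule in order;
    as on most routers, the first matching rule wins."""
    for rule in acl:
        if (ip_address(src_ip) in ip_network(rule.src)
                and (rule.dst_port is None or rule.dst_port == dst_port)):
            return rule.action
    return "deny"   # implicit deny when nothing matches
```

Real ACLs also match on destination address, protocol, and flags, but the evaluation model is the same.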
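The address translation that NAT performs can also be sketched. This is a toy model of port address translation under assumed names and addresses (the public address here is from a documentation range), not how any particular router implements it.

```python
from ipaddress import ip_address

PUBLIC_IP = "198.51.100.1"   # the router's single routable address (hypothetical)

nat_table = {}               # (private ip, private port) -> public port
_next_port = 50000

def translate_outbound(private_ip: str, private_port: int):
    """Rewrite a private source address/port to the router's public
    address and a tracked port, remembering the mapping so reply
    traffic can be translated back to the originating host."""
    global _next_port
    # Only RFC 1918-style non-routable addresses need translating.
    assert ip_address(private_ip).is_private, "address is already routable"
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = _next_port
        _next_port += 1
    return PUBLIC_IP, nat_table[key]
```

The same flow always maps to the same public port, while distinct internal hosts get distinct ports, which is what lets many private addresses share one public one.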
The Current State of IT Security
According to CERT, 1,090 vulnerabilities were reported in 2000, 2,437 in 2001, and 5,990 in 2005. With such an increase in the number of known vulnerabilities, it's important to consider how we got to this current state. There is also real value in studying the past to try and learn from our mistakes and prevent them in the future. What follows is a somewhat ordered look at the history of security.
Long before any other type of security was created, physical security existed. The Egyptians used locks more than 2,000 years ago. If information was important, it was carved in stone or, later, written on paper. When information was transmitted or moved from one location to another, it was usually done with armed guards. The only way for the enemy to gain the information was to physically seize it. The loss of information usually meant the loss of critical assets, because knowledge is power. Even when information was not in transit, many levels of protection were typically used to protect it, including guards, walls, dogs, moats, and fences.
All of these concerns over physical security made early asset holders anxious about the protection of their assets. Think about it: one mistake in transit meant that your enemy was now in control of vital information. There had to be a way to protect information in transit and in storage beyond purely physical means. One was found in encryption, which meant that the confidentiality of information in transit could be ensured. Encryption dates back to the Spartans, who used a form of encryption known as the scytale. The Hebrews used a basic cryptographic system called ATBASH that worked by replacing each letter with the letter the same distance from the end of the alphabet as the original was from the beginning (e.g., "A" would be sent as "Z" and "B" would be sent as "Y"). More complicated substitution ciphers were developed throughout the Middle Ages as individuals became better at breaking simple encryption systems. In the ninth century, al-Kindi published what is considered to be the first paper on breaking cryptographic systems. Titled "A Manuscript on Deciphering Cryptographic Messages," it deals with using frequency analysis to break cryptographic codes. In the first part of the twentieth century, the science of encryption and cryptography advanced more quickly because the US government and the National Security Agency (NSA) became involved. One of the key individuals who worked with the NSA in its early years was William Frederick Friedman, considered one of the best cryptologists of all time. Friedman helped break the encryption scheme used by the Japanese. Many of his inventions and cryptographic systems were never patented, because they were considered so significant that the release of any information about them would aid the enemy.
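Both ATBASH and al-Kindi's frequency analysis are simple enough to sketch. The code below is illustrative only; the function names are my own.

```python
import string
from collections import Counter

def atbash(text: str) -> str:
    """ATBASH: replace each letter with its mirror in the alphabet,
    so A maps to Z, B maps to Y, and so on. Applying the cipher
    twice recovers the original text."""
    fwd = string.ascii_uppercase
    rev = fwd[::-1]
    table = str.maketrans(fwd + fwd.lower(), rev + rev.lower())
    return text.translate(table)

def letter_frequencies(ciphertext: str):
    """al-Kindi's frequency analysis: count how often each letter
    occurs. Against a simple substitution cipher, the most frequent
    ciphertext letters likely stand for common plaintext letters
    such as E, T, and A."""
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    return Counter(letters).most_common()
```

Frequency analysis is exactly why simple substitution ciphers fell: the substitution hides the letters but not their statistics.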
While encryption provided another level of needed security, history shows that it wasn’t always enough. Systems like telephones were known to be vulnerable; at the same time, work began on a system to intercept electronic emissions from other systems. This developed into the TEMPEST program, a US-led initiative designed to develop shielding for equipment to make it less vulnerable to signal theft. The problem of signal security repeated itself when the first cordless phones were released. The early cordless phones had no security. If you and your neighbor had the same frequency or you had a scanner, your conversations were easy to intercept. Early cell phones were also easily intercepted. Luckily, there have been advances in signal security such as spread spectrum technology, which was pioneered by the military. This technology is implemented in two different methods: direct-sequence spread spectrum (DSSS) and frequency-hopping spread spectrum (FHSS). These systems of transmission provide security and improved reliability.
Computer security is focused on secure computer operations. The protection ring model provides the operating system with various levels at which to execute code or restrict access. It provides much greater granularity than a system that operates only in user and privileged modes. As you move toward the outer bounds of the model, the ring numbers increase and the level of trust decreases. Another advancement in computer security was the development of security models based on confidentiality and integrity. The Bell-LaPadula model was one of the first and was designed to protect the confidentiality of information. The Clark-Wilson model was the first integrity model, and differed from previous models because it was developed with commercial activities in mind. Clark-Wilson dictates that separation of duties must be enforced, subjects must access data through an application, and auditing is required. Bell-LaPadula, Clark-Wilson, and others led the US government to adopt standards to measure computer security controls. One of the first of these standards was the Trusted Computer System Evaluation Criteria (TCSEC), also known as the "Orange Book." The Orange Book rates the confidentiality protection of computer systems according to the following scale:
- A: Verified Protection The highest security division
- B: Mandatory Security Has mandatory protection of the TCB
- C: Discretionary Protection Provides discretionary protection of the TCB
- D: Minimal Protection Failed to meet any of the standards of A, B, or C; has no security controls
While network security has long been a concern, the advent of the Internet and the growth of e-commerce have increased the need. Most home users no longer use slow dial-up connections; they use DSL or cable Internet. Not only is there increased bandwidth, but many of these systems are always turned on, which means that attackers can benefit from the bandwidth available to these users to launch attacks.
The need for network security was highlighted by the highly successful attacks such as
Nimda, Code Red, and SQL Slammer. Nimda alone is believed to have infected more than 1.2 million computers. Once a system was infected, Nimda scanned the hard drive once every 10 days for e-mail addresses, which were used to send copies of itself to other victims. Nimda used its own internal mail client, making it difficult for individuals to determine who sent the infected e-mail. Nimda also had the capability to add itself to executable files to spread itself to other victims. Exploits such as these highlight the need for better network security.
Organizations are responding by implementing better network security. Firewalls have improved, many companies are moving to intrusion prevention systems (IPS), and antivirus and e-mail-filtering products have become must-haves. However, these products don't prevent crime; they simply move the criminal on to other, unprotected sites. While some organizations have taken the threat seriously and built adequate defenses, many others are still unprotected, so virus infections, malicious code, and DoS zombies are simply relocated to these careless users.
Where does this leave us? Physical security is needed to protect our assets from insiders and others who gain access. Communication security is a real requirement, as encryption offers a means to protect the confidentiality and integrity of information in storage and in transit. Signal security gives us the ability to prevent others from intercepting and using the signals that emanate from our facilities and electronic devices. Computer security provides the ability to trust our systems and the operating systems on which they are based, along with the functionality to control who has read, write, execute, or full control over our data and informational resources. Network security is another key component that has grown in importance as more and more systems have connected to the Internet, and with it the need for availability, which can be easily attacked. The Distributed Denial of Service (DDoS) attacks against Yahoo! and others in 2000 are good examples of this.
None of the items discussed is enough by itself to solve all security risks. Only
when combined together and examined from the point of information security can we start to build a complete picture. In order for information security to be successful, it also requires senior management support, good security policies, risk assessments, employee training, vulnerability testing, patch management, good code design, and so on.
Different types of security tests can be performed, ranging from those that merely examine policy (audit) to those that attempt to hack in from the Internet and mimic the activities of true hackers (penetration testing). The process of vulnerability testing includes a systematic examination of an organization’s network, policies, and security controls. The purpose is to determine the adequacy of security measures, identify security deficiencies, provide data from which to predict the effectiveness of potential security measures, and confirm the adequacy of such measures after implementation.
When performing vulnerability tests, never exceed the limits of your authorization. Every assignment has rules of engagement, which not only include what you are authorized to target, but also the extent that you are authorized to control or target such systems.
While you may be eager to try out some of the tools and techniques you find here, make certain that you receive proper authorization, documented in writing, before any testing begins. Even basic vulnerability testing tools like Nessus can bring down a computer system.
There are a variety of ways that an organization’s IT infrastructure can be probed, analyzed, and tested. Some common types of tests are:
- Security Audits This review seeks to evaluate how closely a policy or procedure matches the specified action. Are security policies actually used and adhered to? Are they sufficient?
- Vulnerability Scanning Tools like Nessus and others can be used to automatically scan single hosts or large portions of the network to identify vulnerable services and applications.
- Ethical Hacks (Penetration Testing) Ethical hacks seek to simulate the types of attacks that can be launched across the Internet. They can target HTTP, SMTP, SQL, or any other available service.
- Stolen Equipment Attack This simulation is closely related to physical security and communication security. The goal is to see what information is stored on company laptops and other easily accessible systems. Strong encryption is the number one defense for stolen equipment attacks. Otherwise attackers will probably be able to extract critical information, usernames, and passwords.
- Physical Entry This simulation seeks to test the organization’s physical controls. Systems such as doors, gates, locks, guards, Closed Circuit Television (CCTV), and alarms are tested to see if they can be bypassed.
- Signal Security Attack This simulation is tasked with looking for wireless access points and modems. The goal is to see if these systems are secure and offer sufficient authentication controls.
- Social Engineering Attack Social engineering attacks target an organization's employees and seek to manipulate them into giving up privileged information. Proper controls, policies and procedures, and user education can go a long way toward defeating this form of attack.
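The vulnerability-scanning step listed above starts, at its core, with discovering which services are listening. The sketch below is a toy TCP connect scan, far simpler than what Nessus actually does, with assumed function names; it is included only to show the basic probe.

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """A toy TCP connect scan: attempt a full three-way handshake to
    each port and report those that accept. Real scanners such as
    Nessus add service identification and vulnerability checks on
    top of this. Only run it against hosts you are explicitly
    authorized to test."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means connect succeeded
                open_ports.append(port)
    return open_ports
```

Even a scan this simple generates real traffic against the target, which is why written authorization must come first.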
One well-known open-source methodology is the Open Source Security Testing Methodology Manual
(OSSTMM). The OSSTMM divides security reviews into six key points known as sections:
- Physical Security
- Internet Security
- Information Security
- Wireless Security
- Communications Security
- Social Engineering
Other well-known methodologies and standards include NIST SP 800-26, OCTAVE, and ISO 17799.
Finding and Reporting Vulnerabilities
If your security testing is successful, you will probably find some potential vulnerabilities that need to be fixed. Throughout the security testing process, you should be in close contact with management to keep them abreast of your findings; there shouldn't be any big surprises dropped on them at the completion of the testing. Keep them in the loop. At the conclusion of these assessment activities, you should report on your initial findings before you develop a final report. You shouldn't be focused on solutions at this point, but on what you found and its potential impact.
Keep in mind that people don't like to hear about problems. Many times, administrators and programmers deny that a problem exists or that it is of any consequence, and there have been many stories of well-meaning security professionals being threatened with prosecution after reporting vulnerabilities. If you feel you must report a vulnerability in a system other than your own, CERT provides a form for reporting it anonymously. While this does not guarantee anonymity, it does add a layer of protection.
People who expose vulnerabilities on systems that they don't own or control may quickly find themselves accused of being hackers. The end result is that many researchers now advise individuals to walk away and not report vulnerabilities, because the risk of being prosecuted for computer intrusion is not worth it.