
Wednesday, 1 July 2015

What Is the Role of Security Switch in Managing Security Threats?

Managing who can access your network from the inside is more important than ever because nearly everyone is carrying a laptop, smartphone, or tablet configured to locate the nearest Wi-Fi network. Switches are the foundational framework of a network, connecting computers, servers, printers, and other devices. A security switch is essential for building a safe network environment: it checks traffic for various network attacks at the access level and blocks it based on its behaviour.
The first role of a security switch is to prevent trouble by blocking harmful traffic at the access level. Harmful traffic includes worms, viruses, malware, and DDoS attacks. The switch also prevents the internal spread of any harmful traffic that might bog down network speed. From a network administrator's perspective, a security switch helps maintain a stable network environment. From an ISP's perspective, it ensures high-quality Internet service and enhanced customer satisfaction.
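To make the behaviour-blocking idea concrete, here is a minimal, purely illustrative sketch of the kind of per-source rate check a security switch might apply at an access port. The threshold, class name, and logic are assumptions for illustration, not any vendor's actual implementation.

    from collections import defaultdict

    # Hypothetical illustration: flag hosts whose packet rate at an access port
    # exceeds a threshold, the kind of behaviour-based check a security switch
    # might apply before harmful traffic spreads further into the network.
    PACKETS_PER_SECOND_LIMIT = 5000  # assumed threshold, not a vendor default

    class AccessPortMonitor:
        def __init__(self, limit=PACKETS_PER_SECOND_LIMIT):
            self.limit = limit
            self.counters = defaultdict(int)   # packets seen per source MAC this interval
            self.blocked = set()               # sources currently blocked

        def observe(self, src_mac):
            """Count one packet from src_mac; block the source if it exceeds the limit."""
            if src_mac in self.blocked:
                return False                   # drop: source is already blocked
            self.counters[src_mac] += 1
            if self.counters[src_mac] > self.limit:
                self.blocked.add(src_mac)      # behaviour-based block at the access level
                return False
            return True                        # forward normally

        def end_interval(self):
            """Reset per-interval counters (called once per second by the switch)."""
            self.counters.clear()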
Another role of security switches is to protect the confidential information of individuals as well as of the companies that rely on them. A switch plays a critical role in keeping internal confidential information from leaking out. It also limits the risk of privacy infringement and financial loss due to IP phone wiretapping and common forms of hacking.
A switch utilizing multi-dimensional security engines can perform security functions by analyzing incoming and outgoing traffic transmitted through the switching fabric. A high-quality switch can do this regardless of network speed. Additionally, as data is analyzed, a switch can preserve maximum line performance by minimizing the additional resources consumed by harmful traffic filtering.
The key to an effective security switch is selectively blocking harmful packets while leaving other traffic untouched. This selectivity maintains business continuity for web and mail services while simultaneously creating a critical layer of protection against external threats.
Many security switch programs rely on an integrated security management system. This system makes it easy for network administrators to view the status of the network at any given moment on the same screen as their switch. It is essential for network administrators to have the ability to monitor and manage network conditions in real-time even when the workplace network is distributed. This includes gaining a detailed log of detected and blocked traffic. To maximize the value of this data it must be displayed in a way that is easy to understand and includes actionable information.
There are a growing number of different types of security switches available. The key is identifying the right option for particular institutions based upon a number of factors including cost, function, management capabilities, ease of installation, and overall effectiveness.

How To Use The Risk Management Framework for Requirement And Threat Traceability

Cybersecurity and Information Security (InfoSec) activities are implemented to protect data, information, systems, and users. Skilled security, program, and system stakeholders work together to ensure that business objectives are met while minimizing the risk of threats where data or system control may be lost. This loss may be due to theft, natural disasters, computer/server malfunction, unauthorized or risky operation, or any other threat. Program Management and security approaches are combined to maximize business functions and capabilities while also protecting the organization. These approaches include: Requirements Management, Risk Management, Threat Vulnerability Scanning, Continuous Monitoring, and System and Information Backups. All of these management approaches require significant experience to maximize results and to avoid issues that are otherwise preventable.
Program Managers, as representatives of their companies and clients, call for the timely delivery of quality products and services to operations. Significant experience maximizes product quality and performance while also minimizing risks. Experience facilitates oversight, open collaboration, and decision-making to maximize innovation, reliability, sustainability, and the coordination of assets and resources.
An important Program Management concern today is that a great deal of confidential information is collected, processed, and stored by every entity and shared across various private and public networks to other computers. Compounding this concern is the fast pace of change in technology, software, standards, and other areas that industry must stay aware of. It is essential that this information be carefully managed within businesses and protected, to shield both the business and its customers from widespread, irreparable financial loss, not to mention damage to the company's reputation. Protecting our data and information is an ethical and legal requirement for every project and requires proactive engagement to be effective.
Multiple Cybersecurity tools and techniques are used to effectively manage risk within system development and business operations. By necessity, management, engineering, and Cybersecurity activities must proactively work within the execution of requirements to maximize system functions and capabilities while also minimizing risks. Make no mistake; the threats to our businesses, systems, and users are real. Just as requirements must be sufficiently documented, so must the security controls intended to mitigate the known risks to our systems.
Requirements and threats are documented in much the same way so as to ensure traceability and repeatability. Proactive management is needed to implement, execute, control, test, verify, and validate that the requirements have been met and the applicable threats have been mitigated. The management difference is that while requirements must ultimately be met, threats are managed and mitigated according to their likelihood and severity for our users, businesses, and systems. Risks are documented to show how they are managed and mitigated. Documenting these requirements and threats and their supporting details is the key to the proactive and repeatable effort that is needed. We believe the best approach is to keep this management as straightforward as possible and as detailed as needed to plan, execute, and control the program or business.
Risk Management Framework (RMF) processes are applied to the Security Controls found in Cybersecurity and Information Security references. These RMF activities are well documented and overlap with the best practices of management and engineering. Often, you will find that the activities the RMF recommends are activities you should already be doing with significant proficiency. Traceability of these program and security activities requires the ability to verify the history and status of every security control, regardless of whether the system is in development or in operation. Documentation is, by necessity, detailed. Traceability links each requirement and security control to the strategies, policies, plans, processes, procedures, control settings, and other information needed to ensure repeatable lifecycle development and repeatable operations.
Program Management and Risk Management experience is of primary importance to managing requirements and risk. A tremendous and fundamental aid for the experienced manager is the Requirement Traceability Matrix (RTM) and the Security Control Traceability Matrix (SCTM). The RTM and SCTM are direct in purpose and scope, which facilitates traceability and repeatability for the program. The variables of an RTM and SCTM can be very similar and are tailorable to the needs of the program and customer. The RTM and SCTM are separate but similar documents whose content may include:
1) A unique RTM or SCTM identification number for each requirement and security control,
2) referenced ID numbers of any associated items for requirements tracking,
3) a detailed, word-for-word description of the requirement or security control,
4) technical assumptions or customer need linked to the functional requirement,
5) the current status of the functional requirement or security control,
6) a description of the function to the architectural/design document,
7) a description of the functional technical specification,
8) a description of the functional system component(s),
9) a description of the functional software module(s),
10) the test case number linked to the functional requirement,
11) the functional requirement test status and implementation solution,
12) a description of the functional verification document, and
13) a miscellaneous comments column that may aid traceability.
While the contents of the RTM and SCTM are flexible, the need for such tools is not. With the complexity and need to protect systems and services today from multiple threats, experienced managers, engineers, users and other professionals will look for the traceability that quality and secure systems require.
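As a purely illustrative sketch (not a prescribed schema), the thirteen items above could be captured as a simple record per requirement or security control. The field names and the example values below are assumptions for demonstration only; any real RTM or SCTM would be tailored to the program and customer.

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Illustrative sketch of a single RTM/SCTM row following the thirteen items
    # listed above. Field names are assumptions, not a prescribed schema.
    @dataclass
    class TraceabilityEntry:
        entry_id: str                                            # 1) unique RTM/SCTM identification number
        related_ids: List[str] = field(default_factory=list)    # 2) associated item IDs for tracking
        description: str = ""                                    # 3) word-for-word requirement or control text
        assumptions: str = ""                                    # 4) technical assumptions or customer need
        status: str = "Open"                                     # 5) current status
        design_reference: Optional[str] = None                   # 6) architectural/design document
        technical_spec: Optional[str] = None                     # 7) technical specification
        system_components: List[str] = field(default_factory=list)  # 8) system component(s)
        software_modules: List[str] = field(default_factory=list)   # 9) software module(s)
        test_case_id: Optional[str] = None                       # 10) linked test case number
        test_status: str = "Not Run"                             # 11) test status and implementation solution
        verification_doc: Optional[str] = None                   # 12) verification document
        comments: str = ""                                       # 13) miscellaneous comments

    # Example row for a notional security control (identifiers are invented)
    entry = TraceabilityEntry(
        entry_id="SCTM-AC-2-001",
        description="The system enforces approved authorizations for account management.",
        status="Implemented",
        test_case_id="TC-117",
        test_status="Passed",
    )
    print(entry)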

Tuesday, 22 July 2014

Complexity Science in Cyber Security

1. Introduction
Computers and the Internet have become indispensable for homes and organisations alike. The dependence on them increases by the day, whether for household users, mission-critical space control, power grid management, medical applications, or corporate finance systems. In parallel, the challenge of continued and reliable delivery of service is becoming a bigger concern for organisations. Cyber security is at the forefront of the threats organisations face, with a majority rating it higher than the threat of terrorism or a natural disaster.
In spite of all the focus Cyber security has had, it has been a challenging journey so far. The global spend on IT Security is expected to hit $120 Billion by 2017 [4], and that is one area where the IT budget for most companies either stayed flat or slightly increased even in the recent financial crises [5]. But that has not substantially reduced the number of vulnerabilities in software or attacks by criminal groups.
The US Government has been preparing for a "Cyber Pearl Harbour" [18] style all-out attack that might paralyze essential services and even cause physical destruction of property and loss of life. It is expected to be orchestrated from the criminal underbelly of countries like China, Russia or North Korea.
The economic impact of Cyber crime is $100B annually in the United States alone [4].
There is a need to fundamentally rethink our approach to securing our IT systems. Our approach to security so far has been siloed, focused on point solutions for specific threats such as anti-virus software, spam filters, intrusion detection, and firewalls [6]. But we are at a stage where Cyber systems are much more than just tin-and-wire and software. They involve systemic issues with social, economic, and political components. The interconnectedness of systems, intertwined with a people element, makes IT systems inseparable from the human element. Complex Cyber systems today almost have a life of their own; Cyber systems are complex adaptive systems that we have tried to understand and tackle using more traditional theories.
2. Complex Systems - an Introduction
Before getting into the motivations for treating a Cyber system as a Complex system, here is a brief overview of what a Complex system is. Note that the term "system" could be any combination of people, process, or technology that fulfils a certain purpose: the wristwatch you are wearing, sub-oceanic reefs, or the economy of a country are all examples of a "system".
In very simple terms, a Complex system is any system in which the parts of the system and their interactions together represent a specific behaviour, such that an analysis of all its constituent parts cannot explain the behaviour. In such systems cause and effect cannot necessarily be related, and the relationships are non-linear - a small change could have a disproportionate impact. In other words, as Aristotle said, "the whole is greater than the sum of its parts". One of the most popular examples used in this context is an urban traffic system and the emergence of traffic jams; analysis of individual cars and car drivers cannot explain the patterns and emergence of traffic jams.
A Complex Adaptive System (CAS) additionally has characteristics of self-learning, emergence, and evolution among the participants of the complex system. The participants or agents in a CAS show heterogeneous behaviour, and their behaviour and interactions with other agents are continuously evolving. The key characteristics for a system to be characterised as Complex Adaptive are:
  • The behaviour or output cannot be predicted simply by analysing the parts and inputs of the system
  • The behaviour of the system is emergent and changes with time. The same input and environmental conditions do not always guarantee the same output.
  • The participants or agents of a system (human agents in this case) are self-learning and change their behaviour based on the outcome of the previous experience
Complex processes are often confused with "complicated" processes. A complex process is something that has an unpredictable output, however simple the steps might seem. A complicated process is something with lots of intricate steps and difficult-to-achieve pre-conditions but with a predictable outcome. An often used example is: making tea is Complex (at least for me... I can never get a cup that tastes the same as the previous one), building a car is Complicated. David Snowden's Cynefin framework gives a more formal description of the terms [7].
Complexity as a field of study isn't new; its roots can be traced back to Aristotle's work on Metaphysics [8]. Complexity theory is largely inspired by biological systems and has been used in social science, epidemiology, and natural science for some time now. It has been used in the study of economic systems and free markets alike and is gaining acceptance for financial risk analysis as well (refer to my paper on Complexity in financial risk analysis [19]). It has not been very popular in Cyber security so far, but there is growing acceptance of complexity thinking in applied sciences and computing.
3. Motivation for using Complexity in Cyber Security
IT systems today are all designed and built by us (as in the human community of IT workers in an organisation plus suppliers), and collectively we have all the knowledge there is to have regarding these systems. Why then do we see new attacks on IT systems every day that we had never expected, attacking vulnerabilities that we never knew existed? One of the reasons is the fact that any IT system is designed by thousands of individuals across the whole technology stack, from the business application down to the underlying network components and hardware it sits on. That introduces a strong human element into the design of Cyber systems, and opportunities for introducing flaws that could become vulnerabilities are ubiquitous [9].
Most organisations have multiple layers of defence for their critical systems (layers of firewalls, IDS, hardened O/S, strong authentication etc), but attacks still happen. More often than not, a successful computer break-in is a collision of circumstances rather than the exploitation of a single standalone vulnerability. In other words, it's the "whole" of the circumstances and actions of the attackers that causes the damage.
3.1 Reductionism vs Holism approach
Reductionism and Holism are two contradictory philosophical approaches for the analysis and design of any object or system. The Reductionists argue that any system can be reduced to its parts and analysed by "reducing" it to the constituent elements; while the Holists argue that the whole is greater than the sum so a system cannot be analysed merely by understanding its parts [10].
Reductionists argue that all systems and machines can be understood by looking at their constituent parts. Most of the modern sciences and analysis methods are based on the reductionist approach, and to be fair they have served us quite well so far. By understanding what each part does you really can analyse what a wristwatch will do, by designing each part separately you really can make a car behave the way you want it to, and by analysing the positions of celestial objects we can accurately predict the next solar eclipse. Reductionism has a strong focus on causality - there is a cause for every effect.
But that is the extent to which the reductionist viewpoint can explain the behaviour of a system. When it comes to emergent systems like human behaviour, socio-economic systems, biological systems or socio-cyber systems, the reductionist approach has its limitations. Simple examples such as the human body, the response of a mob to a political stimulus, the reaction of the financial market to news of a merger, or even a traffic jam cannot be predicted even when the behaviour of the constituent members of these 'systems' is studied in detail.
We have traditionally looked at Cyber security through a Reductionist lens, with specific point solutions for individual problems, and tried to anticipate the attacks a cyber-criminal might launch against known vulnerabilities. It's time we start looking at Cyber security through a Holistic lens as well.
3.2 Computer Break-ins are like pathogen infections
Computer break-ins are more like viral or bacterial infections than a home or car break-in [9]. A burglar breaking into a house can't really use it as a launch pad to break into the neighbours' houses, nor can a vulnerability in the lock system of one car be exploited on a million others across the globe simultaneously. Break-ins are more akin to microbial infections of the human body: they propagate the infection as humans do, they are likely to impact large portions of the population of a species as long as its members are "connected" to each other, and in cases of severe infection the systems are generally 'isolated', just as people are put in 'quarantine' to reduce further spread [9]. Even the lexicon of Cyber systems uses biological metaphors - virus, worms, infections, and so on. Cyber security has many parallels with epidemiology, but the design principles often employed in Cyber systems are not aligned with natural selection principles. Cyber systems rely heavily on uniformity of processes and technology components, in contrast to the diversity of genes among organisms of a species that makes the species more resilient to epidemic attacks [11].
The Flu pandemic of 1918 killed ~50M people, more than the Great War itself. A huge share of humanity was infected, but why did it impact 20-40 year olds more than others? Perhaps a difference in body structure caused a different reaction to the attack?
Complexity theory has gained great traction and proven quite useful in epidemiology, in understanding the patterns of spread of infections and ways of controlling them. Researchers are now turning towards applying their learnings from the natural sciences to Cyber systems.
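As a toy illustration of that epidemiological parallel (not taken from any of the cited work), the following sketch spreads an "infection" across a randomly wired network of hosts. The network size and the infection/recovery probabilities are arbitrary assumptions chosen only to show the SIR-style dynamics.

    import random

    # Toy susceptible-infected-recovered (SIR) spread over a random host network,
    # illustrating the epidemiological parallel described above. All parameters
    # are arbitrary assumptions.
    random.seed(1)
    N_HOSTS, N_LINKS = 200, 600
    INFECT_PROB, RECOVER_PROB = 0.05, 0.1

    neighbours = {h: set() for h in range(N_HOSTS)}
    while sum(len(v) for v in neighbours.values()) < 2 * N_LINKS:
        a, b = random.sample(range(N_HOSTS), 2)
        neighbours[a].add(b)
        neighbours[b].add(a)

    state = {h: "S" for h in range(N_HOSTS)}
    state[0] = "I"                                 # patient zero

    for step in range(50):
        newly_infected, recovered = [], []
        for h, s in state.items():
            if s != "I":
                continue
            for n in neighbours[h]:
                if state[n] == "S" and random.random() < INFECT_PROB:
                    newly_infected.append(n)
            if random.random() < RECOVER_PROB:
                recovered.append(h)
        for h in newly_infected:
            state[h] = "I"
        for h in recovered:
            state[h] = "R"
        infected = sum(1 for s in state.values() if s == "I")
        print(f"step {step:2d}: infected hosts = {infected}")
        if infected == 0:
            break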
4. Approach to Mitigating security threats
Traditionally there have been two different and complementary approaches to mitigate security threats to Cyber systems, both of which are in use in most practical systems today [11]:
4.1 Formal validation and testing
This approach primarily relies on the testing team of an IT system to discover any faults that could expose a vulnerability exploitable by attackers. This could be functional testing to validate that the system gives the correct answers as expected, penetration testing to validate its resilience to specific attacks, and availability/resilience testing. The scope of this testing is generally the system itself, not the frontline defences deployed around it.
This is a useful approach for fairly simple self-contained systems where the possible user journeys are fairly straightforward. For most other interconnected systems, formal validation alone is not sufficient as it's never possible to 'test it all'.
Test automation is a popular approach to reduce the human dependency of the validation processes, but as Turing's Halting problem of Undecidability[*] proves, it's impossible to build a machine that tests another one in all cases. Testing is only anecdotal evidence that the system works in the scenarios it has been tested for, and automation helps gather that anecdotal evidence quicker.
4.2 Encapsulation and boundaries of defence
For systems that cannot be fully validated through formal testing processes, we deploy additional layers of defence in the form of firewalls or network segregation, or encapsulate them into virtual machines with limited visibility of the rest of the network. Other common additional defence mechanisms are Intrusion Prevention Systems, anti-virus, and so on.
This approach is ubiquitous in most organisations as a defence against unknown attacks, as it's virtually impossible to formally ensure that a piece of software is free from any vulnerability and will remain so.
Approaches from the Complexity sciences could prove a quite useful complement to these more traditional ways. The versatility of computer systems makes them unpredictable, or capable of emergent behaviour that cannot be predicted without "running it" [11]. Also, running it in isolation in a test environment is not the same as running a system in the real environment it is supposed to be in, as it's the collision of multiple events that causes the apparent emergent behaviour (recalling Holism!).
4.3 Diversity over Uniformity
Robustness to disturbances is a key emergent behaviour in biological systems. Imagine a species with all organisms having exactly the same genetic structure, same body configuration, similar antibodies and immune system - the outbreak of a viral infection would wipe out the entire community. But that does not happen, because we are all formed differently and all of us have different resistance to infections.
Similarly, some mission-critical Cyber systems, especially in the Aerospace and Medical industries, implement diverse implementations of the same functionality, and a centralised 'voting' function decides the response to the requester if the results from the diverse implementations do not match.
It's fairly common to have redundant copies of mission-critical systems in organisations, but they are homogeneous rather than diverse implementations, making them just as susceptible to faults and vulnerabilities as the primary ones. If the implementation of the redundant systems is made different from the primary - a different O/S, different application container or database version - the two variants would have different levels of resilience to certain attacks. Even a change in the sequence of memory stack access could vary the response to a buffer overflow attack on the variants [12], alerting the central 'voting' system that there is something wrong somewhere. As long as the input data and the business function of the implementations are the same, any deviation in the responses of the implementations is a sign of a potential attack. If a true service-based architecture is implemented, every 'service' could have multiple (but a small number of) heterogeneous implementations, and the overall business function could randomly select which implementation of a service it uses for each new user request. A fairly large number of different execution paths could be achieved using this approach, increasing the resilience of the system [13].
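Here is a minimal sketch of that 'voting' idea, assuming two hypothetical variants of the same business function. The trivial implementations and the raise-on-divergence policy are purely illustrative, not a description of how any real MVEE or voting system works.

    import hashlib

    # Minimal sketch of the 'voting' idea: the same business function implemented
    # two different ways; a mismatch in their responses is treated as a potential
    # sign of compromise. The implementations here are trivial placeholders.
    def variant_a(payload: str) -> str:
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

    def variant_b(payload: str) -> str:
        # A deliberately different code path that produces the same logical result.
        digest = hashlib.sha256()
        for byte in payload.encode("utf-8"):
            digest.update(bytes([byte]))
        return digest.hexdigest()

    def voted_response(payload: str) -> str:
        results = {variant_a(payload), variant_b(payload)}
        if len(results) != 1:
            # Divergent behaviour across variants: flag it rather than answer.
            raise RuntimeError("Variant responses diverge - possible attack or fault")
        return results.pop()

    print(voted_response("example request"))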
Multi-Variant Execution Environments (MVEEs) have been developed, in which applications with slight differences in implementation are executed in lockstep and their responses to a request are monitored [12]. These have proven quite useful in detecting intrusions that try to change the behaviour of the code, and even in identifying existing flaws where the variants respond differently to a request.
Along similar lines, using the N-version programming concept [14], an N-version antivirus was developed at the University of Michigan that used heterogeneous implementations to examine new files for corresponding virus signatures. The result was a more resilient anti-virus system, less prone to attacks on itself, with 35% better detection coverage across the estate [15].
4.4 Agent Based Modelling (ABM)
One of the key areas of study in Complexity science is Agent Based Modelling, a simulation modelling technique used to understand and analyse the behaviour of Complex systems, specifically Complex Adaptive systems. The individuals or groups interacting with each other in the Complex system are represented by artificial 'agents' that act according to a predefined set of rules. The agents can evolve their behaviour and adapt to the circumstances. Contrary to Deductive reasoning[†], which has most popularly been used to explain the behaviour of social and economic systems, simulation does not try to generalise the system and agents' behaviour.
ABMs have been quite popular for studying things like crowd behaviour during a fire evacuation, the spread of epidemics, market behaviour, and, more recently, financial risk analysis. It is a bottom-up modelling technique in which the behaviour of each agent is programmed separately and can differ from all other agents. The evolutionary and self-learning behaviour of agents can be implemented using various techniques, Genetic Algorithms being one of the popular ones [16].
Cyber systems are interconnections between software modules, wiring of logical circuits, microchips, the Internet and a number of users (system users or end users). These interactions and actors can be implemented in a simulation model in order to do what-if analysis, predict the impact of changing parameters and interactions between the actors of the model. Simulation models have been used for analysing the performance characteristics based on application characteristics and user behaviour for a long time now - some of the popular Capacity & performance management tools use the technique. Similar techniques can be applied to analyse the response of Cyber systems to threats, designing a fault-tolerant architecture and analysing the extent of emergent robustness due to diversity of implementation.
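As a hedged illustration of what such a model might look like (and not a description of any particular tool), here is a toy agent-based simulation in which attacker agents probe a set of services while a defender agent patches them. Every class name, rule, and probability is an invented assumption for demonstration.

    import random

    # Toy agent-based model: attacker agents probe services, a defender agent
    # patches them. All rules and probabilities are invented for illustration.
    random.seed(42)

    class Service:
        def __init__(self, name):
            self.name = name
            self.vulnerable = random.random() < 0.5
            self.compromised = False

    class Attacker:
        def act(self, services):
            target = random.choice(services)
            if target.vulnerable and random.random() < 0.3:
                target.compromised = True

    class Defender:
        def act(self, services):
            for s in services:
                if s.vulnerable and random.random() < 0.2:
                    s.vulnerable = False          # patched

    services = [Service(f"svc-{i}") for i in range(20)]
    agents = [Attacker(), Attacker(), Defender()]

    for step in range(30):
        for agent in agents:
            agent.act(services)
        compromised = sum(s.compromised for s in services)
        print(f"step {step:2d}: compromised services = {compromised}")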
One of the key areas of focus in Agent Based Modelling is the "self-learning" process of agents. In the real world, the behaviour of an attacker would evolve with experience. This aspect of an agent's behaviour is implemented by a learning process, Genetic Algorithms being one of the most popular techniques for it. Genetic Algorithms have been used in automobile and aeronautical engineering design, in optimising the performance of Formula One cars [17], and in simulating investor learning behaviour in simulated stock markets (implemented using Agent Based models).
An interesting visualisation of a Genetic Algorithm - or a self-learning process in action - is the demo of a simple 2D car design process that starts from scratch with a set of simple rules and ends up with a workable car from a blob of different parts: http://rednuht.org/genetic_cars_2/
The self-learning process of agents is based on "Mutations" and "Crossovers" - two basic operators in Genetic Algorithm implementation. They emulate the DNA crossover and mutations in biological evolution of life forms. Through crossovers and mutations, agents learn from their own experiences and mistakes. These could be used to simulate the learning behaviour of potential attackers, without the need to manually imagine all the use cases and user journeys that an attacker might try to break a Cyber system with.
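To make the two operators concrete, here is a small, self-contained sketch that evolves bit-string "strategies" with single-point crossover and random mutation. The fitness function, genome length, and rates are arbitrary assumptions rather than anything specific to attacker modelling.

    import random

    # Sketch of the two genetic operators mentioned above, applied to bit-string
    # "strategies". The fitness function is an arbitrary stand-in for how well a
    # simulated attacker's strategy performs.
    random.seed(7)
    GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 16, 20, 40, 0.05
    TARGET = [1] * GENOME_LEN                    # assumed 'ideal' strategy for scoring

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def crossover(a, b):
        point = random.randint(1, GENOME_LEN - 1)
        return a[:point] + b[point:]             # single-point crossover

    def mutate(genome):
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 2]    # simple truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print("best fitness after evolution:", fitness(max(population, key=fitness)))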
5. Conclusion
Complexity in Cyber systems, especially the use of Agent Based Modelling to assess the emergent behaviour of systems, is a relatively new field of study with very little research done on it yet. There is still some way to go before Agent Based Modelling becomes a commercial proposition for organisations. But given the focus on Cyber security and the inadequacies of our current stance, Complexity science is certainly an avenue on which practitioners and academia are increasing their focus.
Commercially available products and services using Complexity-based techniques will, however, take a while to enter mainstream commercial organisations.

Wednesday, 2 July 2014

What Are Security Best Practices? Why Follow Them?

Everyone should be concerned about computer security. It determines whether your confidential information is safe from cyber thieves. Computers with weak defenses can endanger your financial health and your family's personal safety.
The number of computer criminals and attacks continues to grow, and so does their sophistication. Cyberspace is becoming increasingly dangerous. You must take steps to protect yourself. You can do so by implementing what are known as "security best practices".
What are security best practices? The phrase refers to the procedures, processes, and habits that you routinely follow to "harden" your computer. Let's examine a few.
1. Use robust passwords - Your password should consist of at least 11 characters and include at least one uppercase letter and one special character. Avoid common or pop-culture words, birthdays of family and friends, the name of your pet, or other terms that could easily be discovered. (A short sketch of such a check appears after this list.)
2. Always lock your machine - When you leave your computer unattended, lock the workstation. Otherwise your machine is accessible to anyone nearby.
3. Avoid downloading apps, screen savers and software from unknown sources. Malicious hackers frequently embed malware inside desirable products and offer them for free. Once you have downloaded the software, it can burrow into your computer system and wreak havoc. Your computer may even become a "bot" and attack others.
4. Avoid opening email attachments from unknown senders - Malicious software could be installed on your system.
5. Double-check requests for information that you receive from a company with whom you do business. It could be a "phishing attack". Cyber criminals are skilled and can present a screen that appears to come from a trusted source. Crackers have duplicated PayPal's request-for-information pages, for example, to gain personal information under false pretenses.
6. Avoid questionable websites that focus on gambling, porn or get rich quick schemes. Many of these sites will automatically scan your computer for known vulnerabilities and, once found, exploit them. Your system will be compromised.
7. Install an antivirus software package and use it. There are a number of excellent products on the market. Antivirus software looks for virus signatures and blocks them.
8. Change your wireless router's password from the factory setting. Certain routers ship with a default password that may be known to hackers. Anyone within range of your signal could otherwise use it to access your network.
9. Be cautious about sharing removable media with your computer. Malicious software could be transferred onto your machine from a friend or associate's USB drive, for example, without your knowledge.
10. Perform a "white hat hack" on your system. Such a procedure can identify any vulnerabilities that exist. Gibson Research has an excellent and free program.
11. Keep your software updated. Install recommended patches from the publisher. Consider automating the process. Malicious computer users are up-to-date on vulnerabilities and know what to attack.
12. Install and use a firewall. There are both hardware and software firewalls. You can block specific senders when using a firewall.
13. Terminate your Internet connection when you finish your work. The Internet is one of the biggest attack venues. Disable your connection to the Internet and reduce the attack surface that nefarious hackers can use.
14. Encrypt your critical information. A number of free or inexpensive encryption programs are published, such as PGP (Pretty Good Privacy).
15. Consider using more than one method to access your computing resources. A password is one level of authentication (something you know). Consider adding a token (something you possess) or a fingerprint reader (something you are).
16. Be discreet when using social media. Cyber criminals prowl sites of this type for scraps of information that can be used in exploits against you.
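As promised in item 1, here is a minimal, purely illustrative sketch of that password check: at least 11 characters, at least one uppercase letter, and at least one special character. Which characters count as "special" is an assumption of this sketch.

    import re

    # Simple sketch of the password rule in item 1: length of at least 11,
    # at least one uppercase letter, and at least one special character.
    # The character class used for "special" is an assumption.
    def is_robust(password: str) -> bool:
        return (
            len(password) >= 11
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None
        )

    print(is_robust("correcthorse"))         # False: no uppercase or special character
    print(is_robust("Tr!ckier-passphrase"))  # True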