Biometric Security and its importance in the future

Biometric security could play an important role in securing future computer systems. Biometric security identifies users through something the user is, by measuring physical characteristics such as fingerprints, retinal patterns and even DNA (biometric, Online Computing Dictionary). Authentication can be achieved in a variety of ways. One of the most fundamental and frequently used methods today is the password or PIN code, which can be categorised as something the user knows. Another category is something the user has; this method usually involves issuing physical objects such as identity badges or physical keys (Pfleeger, 2003). A relatively new method of authentication is biometrics. This report will discuss the limitations and strengths of biometric security and compare biometrics to the other qualities that authentication mechanisms use: something the user knows and something the user has. It will not delve into descriptions of the individual biometric measures, instead discussing the benefits and problems of biometric security.

When a biometric security system is implemented, several things are required. The user is scanned into the system and the main features of the scanned object are extracted. A compact and expressive digital representation of the user is then stored as a template in a database. When a person attempts to enter the system they are scanned again, the main features are extracted and converted into a digital representation, and this file is compared to the templates in the database. If a match is found, the user is granted access to the system (Dunker, 2004).

A disadvantage of this template-based design, which most biometric devices use, is that it allows an attacker to gain entry into the system by intercepting and capturing the template file and then injecting it into the communications line, spoofing the system into treating the attacker as an authorised user.
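As a rough sketch of the enrolment-and-matching process described above (not of any particular product; the feature extraction, names and threshold below are simplified placeholders):

    # Simplified sketch of template enrolment and matching. A real system would
    # extract features from a fingerprint or iris image; here the "features"
    # are just a short list of numbers and the threshold is arbitrary.
    def extract_features(scan):
        # Placeholder: a real extractor would process the raw scan data.
        return scan

    def similarity(sample, template):
        # Fraction of feature values that match between sample and template.
        return sum(1 for x, y in zip(sample, template) if x == y) / len(template)

    database = {"alice": extract_features([3, 7, 2, 9, 4])}   # stored template

    def authenticate(user, scan, threshold=0.8):
        sample = extract_features(scan)
        return similarity(sample, database[user]) >= threshold

    print(authenticate("alice", [3, 7, 2, 9, 4]))  # True  (genuine user)
    print(authenticate("alice", [1, 0, 2, 9, 5]))  # False (impostor)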

Another drawback of this design is that it requires personal data such as DNA, thumbprints and other sensitive measurements to be saved as template files on database servers. This creates additional security risks and privacy issues, such as who has access to the data; these and similar questions would have to be addressed before a biometric system was implemented.

Biometric systems allow for error when scanning the user, to provide better functionality and usability. A FAR (false acceptance rate, the probability of accepting an unauthorised user) and an FRR (false rejection rate, the probability of incorrectly rejecting a genuine user) are set to balance security against the inconvenience of a genuine user being denied access. Assuming these rates are set correctly, a biometric device can differentiate between an authorised person and an impostor (Itakura & Tsujii, 2005).
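A small sketch of how FAR and FRR might be estimated for a chosen match threshold; the scores and threshold below are made-up illustration values, not figures from any cited study:

    # Estimate FAR and FRR from similarity scores of known genuine users and
    # known impostors, for one candidate threshold.
    genuine_scores  = [0.91, 0.88, 0.95, 0.72, 0.90]   # authorised users
    impostor_scores = [0.30, 0.45, 0.81, 0.20, 0.55]   # unauthorised users

    threshold = 0.80

    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s <  threshold for s in genuine_scores)  / len(genuine_scores)

    print(f"FAR = {far:.0%}, FRR = {frr:.0%}")  # FAR = 20%, FRR = 20%
    # Raising the threshold lowers the FAR but raises the FRR, and vice versa.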

Biometric devices could create a simpler and more user-friendly environment for their users. Lost or stolen cards and forgotten passwords cause major headaches for support desks and users alike. This problem is eradicated with biometric security, since it is practically impossible for a user to forget or leave their hand or eye at home. Other identification methods, which rely on the user remembering a password or carrying an object such as a smart card, are also easier to compromise than biometrics; for example, approximately 25% of ATM card users write the PIN on their ATM card, rendering the PIN security useless (Dunker, 2004). Since biometric devices measure a unique characteristic of each person, they are more reliable in allowing access only to the intended people. Resources can then be diverted to other uses, since they are not being wasted on policing purchased tickets or resetting passwords. An example from our local area is the recent upgrade of the Transperth system to smart cards, which means security guards can focus on keeping people safe instead of checking tickets and issuing fines.

Biometric security is not a new form of security; signatures have provided a means of security for decades. But measuring human characteristics such as fingerprints and iris patterns using computer systems is a new security method. Because this new form of biometrics is in its preliminary stages, it suffers from common development issues: expense and a lack of testing in real-world situations mean biometrics cannot be widely deployed today. Once these "teething" problems are overcome, however, biometrics could become a powerful security method (Dunker, 2004).

Imagine a scenario where you are your own key to everything. Your thumb opens your safe, starts your car and gives access to your account records. This could seem very convenient. However, once biometric security is attacked you cannot exactly change your fingerprint or your DNA structure. Your biometric data is also not really a secret: you touch objects all day, and your iris pattern can be collected from anywhere you look. A large security risk is created if someone steals your biometric information, because it remains stolen for life; unlike conventional authentication methods, you cannot simply ask for a new one (Schneier, 1999).

Biometrics could become very useful, but unless handled properly they should not be used as keys: keys need to be secret and need the ability to be destroyed and renewed, and at the present stage biometrics do not have these qualities. Although still in its early stages, a proposal by Yukio Itakura and Shigeo Tsujii for biometric authentication based on cryptosystem keys containing biometric data would enable biometric devices to be secure and more reliable when used as a key. The system works by generating a public key from two secret keys: one generated from a hash function of the biometric template data, and another created from a random number generator (Itakura & Tsujii, 2005).
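A loose sketch of that general idea follows. This is not Itakura and Tsujii's actual scheme; it only illustrates deriving one secret from a hash of the template data and a second from a random number generator, with a hypothetical combining step:

    # Illustration only: one secret derived from a hash of the biometric
    # template, a second from a random number generator, then combined.
    # NOT the actual Itakura-Tsujii cryptosystem.
    import hashlib, secrets

    template_data = b"placeholder biometric template bytes"     # stored template
    secret_from_biometric = hashlib.sha256(template_data).digest()
    secret_from_rng = secrets.token_bytes(32)                    # random secret

    # Hypothetical combining step (the real scheme defines this cryptographically):
    key_material = hashlib.sha256(secret_from_biometric + secret_from_rng).hexdigest()
    print("derived key material:", key_material)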

In conclusion, biometric devices are definitely a viable option for the future, but as discussed they have several issues that need to be dealt with before real-world installation will occur. Biometric devices give their users convenient, trouble-free authentication, but at present they also have certain security loopholes that need to be closed.

References

Itakura, Y., & Tsujii, S. (2005, October). Proposal on a multifactor biometric authentication method based on cryptosystem keys containing biometric signatures. International Journal of Information Security, 4(4), 288.

Jain, A., Hong, L., & Pankanti, S. (2000, February). Biometric identification. Communications of the ACM, 43(2), 90.

Pfleeger, C. P., & Pfleeger, S. L. (2003). Security in Computing (3rd ed.). Upper Saddle River, NJ: Prentice Hall Professional Technical Reference.

Schneier, B. (1999, August). The uses and abuses of biometrics. Communications of the ACM, 42(8), 136.

Weinstein, L. (2006, April). Fake ID; batteries not included. Communications of the ACM, 49(4), 120.

The Internet RFC 789 Case Study 4 – Faults and Solutions

Events like the ARPAnet crash are considered major failures; the network went down for quite some time. With the management tools and software of today, the ARPAnet managers may have been able to avoid the crash completely, or at least to detect and correct it much more efficiently than they did.

Dropped Bits

The routing updates created by IMP 50 were not thoroughly protected against dropped bits. Increased planning, organising and budgeting would have been valuable here: the managers knew that not enough resources had been allocated to this protection, and because CPU cycles and memory were scarce they were allocated to other tasks. Only the update packets as a whole were checked for errors. Once an IMP receives an update it stores the information from the update in a table, and if a retransmission is required it simply rebuilds the packet from the information in that table. For maximum reliability the tables themselves would therefore need to be checksummed as well, but this did not appear to be a cost-effective option, as checksumming large tables requires a lot of CPU cycles (Rosen, 1981).
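A small sketch of what checksumming the stored table would look like; the table layout and field names are illustrative, not taken from the IMP software:

    # Detect a corrupted table entry (e.g. a dropped bit) before an update
    # packet is rebuilt from the stored table.
    import zlib

    def table_checksum(table):
        data = repr(sorted(table.items())).encode()
        return zlib.crc32(data)

    routing_table = {50: {"seq": 44, "neighbours": [29, 31]}}
    stored_crc = table_checksum(routing_table)

    # Later, before rebuilding an update packet from the table:
    if table_checksum(routing_table) != stored_crc:
        raise RuntimeError("routing table corrupted; do not retransmit from it")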

Apart from the limited checksumming, the hardware had parity checking disabled. This was because the hardware had been reporting parity errors when in fact there were none. This is a common security problem: fail-safe measures are installed but simply disabled because they do not work correctly, when instead the system should have been fixed so that it reported errors correctly.

More checksumming might have detected the problem, but that is not to say that checksumming will always be free of cracks for bits to fall through. Another option would have been to modify the routing algorithm itself, but again this could have fixed one problem while allowing others to arise.

Performance and Fault Management

The crash of the ARPAnet was not a single technical fault; rather, it was a number of faults which added together to bring the network down. No algorithm or protection strategy can be guaranteed to be fail-safe. Instead the managers should have aimed to reduce the likelihood of a crash or failure and, in the event of one, to have detection and performance-monitoring methods in place that would reveal earlier that something was wrong. Detecting when high-priority processes consume more than a given (high, e.g. 90 per cent) share of system resources would still allow updates to be processed while leaving enough capacity for the line up/down packets to be sent.

If the ARPAnet managers had been able to properly control the network through network monitoring tools that give reports on system performance, they might have been able to respond to the problem before the network became unstable and unusable. Network monitoring can be active (routinely testing the network by sending messages) or passive (collecting data and recording it in logs). Either type might have been a great asset, as the falling performance could have been detected, allowing them to avoid fire-fighting the problem, i.e. trying to repair it after the damage is done (Dennis, 2002). Having said this, system resources were scarce, as already mentioned, and the sending and recording of data requires memory and CPU time that might not have been available; some overall speed might have needed to be sacrificed to allow for network monitoring.
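A loose illustration of the active style of monitoring follows; the host addresses, log file name and use of a Unix-style ping command are assumptions for the sketch, not anything from the case study:

    # Periodically test reachability of key nodes and log the result, so
    # falling performance is noticed early rather than after a crash.
    import subprocess, time, logging

    logging.basicConfig(filename="network.log", level=logging.INFO)
    HOSTS = ["10.0.0.1", "10.0.0.2"]   # hypothetical node addresses

    def probe(host):
        # One ICMP echo request; return code 0 means the host answered.
        result = subprocess.run(["ping", "-c", "1", host], capture_output=True)
        return result.returncode == 0

    while True:
        for host in HOSTS:
            status = "up" if probe(host) else "DOWN"
            logging.info("%s %s is %s", time.ctime(), host, status)
        time.sleep(60)   # probe every minute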

Other network management tools, such as alarms, could also have allowed the problem to be corrected much more efficiently by alerting staff as soon as a fault occurred. Alarm software would not only have alerted staff earlier but would also have made it easier to pinpoint the cause of the fault, allowing a fix to be implemented quickly.

Network Control

The lack of control over the misbehaving IMPs proved to be a major factor in the downtime. The IMPs were a number of kilometres away from each other, and fixes and patches had to be loaded onto the machines remotely, which took several hours because of the network's slow speeds. The lack of control over the network's allocation of resources was detrimental and slowed the network's recovery. Even with the network in a down state, a modern network monitoring tool would have been able to download software to a network device, configure parameters, and back up, change and alter settings at the manager's discretion (Duck & Read, 2003).

Planning and Organising

The lack of proper planning, organising and budgeting was one of the main factors that caused the network to fail. The ARPAnet managers were aware of the lack of protection against dropped bits but, due to constraints in costs and hardware and an "it won't happen to us" attitude, were prepared to disregard it.

Better forecasting and budgeting might have allowed them to put more checking in place, which could have picked up the problem straight away. Only the packets were checked (checking the tables that the packets were copied into was not considered cost-effective), and this left a large hole in their protection; evidently it was not considered a large enough problem, on the assumption that it would probably never happen. Documenting the fact that there was no error checking on the tables might also have reduced the amount of time it took them to correct the error (Dennis, 2002).

The RFC 789 case study points out that they knew bit dropping could be a future problem. If this is true, then procedures should have been documented for future reference as a means of solving the problem of dropped bits. Such planning could have severely cut the downtime of the network.

Correct application of the 5 key management tasks and the network managing tools available today would have made the ARPAnet crash of 1980 avoidable or at least easier to detect and correct.

Better planning and documentation would have allowed the managers to look ahead and acknowledge any gaps in their protection. They could have prepared documentation and procedures highlighting that they did not have protection on certain aspects of the network.

Once the crash had occurred, uncertain decisions were made as to what the problem could possibly be. Planning, organising and directing could have helped resolve the situation in a more productive manner than the fire-fighting technique that was used.

The Internet RFC 789 Case Study 3 – How the ARPA Crash Occurred

An interesting and unusual problem occurred on October 27th, 1980 in the ARPA network. For several hours the network was unusable but still appeared to be online. This outage was caused by a high-priority process executing and consuming more system resources than it should. The ARPAnet's IMPs (Interface Message Processors), which were used to connect computers to each other, suffered a number of faults. Restarting individual IMPs did nothing to solve the problem because, as soon as they connected to the network again, the IMPs continued with the same behaviour and the network remained down.

It was eventually found that there were bad routing updates. These updates are created at least once per minute by each IMP and contain information such as the IMP's direct neighbours and the average packets per second across each line. The fact that the IMPs could not keep their lines up was also a clue: it suggested that they were unable to send the line up/down protocol messages because of heavy CPU utilisation. After a certain amount of time the lines would have been declared down because the up/down protocol messages could not be sent.

A core dump (a snapshot of the IMPs' memory) showed that all IMPs had routing updates waiting to be processed, and it was later revealed that all of the updates came from one IMP, IMP 50.

It also showed that IMP 50 had been malfunctioning before the network outage, unable to communicate properly with its neighbour IMP 29, which was itself malfunctioning: IMP 29 was dropping bits.

The updates from IMP 50 that were waiting to be processed followed a pattern: 8, 40, 44, 8, 40, 44, and so on. This was because of the way the algorithm determined which update was the most recent: 44 was considered more recent than 40, 40 more recent than 8, and 8 more recent than 44. This set of updates therefore formed an infinite loop, and the IMPs were spending all their CPU time and buffer space processing it, accepting each update because the algorithm judged it more recent than the last. The immediate problem was easily fixed by ignoring any updates from IMP 50; but what still had to be found was how IMP 50 managed to get three such updates into the network at once.

The answer was in IMP 29, which was dropping bits. Looking at the six bits that make up the sequence numbers of the updates, we can see the problem:

8 – 001000

40 – 101000

44 – 101100

If the first update was 44, then 40 could easily have been created by an accidentally dropped bit, and 40 could in turn be turned into 8 by dropping another bit. This would produce the three updates from the same IMP that created the infinite loop.
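A minimal sketch of this kind of wraparound comparison is shown below, assuming six-bit sequence numbers where a forward difference of up to half the sequence space counts as "more recent"; the exact rule used in the ARPANET code may have differed in detail:

    SEQ_SPACE = 64  # 6-bit sequence numbers

    def is_more_recent(a: int, b: int) -> bool:
        """Return True if sequence number a is judged more recent than b."""
        return 0 < (a - b) % SEQ_SPACE <= SEQ_SPACE // 2

    updates = [8, 40, 44]
    for i, a in enumerate(updates):
        b = updates[i - 1]   # compare each update with the previous one (wrapping)
        print(f"{a} more recent than {b}? {is_more_recent(a, b)}")
    # Each of 8, 40 and 44 is judged more recent than the one before it,
    # so the three updates chase each other forever: 8 < 40 < 44 < 8 < ...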

The Internet RFC 789 Case Study 2 – Network Management

Network managers play a vital part in any network system: the organisation and maintenance of networks so that they remain functional and efficient for all users. They must plan, organise, direct, control and staff the network to maintain speed and efficiency. Carrying out these tasks supports the four basic functions of a network manager: configuration management, performance and fault management, end-user support, and managing the ongoing costs associated with maintaining the network.

Network Managing Tasks

The five key tasks in network management as described in Networking in an Internet Age by Alan Dennis (2002, p.351) are:

Planning tasks: forecasting, establishing network objectives, scheduling, budgeting, allocating resources and developing network policies.

Organising tasks: developing organisational structure, delegating, establishing relationships, establishing procedures and integrating the smaller organisation with the larger organisation.

Directing tasks: initiating activities, decision making, communicating and motivating.

Controlling tasks: establishing performance standards, measuring performance, evaluating performance and correcting performance.

Staffing tasks: interviewing people, selecting people and developing people.

It is vital that these tasks are carried out; neglect in one area can cause problems later down the track. For example, bad organisation could mean an outage lasts twice as long as it should, and bad decision making when designing the network topology and choosing communication methods could mean the network is not fast enough for the organisation's needs even when running at full capacity.

Four Main Functions of a network manager

The functions of a network manager can be broken down into four basic functions: configuration management, performance and fault management, end-user support and cost management. Sometimes a task that a network manager performs covers more than one of these functions, such as documenting the configuration of hardware and software, performance reports, budgets and user manuals. The five key tasks of a network manager must be carried out in order to cover these basic functions, as this keeps the network working smoothly and efficiently.

Configuration management

Configuration management is managing a network's hardware and software configuration and documentation. It involves keeping the network up to date, adding and deleting users and the constraints those users have, as well as writing the documentation for everything from hardware and software to user profiles and application profiles.

Keeping the network up to date involves changing network hardware and reconfiguring it, as well as updating software on client machines. Innovative software called electronic software distribution (ESD) is now available, allowing managers to install software remotely on client machines over the network without physically touching the client computer, saving a lot of time (Dennis, 2002).

Performance and Fault Management

Performance and fault management are two functions that need to be continually monitored in the network. Performance management is concerned with the optimal settings and setup of the network. It involves monitoring and evaluating network traffic and then modifying the configuration based on those statistics (Chiu & Sudama, 1992).

Fault management is preventing, detecting and rectifying problems in the network, whether the problem is in the circuits, hardware or software (Dennis, 2002). Fault management is perhaps the most basic function, as users expect a reliable network, whereas slightly better efficiency in the network can go unnoticed in most cases.

Performance and fault management rely heavily on network monitoring which keeps track of the network circuits and the devices connected and ensures they are functioning properly (Fitzgerald & Dennis 1999).

End User Support

End-user support involves solving any problems that users encounter whilst using the network. The three main functions of end-user support are resolving network faults, solving user problems and training end users. These problems are usually solved by going through troubleshooting guides set out by the support team (Dennis, 2002).

Cost Management

Costs increase as network services grow; this is a fundamental economic principle (Economics Basics: Demand and Supply, 2006). Organisations are committing more resources to their networks and need effective and efficient management in place to use those resources wisely and minimise costs.

In cost management, the TCO (total cost of ownership) is used to measure how much it costs a company to keep a computer operating. It takes into account the costs of repairs, the support staff who maintain the network, software and upgrades, as well as hardware upgrades. In addition to these costs it also counts wasted time, for example the cost to a store manager whilst staff learn a newly implemented computer system. The inclusion of wasted time is widely used, although many companies dispute whether it should be counted. NCO (network cost of ownership) focuses on everything except wasted time: it examines the direct costs rather than invisible costs such as wasted time.
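A small worked example of the distinction, using entirely hypothetical per-computer annual figures (not taken from Dennis or any survey), is sketched below:

    # Hypothetical figures only, to illustrate the TCO vs NCO distinction.
    hardware_and_upgrades = 1200   # per computer per year
    software_licences     = 400
    support_staff_share   = 900
    repairs               = 150
    wasted_time           = 650    # e.g. staff time lost learning a new system

    nco = hardware_and_upgrades + software_licences + support_staff_share + repairs
    tco = nco + wasted_time

    print(f"NCO (direct costs only): {nco}")       # 2650
    print(f"TCO (including wasted time): {tco}")   # 3300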

The Internet RFC 789 Case Study 1

The ARPANET (Advanced Research Projects Agency NETwork) was the beginning of the Internet: a network of four computers put together by the U.S. Department of Defence in 1969. It later expanded into a faster and more public network called NSFNET (National Science Foundation Network), which then grew into the Internet as we know it today. On the 27th of October, 1980 there was an unusual occurrence within the ARPANET: the network crashed for several hours due to high-priority processes that were executing, exhausting system resources and causing downtime within the system (Rosen, 1981).

With today's network management tools the system failure could have been avoided. Network manager responsibilities such as planning, organising, directing, controlling and staffing (Dennis, 2002) would have allowed the situation to be handled correctly had these tools been available. The RFC 789 case study by Rosen summarises that the main problems the managers experienced were the initial detection that a problem existed and control of the problematic software and hardware. Assuming they were available, these management responsibilities and tools would have allowed a much quicker and more efficient recovery of the system. However, if careful planning and organising had been carried out when the system was implemented, the crash might have been avoided completely.

Transmitting a message between computers through the OSI layers

The Open Systems Interconnection (OSI) model is a network model: a framework of standards containing three groups of layers with a total of seven layers. It provides a set way for computers and devices to 'talk' to each other, as a means of avoiding compatibility issues. During the 1970s computers started to communicate with each other without regulation; it was not until the late 1970s that the speed of this communication started to increase and standards were created. These standards group the layers into three application layers, two internetwork layers and two hardware layers.

When a message is transmitted from one computer to another through these seven layers, protocols are wrapped around the data. Each layer uses a formal language, or protocol, that is a set of instructions describing what the layer will do to the message, and these protocols are attached to, or encapsulated around, the data. You could think of the protocols as layers of wrapping paper around a message, each of which only the corresponding layer understands. Each layer handles a different aspect of the connection; these are discussed below.

The first layer is the application layer. It controls what data is submitted and deals with the communication link, such as establishing authority, identifying communication partners and setting the level of privacy. It is not the interface that the user sees; the client program creates that. When a user clicks a web link, the software on the computer which understands HTTP (such as Internet Explorer or Netscape Communicator) translates the click into an HTTP request message.

The presentation layer may perform encryption and decryption of data, data compression and translation between different data formats. This layer is also concerned with displaying, formatting and editing user inputs and outputs. Many requests, such as website requests, do not use the presentation layer; often there is no software installed at this layer, and it is therefore rarely used.

The session layer, as the name suggests, deals with the organisation of the session. The layer creates the connection between the applications, enforces the rules for carrying on the session and, if the session does fail, tries to reinstate the connection. When computers communicate they need to be in synchronisation, so if either party fails to send information the session layer provides a synchronisation point from which the communication can continue.

The transport layer ensures that a reliable channel exists between the communicating computers. The layer breaks the data into smaller, easy-to-handle packets ready for transmission by the data link layer, and it also translates the destination address into a numeric address for easier handling at the lower levels. The protocols that the transport layer uses must be the same on all communicating computers. It is in this layer that protocols such as the Transmission Control Protocol (TCP) are used; TCP allows computers running different applications and environments to communicate effectively.

The website request has now been encapsulated with two different protocols, HTTP and TCP, and is almost ready to move around the network. The network layer routes the data from node to node around the network, as multiple nodes exist, and will avoid a computer if it is not passing packets on. Any computer connected to the Internet must be able to understand TCP/IP, as it is the internetwork layers that enable the computer to find other computers and deliver messages to them.

The IP packet, containing the TCP and HTTP protocols nested inside one another, is now ready for the data link layer. The data link layer manages the physical transmission performed by the next layer: it decides when to transmit messages over the devices and cabling, adds start and stop markers to the message, and detects and eliminates any errors that occur during transmission. This is necessary because the next layer sends data without understanding its meaning. An Ethernet frame is wrapped around the message, which is then passed on to the physical layer.

It is in the physical layer that the Ethernet frame (and the other protocols inside it) is converted into a digital signal consisting of a series of ones and zeros (binary), and through cabling the website request message is sent to the web server. When the server receives the request, the whole process is reversed: the Ethernet frame is "unpacked", going back up through each layer until it reaches the application layer and the message is read. The process then starts again as the requested web page is sent back in another message to the person who requested it.
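The wrapping-and-unwrapping idea can be sketched in a few lines of Python; the layer names and header strings here are purely illustrative stand-ins for the real headers each layer would add:

    # Illustrative sketch of encapsulation and decapsulation through the layers.
    def encapsulate(message: str) -> list:
        packet = ["HTTP", message]      # application layer: HTTP request
        packet = ["TCP", packet]        # transport layer: TCP segment
        packet = ["IP", packet]         # network layer: IP packet
        packet = ["Ethernet", packet]   # data link layer: Ethernet frame
        return packet                   # physical layer sends this as bits

    def decapsulate(packet: list) -> str:
        # The receiver strips each wrapper in reverse order.
        while isinstance(packet, list):
            header, packet = packet
        return packet

    frame = encapsulate("GET /index.html")
    print(frame)               # ['Ethernet', ['IP', ['TCP', ['HTTP', 'GET /index.html']]]]
    print(decapsulate(frame))  # GET /index.html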

Reference:

Carr, H. H., & Snyder, C. A. (2007). Data Communications & Network Security. United States of America: McGraw-Hill/Irwin, pp. 124-129.

Dennis, A. (2002). Networking In The Internet Age: Application Architectures. United States of America: John Wiley and Sons, Inc.

Dostálek, L., & Kabelová, A. (2006). Understanding TCP/IP. Retrieved August 6, 2006 from http://www.windowsnetworking.com/articles_tutorials/Understanding-TCPIP-Chapter1-Introduction-Network-Protocols.html

A repeater, bridge, router and gateway

The repeater, bridge, router and gateway are all pieces of network equipment that work at various levels of the OSI model, performing different tasks. The repeater exists in the physical layer of the OSI model and is the cheapest of all the devices mentioned. A repeater can be thought of as a line extender, as connections on media such as 10BaseT and 100BaseT become weak beyond distances of 100 metres. In an analog environment the repeater receives a signal and replicates it to form a signal that matches the old one; in a digital environment the repeater receives the signal and regenerates it. Using a repeater in a digital network can create strong connections between the two connected segments, since any distortion or attenuation is removed. Unlike routers, repeaters are restricted to linking identical network topology segments, e.g. a token-ring segment to another token-ring segment. A repeater amplifies whatever comes in on one port, extending the network length, and sends it out on all other ports (there is no calculation to find the best path on which to forward packets). This means that only one network connection can be active at a time.

A bridge is an older way of connecting two local area networks, or two segments (subnets) of the same network, at the data link layer. A bridge is more powerful than a repeater, as it operates on the second layer (data link) of the OSI network model. Messages are sent out to every address on the network and accepted by all nodes; the bridge learns which addresses are on which network and develops a routing or forwarding table so that subsequent messages can be forwarded to the right network. There are two types of bridge device: a transparent hub bridge and a translating bridge. A translating bridge connects two local area networks (LANs) that use different data link protocols, by translating the data into the appropriate protocol, e.g. from a token-ring to an Ethernet network. A transparent hub bridge performs the same functions as a translating bridge but only connects two LANs that use the same data link protocol.
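The table-learning behaviour can be sketched as follows; this is a toy model in Python, with made-up port numbers and addresses, and a real bridge would also age old entries out of its table:

    # A learning bridge notes which port each source address was seen on, then
    # forwards frames only to the port where the destination is known,
    # flooding to all other ports when the destination is still unknown.
    class LearningBridge:
        def __init__(self, ports):
            self.ports = ports
            self.table = {}   # address -> port

        def receive(self, frame_src, frame_dst, in_port):
            self.table[frame_src] = in_port               # learn where the source lives
            if frame_dst in self.table:
                return [self.table[frame_dst]]            # forward to the known port
            return [p for p in self.ports if p != in_port]  # otherwise flood

    bridge = LearningBridge(ports=[1, 2])
    print(bridge.receive("aa:aa", "bb:bb", in_port=1))  # [2]  (flooded, destination unknown)
    print(bridge.receive("bb:bb", "aa:aa", in_port=2))  # [1]  (learned from the first frame)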

Routers are used in the majority of home networks today and are placed at the gateways of networks. They are used to connect two LANs together (such as two departments) or to connect a LAN to an Internet service provider (ISP). Routers use headers and forwarding tables, like a bridge, to determine the best path for forwarding packets. Routers are more complex than bridges and use protocols such as the Internet Control Message Protocol (ICMP) to communicate with each other and to calculate the best route between two nodes. A router differs in that it ignores frames that are not addressed to it, and it uses algorithms and protocols that allow it to send packets along the best possible path. A router operates at the third OSI layer (the network layer) and can be dynamic or static. Once a static routing table is constructed the paths do not change: if a link or connection is lost the router will issue an alarm but will not be able to redirect traffic automatically, unlike dynamic routing. Routers are slower than bridges but more powerful, as they can split and reassemble frames, receive them out of order and choose the best possible route for transmission; these extra features make routers more expensive than bridges.

Gateways connect networks with different architectures by performing protocol conversion at the application level. The gateway is the most complex device, operating at all seven layers of the OSI model. Gateways are used to connect LANs to mainframes or to connect a LAN to a wide area network (WAN). Gateways can provide the following:

Connecting networks with different protocols

Terminal emulation, so workstations can emulate dumb terminals (with all computing logic on a server machine)

Error detection on transmitted data and monitoring of traffic flow

File sharing and peer-to-peer communications between a LAN and a host.

Reference:

Carr, H. H., & Snyder, C. A. (2007). Data Communications & Network Security. United States of America: McGraw-Hill/Irwin, pp. 124-129.

Dennis, A. (2002). Networking In The Internet Age: Application Architectures. United States of America: John Wiley and Sons, Inc.

Dostálek, L., & Kabelová, A. (2006). Understanding TCP/IP. Retrieved August 6, 2006 from http://www.windowsnetworking.com/articles_tutorials/Understanding-TCPIP-Chapter1-Introduction-Network-Protocols.html

Hardware and Data Security: Portable storage devices

Portable storage devices are a threat to data confidentiality, and this threat is not recognised in the majority of organisations. Eric Ouellet, vice president of security research at Gartner Inc. in Stamford, said in a recent article that as few as 10% of enterprises have policies that deal with removable storage devices. This low recognition is not due to any inability to control the problem, as solutions are available (Mearian, March 2006). Data confidentiality can be explained quite simply as access to data being limited to predetermined authorised people or systems. Many organisations spend hundreds, and sometimes hundreds of thousands, of dollars on network and computer security and therefore on data confidentiality. Yet, innocently or intentionally, guests, employees and visitors who have access to any workstation can breach data confidentiality quickly and furtively. Through the use of portable storage devices, data confidentiality could be breached in an organisation in a number of ways; these include physical theft of a storage device in order to retrieve data (e.g. a hard drive) and copying the data with the aid of various devices such as a flash drive (Pfleeger, 2003).

Physical theft

Hard drives

Physical theft of storage devices would be the most obvious breach of data confidentiality. Since many of today's systems are backed up onto portable storage devices themselves, physical theft of such a device creates a direct threat to any organisation. Encryption of portable storage devices makes the stolen information, and the device, useless to thieves. It also ensures that forensic retrieval cannot be carried out after a hard drive has been thrown out, lost or stolen.

Forensic retrieval can be used to recover data from magnetic media, since data can potentially still be retrieved even after it has been overwritten or formatted.
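As a brief sketch of the encryption idea (using the third-party Python "cryptography" package; the backup contents are a placeholder, and key storage is the hard part in practice):

    # Encrypt backup data before it is written to a portable drive, so the
    # data is useless to a thief who does not also have the key.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # keep this key somewhere safe, NOT on the drive
    cipher = Fernet(key)

    backup_data = b"customer records placeholder"   # stands in for the real backup file
    ciphertext = cipher.encrypt(backup_data)        # this is what gets written to the drive

    # Without the key the ciphertext is useless; with it, recovery is simple:
    assert cipher.decrypt(ciphertext) == backup_data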

Various devices

Flash drives

There are many portable storage devices available, the most popular being USB (Universal Serial Bus) drives or flash drives. While there are versions of flash drives that use other connection types, such as FireWire, this report will focus on the more common and universal USB connection. In today's society the increasing demand for ubiquitous computing is causing devices to become smaller and to have more memory capacity, with USB flash drives now around 10 cm or smaller and capable of storing between 8 MB (megabytes) and 64 GB (gigabytes). Retrieval of sensitive data would be extremely easy, assuming the attacker had unrestricted physical access as well as virtual access (for example, passwords are known and USB ports are not disabled). Because of USB drives' larger memory sizes compared with older technologies such as floppy disks, which hold only 1.44 MB, a potential thief is able to store copious amounts of data on such a device; large databases of sensitive information such as hospital and government records could be copied onto these devices with ease. USB devices have a limited number of write/erase cycles, and write operations gradually slow as the device ages. Running applications from a flash drive to breach data confidentiality, although viable, is therefore not the best option, since running software or an operating system involves a lot of read/write cycles; a better option would be a portable hard drive. Because of this, policies need to be made to restrict the execution of software from external drives (USB Flash Drive, Wikipedia 2006).

Key Logger

Not only could sensitive files be copied (assuming unrestricted access); other devices, such as key loggers, which store keystrokes entered on a keyboard, could also be attached. Devices like these could be used to gain access to confidential data at a later date through logged passwords and access methods. Furthermore, malware such as viruses, spyware and adware could be loaded from portable storage devices, either unintentionally or intentionally, leading to further attacks on data.

CD/DVD drives

Other devices include CD/DVD burners and external hard drives. These devices are less portable since they are much larger, making them harder to hide, so a user could more easily be caught breaching data confidentiality by a security administrator or staff member.

However, one could argue that in order for organisations to go about their daily proceedings they would need CD/DVD burners, thumb drives and external hard drives. In this scenario, software such as Device Shield, developed by Layton Technology, can be used; it allows the administrator to gain full control of every port, drive and individual device, ensuring the efficiency of the organisation is not compromised. Device Shield also captures a history of attempts to access blocked devices and ports, creating an audit trail if confidentiality is breached. Device Shield and similar software could be used in conjunction with policies on portable storage devices to create a secure working environment (Device Shield: Protection Against the Threat From Within, 2006).

References

Robb, D. (October, 2006) Backups gone badly retrieved October 16, 2006 from http://www.computerworld.com/action/article.do?command=viewArticleBasic&taxonomyName=&articleId=266212&taxonomyId=019&intsrc=kc_li_story

Latamore, G. B. (October, 2006) How to Back Up your PDA retrieved October 16, 2006 from http://www.computerworld.com/action/article.do?command=viewArticleBasic&taxonomyName=&articleId=265905&taxonomyId=019&intsrc=kc_li_story

No author, (2006) Sanctuary device control retrieved October 16, 2006 from http://www.securewave.com/sanctuary_usb_endpoint_security_software.jsp?gclid=CNCKkNPa_4cCFUpkDgodhkDnFw

Bolan C. (2006) Hardware Security and Data Security, Edith Cowan University, retrieved October 15, 2006 from MYECU lecture slides

Mearian, L. (March, 2006) IT Managers See Portable Storage Device Security Risk retrieved October 14, 2006 from http://www.computerworld.com/hardwaretopics/storage/story/0,10801,109680,00.html

Pfleeger, C. P., & Pfleeger, S. L. (2003). Security in Computing (3rd ed.). Upper Saddle River, NJ: Prentice Hall Professional Technical Reference.

No Author, (2006) Device Shield: Protection Against The Threat From Within retrieved October, 15, 2006 from http://www.deviceshield.com/pages/deviceshield.asp?crtag=google&gclid=CP7jntHa_4cCFUdtDgodTkbyHg

Wikipedia, (2006) USB Flash Drive retrieved October 16, 2006 from http://en.wikipedia.org/wiki/USB_Flash_Drive

Network Security

Packet spoofing, or IP spoofing, is the act of faking the source of a packet. A security attack like this undermines network security. Packet spoofing breaks the three qualities that a secure system has: confidentiality, integrity and availability. Confidentiality is keeping access to information limited to authorised parties. Integrity ensures that a system can only be modified by authorised parties and in authorised ways. Availability is ensuring that access to the network is not prevented: authorised parties should be able to access the system at appropriate times. All of the attacks mentioned below breach at least one of these qualities of a secure system (Pfleeger, 2003).

When a file such as a photo is sent over a network, whether a home network or the Internet, the photo is split into small pieces, and information about how to handle those pieces is encapsulated around them in the form of protocols. The header of each packet contains, amongst other things, the order in which the packets were sent. Packets will probably arrive out of order and must be put back together using that order value.
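As a small illustration of that reassembly step (the packet format below is a made-up stand-in, not a real protocol header):

    # Reassemble data from packets that arrive out of order, using a sequence
    # number carried with each piece.
    received = [
        (2, b"rld"),
        (0, b"hel"),
        (1, b"lo wo"),
    ]

    # Sort by sequence number, then concatenate the payloads.
    message = b"".join(data for _, data in sorted(received))
    print(message)  # b'hello world'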

Packet spoofing is possible because of vulnerabilities in the network protocols. A few examples of attacks that use spoofing are the masquerade (an attack on integrity), the smurf attack and the SYN flood (denial-of-service attacks on availability), and session hijacking (an attack on confidentiality).

Internet Protocol (IP)

Internet Protocol is a network protocol operating at layer 3 of the OSI model. The IP header of a network packet contains no information about its transaction state or whether the packet has properly reached its destination, and the addresses it carries are not verified. This vulnerability enables the source and destination IP addresses to be altered: by forging the source address so that it contains a different address, an attacker can make it appear that a packet was sent by a different machine.

Transmission Control Protocol (TCP)

TCP uses a connection-oriented design to send data; the participants build the connection with a three-way handshake.

The TCP header is different from the IP header but can still be manipulated using software. Amongst other things, the TCP header contains the sequence and acknowledgement numbers. The data in these fields ensures packet delivery by determining whether or not a failed packet needs to be resent: the sequence number is the number of the first byte in the current packet, whereas the acknowledgement number contains the value of the next expected sequence number. This confirms for both the client and the server that the proper packets were received.

A connection is established by a client, which must find an open port on the server. The client sends a SYN (synchronise) segment, which begins the synchronisation of sequence numbers between the two connecting computers. In response the server replies with a SYN-ACK, and the client then sends an ACK back to the server, completing the acknowledgement of the connection.
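The sequence and acknowledgement bookkeeping during the handshake can be sketched like this (illustrative values only; a real TCP stack chooses its own initial sequence numbers and exchanges real segments):

    # Simulate the numbers exchanged in a TCP three-way handshake.
    import random

    client_isn = random.randrange(2**32)   # client's initial sequence number
    server_isn = random.randrange(2**32)   # server's initial sequence number

    syn     = {"flags": "SYN",     "seq": client_isn}
    syn_ack = {"flags": "SYN-ACK", "seq": server_isn, "ack": syn["seq"] + 1}
    ack     = {"flags": "ACK",     "seq": syn["seq"] + 1, "ack": syn_ack["seq"] + 1}

    # Each side acknowledges the next byte it expects from the other side.
    assert syn_ack["ack"] == client_isn + 1
    assert ack["ack"] == server_isn + 1
    print("handshake complete:", syn, syn_ack, ack)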

Integrity attacks: Masquerade

Masquerade

In a masquerade, one host pretends to be another. A common masquerade attack involves alterations of domain names and websites. For example, bank.org and bank.com could be two separate websites: bank.org could be a legitimate bank, while bank.com could be a carbon copy of the original bank.org website used to collect sensitive data and information, using its own links and passing the connection on to the original site whilst collecting the victims' data. Through this technique an attacker has multiple avenues open, such as gaining access to computer systems by obtaining login names and passwords, altering records or stealing money, and thereby breaches the integrity of the network (Pfleeger, 2003).

Availability attacks: Denial of service (Smurf attack and SYN flood)

Although TCP ensures delivery of packets through a three-way handshake, the availability of a network cannot be guaranteed, and there are different types of denial-of-service attack. All of these attacks send a large number of messages to the system, which causes it to stop functioning. In both the smurf attack and the SYN flood the original source of the flood cannot easily be traced, as the attacker spoofs the messages to make them appear to come from another machine.

Smurf

The smurf attack uses spoofed broadcast ping messages to flood a target system. A large amount of Internet Control Message Protocol (ICMP) 'ping' traffic is sent to IP broadcast addresses. The devices on the broadcast network multiply the traffic, each replying to the original ping message. The number of "smurfable" networks has been greatly reduced nowadays due to better network management, although networks using old technology are still capable of being "smurfed" (Smurf Attack, Wikipedia 2006).

SYN flood

Similar to a smurf attack, a SYN flood occurs when an attacker sends a large number of SYN requests to a target system. As discussed, a TCP connection uses a three-way handshake consisting of a succession of synchronisation and acknowledgement messages. When an attacker sends a large number of SYN messages and never completes the handshake, the server is left waiting for ACK acknowledgement messages that never arrive. The flood of SYN messages exhausts the server's resources and hence makes it unavailable (SYN Flooding, Wikipedia 2006).

Confidentiality attacks: Session Hijacking

Session Hijacking

Session hijacking refers to the exploitation of a valid session key to gain unauthorised access to information or services in a network. Although session keys are normally randomised and encrypted to prevent session hijacking, a third party (the attacker) may intercept the traffic between two systems; the attacker then has access to the session, monitoring information and collecting data. In a similar attack, the man-in-the-middle attack, the hijacking usually begins at the start of the session between the two systems: the attacker uses the exchanged public key to decrypt the data and then encrypts it back into its original form to pass on to the receiver (Pfleeger, 2003).

Reference

Pfleeger, C. P., & Pfleeger, S. L. (2003). Security in Computing (3rd ed.). Upper Saddle River, NJ: Prentice Hall Professional Technical Reference.

Tanase, M. (2003) IP Spoofing: An Introduction retrieved October 15, 2006 from http://www.securityfocus.com/infocus/1674

No author, (2006) SYN flood Retrieved October 15 2006 from http://en.wikipedia.org/wiki/SYN_flood

No author, (2006) Smurf Attack Retrieved October 15 2006 from http://en.wikipedia.org/wiki/Smurf_attack

No author, (2006) Transmission Control Protocol: Connection establishment Retrieved October 15 2006 from http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Connection_establishment

Wide Area Networks and VPN

A Virtual Private Network (VPN) is a private communications network of two or more computers which uses encryption to provide a secure connection through the Internet. Sometimes VPN services are instead provided by third-party vendors who own physical lines and charge clients to use them, although using third-party vendors is, in the majority of cases, not very cost-effective.

Types of VPN

There are three types of VPN:

Intranet VPN-

Allows connectivity between remote locations of a single company and allows organisations to create LAN-to-LAN connections.

Extranet VPN-

Used when two closely related organisations, or a business and its customers, want to share data and information; it allows all of the connected LANs to work in a shared environment.

Remote access VPN-

Allows a user to access the VPN remotely without using a VPN device; instead the user connects via his or her own Internet service provider (ISP) and authenticates from there (see the diagram in Dennis, 2003, p. 207).

How it works:

A VPN connection will be set up with the following qualities:

Connection:

Each user must have some type of connection to the Internet, whether it is a simple dial-up connection or a faster T-carrier service such as a T4, giving effective speeds of 218 Mbps (Dennis, 2002).

Authentication:

Since VPNs place private data on a public network (the Internet) and users access it remotely, authentication measures must be used in order to combat potential threats to the data. These authentication techniques can be summarised into three categories:

Something you know, e.g. a login name

Something you have, e.g. a physical card key

Something you are, e.g. a fingerprint pattern

Encryption:

VPNs are very similar to private packet-switched networks, as both try to keep data private. Encrypting data is the only way of ensuring the privacy of the information being sent; modern encryption algorithms include symmetric ciphers such as DES, AES, RC5 and Blowfish, asymmetric algorithms such as RSA, or a more secure combination of both (Module 4: Crypto 1, 2005).

Tunnels and Encapsulation:

VPNs enable their users to create permanent virtual circuits (PVCs) called tunnels. These virtual circuits are defined for frequent and consistent use by the network and do not change unless changed by the network administrator or manager. The VPN devices send and receive packets through the Internet tunnel. The VPN encapsulates the data with additional protocols and frames which provide the receiving VPN device with the information it needs to process the new packet. These protocols go over the existing protocols that a piece of data would normally need in order to be transferred over the Internet, such as the Point-to-Point Protocol (PPP), the Internet Protocol (IP), the Transmission Control Protocol (TCP) and the Simple Mail Transfer Protocol (SMTP). The protocols act like wrapping paper with written addresses and instructions on how to handle the data. The VPN furthers this encapsulation for sending over the tunnelled network by adding the VPN tunnelling protocol, Layer 2 Tunnelling Protocol (L2TP). This is then wrapped in another IP header, since the packet is sent to the address of the remote VPN device. The final protocol is the data link framing, for example Synchronous Optical Network (SONET): each circuit on the Internet (T1, SONET OC-48, etc.) has its own data link protocol, so the VPN device surrounds the IP packet with the appropriate frame for the circuit that the final packet will be travelling on. On the receiving end, the receiving VPN device simply strips off, or decrypts, these outer protocols and recovers the original packet (Dennis, 2002).
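The extra wrapping added by the VPN device, as described above, can be pictured with a small sketch; the header names follow the description and the contents are illustrative only:

    # Nested wrapping a VPN device might apply around an ordinary IP packet
    # before sending it through the tunnel.
    inner_packet = {"IP": {"dst": "10.0.0.5", "payload": "TCP segment with HTTP data"}}

    tunnelled = {
        "SONET/T1 frame": {            # data link framing for the actual circuit
            "outer IP": {              # addressed to the remote VPN device
                "L2TP": inner_packet   # tunnelling protocol wrapping the original packet
            }
        }
    }

    # The receiving VPN device strips the outer layers and forwards inner_packet
    # onto its local network as if it had arrived directly.
    print(tunnelled)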

Advantages

Improve productivity for a business or organisation.

Reduce transit time and transportation costs for remote users, such as airplane tickets and petrol.

Simplify network topology and security in some scenarios.

Low cost compared with other options.

Disadvantages

Even though VPNs have dedicated tunnels, this does not mean a tunnel is dedicated to an individual user; it simply means there is a specific path that each packet must follow. This essentially means that if there are a lot of users online the network can become bottlenecked, because unlike other networks it cannot choose another address or path to follow while the traffic is inside the 'tunnel'.

(What is a virtual private network, no date)

References

Dennis, A. (2002). Networking In The Internet Age. United States of America: John Wiley and Sons, Inc

Tyson, J. (N.D.) How Virtual Private Networks work. Retrieved October 2, 2006 from http://computer.howstuffworks.com/vpn.htm

Howe, D. (1999) The Free On-line Dictionary of Computing, Retrieved October 3, 2006 from http://dictionary.reference.com/search?q=virtual%20private%20network

Wikipedia (September, 2006) Virtual Private Network, Retrieved October 3, 2006 from http://en.wikipedia.org/wiki/VPN

No author, (No Date) What is a virtual private network?, Retrieved October 3, 2006 from http://www.ciscopress.com/content/images/1587051796/samplechapter/1587051796content.pdf

No author, (2005) Module 4: Crypto 1, Retrieved October 3, 2006, Computer security lecture slides. Edith Cowan University from http://myecu.ecu.edu.au/webapps/portal/frameset.jsp?tab=courses&url=/bin/common/course.pl?course_id=_35490_1