Mesh Networks: Part 2

Military Perspective

Aside from efforts to tame mesh network technology for commercial deployment, the U.S. Government has spent significant time, money, and resources on the research, development, and field deployment of mesh networks for tactical military operations.  With any mesh network deployment, the addition or deletion of network nodes can alter the dynamic network topology, emphasizing the need for efficient network organization, link scheduling, and routing to contend with varying distance and power ratios between links. A military environment, however, imposes additional complications by enforcing low probability of intercept and/or low probability of detection requirements, which in turn pose stringent power and transmission requirements on every network node [4].

Tactical military operations must also contend with varying degrees of mobility that occur within the military’s echelon of four Divisions per Corps, four Brigades per Division, three Battalions per Brigade, four Companies per Battalion, and three Platoons per Company [13].  In this particular hierarchy, the often unpredictable nature of battle can dictate the need to merge and reconfigure sections of forces, disrupting the communication paths from node to node within Battalions, Companies, or other command structures. And while some engineers argue that alternatives to mesh networking exist to support communications in these battlefield conditions, others highlight the mesh network’s capability for instantly configurable, decentralized, redundant, and survivable communications in frontline battle areas or during amphibious or airborne operations where a clustered, ad hoc network configuration might consist of people, planes, ships, and tanks. In this military environment, mesh networks must contend with the military’s requirements for security, low latency, reliability, resistance to intentional jamming, and recovery from failure [1], [4].

The Joint Tactical Information Distribution System (JTIDS) provides one example of a repeater-based, full mesh military network architecture that uses airborne relay to perform base station functions such as routing, switching, buffering multiple packet streams, and radio trunking. Developed for air-to-air and air-to-ground communications, JTIDS consists of up to 30 radio nets each sharing a communications channel on a time division multiple access (TDMA) scheme with most nodes in the network containing minimal hardware and processing power. In this configuration, the loss of any node within a radio net would have no negative impact on communications connectivity [1].
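
To make the TDMA channel sharing concrete, the sketch below builds a simple round-robin slot schedule in Python. It is illustrative only: the net names, node counts, and slots-per-frame value are assumptions for demonstration, not actual JTIDS epoch or slot parameters.

# Illustrative only: a round-robin TDMA slot assignment, not the actual JTIDS
# epoch structure or slot timing.

def build_tdma_schedule(nets, slots_per_frame):
    """Assign each time slot in a frame to one (net, node) pair, round-robin.

    nets: dict mapping net id -> list of node ids sharing that net's channel.
    Returns a list of (slot_index, net_id, node_id) tuples.
    """
    schedule = []
    # Rotate through every node of every net so each gets a recurring slot.
    transmitters = [(net, node) for net, nodes in nets.items() for node in nodes]
    for slot in range(slots_per_frame):
        net_id, node_id = transmitters[slot % len(transmitters)]
        schedule.append((slot, net_id, node_id))
    return schedule

if __name__ == "__main__":
    nets = {"net-1": ["A", "B", "C"], "net-2": ["D", "E"]}
    for slot, net, node in build_tdma_schedule(nets, slots_per_frame=10):
        print(f"slot {slot:2d}: {net} -> {node}")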

In another example, the Army’s Communications Electronics Command oversees ITT Industries’ development of the Soldier Level Integrated Communications Environment (SLICE). Designed for voice communications and troop mapping functions, SLICE represents the latest in military mesh network capabilities. Originally conceived as the DARPA Small Unit Operations Situational Awareness System, SLICE supports simultaneous networking of voice, video, and data transfer with a waveform and media access protocol that yields effective communications in urban canyons and dense jungle environments. In its present form, SLICE consists of a backpack-size computer with a headset display and built-in microphone. By 2005, ITT expects SLICE to shrink to the size of a PDA.  With respect to SLICE, JTIDS, or any other military radio architecture, digitized battlefield communications define the warfighter landscape, with requirements for wearable, ruggedized personal computers capable of flawless performance under harsh conditions [14], [15], [16].

Final Thoughts

With low transmission power requirements and a multi-hop architecture, mesh networks increase the aggregate spectral capacity of existing nodes, providing greater bandwidth across the network. And since mesh networks transmit data over several smaller hops instead of spanning one large distance in a single hop, mesh network links preserve signal-to-noise ratios and decrease reliance on bandwidth-pinching forward error correction techniques [17]. In terms of scalability, mesh networks can accommodate hundreds or thousands of nodes with control of the wireless system distributed throughout the network, allowing intelligent nodes to communicate with one another without the expense or complication of having a central control point. Furthermore, these networks can be installed in a matter of days or weeks without the need for planning and site mapping for expensive cellular towers. As with other peer-to-peer router-based networks, mesh networks offer multiple redundant communications paths, allowing the network to automatically reroute messages in the event of an unexpected node failure. Thanks in part to standards efforts underway in the Internet Engineering Task Force (IETF) MANET Working Group, the design and standardization of algorithms for network organization, link scheduling, and routing will help facilitate the commercial acceptance of mesh network technology.
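
The claim that several short hops preserve signal-to-noise ratio can be illustrated with a back-of-the-envelope link budget. The sketch below uses a generic log-distance path loss model; the path loss exponent, reference loss, transmit power, and distances are assumed values chosen for illustration, not measurements from any particular mesh product.

# A back-of-the-envelope comparison (assumed numbers, not measured data) of
# received power for one long hop versus several short hops, using a simple
# log-distance path loss model: PL(d) = PL(d0) + 10 * n * log10(d / d0).

import math

def path_loss_db(distance_m, exponent=3.0, ref_loss_db=40.0, ref_distance_m=1.0):
    """Log-distance path loss in dB, assuming an urban-like exponent of 3."""
    return ref_loss_db + 10 * exponent * math.log10(distance_m / ref_distance_m)

tx_power_dbm = 20.0     # assumed transmit power per node
single_hop_m = 1000.0   # one node spanning the full distance
hops = 4                # same distance covered in four shorter hops

single_hop_rx = tx_power_dbm - path_loss_db(single_hop_m)
multi_hop_rx = tx_power_dbm - path_loss_db(single_hop_m / hops)

print(f"Received power, one {single_hop_m:.0f} m hop: {single_hop_rx:6.1f} dBm")
print(f"Received power, each of {hops} short hops:  {multi_hop_rx:6.1f} dBm")
# Each short hop arrives roughly 18 dB stronger in this example, so the
# per-link SNR is higher and less forward error correction overhead is needed.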

Despite their potential to provide a more sophisticated WLAN alternative, mesh networks must effectively address security issues with end-device and router introduction, user data integrity, device control and authentication, and network authentication. Aside from security issues, the RF-independent, self-forming, and self-healing characteristics these networks display come at the expense of complex and power-intensive computer processing. Even in static environments with all nodes stationary, mesh network topologies remain dynamic due to variations in RF propagation and atmospheric attenuation. With mobile nodes, a mesh network’s constantly shifting topology dictates the need for dynamic routing allocation, resource management, and quality of service management – all of which must be precisely choreographed to ensure optimum performance and reliability. Skeptics also contend that as ad hoc multi-hop networks grow, performance tends to deteriorate, due in part to the excessive traffic control overhead required to maintain quality of service along a multi-hop path beset by inconsistencies in routing and connectivity as nodes are added and dropped. Also, the network must handle multiple access and collision problems associated with the broadcast nature of RF communications. Regardless of these technical hurdles, researchers at Intel continue to push the research and development envelope in an effort to design a 100 Mbps mesh network where every network element (PC, PDA, mobile phone, etc.) could act as a data relay and link itself to all the devices in an intelligent network [10], [12], [17], [19].

With the ability to deploy a widespread coverage network without towers, mesh networks pose a viable alternative to traditional cellular architectures. Labeled as a potentially disruptive fourth-generation technology, QDMA-based mesh networks aren’t alone in their quest for the ultimate radio communications system capable of operating in unlicensed spectrum. Though technologically disparate from QDMA-based networks, ultra wideband (UWB) mesh networks present one alternative to MeshNetworks, Inc.’s proprietary QDMA-based software, thanks in part to recent FCC rulings approving limited usage of UWB devices. Several companies are championing the development of UWB networks, which promise data rates of 100 Mbps at very low power levels over a wide bandwidth from 1 to 10 GHz. By employing time-modulated digital pulses in lieu of continuous sine waves, mesh networks with UWB technology can send signals at very high rates in wireless communication environments that suffer from severe multipath, noise, and interference. Whether UWB mesh networks or QDMA-based mesh networks will prevail remains to be seen. Some analysts give the edge to UWB as an open standard, which is steadily gaining support in commercial and military markets. Either way, the continued development of mesh networks for military and commercial markets holds promise for a radical shift in the way we view the world of wireless communications [18], [20].

References

1      “Alternative Architectures for Future Military Mobile Networks,” Obtained April  7, 2003 from URL: http://www.rand.org/publications/MR/MR960/MR960.chap3.pdf

2      Poor, Robert, “Wireless Mesh Networks,” Sensors [on-line], February 2003. http://www.sensorsmag.com/articles/0203/38/main.shtml.

3      Braunschweig, Carolina, “Wireless LANs Could Turn Into a Big Mesh,” Private Equity Week [on-line], February 3, 2002. http://www.ventureeconomics.com/vec/1031551158703.html

4      “Project: Wireless Ad Hoc Networks,” NIST.  Obtained April 8, 2003 from URL: http://w3.antd.nist.gov/wctg/manet/

5      “QDMA and the 802.11b Radio Protocol Compared,” MeshNetworks: Technology, [on-line]. Obtained April 9, 2003 from URL: http://www.meshnetworks.com/pages/technology/qdma_vs_80211.htm

6      Blackwell, Gerry, “Mesh Networks: Disruptive Technology?” 802.11 Planet [on-line].  Obtained April 8, 2003 from URL: http://www.80211-planet.com/columns/article.php/961951.

7      Black, Uyless (1993). Computer Networks: Protocols, Standards, and Interfaces. Second Edition. New Jersey: Prentice Hall.

8      Stroh, Steve, “MeshNetworks – From the Military Battlefield to the Battlefield of Modern Mobile Life,” Shorecliff Communications [on-line], Vol. 2, No. 2, February 2001. http://www.shorecliffcommunications.com/magazine/print_article.asp?vol=10&story=85

9      Morrissey, Brian, “The Next 802.11 Revolution,” Internet News [on-line], June 13, 2002. http://www.internetnews.com/wireless/article.php/136561

10   Rubin, Izhak, and Patrick Vincent, “Topological Synthesis of Mobile Backbone Networks for Managing Ad Hoc Wireless Networks,” Electrical Engineering Department, University of California Los Angeles, 2001.

11   Krane, Jim, “Military Networks Trickling into Civilian Hands,” The Holland Sentinel [on-line], December 8, 2002. http://www.thehollandsentinel.net/stories/120802/bus_120802072.shtml

12   Fowler, Tim, “Mesh Networks for Broadband Access,” IEE Review, January 2001.

13   Graff, Charles et al., “Application of Mobile IP to Tactical Mobile Internetworking,” IEEE Magazine, April 1998.

14   “ITT Industries Awarded $44 Million to Develop Advanced Soldier Communications System,” PR Newswire [on-line], November 25, 2002. http://www.cnet.com/investor/news/newsitem/0-9900-1028-20696617-0.html

15   “Mesh Networks Keep Soldiers in the Loop,” Associated Press [on-line], January 27, 2003. http://www.jsonline.com/bym/Tech/news/jan03/113806.asp

16   Omatseye, Sam, “The Connected Soldier,” RCR Wireless News, March 17, 2003.

17   Krishnamurthy, Lakshman et al., “Meeting the Demands of the Digital Home with High-Speed Multi-Hop Wireless Networks,” Intel Technology Journal, Volume 6, Issue 4 [on-line], November 15, 2002. http://developer.intel.com/technology/itj/index.htm

18   Smith, Brad, “Smell the Coffee: Disruptive Technologies on the 2002 Horizon,” Wireless Internet Magazine, January 7, 2002. http://www.wirelessinternetmag.com/news/020107/020107_opinion_brad.htm

19   Ward, Mike, “Promise of Intelligent Networks,” BBC News [on-line], February 24, 2003.  http://news.bbc.co.uk/2/hi/technology/2787953.stm

20   Barr, Dale, “Ultra-Wideband Technology,” Office of the Manager, National Communications System Technical Notes, Volume 8, Number 1, February 2001.

Mesh Networks: Part 1

Abstract

Conceived by the U.S. Military, mobile ad hoc networks, commonly known as mesh networks, provide end-to-end Internet Protocol (IP) communications for broadband voice, data, and video service combined with integrated geographical location logic designed to function in a mobile wireless environment. Unlike 802.11 wireless local area networks (WLANs) and point-to-multipoint digital cellular networks, mesh networks accommodate a more dynamic operational environment where their radio frequency (RF)-independent, self-forming, and self-healing properties meld the best of both worlds between WLAN and cellular systems. This paper examines the concept of mesh networks with a look at recent commercial and military development of what some consider a disruptive, next-generation wireless communications technology.

Introduction

Loosely speaking, mesh networks form a wireless Internet where any number of host computing nodes can route data point-to-point in an intricate web of decentralized IP links built upon many of the routing features first employed by earlier packet radio networks [4]. Born from a heritage of 1960s and 1970s packet data radios designed to provide reliable communications for connectionless, non-real-time traffic, today’s mesh networks have evolved to provide multicast IP traffic with real-time requirements [1]. In essence, mesh networks extend the concept of packet data radio communications by using sophisticated digital modulation schemes, traffic routing algorithms, and multi-hop architectures that challenge the laws of physics by using minimal transmission power to increase data throughput over greater distances. With mesh networks, any node within the network can send or receive messages and can relay messages for any one of its hundreds or thousands of neighboring nodes, thus providing a relay process where data packets travel through intermediate nodes toward their final destination. In addition, automatic rerouting provides redundant communication paths through the network should any given node fail [2].  This ability to reroute across other links not only provides increased reliability but extends the network’s reach and transmitting power as well. This resilient, self-healing nature of mesh networks stems from their distributed routing architecture where intelligent nodes make their own routing decisions, avoiding a single point of failure. Because mesh networks are self-forming, adding additional nodes involves a simple plug-and-play event [3]. And because mesh networks don’t rely on a single access point for data transmissions, users of this technology can extend their communication reach beyond a typical WLAN. Furthermore, mesh networks and their low power, multi-hopping ability allow simultaneous transmissions to reach nearby nodes with minimal interference [17].
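
As a minimal sketch of the rerouting behavior described above, the following Python fragment computes a route with breadth-first search over a hypothetical topology and simply recomputes it when a relay node fails. Real mesh routing protocols maintain routes proactively or on demand; this example only illustrates the idea of falling back to an alternate path.

# A minimal sketch (hypothetical topology and node names) of the self-healing
# rerouting described above: compute a path with breadth-first search, and when
# a relay fails, recompute over the remaining links.

from collections import deque

def shortest_path(links, src, dst, failed=frozenset()):
    """Breadth-first search over an adjacency dict, skipping failed nodes."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbor in links.get(node, []):
            if neighbor not in visited and neighbor not in failed:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route available

links = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C", "E"],
    "E": ["D"],
}
print(shortest_path(links, "A", "E"))                # e.g. A -> B -> D -> E
print(shortest_path(links, "A", "E", failed={"B"}))  # reroutes via C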

Achieving this self-forming, self-healing utopia with minimal power and signal interference involves the implementation of sophisticated routing logic within the software and hardware to account for minimum latency and maximum throughput, as well as to provide maximum security and reliability [7].

As with all radio frequency (RF) communication systems, mesh networks must contend with noise, signal fading, and interference; unlike other RF systems, however, they deal with these impairments through an air interface protocol originally designed to provide reliable battlefield communications.  Known as quad division multiple access (QDMA), this air interface provides the driving force behind mesh network capabilities. Conceived by Military Commercial Technologies (MILCOM) and a communications division of ITT Industries, QDMA allows mesh networks to facilitate higher throughput without sacrificing range – or to extend transmission range without sacrificing throughput. QDMA supports low-power, high-speed broadband access in any sub-10 GHz frequency band, providing non-line-of-sight node linking that dramatically increases signal range. Geared toward wide area mobile communications, QDMA compensates for wild fluctuations in signal strength with powerful error correction abilities and enhanced interference rejection that allows multi-megabit data rates – even from a mobile node traveling at 100 mph and beyond. And with shorter distances between network nodes, the resulting decrease in interference between clients provides for more efficient frequency reuse. Furthermore, QDMA offers highly accurate location capabilities independent of the satellite-based global positioning system (GPS) [2], [4], [5], [6].

Commercial Deployments

Following the inception of QDMA and its subsequent commercialization, venture capital firms have invested more than $100 million since 2001 in the continued design and development of mesh networks that could ultimately compete with IEEE’s 802.11b [3].  One firm, appropriately named MeshNetworks, has adopted the QDMA technology with direct sequence spread spectrum (DSSS) modulation in the 2.4 GHz industrial, scientific, and medical (ISM) band, providing 6 Mbps burst rates between two terminals. Backed by almost $40 million in venture funding from 3Com Ventures, Apax Partners, and others, MeshNetworks signed its first customer, Viasys Corporation, in November 2002. Eventually, MeshNetworks plans to offer its networking capability in the 5 GHz unlicensed national information infrastructure (UNII) band [8].  For now, MeshNetworks, headquartered in Maitland, Florida, is testing a 2.4 GHz prototype in a five-square-mile test network around its Orlando-area headquarters with an FCC experimental license to build a 4,000-node nationwide test network [6]. To maintain Internet connectivity, MeshNetworks relies on multi-hop routing between nodes mounted on buildings, light poles, vehicles, and end-user devices [17]. Aside from designing prototype routers, relays, and PDA-size client devices, MeshNetworks plans to offer a software overlay solution for 802.11b clients in existing networks, effectively extending the range and link robustness of existing Wi-Fi networks through mesh-style multi-hopping [6].  Furthermore, MeshNetworks recently announced a deal with auto-parts manufacturer Delphi to test the feasibility of mesh networks in a telematics environment [9].  MeshNetworks’ competitors include FHP Wireless, which recently announced its formal launch date in March of 2003, and Radiant Networks from Cambridge, U.K., which has deals in place with British Telecom, Mitsubishi, and Motorola [3].

Interestingly, each of these potential mesh network providers will face a similar network coverage dilemma, a sort of catch-22 where the ability to expand network coverage hinges on the deployment of new subscribers whose mobile nodes will act as router/repeaters for other nodes. In this scenario, requirements for expanded coverage dictate the need for more subscribers – but the service provider can’t solicit new subscribers until the coverage extends to the new subscribers’ area.  To resolve this, MeshNetworks and Radiant Networks supply ‘seed nodes’ mounted on telephone poles or streetlights for initial coverage and redundancy with the level of required seeding determined by specific business objectives [10], [12].

Biometrics and E-Medicine: Marvel or Mayhem: Part 2

Case Studies

According to recent statistics cited by the First Consulting Group, a Long Beach, California organization that specializes in health care consulting, only 3% to 5% of healthcare provider organizations have deployed biometrics [8].  Despite these statistics, several successful biometric pilots and full-scale implementations exist across the country. Consider Washington Hospital Center in Washington, D.C., a 975-bed not-for-profit hospital that implemented an iris-scanning system to increase security for its integrated medical record system – or Lourdes Hospital in Paducah, Kentucky, which implemented NEC Technology’s HealthID finger-scan system in 1998 and currently stores 15,000 to 20,000 fingerprints in a patient and physician database [8]. In another example of a successful biometric implementation, Moffitt Cancer Center, a 160-bed research hospital in Tampa, Florida, tested 60 biometric devices in early fall of 2002 with full rollout of 1,000 devices expected by June 2003 [11].  In the fall of 2000, the 660-bed Jackson-Madison County General Hospital implemented Identix fingerprint technology for 315 employees and affiliated physicians [9].  In April of 2002, the 281-bed Columbus Children’s Hospital in Ohio deployed a comprehensive program that requires the more than 1,000 doctors, nurses, and pharmacists who access patient medical records and enter medication orders by computer to authenticate via fingerprint scan [12]. North Florida Medical Centers in Tallahassee deployed biometric security solutions to more than 100 users over a 6 to 8 month period [10].  Lastly, Children’s Hospital in Dallas plans to implement a single-sign-on application with iris scanning, fingerprint biometrics, or a combination of the two in 2003 [13].

In each of these case studies, biometrics were deployed with an implementation approach and solution methodology tailored to each healthcare provider’s needs. With this approach, biometrics provide an effective means to address HIPAA mandates for secure access, storage, maintenance, and transmission of identifiable healthcare information between patients and hospital staff.  Combined with proper user training, IT support, and appropriate fallback measures, biometric technologies can successfully integrate with people and policy criteria.  And while many successful biometric deployments exist today, challenges lie ahead. From a technology perspective, biometric finger-scan devices remain susceptible to dust and dirt accumulation on the capture device itself.  Excessively dry or oily skin can also disrupt a finger-scan system and produce inaccurate readings. Voice authentication systems, though great for certain telecommunication applications, perform poorly in noisy environments. From a people perspective, inconsistent usage, poor training, or simple reluctance to use the biometric system can negatively impact a biometric deployment. From a policy perspective, no biometric system can guarantee 100% successful enrollments within the user population, dictating the need for secure, accurate, and reliable fallback procedures. For many healthcare organizations, the cost of meeting HIPAA requirements through the use of biometric applications remains a strong deterrent, with some full-scale biometric implementations costing hundreds of thousands of dollars [11], [12]. Others argue that the cost of a biometric deployment pales in comparison to the legal fees incurred from attorneys hired to review privacy and security plans [14].

Conclusion

Do biometrics provide the ultimate cure for compliance with HIPAA security requirements?  Of course not – in much the same way that no specific technology resolves all the issues encountered in a complex enterprise security infrastructure with multiple workstations, user groups, software applications, and a litany of other variables to contend with.  Biometrics do, however, provide a robust, secure, and highly reliable means of user authentication. Biometrics also offer unprecedented logging and audit trail capabilities. When used in conjunction with single-sign-on applications, biometrics free healthcare providers from the hassles of the login/logout merry-go-round.  In the end, a poorly planned, arbitrary application of biometrics can do more harm than good, but a well-defined, well-designed application of this technology can provide a mature, scalable foundation from which to satisfy HIPAA security requirements.

Biometrics and E-Medicine: Marvel or Mayhem: Part 1

This paper explores the application of biometric technology as a viable approach to fulfill the security standards mandated by the Health Insurance Portability and Accountability Act (HIPAA). Through analysis of hospital and healthcare organization case studies, this paper examines the ability of biometrics to provide a safe, secure, and reliable means of user authentication via desktop and Internet applications. In addition, an unbiased look at the impact of biometrics with respect to people, policies, and existing security infrastructures provides valuable insight for healthcare industry leaders grappling with HIPAA security requirements.

Introduction

Much has been published in recent months regarding the use of biometrics as a skeleton key solution designed to free the healthcare industry from the shackles of security compliance standards mandated by the Health Insurance Portability and Accountability Act (HIPAA). And while it’s true that biometrics provide a viable alternative to more traditional user authentication mechanisms like PINs, passwords, and magnetic swipe cards, HIPAA remains technology neutral, placing emphasis on when and why a security solution must be implemented rather than on how.  So why all the hype surrounding biometrics and their potential to satisfy HIPAA security requirements? The answer is complicated and remains elusive without a better understanding of the various components that constitute an overall healthcare security infrastructure, a complex paradigm encompassing the confidentiality of patient records, as well as electronic access to patient information via multiple applications and platforms.

A secure, reliable, and inherently flexible healthcare security infrastructure contains the following four components described by the inner loop in Figure 1: authentication, authorization, digital signatures, and network security [1].  These four components stem from a public key infrastructure (PKI) designed to govern electronic transactions and provide a framework for securely delivering healthcare information across the Internet.

With hospitals and healthcare organizations required to provide patients with secure access to medical data over landline, wireless, and Internet applications, biometrics play a critical role in the user authentication space, addressing the question, “Are you who you claim to be?” And since a person’s biometric trait cannot be lost, stolen, or in most cases forged, biometrics provide stronger authentication security than passwords or token ID systems alone. In a sense, biometric authentication constitutes the first line of defense, followed by security authorization, which must determine whether or not a person has access privileges to a particular system. Digital signatures for Internet transactions handle non-repudiation, or the ability to guarantee that the authenticated individuals actually participated in the transaction. Network security provides the information assurance umbrella to protect the security system from unauthorized use as well as provide confidentiality of communication through encryption methods. Together, these four components of authentication, authorization, digital signatures, and network security form a sort of security nucleus with biometric authentication technology at the core and a PKI environment surrounding it.  In turn, biometrics within the PKI environment provide significant support for the five overarching HIPAA requirements.  The first of these requirements addresses electronic transactions, dictating the need for standardized code sets for encoding data elements involved in the electronic transaction of healthcare claims, healthcare payment and remittance advice, benefit coordination, and other transactions. Privacy of individually identifiable health information establishes regulations that include consent, authorization notices, disclosure audits, and grievance procedures. Security rules define standards intended to protect confidentiality, integrity, and availability of healthcare information through technology-neutral and technology-scalable means. Administrative procedures dictate rules for access, whereas network security governs rules for logical network access and physical access controls for data rooms, equipment control, disaster recovery, and general facility access [15].
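
As an illustration of the non-repudiation role played by digital signatures within such a PKI, the sketch below signs and verifies a hypothetical order record using the third-party Python cryptography package. The key size, padding choice, and record contents are illustrative assumptions; a production system would bind the public key to the signer through a certificate authority.

# A minimal non-repudiation sketch using the third-party "cryptography" package
# (pip install cryptography). Key size, padding, and the record are illustrative;
# a real deployment would anchor the public key in a PKI certificate.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

record = b"order: amoxicillin 500 mg; patient-id: 12345; physician-id: 678"

# The authenticated physician signs the transaction with a key only they hold.
signature = private_key.sign(record, padding.PKCS1v15(), hashes.SHA256())

# Anyone holding the corresponding public key can later prove who signed it.
try:
    public_key.verify(signature, record, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: transaction cannot be repudiated")
except InvalidSignature:
    print("signature invalid: record altered or key mismatch")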

Technology, Policy, and People

With biometrics at the core, the deployment of biometric application software within a PKI environment can positively impact each of the five general HIPAA regulations by securing entire networks and all associated applications running across the healthcare continuum, including applications for computerized physician order entry systems, time and attendance logs, user audit trails, patient identification, data access, and more. But technology alone will not satisfy HIPAA compliance requirements, and healthcare organizations that embrace technology as a silver bullet solution to their HIPAA woes are in for a harsh ride when they realize 75% of HIPAA governs policies and procedures [7]. Unlike password guidelines, however, biometric policies dictate that you can’t share a finger or an eyeball when it’s time to authenticate on the system.  And in most cases, unless you’re prone to playing with live grenades or staring at the sun with a magnifying glass, you can’t lose your biometric attribute the way you lose a magnetic swipe card or a personal identification number scribbled on the notepad in your drawer.  Unfortunately, too much attention is placed on the technology for technology’s sake and not enough on researching and establishing relevant security policies that define access privileges, fallback procedures, equipment maintenance schedules, and so on.

Along with security policy, people play a critical role in defining, implementing, and enforcing an effective biometric authentication solution. Regardless of the chosen technology and policy application, the appropriate personnel must define the security policy and ensure that users obey the rules and procedures described therein.  End users decide whether a particular technology suits their tastes or not.  Some end users find fingerprinting distasteful because of the negative connotation associated with law enforcement applications. Others find iris-scanning too invasive and perpetuate false concerns about potential damage to their eyes.   People, not technology or policy, make judgments about their personal comfort level with a given technology or system. If users are uncooperative, the biometric system can fail. If people neglect to follow directions when authenticating, the system will produce significant errors. If users don’t understand the importance of obtaining a quality enrollment image or the importance of consistent biometric presentation, the system will produce inconsistent results.  The technical and non-technical issues involving people are plentiful, and no authentication system, biometric or otherwise, will completely eliminate the need for some form of human intervention.

With an understanding of the roles that technology, policy, and people play in the overall establishment and execution of a security infrastructure with biometrics at the core, we are better equipped to answer the initial question regarding the use of biometrics to satisfy HIPAA security requirements.  In essence, biometrics provide a vehicle that can work equally well for physicians, nurses, administrative staff, and patients – all of whom must coexist in a dynamic healthcare environment shaped by technology, policies, and people.  Biometrics not only provide an effective means of user authentication, but also an effective means of integrating disparate information systems that communicate over wireline, wireless, and Internet paths both locally within a hospital setting and remotely at end user locations. The integration of biometrics with other technologies and the appropriate people and policies goes a long way toward fulfilling HIPAA security requirements.  From a patient care perspective, biometrics allow multiple users to share a workstation while preserving the authentication and audit trails for each user [7].  In turn, physicians and nursing staff can focus more attention on patient care and less on logging in and logging out of various applications. Furthermore, biometrics facilitate patient admission, speed access to prior medical records, and eliminate duplicate medical records [7].  In short, biometrics provide an effective means of managing access to patient records, preventing unauthorized use of system resources, and ensuring higher levels of information security.  It should be noted, however, that biometrics are not without fault and must be properly introduced to meet a particular healthcare provider’s needs, a facet of this technology often overlooked in a system integrator’s haste to deploy a quick-fix solution.

Lie Detection Systems: Part 2

1. Facial Thermography

Facial thermography represents a safe, non-invasive technology that measures skin surface temperature in real time.  Like VSAs, facial thermography has an advantage over polygraph systems in that its non-invasive nature lends itself to covert applications.  Recently, Doctors B. M. Gratt and E. A. Sickles from the University of California at Los Angeles used microwave receivers to measure thermal radiation emitted from a human face and analyze blood flow differences between different regions of the face.  Last year, Doctors Levine, Pavlidis, and Cooper refined the concept of facial thermography to explore the fact that specific activities are associated with characteristic facial thermal signatures.

One 25-person pilot study conducted in 2002 between DoDPI and outside researchers examined the possible utility of a new thermal imaging device that measures the radiant energy emitted from an examinee’s face.  The published report, which focused on thermal imaging as an adjunct or potential alternative to traditional polygraph measurements, claimed that thermal imaging results achieved higher accuracy than the polygraph.  However, according to the 2003 National Research Council report on polygraph testing, the pilot study conducted by DoDPI failed to provide acceptable scientific evidence to support facial thermography as a viable method for detecting deception.

1.1 Conclusions

Similar to other veracity studies done on emerging lie detection methods, studies based on facial thermography draw conclusions from small sample sizes, uncontrolled environments, uncooperative subjects, inconsistent judging criteria, and other variables that detract from the scientific basis on which successful results are often cited.  Unless new research can provide acceptable scientific evidence to support facial thermography as a viable alternative to the polygraph, the concept of using thermal imaging as a method for lie detection will likely remain an adjunct to traditional polygraph measurements.

2. Functional Brain Imaging

Functional brain imaging looks at brain function more directly than polygraph testing by using positron emission tomography (PET) and magnetic resonance imaging (MRI); MRI employs strong magnetic fields to induce brain tissue molecules to emit distinctive radio signals used to monitor blood flow and oxygen consumption in the brain.  Within the context of MRI, the detection of blood-oxygen-level-dependent (BOLD) signals has garnered the name functional magnetic resonance imaging (fMRI).  Research studies are focused on using fMRI to analyze knowledge and emotion characteristics theorized to link deception to physiological brain activity.  In addition, other research areas have focused on combining PET and fMRI with simultaneous measurements of skin conductance response.  Scientists are quick to point out, however, that applied fMRI studies completed thus far have similar limitations to earlier polygraph studies.  Furthermore, they point out that fMRI analysis is expensive, time-consuming (2-3 hours per examination), and highly sensitive to subject motion during the brain scan.  To overcome some of these potential deficiencies for veracity applications, some researchers suggest the use of an electroencephalograph (EEG), which directly measures the electrical output of the brain rather than attempting to map brain activity from blood flow patterns.  One EEG study conducted by Jennifer Vendemia, a researcher from the University of South Carolina, suggests that predictable patterns of fluctuating brain activity occur when people lie.  The correlation between brain activity and lying is nothing new, but the fact that researchers continue to explore this path makes the possibility of brain imaging a potential candidate to supplement or eventually replace traditional polygraph techniques.

2.1 Conclusions

Psychology professor John Gabrieli predicts that within ten years research advances in neurotechnology could yield brain scanners in schools and airports.  One Iowa company called Brain Fingerprinting Laboratories claims it has developed technology that can identify specific brain wave patterns that people emit when they are looking at or discussing something they have already seen.  Furthermore, psychiatrist Daniel Langleben from the University of Pennsylvania School of Medicine has found that increased activity in several brain regions is visible in an fMRI scan when people lie.  However, Doctor Langleben also contends that lying is a complex behavior and that it is likely to be linked to a large number of unknown brain sites.

Lie Detection Systems: Part 1

1. Introduction

Techniques for lie detection have existed for decades through the use of interviews, interrogations, and other means based on little scientific merit.  Today, modern lie detection techniques rely on measurable physiological responses, which serve as indicators of deception.  This White Paper explores the concept of veracity by summarizing the following physiological techniques used for either overt or covert lie detection scenarios:

  • Polygraph
  • Voice Stress Analyzers
  • Facial Thermography
  • Functional Brain Imaging

2. Polygraph

Considered one of the best known and most widely utilized lie detection techniques in the U.S. and other countries like Israel, Japan, and Canada, the polygraph provides U.S. law enforcement and intelligence agencies with a tool that combines interrogation with physiological measurements obtained during the polygraph examination.  By recording a person’s respiration, heart rate, blood pressure, and electrical conductance at the surface of the skin, the polygraph examination applies to overt scenarios where a subject is asked a series of yes/no questions while wired sensors relay data about the person’s physiological attributes.  In addition to these traditional measurements of involuntary and somatic activity, other physiological events can be recorded non-invasively, including cardiac output, total peripheral resistance, skin temperature, and vascular perfusion in cutaneous tissue beds.

Trained polygraph practitioners emphasize that the polygraph instrument itself measures levels of deception indirectly, by measuring physiological responses that are believed to be stronger during acts of deception than at other times.  Collectively, these patterns of physiological responses to relevant questions asked by an investigator are recorded on an analog or digital chart and require human interpretation from the polygraph examiner.  Aside from interpretation of the polygraph chart, which can also be somewhat automated by computer algorithms, other factors that assist or inhibit the polygraph instrument’s ability to accurately perform a lie detection function include the potential for influence from drugs or alcohol, examiner’s expectations about the examinee’s truthfulness, and adverse physiological responses that have no direct correlation to the examinee’s intent to deceive.

Aside from its use as a diagnostic tool to test for deception where truth or deception decisions are made based on charts that are analyzed and scored, the polygraph has other practical applications such as:

  • Eliciting admissions from people who believe or are influenced to believe that the polygraph machine will accurately detect their attempts at deception.
  • Testing the level of cooperation with an investigative effort through suspicion or detection of countermeasures used by the examinee during polygraph testing.

Overall, polygraph examinations are considered an effective tool for lie detection when combined with information from other sources used to judge truthfulness or deception (e.g., pretest interviews, comparison question testing, observation of the examinee’s demeanor).

2.1 Commercially Available Polygraphs

Commercially available digital polygraph systems consist of either a complete, integrated hardware/software system on a laptop PC or a stand-alone data acquisition system that can be connected to an existing computer.  Some systems include scoring algorithm software and/or peripheral hardware used for motion sensing or to measure additional physiological parameters.  The following list describes a few of the commercially available polygraph products:

  • LX4000 – manufactured by Lafayette Instrument.  Records, stores, and analyzes physiological characteristics derived from respiration, galvanic skin response, and blood volume/pulse rate.
  • 4-6 Channel S/Box Package – from Axciton Systems, Inc.  Provides a customized polygraph system designed to accommodate 4, 5, or 6 channels of physiological parameters.
  • Computerized Polygraph System (CPS) – manufactured by Stoelting Polygraphs.  Claims to be the only computerized polygraph system containing a scoring methodology based on verified criminal data from a major government law enforcement agency.

2.2 Conclusions

Although the polygraph has been the subject of hundreds of controlled scientific studies regarding its effectiveness, a final and concrete determination of the polygraph’s accuracy still hinges on research information contained in classified national security documents as well as proprietary information about computer scoring algorithms and other trade secrets that equipment vendors will not divulge.  The American Polygraph Institute cites accuracy estimates of 70% among polygraph skeptics and 90% among proponents.  A 2003 report conducted by the National Academy of Sciences, which examined 57 previous polygraph studies to quantify the accuracy of polygraph testing within the scope of personnel security screening, concluded, “The inherent ambiguity of the physiological measures used in the polygraph suggests that further investments in improving polygraph technique and interpretation will bring only modest improvements in accuracy.”  This study also pointed out that polygraph countermeasures deployed by major security threats could seriously undercut the value of polygraph security screening.  Nonetheless, this same study concluded that the polygraph technique is the best tool currently available to detect deception and assess credibility.

3. Voice Stress Analyzers

Touted as a lower cost, less invasive lie detection method, commercially available voice stress analyzers (VSAs) have been in use since the early 1970s through efforts between private industry and the U.S. Army.  Based on the presumption that liars experience more stress than truth-tellers, a VSA works by measuring microtremors associated with laryngeal muscles used during voiced excitation.  Microtremors are defined as inaudible vibrations that speed up uncontrollably in the human voice during an act of deception.  The level of microtremor maintains an inverse relationship to a person’s stress level, where more stress denotes less tremor.  Slow microtremors occur at rates of 3-5 Hz, while more rapid tremors occur at 6-12 Hz.  Microtremors can be affected by numerous variables, including age, stress, drugs, alcohol, medical illness, brain disorders, and multiple sclerosis.  Major issues surrounding VSA validity and accuracy remain focused on how stress impacts the laryngeal muscles during normal speech production and whether VSA speech processing algorithms can effectively extract and quantify the existence of microtremor information (a simple band-pass filtering sketch follows the list below).  Proponents of VSAs point out several benefits of using their equipment in lieu of more traditional polygraph techniques, namely:

  • Applicability to covert scenarios.
  • Less training time required to learn and operate.
  • No academic prerequisites for training.
  • 30-50% less time to administer the testing regimen.
  • Voice recordings can be processed as well as live speech.
  • Lower cost of ownership.
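
Commercial VSA algorithms are proprietary, so the following is only a simplified sketch of how the low-frequency microtremor band described above might be examined: it band-pass filters the amplitude envelope of a speech signal between 3 and 12 Hz and reports the energy in that band. The sampling rate, filter order, band edges, and synthetic test signal are all assumptions made for illustration.

# A highly simplified sketch, not a vendor algorithm: isolate low-frequency
# (roughly 3-12 Hz) fluctuations in a speech amplitude envelope, one way the
# microtremor band described above could be examined.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def microtremor_energy(speech, fs, low_hz=3.0, high_hz=12.0):
    """Return the mean power of the 3-12 Hz band of the speech envelope."""
    envelope = np.abs(hilbert(speech))       # amplitude envelope of the signal
    b, a = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs)
    tremor_band = filtfilt(b, a, envelope)   # keep only the 3-12 Hz wobble
    return float(np.mean(tremor_band ** 2))

if __name__ == "__main__":
    fs = 8000
    t = np.arange(0, 2.0, 1 / fs)
    # Synthetic "voiced" tone whose amplitude wobbles at 8 Hz.
    speech = (1.0 + 0.05 * np.sin(2 * np.pi * 8 * t)) * np.sin(2 * np.pi * 150 * t)
    print(f"8 Hz band energy: {microtremor_energy(speech, fs):.2e}")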

3.1 Commercially Available VSAs

Commercially available VSAs use some form of speech signal processing to extract excitation information related to microtremors.  The following VSAs provide a sample of these commercially available products:

  • Psychological Stress Evaluator (PSE) – patented in the 1970s by Allan D. Bell and marketed through Dektor Counterintelligence and Security, Inc.
  • Truster – developed by an Israeli company named Makh-Shevet.
  • Computerized Voice Stress Analyzer (CVSA) – developed in the late 1980s by the National Institute for Truth Verification (NITV), which claims its system is in use by more than 500 law enforcement agencies.
  • Lantern – developed by Diogenes Group, Inc.
  • Vericator (formerly known as Truster Pro) – manufactured by Trustech Ltd. Integritek Systems, Inc.
  • VSA Mark 1000 – manufactured by CCS International, Inc. and marketed as a covert electronic lie detection system.

3.2 Conclusions

One technical report conducted in 1999 by ACS Defense, Inc. and the U.S. Air Force Research Laboratory in Rome, N.Y., concedes that information from previous studies of speech under stress, combined with their own Air Force evaluations and experiments using commercial VSAs, suggests that a speaker’s voice characteristics change when the speaker is under stress.  However, as stated previously in other studies, a variety of factors in addition to stress can produce changes in the human speech production process, including the presence or absence of microtremors.  In its final conclusions, the Air Force study found that the degree to which changes in muscle control associated with speech production impart more or less fluctuation in the speech signal cannot be conclusively determined.  In other words, focusing on the absence or presence of microtremors alone does not conclusively define the accuracy of VSAs.  Furthermore, the study suggests that several speech features may be needed to accurately capture the subtle differences in how speakers convey their stress in various speech scenarios.

Another study, conducted in 2000 by the Department of Defense Polygraph Institute (DoDPI) and the U.S. Army Walter Reed Hospital, also concluded that the relationship between microtremors and a speaker’s deception might not be experimentally sound and that the use of microtremor analysis to detect deception performs no better than chance.  In addition, a 2002 study conducted by the DoDPI research division staff to investigate the NITV’s CVSA provided no evidence to support the CVSA’s ability to identify stress-related changes in voice.  Lastly, a 2002 VSA literature review conducted by the National Research Council revealed that VSA accuracy rates from commercially available systems remain at or below chance probability levels.  Still, despite the doubt from many researchers and published reports citing a lack of scientific evidence to support industry claims, the commercially available CVSA system, which retails for about $10,000, claims an accuracy rate of 98%.

Biometrics Demystified: Part 4

1.   Biometric Standards Organizations

Although a lot has been written about the lack of standards and testing for biometric technologies, much has changed in recent years with a surging interest in defining interoperability requirements for biometric applications.  Recent standards efforts aimed at creating application programming interfaces (APIs) will allow for simple substitution of biometric technologies within a given network environment along with streamlined integration of biometric technologies across various software applications.

1.1 NIST-ITL and CBEFF Standard

A division of NIST, the Information Technology Laboratory (ITL) develops tests, testing methods, and proof-of-concept implementations to help end users and the biometric industry accelerate the deployment of standards-based security solutions, driven in part by the Government’s Homeland Defense Initiative.  In conjunction with the Biometric Consortium, the NIST-ITL initiated the Common Biometric Exchange File Format (CBEFF) project to establish a universal biometric template, which allows different systems to access and exchange diverse types of biometric data in a standardized format.  To date, CBEFF has been finalized and exists as a file header format with fields that define common elements for exchange between biometric devices and systems.  CBEFF also provides forward compatibility for technology improvements. CBEFF does not, however, provide device or matching interoperability.  On January 1, 2001, NIST published the CBEFF specification as NISTIR 6529.
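
To show the flavor of a common header wrapping an opaque biometric data block, the sketch below defines a CBEFF-like record in Python. The field names, example values, and JSON encoding are an illustrative simplification, not the normative NISTIR 6529 field list or byte layout.

# Illustrative only: a CBEFF-like wrapper with a small subset of header fields.
# Field names and encoding here are simplifications, not the normative layout.

from dataclasses import dataclass
import datetime
import json

@dataclass
class BiometricRecord:
    format_owner: int      # identifies the body/vendor that defined the format
    format_type: int       # identifies the specific template format
    biometric_type: str    # e.g. "fingerprint", "iris", "voice"
    creation_date: str     # when the sample/template was produced
    data: bytes            # opaque biometric data block (template or image)

    def header_bytes(self) -> bytes:
        """Serialize just the header so any system can route the record."""
        header = {
            "format_owner": self.format_owner,
            "format_type": self.format_type,
            "biometric_type": self.biometric_type,
            "creation_date": self.creation_date,
            "length": len(self.data),
        }
        return json.dumps(header).encode("utf-8")

record = BiometricRecord(
    format_owner=0x0010, format_type=0x0201, biometric_type="fingerprint",
    creation_date=datetime.date(2003, 4, 1).isoformat(), data=b"\x00" * 512,
)
print(record.header_bytes())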

1.2 BioAPI Consortium

Formed in 1998, the BioAPI Consortium developed a widely accepted API for biometric technologies.  Composed of various biometric industry leaders as well as non-biometric companies like IBM, HP, and Compaq, the BioAPI Consortium works with biometric solution developers, software developers, and systems integrators to leverage existing standards and develop an OS-independent standard that can serve various biometric technologies.  Unlike the CBEFF, BioAPI does not define how the biometric device captures the data, but rather, how applications communicate with biometric devices and how the data is manipulated and stored.  Specified in the C programming language, BioAPI defines an application programming interface and a service provider interface covering capabilities such as enrollment, verification, identification, capture, processing, matching, and storage.  The Consortium published Version 1 of the BioAPI Specification in March 2000.  BioAPI Version 1.1 of the Specification and Reference Implementation was released in March 2001.  Recently, the U.S. Army announced that future Army procurements of biometric devices will require BioAPI compliance.

1.3 BAPI

Unlike the consortium-based BioAPI, BAPI was developed and is owned by I/O Software, a biometric middleware vendor.  I/O Software has licensed BAPI to Microsoft, which plans to incorporate biometric authentication as a core component of its future OS.  I/O Software has also licensed elements of BAPI to Intel for inclusion in Intel’s PC security platform.  At present, BAPI remains a competitor to BioAPI but looks to be more prevalent in the Windows/Intel market than in U.S. Government applications, which have established BioAPI as their API of choice.

1.4 INCITS Technical Committee M1

In November 2001, the Executive Board of the InterNational Committee for Information Technology Standards (INCITS) established Technical Committee M1 to ensure a high priority, focused, and comprehensive approach in the U.S. for the rapid development and approval of formal national and international generic biometric standards.  M1’s mission involves accelerating the deployment of significantly better standards-based security solutions for purposes such as homeland defense and other government and commercial applications based on biometric personal authentication. At present, the BioAPI Specification Version 1.1 has successfully completed INCITS fast-track processing and attained approval for maintenance under Technical Committee M1 on February 13, 2002.  An augmented version of CBEFF is next on the list for fast-track processing in the near future.  In addition, Technical Committee M1 is reviewing contributions of draft project proposals for the standardization of biometric templates while seeking to develop active liaisons with other INCITS Technical Committees such as B10 – Identification Cards and Related Devices, L3 – Coding of Audio, Picture, Multimedia, and Hypermedia Information, and T4 – Security Techniques.

Additional standards currently under development by Technical Committee M1 include:

  • Application Profile: Verification and Identification of Transportation Workers;
  • Application Profile: Personal Identification for Border Crossing;
  • Application Profile: Biometric Verification in Point-of-Sale Systems;
  • Finger Pattern-Based Interchange Format;
  • Finger Minutiae Format for Data Interchange;
  • Face Recognition Format for Data Interchange;
  • Finger Image Interchange Format;
  • Iris Image Format for Data Interchange.

1.5 ANSI ASC X9

The ANSI Accredited Standards Committee (ASC) X9 develops, establishes, publishes, maintains, and promotes standards for the financial services industry in order to facilitate delivery of financial products and services.  The development of X9.84 Biometric Information Management and Security stemmed from the need to maintain confidentiality with biometric data.  X9.84 ensures the integrity and authenticity of biometric data by defining requirements for integrating biometric information such as fingerprint, iris scan, or voice print in a financial services environment where customer identification and employee verification are of paramount importance.

2.   Industry Associations

2.1 Biometric Consortium

The Biometric Consortium was established in 1992 by the U.S. Department of Defense and aims to create standards which can be used to test biometric technologies for the benefit of all government agencies.  The goals of the Biometric Consortium include:

  • Promote the science and performance of biometrics;
  • Create standardized testing and establish the National Biometric Evaluation Laboratory;
  • Promote information exchange between government, industry, and academia;
  • Address the safety, performance, legal, and ethical issues of biometric technologies;
  • Advise agencies on the selection and application of biometric devices.

The Biometric Consortium sponsors two working groups: one concerning CBEFF and another co-sponsored by the NIST known as the Biometrics Interoperability, Performance, and Assurance Working Group.  This latter group seeks to broaden the utilization, acceptance, and information sharing of biometric technologies among users and private industry supporters.  This group also supports the advancement of technically efficient and compatible biometrics technology solutions on a national and international basis by addressing required issues and efforts beyond the scope of current and on-going developments already undertaken by other national or international organizations.

2.2 BioSEC Alliance

Founded in 1999 by BioNetrix, the BioSEC Alliance forms a multi-vendor initiative dedicated to promoting enterprise authentication solutions.  The BioSEC Alliance promotes a range of biometric and non-biometric authentication technologies to suit various organizations’ requirements.

2.3 International Biometric Industry Association (IBIA)

The IBIA is a nonprofit trade association founded in 1998 to advance, advocate, defend, and support the collective international interests of the biometric industry.  Though not directly involved in standards development, the IBIA’s group of biometric developers, vendors, and integrators has used its influence to alter several pieces of recent government legislation, including the Identity Theft and Assumption Deterrence Act and the Electronic Signatures in Global and National Commerce Act.

Biometrics Demystified: Part 1

SUMMARY

Unlike traditional authentication methods that rely on something you know – like a password or passphrase, or something you have – like a smart card or token, biometric applications rely on something you are: a human being with robust and distinguishable physical traits.  Because a person’s unique trait (iris, retina, fingerprint, voice, etc.) cannot be lost or stolen, biometric applications, when used in conjunction with traditional user authentication mechanisms, provide higher levels of security over traditional authentication methods alone.  Biometrics Demystified describes the field of biometrics as it exists today with an overview of how a typical biometric system works and how various biometric technologies provide a viable alternative to more traditional user authentication methods.

1.      Biometric System Elements

1.1 Identification

Identification attempts to answer the question, “Who are you?”  The Integrated Automated Fingerprint Identification System (IAFIS) established and administered by the Federal Bureau of Investigation (FBI) provides a well-known example of a biometric identification system.  With IAFIS, the FBI maintains the largest collection of fingerprint records, with over 40 million ten-print records.

1.1.1 Positive versus Negative Identification

Positive identification systems attempt to match a user’s biometric template with a match template stored in a database of enrollment data.  In these systems, the user will claim an identity by providing a name or a PIN before submitting their biometric sample.  Positive identification prevents multiple users from claiming a single identity.  Biometric systems deployed for positive identification include hand geometry, finger scan, voice recognition, iris scan, retinal scan, and facial scan.  In contrast, negative identification systems ensure that a user’s biometric data is not present in a given database, thus preventing a single user from enrolling more than once.  In this scenario, no reliable non-biometric alternatives exist. Welfare centers offer one example where a user could benefit from enrolling more than once to gain multiple benefits under different names.  Only two biometric systems are currently deployed for negative identification, namely finger scan and retinal scan.

1.2 Verification

In contrast to identification, verification, or one-to-one matching, attempts to match a user’s biometric sample against his or her enrollment data.  In this mode, the user first claims an identity by entering a password, user ID, voice command, or other form of identification before the system processes the biometric sample.  Verification asks the question, “Are you who you claim to be?”  For the most part, any biometric authentication system provides a good example of a verification system, where users must identify themselves to the system and then verify that identity through a given biometric sample.  In general, verification (one-to-one) systems are faster and more accurate than identification (one-to-many) systems and require less computational power.
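
To make the one-to-one versus one-to-many distinction concrete, the sketch below contrasts a verification lookup against a single claimed identity with an identification search across every enrolled template.  This is an illustration only: the score_templates function and the 0.8 threshold are hypothetical stand-ins for a vendor’s proprietary matching algorithm and policy.

    def score_templates(enrolled, candidate):
        # Placeholder similarity measure: the fraction of feature values that agree.
        matches = sum(1 for a, b in zip(enrolled, candidate) if a == b)
        return matches / max(len(enrolled), 1)

    def verify(database, claimed_id, candidate, threshold=0.8):
        # One-to-one: compare the sample only against the claimed identity's template.
        enrolled = database.get(claimed_id)
        return enrolled is not None and score_templates(enrolled, candidate) >= threshold

    def identify(database, candidate, threshold=0.8):
        # One-to-many: score the sample against every enrolled template.
        best_id, best_score = None, 0.0
        for user_id, enrolled in database.items():
            score = score_templates(enrolled, candidate)
            if score > best_score:
                best_id, best_score = user_id, score
        return best_id if best_score >= threshold else None

Because verify touches a single record while identify must score every record in the database, the one-to-one path requires less computation per attempt, consistent with the speed advantage noted above.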

1.3 Enrollment

By relying on a user’s physical characteristics, biometric authentication attempts to match a user’s unique physical trait against a newly captured biometric sample of that trait.  By definition, enrollment describes the process by which a user’s biometric sample is initially acquired, processed, and stored in the form of a biometric template.  Depending on the system, a user may be required to present a biometric sample several times to achieve a successful enrollment.  Aside from template creation, a system administrator creates a username or password associated with the user upon enrollment.  Enrollment effort can vary between biometric systems.  Fingerprint and voice systems often require more than two attempts, since obtaining a good quality enrollment image can depend heavily on user behavior and familiarity.
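
A minimal enrollment loop, under assumed hooks for the acquisition device and the vendor’s template generator, might look like the sketch below.  The capture_sample, sample_quality, and build_template functions and the three-attempt limit are illustrative placeholders, not any particular product’s interface.

    import random

    # Hypothetical hooks; a real system supplies its own device and vendor logic.
    def capture_sample():
        return [random.random() for _ in range(16)]        # stand-in raw sample

    def sample_quality(sample):
        return sum(sample) / len(sample)                   # stand-in quality score

    def build_template(sample):
        return [round(value, 1) for value in sample]       # stand-in feature extraction

    def enroll(user_id, template_store, max_attempts=3, min_quality=0.4):
        # Capture up to max_attempts samples and store the first acceptable template.
        for _ in range(max_attempts):
            sample = capture_sample()
            if sample_quality(sample) >= min_quality:
                template_store[user_id] = build_template(sample)
                return True
        return False   # failure to enroll after the allowed number of attempts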

1.4 Presentation

Though the two processes are similar, a distinction is made between presentation and enrollment: presentation describes the process by which a user returns to a biometric application in which they have previously enrolled and provides a biometric sample to the acquisition device.  The presentation process can last as little as one second or more than a minute, depending on the specific biometric technology deployed.

1.5 Data Collection

Data collection begins with the measurement of a user’s biometric characteristic (fingerprint, iris image, voice print, etc.).  At this stage, an assumption is made that the user’s biometric characteristic remains distinctive and repeatable over time.  The presentation of the user’s biometric characteristic to the biometric sensor introduces a behavioral aspect to the biometric process.  The output from the sensor, which relies on the input from the user, depends on three factors:

  1. The biometric measurement;
  2. The way the measurement is presented by the user;
  3. The technical characteristics of the sensor.

Changes to any one of these three factors can negatively affect both the distinctiveness and the repeatability of the measurement, thus degrading the overall accuracy.

1.6 Data Storage

The data storage subsystem can vary as much as the biometric application itself.  Depending on the nature of the biometric authentication function (comparing biometric samples one-to-one versus one-to-many), the data storage function might reside on a smart card or in a central database.  In most cases, the data storage function remains the same, involving the storage of one or more users’ templates.  Another function entails the storage of raw biometric data, or “images,” which allows the biometric system to reconstruct corrupted templates from a user’s biometric data before the data enters the signal processing subsystem.  The storage of raw data allows the system vendor to make changes to the system without the need to re-collect, or “re-enroll,” data from all users.

1.7 Templates

A biometric acquisition device, such as a fingerprint reader or an iris scanner, attempts to capture an accurate image of the user’s biometric sample.  A second process converts the raw biometric sample into a small data file called a template.  Some important characteristics of templates, illustrated by the sketch following this list, include:

  • Templates consist of a vendor’s mathematical representation of a user’s biometric sample derived from feature extractions of the user’s sample.
  • Templates are proprietary to each vendor and each biometric technology.  There is no common biometric template format; therefore, a template created in one vendor’s system cannot be used with another vendor’s system.  Since November 2001, the International Committee for Information Technology Standards Technical Committee M1 has worked to establish common file formats and application program interfaces that address these template concerns.
  • No two templates are alike, even when created from the same biometric sample.  For example, two successive placements of a user’s finger generate entirely different templates.
  • Template sizes vary from less than 9 bytes for a voice print to more than 1,000 bytes for a facial image.
  • Templates can be stored on a local PC, a remote network server, a smart card, or in the acquisition device itself.
  • Biometric data describing a user’s fingerprint or hand geometry, for example, cannot be reconstructed from biometric templates since the templates themselves consist of distinct features drawn from a biometric sample.
  • Enrollment templates stored in a one-to-many database may suffer from data corruption issues over time.
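
The sketch below illustrates several of these points with a deliberately generic template record.  The field names and the tiny feature vector are assumptions made for illustration; they do not represent any vendor’s actual template format.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class BiometricTemplate:
        """Generic stand-in for a vendor-proprietary template record."""
        vendor: str                   # templates are specific to one vendor's algorithm
        modality: str                 # e.g., "finger", "iris", "voice"
        features: List[float] = field(default_factory=list)   # extracted features, not the raw sample

    # Two placements of the same finger yield slightly different feature values,
    # so the resulting templates are never identical.
    first_placement = BiometricTemplate("VendorA", "finger", [0.12, 0.87, 0.45])
    second_placement = BiometricTemplate("VendorA", "finger", [0.11, 0.88, 0.44])
    assert first_placement != second_placement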

1.7.1 Match Template versus Enrollment Template

An important distinction exists between enrollment templates and match templates.  An enrollment template is created when a user first submits their biometric sample.  This enrollment template is then stored for future biometric template comparisons. In contrast, a match template is created during subsequent identification or verification attempts, where the match template is compared to the original enrollment template, and generally discarded after the comparison takes place.

1.8 Signal Processing

The signal processing subsystem performs its function in four phases: segmentation, feature extraction, quality control, and pattern matching.

1.8.1 Segmentation

Segmentation describes the process of removing unnecessary background information from the raw extracted data.  One example is distortion in a voice channel; another is distortion produced by shadows or lighting effects in a facial scanning system.
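
As a toy example of segmentation, the sketch below drops low-amplitude samples from a voice capture, keeping only the portion of the signal that carries the speaker’s voice.  The amplitude threshold is an arbitrary illustrative value, not a parameter of any real segmentation algorithm.

    def segment_voice(samples, noise_floor=0.05):
        # Keep only samples whose amplitude rises above an assumed noise floor,
        # discarding silence and low-level channel noise.
        return [s for s in samples if abs(s) > noise_floor]

    captured = [0.01, -0.02, 0.40, 0.55, -0.38, 0.03, 0.01]
    print(segment_voice(captured))   # -> [0.4, 0.55, -0.38]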

1.8.2 Feature Extraction

With feature extraction, the signal processing subsystem must retrieve an accurate biometric pattern from the raw data despite the sensor’s characteristics and the noise and signal loss imposed by the transmission process.  Given a quality image of the biometric pattern, the signal processing system preserves the distinct and repeatable data points while discarding data points deemed non-distinctive or redundant.  Consider speech authentication, for example, where a voice verification engine might focus solely on the frequency relationships of vowels, which depend on the speaker’s pronunciation and not on the word itself.  Think of feature extraction as non-reversible compression: the original biometric sample cannot be reconstructed from the extracted biometric features.
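
As a toy illustration of that non-reversibility, the sketch below reduces a raw one-dimensional signal to a handful of summary statistics.  Many different signals map to the same feature values, so the original cannot be recovered; the choice of statistics here is an assumption made purely for illustration.

    import statistics

    def extract_features(signal):
        # Reduce a raw signal to a few summary features (a lossy, one-way step).
        return (
            round(statistics.mean(signal), 3),
            round(statistics.pstdev(signal), 3),
            round(max(signal) - min(signal), 3),
        )

    raw_sample = [0.2, 0.4, 0.35, 0.9, 0.1, 0.6]
    features = extract_features(raw_sample)
    # Many distinct raw samples share these three numbers, so the extraction
    # cannot be inverted to reconstruct the original sample.
    print(features)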

1.8.3 Quality Control

Quality control involves determining whether the signal received from the data collection system, before, during, or after feature extraction, arrives with acceptable quality.  If the system determines the signal quality is insufficient, it requests a new sample from the data collection system.  This partially explains why biometric users may be asked to enroll their biometric characteristic more than once, potentially invoking a failure-to-enroll error.  Subsequent sections of this report explore the concept of enrollment in more detail.  For now, understand that enrollment refers to storing a user’s biometric sample, or “template,” in a portable or centralized database.
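
That retry behavior reduces to a simple loop, sketched below under assumed hooks and limits: the capture and quality_score callables, the 0.5 minimum quality, and the three-attempt cap are all illustrative placeholders rather than features of any specific product.

    def acquire_with_quality_control(capture, quality_score,
                                     min_quality=0.5, max_attempts=3):
        # Request new samples until one meets the quality threshold or attempts run out.
        for _ in range(max_attempts):
            sample = capture()
            if quality_score(sample) >= min_quality:
                return sample
        raise RuntimeError("failure to enroll: no sample met the quality threshold")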

1.8.4 Pattern Matching

The pattern matching process compares the user’s presented biometric feature (that has undergone the data collection, feature extraction, and quality control processes) with the user’s “previously enrolled” biometric feature stored in a database.

1.9 Biometric Matching

The concept of biometric matching speaks to the heart of biometric authentication and the accuracy associated with biometric technologies.  Biometric authentication deals in degrees of certainty and does not offer a 100% guarantee that a user’s biometric template will match a stored template in a given database.  Instead, biometrics rely on a three-step process built upon a given biometric product’s standards for scoring, threshold, and decision.  In this process, a user’s biometric template is assigned a specific value, or score, which the biometric system compares to a predetermined threshold setting used to decide whether the user’s template should be accepted or rejected.

By definition, the threshold is a predefined number, established by a system administrator, that specifies the degree of correlation required for the system to render a match/no match decision.  If the user’s template score exceeds the threshold, it “passes,” and the system responds with a match; if the score falls below the threshold, it “fails,” and the system renders a no match decision.  As with scoring, thresholds vary widely depending on the user’s security requirements and the specific biometric system deployed.

A decision simply represents the result of the comparison between the score and the threshold.  In addition to match and no match decisions, some biometric systems can also register an inconclusive decision based upon the system’s inability to match a user’s verification template with a poorly enrolled template.

Since no industry-standard scale exists to define a uniform scoring methodology, vendors use their own proprietary scoring methods to process templates and generate numeric values that can range, for example, from 10 to 100 or from –1 to 1.  Recall that no two templates are exactly the same.  This partially explains why no biometric system can render a match/no match decision with 100% certainty.
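
Taken together, the score/threshold/decision sequence reduces to a comparison like the one sketched below.  The normalized score range, the 0.75 threshold, and the margin used to flag an inconclusive result are placeholders for a vendor’s proprietary scale and an administrator’s policy.

    def render_decision(score, threshold=0.75, inconclusive_margin=0.05):
        # Map a matching score onto a match / no match / inconclusive decision.
        if score >= threshold:
            return "match"
        if score >= threshold - inconclusive_margin:
            return "inconclusive"   # e.g., a weak score against a poorly enrolled template
        return "no match"

    for score in (0.90, 0.72, 0.40):
        print(score, render_decision(score))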

1.10 Decision

The decision subsystem implements a predetermined system policy that dictates the threshold criteria on which to base a match/no match decision, which ultimately leads to an accept/reject decision for the user.  The system policy should strive for a balance between stringent security settings and user-friendliness.  In other words, a decision subsystem programmed for 99% accuracy might correctly reject 99% of all unauthorized users but also fail to accept a large percentage of legitimate, authorized users.  The converse is also true: a loosely defined decision policy makes the biometric system easy to use but also grants access to an unacceptable percentage of unauthorized users.
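
The trade-off can be made visible by tallying false rejects of genuine users and false accepts of impostors at several thresholds, as in the sketch below.  The score lists are illustrative examples only, not measured performance data, and the thresholds are arbitrary.

    def error_rates(genuine_scores, impostor_scores, threshold):
        # Return (false reject rate, false accept rate) at a given threshold.
        false_reject = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
        false_accept = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
        return false_reject, false_accept

    # Illustrative example scores only.
    genuine = [0.91, 0.85, 0.78, 0.66, 0.95]
    impostor = [0.30, 0.55, 0.72, 0.41, 0.25]

    for threshold in (0.60, 0.75, 0.90):
        frr, far = error_rates(genuine, impostor, threshold)
        print(f"threshold {threshold}: false reject {frr:.0%}, false accept {far:.0%}")

Raising the threshold drives the false accept rate down and the false reject rate up, which is precisely the tension between security and user-friendliness described above.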

1.11 Transmission

Some biometric systems collect biometric data at one location and store the data at another.  This scenario requires a transmission channel to facilitate the information exchange.  When large amounts of data are involved (i.e., a large number of users and/or large file sizes), data compression techniques may be required to conserve bandwidth and storage space.  The process of compression and expansion can lead to quality degradation in the restored signal, depending on the nature of the biometric sample and the compression technique deployed.
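
A crude way to see how lossy compression can degrade a restored signal is to quantize sample values before transmission and compare them with the originals after expansion, as in the sketch below.  The quantization step is a stand-in for whatever compression scheme a real system might deploy.

    def compress(signal, step=0.1):
        # Toy lossy compression: quantize each value to a coarse grid of width `step`.
        return [round(value / step) for value in signal]

    def expand(codes, step=0.1):
        # Expand the quantized codes back to approximate signal values.
        return [code * step for code in codes]

    original = [0.231, 0.487, 0.912, 0.355]
    restored = expand(compress(original))
    worst_error = max(abs(a - b) for a, b in zip(original, restored))
    print("worst-case restoration error:", round(worst_error, 3))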