ISSN 0352-9045 Informacije Journal of Microelectronics, Electronic Components and Materials Vol. 42, No. 1 (2012), March 2012 Revija za mikroelektroniko, elektronske sestavne dele in materiale letnik 42, številka 1 (2012), Marec 2012 UDK 621.3:(53+54+621+66)(05)(497.1)=00 ISSN 0352-9045 Informacije MIDEM 1-2012 Journal of Microelectronics, Electronic Components and Materials VOLUME 42, NO. 1(141), LJUBLJANA, MARCH 2012 | LETNIK 42, NO. 1(141), LJUBLJANA, MAREC 2012 Published quarterly (March, June, September, December) by Society for Microelectronics, Electronic Components and Materials -MIDEM. Copyright © 2012. All rights reserved. | Revija izhaja trimesečno (marec, junij, september, december). Izdaja Strokovno društvo za mikroelektroniko, elektronske sestavne dele in materiale - Društvo MIDEM. Copyright © 2012. Vse pravice pridržane. Editor in Chief | Glavni in odgovorni urednik Marko Topič, University of Ljubljana (UL), Faculty of Electrical Engineering, Slovenia Editor of Electronic Edition | Urednik elektronske izdaje Kristijan Brecl, UL, Faculty of Electrical Engineering, Slovenia Associate Editors | Odgovorni področni uredniki Vanja Ambrožič, UL, Faculty of Electrical Engineering, Slovenia Slavko Amon, UL, Faculty of Electrical Engineering, Slovenia Danjela Kuščer Hrovatin, Jožef Stefan Institute, Slovenia Matjaž Vidmar, UL, Faculty of Electrical Engineering, Slovenia Andrej Žemva, UL, Faculty of Electrical Engineering, Slovenia Editorial Board | Uredniški odbor Mohamed Akil, ESIEE PARIS, France Giuseppe Buja, University of Padova, Italy Gian-Franco Dalla Betta, University of Trento, Italy Ciprian Iliescu, Institute of Bioengineering and Nanotechnology, A*STAR, Singapore Malgorzata Jakubowska, Warsaw University of Technology, Poland Marc Lethiecq, University of Tours, France Teresa Orlowska-Kowalska, Wroclaw University of Technology, Poland Luca Palmieri, University of Padova, Italy International Advisory Board | Časopisni svet Janez Trontelj, UL, Faculty of 
Electrical Engineering, Slovenia - Chairman Cor Claeys, IMEC, Leuven, Belgium Denis Donlagic, University of Maribor, Faculty of Elec. Eng. and Computer Science, Slovenia Zvonko Fazarinc, CIS, Stanford University, Stanford, USA Leszek J. Golonka, Technical University Wroclaw, Wroclaw, Poland Jean-Marie Haussonne, EIC-LUSAC, Octeville, France Marija Kosec, Jožef Stefan Institute, Slovenia Miran Mozetič, Jožef Stefan Institute, Slovenia Stane Pejovnik, UL, Faculty of Chemistry and Chemical Technology, Slovenia Giorgio Pignatel, University of Perugia, Italy Giovanni Soncini, University of Trento, Trento, Italy Iztok Šorli, MIKROIKS d.o.o., Ljubljana, Slovenia Headquarters | Naslov uredništva Uredništvo Informacije MIDEM MIDEM pri MIKROIKS Stegne 11, 1521 Ljubljana, Slovenia T. +386 (0)1 513 37 68 F. + 386 (0)1 513 37 71 E. info@midem-drustvo.si www.midem-drustvo.si Annual subscription rate is 100 EUR, separate issue is 25 EUR. MIDEM members and Society sponsors receive current issues for free. Scientific Council for Technical Sciences of Slovenian Research Agency has recognized Informacije MIDEM as scientific Journal for micro-electronics, electronic components and materials. Publishing of the Journal is cofinanced by Slovenian Book Agency and by Society sponsors. Scientific and professional papers published in the journal are indexed and abstracted in COBISS and INSPEC databases. The Journal is indexed by ISI® for Sci Search®, Research Alert® and Material Science Citation Index™. | Letna naročnina je 100 EUR, cena posamezne številke pa 25 EUR. Člani in sponzorji MIDEM prejemajo posamezne številke brezplačno. Znanstveni svet za tehnične vede je podal pozitivno mnenje o reviji kot znanstveno-strokovni reviji za mikroelektroniko, elektronske sestavne dele in materiale. Izdajo revije sofinancirajo JAKRS in sponzorji društva. Znanstveno-strokovne prispevke objavljene v Informacijah MIDEM zajemamo v podatkovne baze COBISS in INSPEC. 
Prispevke iz revije zajema ISI® v naslednje svoje produkte: Sci Search®, Research Alert® in Materials Science Citation Index™. Po mnenju Ministrstva za informiranje št.23/300-92 se šteje glasilo Informacije MIDEM med proizvode informativnega značaja. Design | Oblikovanje: Snežana Madič Lešnik; Printed by | tisk: Biro M, Ljubljana; Circulation | Naklada: 1000 issues | izvodov; Slovenia Taxe Percue | Poštnina plačana pri pošti 1102 Ljubljana Informacije MIDEM Journal of Microelectronics, Electronic Components and Materials Vol. 42, No. 1 (2012) Content | Vsebina Original papers | Izvirni članki Editorial 2 Uvodnik D. A. Shnawah, M. F. M. Sabri, I. A. Badruddin: The Limited Reliability of Board-Level SAC Solder Joints under both Mechanical and Thermo-mechanical Loads 3 D. A. Shnawah, M. F. M. Sabri, I. A. Badruddin: Spoji pri mehaničnih in termo-mehaničnih obremenitvah G. Bizjak, M. B. Kobav, B. Luin: Quality Control of Automotive Switches with help of Digital Camera 11 G. Bizjak, M. B. Kobav, B. Luin: Kontrola kakovosti avtomobilskih stikal s pomočjo digitalne kamere M. R. I. Faruque, M. T. Islam, N. Misran: A New Design of Split Ring Resonators for Electromagnetic (EM) absorption Reduction in Human Head 18 M. R. I. Faruque, M. T. Islam, N. Misran: Nova oblika prekinjenih obročnih resonatorjev za zmanjševanje elektromagnetne absorpcije v človeški glavi V. Gradišnik: Characterization of A-Si:H P-I-N Photodiode Response 23 V. Gradišnik: Karakterizacija odziva a-Si:H P-I-N fotodiode T. Korošec, S. Tomažič: An Adaptive-Parity Error-Resilient LZ'77 Compression Algorithm 29 T. Korošec, S. Tomažič: Na napake odporen zgoščevalni algoritem LZ'77 s prilagodljivo pariteto J. Koselj, V. B. Bregar: Influence of parameters of the flanged open-ended coaxial probe measurement setup on permittivity measurement 36 J. Koselj, V. B. Bregar: Vpliv parametrov merilnega sistema z odprto koaksialno sondo na meritve dielektričnosti F. Pavlovčič: Kinetics of Discharging Arc Formation 43 F. 
Pavlovčič: Kinetika nastanka razelektritvenih oblokov L. Pavlovič, J. Trontelj in D. Kostevc: 300 GHz microbolometer double-dipole antenna for focal-plane-array imaging 56 L. Pavlovič, J. Trontelj in D. Kostevc: 300 GHz mikrobolometrska antena z dvojnim dipolom za slikanje s poljem v goriščni ravnini M. Petkovšek, P. Kosmatin, C. Zevnik, D. Vončina, P. Zajec: Measurement system for testing of bipolar plates for PEM electrolyzers 60 M. Petkovšek, P. Kosmatin, C. Zevnik, D. Vončina, P. Zajec: Merilni sistem za testiranje bipolarnih plošč PEM elektrolizne celice D. Strle: Design and Modeling High Performance Electromechanical Σ-Δ Modulator 68 D. Strle: Načrtovanje in modeliranje elektro-mehanskega Σ-Δ modulatorja z visoko ločljivostjo M. J. T. Marvast, H. Sanusi, M. A. M. Ali: A 4.1-bit, 20 GS/s Comparator for High Speed Flash ADC in 45 nm CMOS Technology 73 M. J. T. Marvast, H. Sanusi, M. A. M. Ali: 4.1-bitni 20 GS/s komparator za hitri bliskovni ADC v 45 nm CMOS tehnologiji Front page: Array of photovoltaic modules Naslovnica: Polje fotonapetostnih modulov Editorial | Uvodnik Last year we celebrated twenty-five years, a quarter of a century, of the MIDEM Society - Society of Microelectronics, Electronic Components and Materials. Two activities of our society have made it an indelible part of history: the MIDEM conferences, 3-day international conferences taking place every September at different locations across picturesque Slovenia, and the journal Informacije MIDEM - Journal of Microelectronics, Electronic Components and Materials. Both the conference number (48th in 2012) and the journal volume number (42nd in 2012) exceed the quarter-centennial history of our society, since we have ancestors in SSOSD (Federal Professional Committee of Electronic Components) at ETAN (Yugoslav Association of Electronics, Telecommunications, Automation and Nuclear Engineering). 
The SSOSD Committee started to inform its members by publishing the newsletter called »Informacije SSOSD« in August 1969. In October 1977 the committee was renamed and became a section of ETAN, SSESD (Professional Section for Electronic Components, Microelectronics and Materials), which continued to publish »Informacije SSESD« until January 1986. In 1986 a new society was born: MIDEM (Strokovno društvo za mikroelektroniko, elektronske sestavne dele in materiale), registered in Ljubljana on 23 April 1986 as the first Yugoslav professional society related to microelectronics, electronic components and materials. It took over the destiny and continuation of the newsletter with scientific and professional articles under the name »Informacije MIDEM« (ISSN 0352-9045). In 1987 Dr. Iztok Šorli became its Editor-in-Chief, and thanks to him and the members of the Editorial Board, »Informacije MIDEM« evolved into a scientific-professional journal with peer-reviewed articles. Based on its high scientific quality and professional publishing process, our journal has been accepted on the list of SCIE journals. The first impact factor by JCR (Journal Citation Reports) was 0.088 in 1998. Dr. Iztok Šorli has not only been a successful Editor-in-Chief, but also an excellent Technical Editor for 25 years. In recognition of his merits and distinguished contribution, the MIDEM Society awarded him a prize at the MIDEM Academy on the occasion of the 25th Anniversary of the MIDEM Society in May 2011. On behalf of all former Editorial Board members, all reviewers and authors, I would like to express our sincere gratitude and admiration for his tremendous work, great effort and devotion. Although he has stepped down from the EiC position, he stays on the Advisory Board of our journal. In the last six months we have renewed both the journal's International Advisory Board and Editorial Board. 
The renewed Editorial Board has five associate editors responsible for the fields of Technology & Materials, Electronics (including Microelectronics), Sensors & Actuators, Power Engineering and Communications. Our Editorial Board has been expanded with distinguished researchers from Europe who will, together with the associate editors, assure a fair peer-review process for the increasing number of original scientific manuscripts. Scientists and professionals are also invited to submit review manuscripts related to the journal's scope, or any important news relevant to the above-mentioned fields, to be considered for publication. In 2011 Informacije MIDEM also celebrated the 25th anniversary of the journal design, which called for renewal. I sincerely hope you like the new outlook and design. For me personally, this is the first Editorial which I have the honour to write. I have served as an Editorial Board member for 10 years, and I believe Informacije MIDEM is in very good health and spirit, and we can look forward to its prosperous future. I envision the role of the whole Editorial Board as a unique opportunity to work with so many highly dedicated authors and reviewers for the benefit of our readers and community, with one common goal: advancing science and engineering! Marko Topič Editor-in-Chief P.S. All papers published in Informacije MIDEM (since 1986) can be accessed electronically at http://midem-drustvo.si/journal/home.aspx. Original paper Informacije | Journal of Microelectronics, Electronic Components and Materials Vol. 42, No. 
1 (2012), 3 - 10 The Limited Reliability of Board-Level SAC Solder Joints under both Mechanical and Thermo-mechanical Loads Dhafer Abdulameer Shnawah*, Mohd Faizul Mohd Sabri, Irfan Anjum Badruddin Department of Mechanical Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia Abstract: The trends toward miniaturization, light weight, high speed and multifunctionality are common in electronic assemblies, especially for portable electronic products. In particular, board-level solder joint reliability, in terms of both mechanical (e.g., drop impact) and thermo-mechanical (e.g., thermal cycling) loads, is of great concern for portable electronic products. The transition to lead-free solder happened to coincide with a dramatic increase in portable electronic products. Sn-Ag-Cu (SAC) is now recognized as the standard lead-free solder alloy for packaging interconnects in the electronics industry. Hence, this study reviews the reliability of board-level SAC solder joints subjected to drop impact and thermal cycling loading conditions from the viewpoint of the mechanical and microstructural properties of the bulk solder. The findings presented in this study indicate that the best SAC composition for drop performance is not necessarily the best composition for optimum thermal cycling reliability; thus the SAC solder alloys are limited in their potential applications in the electronics industry. This contribution has its value in giving information on possible developments and on the suitability of SAC solder for use in portable electronic devices. Keywords: SnAgCu solders, silver content, mechanical properties, microstructure properties, thermal cycling, drop impact Spoji pri mehaničnih in termo-mehaničnih obremenitvah Izvleček: Trend miniaturizacije, nižje teže, višje hitrosti in večopravilnosti so običajni v elektronskih sestavih, posebej pri prenosnih elektronskih izdelkih. Zanesljivost spajkanega spoja v smislu mehanične (npr. 
vpliv padca) in termomehanične (termično ciklanje) obremenitve je poglavitnega pomena pri prenosnih elektronskih izdelkih. Prehod na spajko brez svinca se je zgodil istočasno z drastičnim povečanjem prenosnih elektronskih izdelkov. Zlitina Sn-Ag-Cu (SAC) danes predstavlja običajno spajko brez svinca v industriji elektronike. Pričujoča raziskava podaja pregled zanesljivosti spajkanih spojev pri obremenitvah zaradi padcev ali termičnih ciklanj v smislu mehaničnih in mikrostrukturnih lastnosti spajke. Predstavljene ugotovitve nakazujejo, da najboljša SAC zlitina pri vplivih padcev ni nujno tudi najbolj zanesljiva pri termičnem ciklanju, zaradi česar so SAC zlitine omejene na določene aplikacije v elektronski industriji. Ta prispevek podaja informacijo o možnih razvojih in ustreznostih uporabe SAC zlitin v prenosnih elektronskih izdelkih. Ključne besede: SnAgCu spajke, vsebnost srebra, mehanične lastnosti, mikrostrukturne lastnosti, termično ciklanje, vpliv padca * Corresponding Author's e-mail: dhafer_eng@yahoo.com 1. Introduction Currently, the trend in portable electronic products such as handheld computers, mobile phones, Personal Digital Assistants (PDA) and digital cameras toward miniaturization and multi-functionality has led to electronic packages with higher density and smaller dimensions, e.g. the smaller solder interconnections of Ball Grid Array (BGA) and Chip Scale Package (CSP) assemblies (Zhang, Ding et al. 2009). For portable electronic products, board-level solder joint reliability under drop impact loads and thermal cycling loads is of great concern. The mechanical loads resulting from mishandling during transportation or customer usage may cause solder joint failure, which leads to malfunctioning of the product. Normally, mobile phones are designed to withstand a number of accidental drops to the floor from a height of 1.5 m without resulting in major mechanical or functional failures (Tee, Ng et al. 2003). 
Besides, during use, portable electronic products are subject to temperature cycle loads, induced by environmental temperature changes and power on-off cycles. A typical temperature cycling test condition of -40 °C to 125 °C is required to ensure reliable package performance (Tee, Ng et al. 2003; Zhang, Ding et al. 2009). There are few publications related to integrated design analysis of board-level solder joint reliability that consider both drop impact load and thermal cycling load performance simultaneously. Overall solder joint reliability is determined by the combination of service environment and system design. The service environment determines the temperature extremes which the product must endure, the frequency of power on/off cycling, and the possibility of specific mechanical shock (for example, drop impact) stresses. Where the system design is concerned, a series of factors is important, including component and substrate physical properties, solder joint geometry, bulk solder alloy microstructure and mechanical properties, and the nature and structure of the IMC layer formed at the solder joint/pad interfaces. Cost limitations add additional constraints, forcing hard choices to be made (Tu 2007). The robustness of a solder joint subjected to both temperature cycle loads and drop impact loads is influenced by a complex combination of bulk solder alloy properties and IMC layer properties (Grafe, Garcia et al. 2008). For bulk solder alloys, eutectic tin-lead, with its long established history, has been replaced with the complexity of a multitude of new and unfamiliar lead-free alloys. SAC is now recognized as the standard lead-free solder alloy for packaging interconnects in the electronics industry. However, Sn-Ag-Cu alloys alone are not sufficient to ensure high solder joint reliability under different loading conditions. 
This study presents the direct correlation between the mechanical and microstructural properties of the bulk SAC solder alloy and the reliability of SAC solder joints under both drop impact and thermal cycling loading conditions. 2. Sn-Ag-Cu lead-free solder series Of the many lead-free solder series proposed in the last decade or so, Sn-Ag-Cu (SAC) series alloys have emerged as the most widely accepted, as shown in Figure 1. Soldertec's survey shows that the most popular SAC alloys are the near-eutectic SAC alloys (Nimmo 2002), which contain 3.0-4.0% Ag and 0.5-1.0% Cu, as shown in Figure 2. The melting point of these near-eutectic SAC alloys is 217 °C, which is lower than that of the 96.5Sn-3.5Ag binary eutectic alloy at 221 °C. In the SAC system, the addition of Cu both lowers the melting temperature and improves the wettability (K. Nimmo 2004). Figure 3 is the top view (2-D) of the ternary phase diagram of Sn-Ag-Cu (Ma and Suhling 2009). The area indicated in the red box is the near-eutectic region. Most of the SAC alloy compositions currently on the market are within this region. Figure 1: The market share of different lead-free solders (Nimmo 2002). Figure 2: Survey of the market share of different types of SAC alloys (3.7Ag0.7Cu, 3.9Ag0.6Cu, 4Ag0.5Cu, 3.5Ag0.75Cu, 3.0Ag0.5Cu, other, undecided) (Nimmo 2002). Figure 3: Sn-Ag-Cu ternary phase diagram (Ma and Suhling 2009). 3. Microstructure characteristics and mechanical properties of the Sn-xAg-Cu bulk solder It is well known that, to a large extent, the microstructural characteristics of an alloy determine its mechanical performance (Allen 1969; R.F. Smallman 1999; Hosford 2005). The microstructure development of a solder joint is affected by the alloy system and the process conditions during solder joint formation (Moon, Boettinger et al. 2000; Pang, Tan et al. 2001; Anderson 2007). 
Therefore, understanding the microstructural characteristics of the SAC ternary system is essential to understanding the mechanical performance and reliability of Sn-Ag-Cu solders. These properties provide design and manufacturing engineers with the necessary information when deciding on a solder alloy for their specific application. Silver is an essential element in the SAC composition which can have different effects on solder joint reliability, depending on the loading conditions. Currently, a wide variety of SAC solders containing different levels of silver, while maintaining a Cu level to manage substrate dissolution, such as Sn-1Ag-0.5Cu (SAC105) and Sn-3Ag-0.5Cu (SAC305), have been studied and are in use in the electronics industry for a wide range of applications. Kariya and his coworkers investigated the as-soldered, or initial, microstructure of Sn-xAg-Cu shown in Figure 4 (Kariya, Hosoi et al. 2004). They found that the microstructure of the Sn-xAg-Cu alloys consisted of a β-Sn matrix with dispersoids of fine Ag3Sn and coarsened Cu6Sn5 intermetallic compounds (IMC particles) (Terashima, Kariya et al. 2003; Kariya, Hosoi et al. 2004). The size of the IMC particles varied from submicron to several microns for all alloys, which is a common feature of the Sn-xAg-Cu alloys (Kariya and Plumbridge 2001). The volume fraction of the Ag3Sn IMC particles in the microstructure tends to increase with increasing silver content. In the microstructure of SAC105, relatively large primary Sn grains and sparsely distributed Ag3Sn IMC particles appeared within the matrix (Terashima, Kariya et al. 2003; Kariya, Hosoi et al. 2004). The SAC205 had cell-like primary Sn grains, and the grains were decorated with very fine Ag3Sn IMC particles. In the SAC305, the Ag3Sn IMC particles formed a network structure around the primary Sn grains, and the size of the Sn grains was larger than that of the 2Ag solder. 
In the SAC405 alloy, the Ag3Sn IMC particles were finely dispersed within the matrix, and the inter-particle distance was smaller than in the other alloys (Terashima, Kariya et al. 2003; Kariya, Hosoi et al. 2004). Figure 4: The initial microstructure of Sn-xAg-Cu bulk solders (Kariya, Hosoi et al. 2004). The Ag3Sn and Cu6Sn5 IMC particles possess much higher strength than the bulk material in SAC solder (R. J. Fields; Tsai, Tai et al. 2005), while primary Sn has the lowest elastic modulus and lowest yield strength among the constituent phases in SAC solder (Kim, Suh et al. 2007; Suh, Kim et al. 2007), as shown in Table 1 (Kim, Suh et al. 2007). Fine IMC particles in the Sn matrix can therefore strengthen the alloys (S. Ganesan 2006). The number, or volume fraction, of the Ag3Sn IMC particles in the microstructure of the SAC solder alloy tends to increase with increasing silver content, as shown in Figure 4 (Terashima, Kariya et al. 2003; Kariya, Hosoi et al. 2004); hence, high Ag content SAC solder (SAC305/SAC405) yields a large number of Ag3Sn IMC particles and small primary Sn grains. This is therefore expected to increase the elastic modulus and yield strength and reduce the ductility of the solder (Che, Luan et al. 2008; Che, Zhu et al. 2010), as shown in Table 2 (Henshall, Healey et al. 2009). On the other hand, low Ag content SAC alloy (SAC105) gives rise to more primary Sn phase (large Sn grains) and a decreased number of Ag3Sn IMC particles, as shown in Figure 4. It is therefore expected to result in a lower elastic modulus and lower yield strength than high Ag content SAC alloy (SAC305) (Che, Luan et al. 2008; Che, Zhu et al. 2010), as shown in Table 2 (Henshall, Healey et al. 2009). Table 1: Key material properties of constituent phases in SAC alloys (Henshall, Healey et al. 
2009)

Phase  | E [GPa] | KIC [MPa√m] | HV [kg/mm2]
Cu6Sn5 | 93.5    | 1.4 to 2.73 | 351-378
Ag3Sn  | 74.5    | > Cu6Sn5    | 142-120
Sn     | 42-50   | very high   | 100

Table 2: Mechanical properties of SAC alloys vs. eutectic SnPb (Henshall, Healey et al. 2009)

Solder alloy | Primary Sn | Modulus (GPa) | Modulus reduction | UTS (MPa) | Elongation (%)
SAC (4%Ag)   | none       | 53.3          | baseline          | 52.4      | 35
SAC (3%Ag)   | 10%        | 51.0          | -4%               | 53.3      | 46
SAC (1%Ag)   | 35%        | 47.0          | -12%              | 45.2      | 46
Sn-Pb        | n/a        | 40.2          | -25%              | 50        |

4. Limitation of SAC solder joints A wide variety of SAC solders containing different levels of silver has been studied and is currently in use in the electronics industry for a wide range of applications. The silver content of SAC solder alloys can be an advantage or a disadvantage depending on the application, package and reliability requirements (Henshall, Healey et al. 2009); e.g., the best level of silver content for drop performance is not necessarily the best level for optimum temperature cycling reliability (Syed, Scanlan et al. 2008). Hence, the SAC alloys are limited in their potential applications in portable electronic products, in which thermal cycling and drop/impact are the primary requirements for board-level solder joint reliability. SAC105 and SAC305 solder ball joints for BGA interconnections (board-level packages) were evaluated under temperature cycling and drop (JESD22-B111) tests. The data indicate that lower silver content solder balls perform better under drop conditions, while temperature cycling reliability suffers as silver content decreases (Syed, Scanlan et al. 2008). In other words, SAC105 solder balls show better performance than SAC305 under drop loading conditions (see Figure 5a). However, the trend is reversed for the temperature cycle test (see Figure 5b). The data show how the level of silver content in the SAC composition can have a different effect on board-level solder joint performance depending on the loading conditions. 
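As an aside on nomenclature: the SACxyz shorthand used throughout (SAC105, SAC305, SAC405) conventionally encodes the nominal composition, with the first two digits giving the Ag content in units and tenths of wt.% and the last digit giving the Cu content in tenths of wt.%, the balance being Sn. A minimal sketch of the decoding (the function name and return format are illustrative, not from this paper):

```python
def sac_composition(name: str) -> dict:
    """Decode a SACxyz alloy name into nominal wt.% (common industry shorthand).

    E.g. SAC305 -> Sn-3.0Ag-0.5Cu: digits "3" and "0" give Ag = 3.0 wt.%,
    digit "5" gives Cu = 0.5 wt.%; the balance is Sn.
    Illustrative helper only, not part of the reviewed work.
    """
    d = name.upper()[3:]             # strip the "SAC" prefix, e.g. "305"
    ag = int(d[0]) + int(d[1]) / 10  # Ag: units + tenths of wt.%
    cu = int(d[2]) / 10              # Cu: tenths of wt.%
    return {"Ag": ag, "Cu": cu, "Sn": round(100.0 - ag - cu, 1)}
```

For example, `sac_composition("SAC105")` decodes to Sn-1.0Ag-0.5Cu, matching the Sn-1Ag-0.5Cu spelled out in Section 3.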
Figure 5: Drop and temperature cycling performance for NiAu finish packages: (a) drop test, (b) temperature cycle test. 5. Direct correlation between bulk SAC solder properties and drop impact reliability Currently, high Ag SAC alloys (SAC305/SAC405) are the most common material systems for Pb-free board-level solder joints in the electronics industry. However, these alloys exhibit significantly lower robustness in terms of high strain rate response, such as under drop impact conditions, due to the lack of compliance of the bulk solder materials and the complex IMC formation at the solder/metal interface (Kim, Suh et al. 2007; Suh, Kim et al. 2007). The root cause of the poor high strain rate response of high Ag SAC alloys (SAC305/SAC405) lies in the bulk alloy properties (Pang and Che 2006; Pandher, Lewis et al. 2007). These high Ag alloys have high yield strength and elastic modulus and low acoustic impedance, resulting in a high bulk solder strength. Therefore, under drop impact loading conditions, they more readily transfer stress to the interface IMC layers. The interface IMC layers formed during soldering are of low ductility, and it is this interface that exhibits brittle fracture (Pandher, Lewis et al. 2007; Che, Luan et al. 2008; M. P. Renavikar 2008; Che, Zhu et al. 2010; Yu, Jang et al. 2010). The high yield strength and elastic modulus of the high Ag alloys are derived primarily from the precipitation hardening of the tin matrix by the Ag3Sn IMC particles and the fine Sn grain size, resulting in a strong and stiff bulk solder (Che, Poh et al. 2007). The first approach to improving the drop impact performance of SAC alloys is to optimize the bulk solder properties by reducing the silver content to as low as Ag < 2 wt% (SAC105) (Syed, Kim et al. 2006; Pandher, Lewis et al. 2007). The low Ag alloys are found to have low yield strength and elastic modulus, and high ductility (Che, Poh et al. 2007; Che, Luan et al. 
2008; Che, Zhu et al. 2010). This means low Ag alloys have high elastic compliance and a high plastic energy dissipation ability during crack propagation, which effectively toughens the crack tip and prolongs the time to reach the critical stress for fracture under high strain rate conditions (Kim, Suh et al. 2007). As a result, low Ag alloys can dissipate more high strain rate energy through bulk solder deformation and reduce the dynamic stress transferred to the interface IMC layers, resulting in good drop impact performance. The low strength properties of low Ag alloys can be attributed to the large Sn grain size, or greater primary Sn phase (see Figure 6), and the sparsely distributed Ag3Sn IMC particles in the bulk alloy matrix, which result in a soft bulk solder (Pandher, Lewis et al. 2007; M.P. Renavikar 2008; Yu, Jang et al. 2010). The shear strength for the SAC family of alloys is shown in Figure 7. Clearly, lower Ag alloys have an advantage in potentially absorbing the effect of high strain rate deformation (Pandher, Lewis et al. 2007). The second approach is to optimize the interface reaction to improve the strength of the IMC layer by adding other elements to the SAC system which can effectively modify the interface reaction (Pandher, Lewis et al. 2007; Kittidacha, Kanjanavikat et al. 2008). Figure 6: Sn-rich region of the Sn-Ag-Cu ternary phase diagram. Variation of Ag content (with fixed Cu content of 0.5%) is indicated by a vertical line. The tie line is also shown for representative SAC alloys (Kim, Suh et al. 2007). Figure 7: Mechanical (shear) properties of SAC alloys as a function of Ag content; shear stress vs. shear strain (0-3%) (Pandher, Lewis et al. 2007). 6. 
Direct correlation between bulk SAC solder properties and thermal cycling reliability The thermo-mechanical strain and elevated temperature during thermo-mechanical loading induce recrystallization in the highly strained region of the bulk solder, leading to the development of thermo-mechanical fatigue cracks in the recrystallized regions along large-angle grain boundaries (Mattila 2005; Frear, Ramanathan et al. 2008). Hence, the degree of coarsening indicates an accumulation of the strain or stress imposed by the thermo-mechanical fatigue process. The optimal silver content in the Sn-xAg-Cu alloy is of great significance for designing a solder that has greater thermo-mechanical fatigue resistance (Terashima, Kariya et al. 2003). One of the most detailed studies of the thermo-mechanical fatigue failure rate of Sn-xAg-Cu solder joints was conducted by Terashima et al. (2003), who found that increasing the Ag content increases the fatigue resistance of SAC solder. Their results, summarized in Figure 8, show that the 1Ag solder had the fastest failure rate, while the 4Ag solder had twice as many cycles to first failure (N0) as the 1Ag solder. They observed that the 3Ag and 4Ag solder joints suppressed microstructural coarsening, which degrades fatigue resistance, whereas significant microstructural coarsening occurred in the 1Ag and 2Ag solders because of the thermo-mechanical fatigue process. As a comparison of Figures 9 and 10 shows, for the 1Ag and 2Ag solders the Sn grains coarsened and the number of the Ag3Sn dispersoids decreased drastically as a function of the number of thermal cycles, compared with the initial microstructures. Grain coarsening was not significant in the 3Ag and 4Ag solders; these alloys retained the fine Ag3Sn dispersoids even after thermal cycling. 
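The observed scaling of Ag3Sn dispersoid density with Ag content can be roughed out with a simple mass balance. Assuming (our simplification, not the authors') that essentially all of the Ag ends up bound in Ag3Sn, the weight fraction of the Ag3Sn phase follows the Ag content multiplied by the molar-mass ratio M(Ag3Sn)/3M(Ag):

```python
M_AG = 107.87  # molar mass of Ag, g/mol
M_SN = 118.71  # molar mass of Sn, g/mol

def ag3sn_wt_pct(ag_wt_pct: float) -> float:
    """Rough upper-bound estimate of wt.% Ag3Sn in a SAC alloy.

    Assumes all Ag is bound as Ag3Sn and neglects the (small) solubility
    of Ag in the beta-Sn matrix -- a sketch, not the authors' analysis.
    """
    m_ag3sn = 3 * M_AG + M_SN  # molar mass of Ag3Sn, ~442.3 g/mol
    return ag_wt_pct * m_ag3sn / (3 * M_AG)
```

On this estimate SAC105 (1 wt.% Ag) carries roughly 1.4 wt.% Ag3Sn while SAC405 (4 wt.% Ag) carries roughly 5.5 wt.%, a four-fold difference consistent with the much denser dispersoid populations seen in the 3Ag and 4Ag micrographs.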
Figure 9: The initial microstructure for each Sn-xAg-0.5Cu solder joint: (a) 1Ag, (b) 2Ag, (c) 3Ag, and (d) 4Ag (Terashima, Kariya et al. 2003). Figure 8: Effect of thermal cycles on the failure rate of Sn-xAg-0.5Cu (x = 1, 2, 3, and 4) solder joints on the Cu pads; failure rate (%) vs. number of thermal cycles, thermal cycle 233-398 K (Terashima, Kariya et al. 2003). Figure 10: The microstructures at the center area of each Sn-xAg-0.5Cu solder joint after 600 cycles: (a) 1Ag, (b) 2Ag, (c) 3Ag, and (d) 4Ag (Terashima, Kariya et al. 2003). It has been reported that SAC solder has a dispersion, or precipitation, strengthening mechanism (Kariya, Hirata et al. 1999; Zhang, Li et al. 2002). Thus, the dispersion morphology of the Ag3Sn IMC particles strongly affects the mechanical properties of the SAC solder. Namely, if the microstructure of an alloy has finely dispersed Ag3Sn particles, as in the SAC405 solder alloy, the alloy may exhibit high bulk solder strength because of the Orowan looping of dislocations. Moreover, if coarsening of an alloy is inhibited because of the finely dispersed Ag3Sn, a good fatigue resistance can be expected as a result of suppressing plastic deformation of the solder (Ye, Lai et al. 2001; Subramanian and Lee 2003; Terashima, Kariya et al. 2003; Kariya, Hosoi et al. 2004). Furthermore, if the Ag3Sn IMC particles form a eutectic network structure around the Sn grains, as in the SAC305 solder, a good fatigue resistance can be expected as a result of inhibiting microstructural coarsening due to the Zener pinning effect (Dieter 1981; Subramanian and Lee 2003; Liu, Lee et al. 2009; Terashima, Kohno et al. 2009). 
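The two mechanisms invoked here, Orowan looping and Zener pinning, are commonly summarized by textbook relations; the following sketch uses standard notation that is not defined in the paper:

```latex
% Orowan looping: extra shear stress needed for a dislocation to bow
% between hard Ag3Sn particles spaced a distance \lambda apart
\Delta\tau_{\mathrm{Orowan}} \approx \frac{G\,b}{\lambda}
% G: shear modulus of the beta-Sn matrix, b: Burgers vector.
% Higher Ag -> finer, denser dispersion -> smaller \lambda -> stronger solder.

% Zener pinning: particles of radius r and volume fraction f limit the
% grain size attainable by coarsening to roughly
d_{\mathrm{max}} \approx \frac{4\,r}{3\,f}
% Higher Ag -> larger f -> smaller pinned grain size, i.e. coarsening suppressed.
```

Both relations point the same way: increasing the Ag3Sn volume fraction simultaneously strengthens the matrix and stabilizes the grain structure, which is the microstructural basis of the fatigue results above.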
In other words, the suppression of micro-structural coarsening (grain-size stability) can be attributed to the pinning of Sn grain boundaries by the fine Ag3Sn IMC particles, due to the atomic matching, or coherency, between the lattices of the precipitates and the matrix. On the other hand, T. Kobayashi and his co-researchers studied the crack propagation morphology of SAC105 solder joints subjected to a fatigue test. They observed that cracks nucleated at the interface of primary β-Sn where the eutectic phase is not always present. Hence, the strength of the interface between primary β-Sn grains is lower than that of the primary β-Sn interface filled with the eutectic phase, resulting in the poor fatigue resistance of SAC105 solder (Kobayashi, Kariya et al. 2007).

7. Conclusion

The Ag3Sn and Cu6Sn5 IMC particles in the bulk SAC solder have much higher elastic modulus and yield strength than the bulk material. A large amount of fine Ag3Sn and Cu6Sn5 IMC particles in the Sn matrix can therefore strengthen the bulk SAC solder. On the other hand, among the constituent phases in bulk SAC solder, primary Sn has the lowest elastic modulus and yield strength. The role of Ag and Cu in SAC alloys is a straightforward issue of Cu6Sn5 and Ag3Sn strengthening the Sn matrix; the Cu level, however, is maintained to manage substrate dissolution. High-Ag-content SAC solders (SAC305/SAC405) produce a large number of fine Ag3Sn IMC particles and small primary Sn grains, which make the bulk solder exhibit high strength. This helps to suppress plastic deformation during the thermo-mechanical fatigue process. Moreover, the large number of fine Ag3Sn IMC particles in SAC305 bulk solder forms a eutectic network structure around the primary Sn grains and suppresses grain coarsening, which results in good resistance to thermo-mechanical fatigue cracks.
However, the stiff, strong bulk of a high-Ag solder prevents the drop impact energy from dissipating through the bulk solder, thereby transferring more stress to the interfacial IMC layers, which causes brittle fracture of the solder joint. A low Ag content decreases the strength and elastic modulus of the solder, transferring less stress to the IMC layers. This is due to the increased amount of primary Sn relative to the Ag3Sn and Cu6Sn5 phases in the low-Ag alloy, which makes the bulk solder more compliant. However, the low-Ag alloy shows poor thermal cycling reliability due to the smaller number of Ag3Sn IMC particles compared with the high-Ag alloy. The silver content of SAC solder alloys can thus be an advantage or a disadvantage depending on the application, package and reliability requirements. Hence, it is highly recommended to improve both the strength and ductility of the bulk SAC solder through SAC optimization for optimal solder ball attachments in portable electronic products.

Acknowledgement

The authors would like to acknowledge the financial support provided by the Institute of Research Management and Consultancy, University of Malaya (UM) under the IPPP Fund Project No.: PS117/2010B

References

1. Allen, D. K. (1969). Metallurgy theory and practice. Homewood, IL, USA, American Technical Publishers. 2. Anderson, I. E. (2007). "Development of Sn-Ag-Cu and Sn-Ag-Cu-X alloys for Pb-free electronic solder applications." Lead-Free Electronic Solders: 55-76. 3. Che, F., J. Luan, et al. (2008). Effect of silver content and nickel dopant on mechanical properties of SnAg-based solders, IEEE. 4. Che, F., J. Luan, et al. (2008). Effect of silver content and nickel dopant on mechanical properties of SnAg-based solders. ECTC, IEEE. 5. Che, F., E. C. Poh, et al. (2007). Ag Content Effect on Mechanical Properties of Sn-xAg-0.5Cu Solders. ECTC, IEEE. 6. Che, F., W. Zhu, et al. (2010).
"The study of mechanical properties of Sn-Ag-Cu lead-free solders with different Ag content and Ni doping under different strain rates and temperatures"' Journal of Alloys and Compounds. 7. Dieter, G. E. (1981). Mechanical metallurgy. TOKYO, McGRAW-HILL. 8. Frear, D., L. Ramanathan, et al. (2008). Emerging reliability challenges in electronic packaging. Annual International Reliability Physics Symposium, IEEE. 9. Grafe, J., R. Garcia, et al. (2008). "Reliability and Quality Aspects of FBGA Solder Joints." FORSCHUNG & TECHNOLOGIE 10: 2224-2234. 10. Henshall, G., R. Healey, et al. (2009). Addressing opportunities and risks of pb-free solder alloy alternatives, IEEE. 11. Hosford, W. F. (2005). Physical metallurgy. Boca Raton, FL USA, CRC Press, Taylor and Francis Group. 12. K. Nimmo (2004). Alloy selection. New York, Marcel Dekker. 13. Kariya, Y., Y. Hirata, et al. (1999). "Effect of thermal cycles on the mechanical strength of quad flat pack leads/Sn-3.5 Ag-X (X= Bi and Cu) solder joints." Journal of electronic materials 28(11): 1263-1269. 14. Kariya, Y., T. Hosoi, et al. (2004). "Effect of silver content on the shear fatigue properties of Sn-Ag-Cu flip-chip interconnects." Journal of electronic materials 33(4): 321-328. 9 D. A. Shnawah et al; Informacije Midem, Vol. 42, No. 1 (2012), 3 - 10 15. Kariya, Y. and W. Plumbridge (2001). Mechanical properties of Sn-3.0 mass% Ag-0.5 mass% Cu alloy. 16. Kim, D., D. Suh, et al. (2007). Evaluation of high compliant low Ag solder alloys on OSP as a drop solution for the 2nd level Pb-free interconnection, IEEE. 17. Kittidacha, W., A. Kanjanavikat, et al. (2008). Effect of SAC Alloy Composition on Drop and Temp cycle Reliability of BGA with NiAu Pad Finish. ECTC, IEEE. 18. Kobayashi, T., Y. Kariya, et al. (2007). Effect of Ni Addition on Bending Properties of Sn-Ag-Cu Lead-Free Solder Joints. ECTC, IEEE. 19. Liu, W., N. C. Lee, et al. (2009). Achieving high reliability low cost lead-free SAC solder joints via Mn or Ce doping. 
ECTC, IEEE. 20. M. P. Renavikar, N. P., A. Dani, V. Wakharkar, G. Arrigotti, V. Vasudevan, O. Bchir, A. P. Alur, C. K. Gurumurthy, R. W. Stage (2008). "Materials technology for environmentally green micro-electronic packaging." Intel® Technology Journal 12: 1-16. 21. Ma, H. and J. C. Suhling (2009). "A review of mechanical properties of lead-free solders for electronic packaging." Journal of Materials Science 44(5): 1141-1158. 22. Mattila, T. (2005). Reliability of high-density lead-free solder interconnections under thermal cycling and mechanical shock loading. Espoo, Finland, Helsinki University of Technology. 23. Moon, K. W., W. Boettinger, et al. (2000). "Experimental and thermodynamic assessment of Sn-Ag-Cu solder alloys." Journal of Electronic Materials 29(10): 1122-1136. 24. Nimmo, K. (2002). European Lead-free Technology Roadmap, Ver. 1: February 2002, Soldertec at Tin Technology Ltd. 25. Pandher, R. S., B. G. Lewis, et al. (2007). Drop shock reliability of lead-free alloys - effect of micro-additives, IEEE. 26. Pang, H., K. Tan, et al. (2001). "Microstructure and intermetallic growth effects on shear and fatigue strength of solder joints subjected to thermal cycling aging." Materials Science and Engineering: A 307(1-2): 42-50. 27. Pang, J. H. L. and F. Che (2006). Drop impact analysis of Sn-Ag-Cu solder joints using dynamic high-strain rate plastic strain as the impact damage driving force, IEEE. 28. R. J. Fields, S. R. L. "Physical and mechanical properties of intermetallic compounds commonly found in solder joints." Retrieved April 20, 2011, from http://www.metallurgy.nist.gov/mechanical_properties/solder_paper.html. 29. R.F Smallman, R. J. B. (1999). Modern physical metallurgy and materials engineering. Oxford, Butterworth-Heinemann. 30. S. Ganesan, M. P. (2006). Lead-free electronics. New York, Wiley-Interscience Publication. 31. Subramanian, K. and J. Lee (2003). "Physical metallurgy in lead-free electronic solder development."
JOM, Journal of the Minerals, Metals and Materials Society 55(5): 26-32. 32. Suh, D., D. W. Kim, et al. (2007). "Effects of Ag content on fracture resistance of Sn-Ag-Cu lead-free solders under high-strain rate conditions." Materials Science and Engineering: A 460: 595-603. 33. Syed, A., T. S. Kim, et al. (2006). Alloying effect of Ni, Co, and Sb in SAC solder for improved drop performance of chip scale packages with Cu OSP pad finish. ECTC, IEEE. 34. Syed, A., J. Scanlan, et al. (2008). Impact of package design and materials on reliability for temperature cycling, bend, and drop loading conditions. IEEE-ECTC, IEEE. 35. Tee, T. Y., H. S. Ng, et al. (2003). Design for enhanced solder joint reliability of integrated passives device under board level drop test and thermal cycling test, IEEE. 36. Terashima, S., Y. Kariya, et al. (2003). "Effect of silver content on thermal fatigue life of Sn-xAg-0.5Cu flip-chip interconnects." Journal of Electronic Materials 32(12): 1527-1533. 37. Terashima, S., T. Kohno, et al. (2009). "Improvement of thermal fatigue properties of Sn-Ag-Cu lead-free solder interconnects on Casio's wafer-level packages based on morphology and grain boundary character." Journal of Electronic Materials 38(1): 33-38. 38. Tsai, I., L. J. Tai, et al. (2005). Identification of Mechanical Properties of Intermetallic Compounds on Lead Free Solder, IEEE. 39. Tu, K. (2007). Solder joint technology: materials, properties, and reliability, Springer Verlag. 40. Ye, L., Z. Lai, et al. (2001). "Microstructure investigation of Sn-0.5Cu-3.5Ag and Sn-3.5Ag-0.5Cu-0.5B lead-free solders." Soldering & Surface Mount Technology 13(3): 16-20. 41. Yu, A., J. W. Jang, et al. (2010). Improved reliability of Sn-Ag-Cu-In solder alloy by the addition of minor elements. ECTC, IEEE. 42. Zhang, B., H. Ding, et al. (2009). "Reliability study of board-level lead-free interconnections under sequential thermal cycling and drop impact."
Microelectronics Reliability 49(5): 530-536. 43. Zhang, F., M. Li, et al. (2002). "Failure mechanism of lead-free solder joints in flip chip packages." Journal of Electronic Materials 31(11): 1256-1263.

Arrived: 28. 04. 2011 Accepted: 26. 1. 2012

Original paper

Informacije | Journal of Microelectronics, Electronic Components and Materials Vol. 42, No. 1 (2012), 11 - 17

Quality Control of Automotive Switches with help of Digital Camera

Grega Bizjak *, Matej Bernard Kobav, Blaž Luin

Univerza v Ljubljani, Fakulteta za elektrotehniko

Abstract: For the purpose of quality control at a Slovenian automotive parts producer, a system for luminance measurement based on a digital camera was developed. The system consists of a black chamber in which the measured specimen and the camera are placed. They are positioned very close to each other, the distance between them being limited by the minimal focus distance of the camera, which is about a few millimetres. Such a short distance ensures that, in spite of the rather small symbol on a car switch, there is still enough information in the picture for the luminance of the symbol to be calculated. Special software automatically takes the picture, transfers it to the computer and analyzes it according to the car manufacturer's standard. First the boundaries of the symbol are located in the picture, and then the measurement areas are found. In the next step, the luminance of each measured area is calculated. On one symbol, two or three measuring areas are defined. If the luminance of the measuring areas is within the allowed interval, the switch is adequate to be mounted in a car; if not, the switch must be discarded. The result of the analysis (adequate, not adequate) is clearly stated on the screen. As the whole measuring process is automatic, no special knowledge is needed to manage the measuring system.
Keywords: quality control, automotive industry, digital image processing, luminance measuring with digital camera

Quality Control of Automotive Switches with the Help of a Digital Camera

Abstract: In the production of components for the automotive industry, adequate final inspection of products is nowadays mandatory. Within this final inspection, the product must be checked reliably and as quickly as possible. A system for the final inspection of the luminance of symbols on switches was therefore developed for a Slovenian manufacturer of switches for installation in cars. The system is based on the use of a digital camera and suitable software. The article first presents the method for measuring the luminance of illuminated symbols on automotive switches. Instead of a classical luminance meter and a dark room, the described procedure uses a digital camera and a special black chamber. In the chamber, the measured switch and the digital camera are mounted at a suitably short distance that enables sharp image capture, proper recognition of the details in the image, and calculation of the luminance of the symbol on the switch. The digital camera first captures an image of the illuminated symbol, which is transferred to a computer. Special software then processes the image, finds the boundaries of the illuminated symbol and calculates the luminances of the reference points. Based on the calculated luminances, the conformity of the tested sample with the manufacturer's standard is assessed. The result of the analysis (adequate or inadequate switch) is clearly displayed on the computer screen. Two or more reference points can be considered on an individual symbol. The entire procedure, from image capture through determination of the symbol boundaries to luminance calculation and conformity assessment, is fully automated. The device can therefore also be operated by workers without special knowledge of photometry and luminance measurement. Although the method is considerably faster and easier to perform than the classical luminance measurement, it still provides adequate reliability.
In addition, the cost of the necessary equipment is considerably lower than that of classical measuring equipment. Both the device itself and the software were developed in the Laboratory of Lighting and Photometry at the Ljubljana Faculty of Electrical Engineering.

Key words: quality control, automotive industry, digital image processing, luminance measurement with a digital camera

* Corresponding Author's e-mail: grega.bizjak@fe.uni-lj.si

1. Introduction

For the purpose of quality control at a Slovenian automotive parts producer, a system for luminance measurement based on a digital camera was developed at the Laboratory of Lighting and Photometry. The products are different switches for car interiors for a large European car manufacturer. Due to the strong demands of the company's internal standardization, the parts producer needs efficient quality control in order to keep its orders. To control the luminance of the symbols on the switches, the Slovenian producer had

G. Bizjak et al; Informacije Midem, Vol. 42, No. 1 (2012), 11 - 17

two possibilities. One was the use of a luminance meter in a dark room. This measuring procedure takes a lot of time, and some special equipment, such as a dark room, a photometric bench etc., is needed together with the luminance meter. It is also very difficult to measure the luminance of very small surfaces, illuminated with almost monochromatic light, at a short distance. The second possibility was the use of a luminance measuring device based on a digital camera.

Figure 1: Developed device for measurement of luminance of symbols on car switches.

In close cooperation with the mentioned Slovenian automotive parts producer, such a luminance measuring device was developed in the Laboratory of Lighting and Photometry. First, a prototype device was constructed, based on a commercial digital camera with 5 million pixels and the possibility of controlling it through a USB port.
The device was tested in production quality control and proved to be very useful, although clumsy to use due to some construction details [3]. So the second device, described in this paper, was constructed.

2. Design of the black chamber

To prevent external light sources from disturbing the measuring results, the measured switch and the camera were placed in a black chamber. Apart from this, the chamber, built as a small cabinet, also provides a power supply for the camera and the switch sample, and a USB connection for the camera. The inside of the chamber is painted black to prevent light reflections. To enable easy exchange of the measured specimens, both the specimen and the camera are placed on drawers.

Figure 2: Measured specimen mounted on a stand in a drawer.

Figure 3: Digital camera is positioned on the top of the measured specimen.

The measured specimen of a car switch is mounted on a special stand in the drawer (Figure 2) and connected to the power supply. The digital camera is mounted in the second drawer, on top of the measured specimen (Figure 3). The specimen and the camera are positioned and adjusted so that the distance between the specimen and the camera lens ensures a focused picture at the closest possible distance. The camera is connected to the power supply and to the computer with the USB cable. After the specimen is mounted on the stand, the drawer is closed to provide total darkness inside the chamber.

3. Choice of the camera

There are many different digital cameras on the market. But to be suitable for our application, the camera should fulfil the following criteria:
• the light sensitivity should be adequate for the luminance range of the measured symbols;
• the minimal focus distance should be short enough that the taken picture of the symbol covers most of the canvas;
• it must be possible to control the camera remotely from the computer via the USB port.

The first requirement is that the camera is able to sense the luminance range we were about to measure. The luminance of the symbols on car switches is normally in the range from 2 to 60 cd/m2. Nearly every consumer camera fulfils this requirement. The ability to focus at a short distance of approximately 20 mm, as the switch is relatively small, significantly narrowed our choice of suitable cameras. The last important requirement is that the camera can be completely remotely controlled. This means that it should be possible to remotely set the zoom, focus distance, aperture width and shutter speed, and that the image should be remotely captured and transferred to the computer. Most of the remotely controllable cameras use the protocol PTP (picture transfer protocol), which is standardized as ISO 15740. Its default network transport medium is USB (universal serial bus). The camera that fulfilled our requirements is a consumer model Canon G7. Apart from this one, there are many other suitable cameras on the market.

4. Segmentation of the captured image and calculation of the luminance

After the sample has been put into the chamber and turned on, the image is captured and transferred to the software on the computer. To be able to calculate the luminance of the symbol, first the boundaries of the illuminated symbol have to be found. This is accomplished by detecting the outer edges on all sides of the symbol, so the procedure depends on the shape of the sign. In the following example we present the procedure for a nearly rectangular sign, like the one on the switch for heating the back window. In this case we have to detect four outer edges. A first point on the left vertical edge is found by searching from left to right for the first point in a row at which the increase in light intensity is larger than a predefined threshold:

V_l = {(x, y)}, i(x, y) - i(x - 1, y) > T, x = 0, x -> x_max, (1)

where i(x, y) denotes the light intensity of a point with coordinates (x, y) and T represents the predefined threshold. Subsequently, points on the right vertical, upper horizontal and lower horizontal edges are found:

V_r = {(x, y)}, i(x, y) - i(x + 1, y) > T, x = x_max, x -> 0, (2)

H_u = {(x, y)}, i(x, y) - i(x, y - 1) > T, y = 0, y -> y_max, (3)

H_l = {(x, y)}, i(x, y) - i(x, y + 1) > T, y = y_max, y -> 0. (4)

After we find the points on one edge, the direction of the edge is determined using a simple linear regression model. For the horizontal sets

y = α_H + β_H x + ε_H (5)

has been used, and for the vertical sets

x = α_V + β_V y + ε_V. (6)

The estimates of the regression parameters for equation (5) are

β_H = (n S_XY - S_X S_Y) / (n S_XX - S_X S_X), α_H = (S_Y - β_H S_X) / n, (7)

and for equation (6)

β_V = (n S_XY - S_X S_Y) / (n S_YY - S_Y S_Y), α_V = (S_X - β_V S_Y) / n, (8)

where

S_XX = x_1^2 + x_2^2 + ... + x_n^2, (9)
S_YY = y_1^2 + y_2^2 + ... + y_n^2, (10)
S_XY = x_1 y_1 + x_2 y_2 + ... + x_n y_n, (11)
S_X = x_1 + x_2 + ... + x_n, (12)
S_Y = y_1 + y_2 + ... + y_n. (13)

After the edges have been determined, the corner points T(x, y) can be calculated by finding the intersections of the edges. On this basis the outer borders of the symbol on a switch are determined. The next step is the positioning of the measuring points on the symbol. The measuring points are circular areas on the illuminated part of the symbol. They are placed according to the car producer's standard in the corners or in the middle of the lines. The dimension (radius) of the measuring point is 80 % of the symbol line width.

Figure 4: An image of the illuminated switch with edges and measuring points detected automatically and marked.
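The edge-scan and line-fitting steps above can be sketched as follows. This is an illustrative Python reconstruction (our own, not the authors' software; NumPy assumed, all function names hypothetical): it collects left-edge points per Eq. (1), fits a vertical edge with the least-squares estimates of Eqs. (6) and (8), and intersects a horizontal and a vertical fitted line to obtain a corner point.

```python
import numpy as np

def left_edge_points(img, T):
    """Eq. (1): scan each row left to right and record the first pixel
    whose intensity jumps by more than threshold T over its left neighbour."""
    pts = []
    for y in range(img.shape[0]):
        for x in range(1, img.shape[1]):
            if img[y, x] - img[y, x - 1] > T:
                pts.append((x, y))
                break
    return pts

def fit_vertical(pts):
    """Least-squares fit of x = alpha_V + beta_V * y (Eqs. 6 and 8)."""
    xs = np.array([p[0] for p in pts], dtype=float)
    ys = np.array([p[1] for p in pts], dtype=float)
    n = len(pts)
    beta = (n * (xs * ys).sum() - xs.sum() * ys.sum()) / \
           (n * (ys ** 2).sum() - ys.sum() ** 2)
    alpha = (xs.sum() - beta * ys.sum()) / n
    return alpha, beta

def corner(h_line, v_line):
    """Intersection of y = aH + bH*x (horizontal fit) with
    x = aV + bV*y (vertical fit); well-defined when bH*bV != 1."""
    aH, bH = h_line
    aV, bV = v_line
    y = (aH + bH * aV) / (1.0 - bH * bV)
    x = aV + bV * y
    return x, y
```

The right, upper and lower edges of Eqs. (2)-(4) follow the same pattern with the scan direction reversed or transposed.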
When the measuring points are known, it is necessary to calculate the average luminance of the area covered by each measuring point. For that purpose, the sRGB values of each pixel in the measuring area first need to be transformed to the CIE Lab color space. This is done by the following transformations [2]:
• first from sRGB to the RGB color space,
• second from RGB to the CIE XYZ color space, and
• third from CIE XYZ to the CIE Lab color space.

All of the sRGB values are converted into RGB values in the same manner. Let R, G and B denote the sRGB color values and r, g and b the RGB color values. Then the transformation among them is defined by the following equations:

r = R / 12.92 for R <= 0.04045; r = ((R + 0.055) / 1.055)^2.4 for R > 0.04045, (14)

g = G / 12.92 for G <= 0.04045; g = ((G + 0.055) / 1.055)^2.4 for G > 0.04045, (15)

b = B / 12.92 for B <= 0.04045; b = ((B + 0.055) / 1.055)^2.4 for B > 0.04045. (16)

With the obtained RGB values, it is possible to calculate the CIE XYZ values with reference white D65 using the following equation:

X = 0.412 r + 0.357 g + 0.180 b
Y = 0.213 r + 0.715 g + 0.072 b (17)
Z = 0.019 r + 0.119 g + 0.950 b

From the CIE XYZ color space we calculate the lightness component of the CIE Lab color space with:

L* = 116 Y^(1/3) - 16 for Y > 0.008856; L* = 903.3 Y for Y <= 0.008856. (18)

Finally, the luminance of the pixel can be calculated. The relation between the pixel lightness L* and the luminance L is defined by the following equation [1]:

L = A(EV) e^(B(EV) L*). (19)

Our luminance value therefore depends on the camera exposure value EV, which is described by the formula

EV = log2(N^2 / t), (20)

where N is the relative aperture (f-number) and t is the shutter speed.

5. Calibration of the device

As one can see from equation (19), the luminance of the symbol is calculated from the lightness (obtained from the picture data) with the help of the constants A and B.
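The per-pixel conversion chain of Eqs. (14)-(20) can be sketched in a few lines of Python. This is our own illustrative rendering of the formulas (the functional form of Eq. (19) and all function names are our reading of the text, not the authors' code):

```python
import math

def srgb_to_linear(c):
    """Inverse sRGB gamma, Eqs. (14)-(16); component c in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def lightness(R, G, B):
    """sRGB components in [0, 1] -> CIE lightness L* (D65, Eqs. 17-18)."""
    r, g, b = (srgb_to_linear(c) for c in (R, G, B))
    Y = 0.213 * r + 0.715 * g + 0.072 * b   # luminance row of Eq. (17)
    return 116.0 * Y ** (1.0 / 3.0) - 16.0 if Y > 0.008856 else 903.3 * Y

def pixel_luminance(L_star, A, B):
    """Eq. (19): L = A * exp(B * L*), with camera-dependent A and B."""
    return A * math.exp(B * L_star)

def exposure_value(N, t):
    """Eq. (20): EV = log2(N^2 / t) for f-number N and shutter speed t."""
    return math.log2(N * N / t)
```

For example, a fully white pixel (R = G = B = 1) maps to L* = 100, and an f/2 aperture with a 1/2 s exposure gives EV = 3.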
In the process of calibration, the constants A and B should be set so that the calculated luminance of the measuring point is the same as the luminance of the same point measured with a reference luminance meter. To define the constants A and B properly, at least two measuring points with different luminance need to be used. The constants are calculated by regressing the average lightness L* of all pixels inside the measuring point against the measured luminance L of the same point on the switch. To make the calibration successful, at least two calibration points with different luminance are required, but in practice more points are needed for good accuracy. After the constants A and B are properly adjusted, the calibration is finished. As a reference instrument, a normal luminance meter of class L is used. For the purpose of calibration, the luminances of the measuring points on some selected specimens are first measured in a dark room of a laboratory with the specimens and the instrument mounted on an optical bench. Afterwards, the same specimens are inserted into the black chamber of the measuring device and a picture of the symbol is taken. With the calibration part of the software, the lightness values of the selected points can be obtained from the picture and used for the calculation of both constants. The calculation is normally done by the same part of the program, so the user just needs to select the points and input the previously measured luminances of these points. The user should input at least two points, but he/she can also input more points to get better values of A and B. Besides this possibility, both constants can also be calculated using some other tools or software and input into the program manually. To obtain better measurement results with the device, the constants need to be set for each type of switch (symbol) separately.
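Since Eq. (19) gives L = A·e^(B·L*), taking logarithms turns the calibration into a linear regression of ln L on L*. A minimal sketch of this fit (our own hypothetical `calibrate` helper, not the authors' calibration software):

```python
import math

def calibrate(points):
    """Fit A and B in L = A * exp(B * L*) from (L_star, L_measured) pairs
    by ordinary least squares on ln(L); at least two distinct points needed."""
    n = len(points)
    xs = [p[0] for p in points]             # lightness L* from the picture
    ys = [math.log(p[1]) for p in points]   # ln of reference-meter luminance
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    B = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    A = math.exp((sy - B * sx) / n)
    return A, B
```

With more than two points the regression averages out measurement noise, which matches the paper's advice to use extra calibration points for better accuracy.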
The constants are saved in the software configuration data and need to be set just once, although we recommend that the user repeat the calibration procedure after a certain time interval to compensate for possible changes in the camera CCD sensor.

Figure 5: Part of the software for calibration of the measurement system.

After both constants are set, the software, and so also the device, is ready for use.

6. Software program

From the user's point of view, the software should be simple and intuitive to use. We kept in mind that the users will probably be neither computer nor lighting experts. To accomplish this, a graphical user interface has been developed which allows users to choose functions by clicking. As mentioned before, the software itself has two functions: measurement and calibration. The use of the software part for calibration was already described in the previous chapter.

Figure 5: Calculated trap energy levels for PD responses to simultaneous voltage and red, green, blue and red-green-blue light pulses presented in Fig. 1, respectively. The time intervals correspond to those in which the changes in the response shapes occur and are exponential in nature.

Analysing the measured PD responses, it is observed that at small voltage pulse amplitudes, less than 0.5 V, in the initial time interval, when the voltage and light pulses suddenly go on, the influence of the voltage (electric field) prevails over that of the illumination. Dangling-bond and tail states at energies between 0.4 and 0.45 eV for R, G, B and RGB light illumination capture the photogenerated free carriers in a very short initial time interval. The space charge region reduces and, with the decreasing electric field, the current reduces. In the second time interval, an exponential rise of the photocurrent is present for low voltage pulse amplitudes and for all light illuminations used in our experiment.
The capture of electrons through the dangling bonds at energies below 0.5 eV at the p+-i interface follows the electric-field reduction, and consequently the photocurrent reaches a local maximum, or a photovoltage corresponding to a negative potential, NP. In the following time interval, present only at pulse durations over 0.5 ms, the photocurrent and the space charge reduce as recombination takes place through the deeper dangling bonds. For small pulse durations, these photocurrent decreases are not evident. With increased pulse duration, a photocurrent increase in the form of an overshoot is present. In the very short next time interval, after the voltage and light pulses are turned off, the current suddenly increases as a consequence of the electric-field increase; thermal electron emission from shallow states at

V. Gradišnik; Informacije Midem, Vol. 42, No. 1 (2012), 23 - 28

about 0.4 eV occurs, and the recombination of the remaining photogenerated free carriers is negligible. After that follows a long current fall, an overshoot, as a consequence of the capture of free carriers, electrons and holes, via the deeper states at energies around 0.5 eV. In the last time interval, a long current tail arises from the detrapping of carriers from the deep levels between 0.5 and 0.7 eV, before the steady state. Two energy levels are activated for blue and red light illumination at the end of the tail. The calculated activation energies of dangling bonds from the PD responses (Fig. 2) to simultaneous blue light and voltage pulses with a duration (Tp) of 0.5 ms, for voltage pulse amplitudes from 0.1 to 0.5 V, are shown in Fig. 6. Increasing the voltage pulse amplitude results in the activation of deeper energy levels. At 0.4 V, two energy levels are involved in the response.
Figure 6: The calculated trap energy levels of PD responses to simultaneous blue light and voltage pulses of 0.5 ms duration (Tp) for voltage pulse amplitudes (dV) from 0.1 to 0.5 V.

4. Conclusion

The characteristic shapes of the a-Si:H p-i-n photodiode response to simultaneous voltage and light pulses at low bias voltages are ascribed to the activation of the dangling-bond states energy levels. The parameters (amplitude, duration and waveform), threshold voltage and latency of the PD response to simultaneous voltage and light pulses are analysed in dependence on the voltage pulse amplitude, or applied electric field, and on the duration of the excitation pulses. Further investigation is necessary in order to obtain the parameter values for a desirable characteristic response shape and to develop new colour recognition sensors and methods for semiconductor material characterization.

References

1 W. Fuhs, Recombination and transport through localized states in hydrogenated amorphous and microcrystalline silicon, J. Non-Cryst. Solids, no. 354, 2008, pp. 2067-2078. 2 S. R. Dhariwal, S. Rajvanshi, Theory of amorphous silicon solar cell (a): numerical analysis, Sol. Energy Mater. Sol. Cells, no. 79, 2003, pp. 199-213. 3 S. R. Dhariwal, S. Rajvanshi, Theory of amorphous silicon solar cell (b): a five layer analytical model, Sol. Energy Mater. Sol. Cells, no. 79, 2003, pp. 215-233. 4 S. R. Dhariwal, M. Smirty, On the sensitivity of open-circuit voltage and fill factor on dangling bond density and Fermi level position in amorphous silicon p-i-n solar cell, Sol. Energy Mater. Sol. Cells, no. 90, 2006, pp. 1254-1272. 5 E. A. Schiff, H. T. Grahn, R. I. Devlen, J. Tauc, S. Guha, "Picosecond Photocarrier Transport in Hydrogenated Amorphous-Silicon p-i-n Diodes," IEEE Trans. Electron Devices, vol. ED-36, pp. 2781-2784, Dec. 1989. 6 R. V. R. Murthy, V.
Dutta, Underlying reverse current mechanisms in a-Si:H p+-i-n+ solar cell and compact SPICE modeling, J. Non-Cryst. Solids, no. 354, 2008, pp. 3780-3784. 7 S. A. Mahmood, M. Z. Kabir, Modeling of transient and steady-state dark current in amorphous silicon p-i-n photodiodes, Current Appl. Phys., no. 9, 2009, pp. 1393-1396. 8 C. Main, Interpretation of photocurrent transients in amorphous semiconductors, J. Non-Cryst. Solids, 299-302, 2002, pp. 525-530. 9 C. Main, S. Reynolds, I. Zrinščak, A. Merazga, Comparison of AC and DC constant photocurrent methods for determination of defect densities, J. Non-Cryst. Solids, 338-340, 2004, pp. 228-231. 10 A. G. Kazanskii, K. Yu. Khabarova, E. I. Terukov, Modulated photoconductivity method for investigation of band gap states distribution in silicon-based thin films, J. Non-Cryst. Solids, no. 352, 2006, pp. 1176-1179. 11 V. Gradišnik, Observed similar behaviour of a-Si:H p-i-n photodiode and retina response, in Proc. of the 7th IASTED International Conf. on Biomedical Engineering, BioMed 2010, Innsbruck, Austria: IASTED, 2010, pp. 176-180. 12 S. G. Rosolen, F. Rifaudiere, J.-F. Le Gargasson, M. G. Brigell, Recommendations for a toxicological screening ERG procedure in laboratory animals, Doc. Ophthalmol., 110(1), Oct. 2005, pp. 57-66. 13 V. Gradišnik, M. Pavlovic, B. Pivac, and I. Zulim, Study of the color detection of a-Si:H by transient response in the visible range, IEEE Trans. Electron Devices, 49(4), 2002, pp. 550-556. 14 V. Gradišnik, M. Pavlovic, B. Pivac, and I. Zulim, Transient response times of a-Si:H p-i-n color detector, IEEE Trans. Electron Devices, 53(10), 2006, pp. 2485-2491. 15 T. M. O'Hearn, S. R. Sadda, J. D. Weiland, M. Maia, E. Margalit, and M. S. Humayun, Electrical stimulation in normal and retinal degeneration (rd1) isolated mouse retina, Vision Research, 46(19), 2006, pp. 3198-3204. 16 A. Merazga, H. Belgacem, C. Main, S.
Arrived: 14. 03. 2011
Accepted: 26. 1. 2012

Original paper

Informacije | Journal of Microelectronics, Electronic Components and Materials Vol. 42, No. 1 (2012), 29 - 35

An Adaptive-Parity Error-Resilient LZ77 Compression Algorithm

Tomaž Korošec* and Sašo Tomažič

University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia

Abstract: The paper proposes an improved error-resilient Lempel-Ziv'77 (LZ'77) algorithm employing an adaptive amount of parity bits for error protection. It is a modified version of the recently proposed error-resilient algorithm LZRS'77, which uses a constant amount of parity over all of the encoded blocks of data. The constant amount of parity is bounded by the lowest-redundancy part of the encoded string, whereas adaptive parity more efficiently utilizes the available redundancy of the encoded string and can on average be much higher. The proposed algorithm thus provides better error protection of encoded data. The performance of both algorithms was measured. The comparison showed a noticeable improvement with the use of adaptive parity: the proposed algorithm is capable of correcting up to a few times as many errors as the original algorithm, while the compression performance remains practically unchanged.

Key words: Lempel-Ziv'77 coding, joint source-channel coding, multiple matches, error resilience, adaptive parity, Reed-Solomon coding

Na napake odporen zgoščevalni algoritem LZ'77 s prilagodljivo pariteto

Izvleček: V prispevku je predlagan izboljšan na napake odporen Lempel-Ziv'77 (LZ'77) algoritem, ki za zaščito proti napakam uporablja prilagodljivo število paritetnih bitov prek posameznih kodiranih podatkovnih blokov.
Gre za modifikacijo na napake odpornega algoritma LZRS'77, ki za zaščito posameznih podatkovnih blokov uporablja konstantno število paritetnih bitov prek celotnega kodiranega podatkovnega niza. Za zapis paritetnih bitov je izkoriščena redundanca zakodiranih podatkov. Maksimalno konstantno količino paritetnih bitov tako narekuje del niza z najnižjo redundanco, medtem ko prilagodljiva pariteta bolje izkorišča redundanco, ki je na voljo v posameznih delih kodiranega niza in je lahko tako v povprečju bistveno večja. Predlagan algoritem posledično omogoča boljšo zaščito proti napakam. Meritve zmogljivosti obeh algoritmov so pokazale znatno povečanje odpornosti na napake pri uporabi novo predlaganega algoritma. Slednji je sposoben popraviti do nekaj krat več napak kot obstoječi algoritem, pri čemer kvaliteta zgoščevanja ostane praktično nespremenjena.

Ključne besede: Lempel-Ziv'77 kodiranje, združeno izvorno-kanalsko kodiranje, večkratno ujemanje niza, odpornost na napake, prilagodljiva pariteta, Reed-Solomon kodiranje

* Corresponding Author's e-mail: tomaz.korosec@fe.uni-lj.si

1. Introduction

Lossless data compression algorithms, such as the Lempel-Ziv'77 (LZ'77) [1] algorithm and its variations, are nowadays quite common in different applications and compression schemes (GZIP, GIF, etc.). However, one of their major disadvantages is their lack of resistance to errors. In practice, even a single error can propagate and cause a large number of errors in the decoding process. One possible solution to this problem is to use a channel coding scheme after the source coding, which adds parity bits allowing error correction and detection in the decoding process. However, such a solution is undesirable in bandwidth- or storage-limited systems, where the number of bits required to carry some information should be as small as possible. A separate use of source and channel coding is not optimal, since it does not utilize the inherent redundancy left by the source coding.
This redundancy could be exploited for protection against errors. Therefore, joint source-channel coding seems to be a better solution. Several joint source-channel coding algorithms have been proposed in the past, e.g., [2], [3], and [4]. The redundancy left in LZ'77 and LZW encoded data and the possibility of using it to embed additional information have been considered and investigated in [5], [6], [7], and [8].

The LZRS'77 algorithm, proposed in [8], exploits the redundancy left by the LZ'77 encoder to embed parity bits of the Reed-Solomon (RS) code. The embedded parity bits allow detection and correction of errors with practically no degradation of the compression performance. However, due to the limited redundancy left in the encoded data, the ability to detect and correct errors is limited to a finite number of successfully corrected errors. To successfully correct e error bits, 2e parity bits should be embedded. In the above-mentioned scheme, the number of parity bits embedded in each encoded block is constant and equal for all blocks; thus e is limited by the block with the lowest redundancy.

In this paper, we propose an improvement to LZRS'77. Instead of keeping e constant, we change it adaptively in accordance with the redundancy present in the encoded blocks. In this way, we increase the average number of parity bits per block and thus also increase the total number of errors that can be successfully corrected. We named this new algorithm LZRSa'77, where 'a' stands for adaptive.

The paper is organized as follows. In Section 2, we briefly describe the LZRS'77 algorithm, which is the basis of the proposed adaptive-parity algorithm LZRSa'77 described in Section 3. Experimental results comparing both algorithms are presented in Section 4. Some concluding remarks are given in Section 5.

2.
Protection Against Errors Exploiting LZ'77 Redundancy

The basic principle of the LZ'77 algorithm is to replace sequences of symbols that occur repeatedly in the encoding string X = (X1, X2, X3, ...) with pointers Y = (Y1, Y2, Y3, ...) to previous occurrences of the same sequence. The algorithm looks in the sequence of past symbols (X1, X2, ..., Xi-1) to find the longest match of the prefix (Xi, Xi+1, ..., Xi+l-1) of the currently encoding string S = (Xi, Xi+1, ..., XN). The pointer is written as a triple Yk = (pk, lk, sk), where pk is the position (i.e., starting index) of the longest match relative to the current index i, lk is the length of the longest match, and sk = Xi+l is the first non-matching symbol following the matching sequence. The symbol sk is needed to proceed in cases when there is no match for the current symbol. An example of encoding the sequence at position i that matches the sequence at position j is shown in Fig. 1. To avoid overly large values of the position and length parameters, the LZ'77 algorithm employs a principle called the sliding window: the algorithm looks for the longest matches only in data within a fixed-size window.

Figure 1: An example of a pointer record for a repeated part of a string in the LZ'77 algorithm. The sequence of length l = 6 at position j is repeated at position i, i.e., the current position.

Often, there is more than one longest match for a given sequence or phrase, which means more than one possible pointer. Usually, the algorithm chooses the latest pointer, i.e., the one with the smallest position value. However, the selection of another pointer would not affect the decompression process. Actually, the multiplicity of matches represents some kind of redundancy and could be exploited for embedding additional information bits almost without degradation of the compression rate.
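The match search and the multiplicity it exposes can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names are ours.

```python
from math import floor, log2

def longest_matches(past: bytes, cur: bytes):
    """Find the longest prefix of `cur` that occurs in `past`.
    Returns (length, positions): `positions` lists every start index
    of such a match, so len(positions) is the multiplicity M."""
    best_len, positions = 0, []
    for j in range(len(past)):
        l = 0
        while l < len(cur) and j + l < len(past) and past[j + l] == cur[l]:
            l += 1
        if l > best_len:
            best_len, positions = l, [j]
        elif l == best_len and l > 0:
            positions.append(j)
    return best_len, positions

def embeddable_bits(m: int) -> int:
    """Extra bits d = floor(log2(M)) encodable by the pointer choice."""
    return floor(log2(m)) if m > 1 else 0

# The sequence "ab" occurs twice in the past symbols, so M = 2 and the
# choice between the two equally valid pointers can carry one extra bit.
length, pos = longest_matches(b"abab", b"abc")
```

Since any of the listed positions decompresses identically, the encoder is free to pick the one whose index encodes the next d hidden bits, which is exactly the redundancy exploited in the following section.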
A small decrease in compression performance could be noticed only in the case when the pointers are additionally Huffman encoded, as for example in the GZIP algorithm specified in [9]. With an appropriate selection of one among M possible pointers, we can encode up to d = ⌊log2 M⌋ additional bits. These additional bits can be encoded by proper selection of pointers with multiplicity M > 1, as shown in Fig. 2. The algorithm LZS'77 that exploits the above-described principle in the LZ'77 scheme was proposed and fully described in [5], [6], [7], and [8]. Since a different pointer selection does not affect the decoding process, the proposed algorithm is completely backward compatible with the LZ'77 decoder.

Figure 2: An example of the longest match with multiplicity M = 4. With a choice of one of four possible pointers, we can encode two additional bits.

The additional bits can be utilized to embed parity bits for error detection and correction. In [6] and [8], a new algorithm called LZRS'77 was proposed. It uses the additional bits in LZ'77 to embed parity bits of the RS code originally proposed in [10]. In LZRS'77, an input string X is first encoded using the standard LZ'77 algorithm. The encoded data Y are then split into blocks of 255-2e bytes, which are processed in reverse order starting with the last block. When processing block Bn, the 2e parity bytes of block Bn+1 are computed first using the RS(255, 255-2e) code, and those bytes are then embedded in the pointers of block Bn using the previously mentioned LZS'77 scheme. Parity bits of the first block can be stored at the beginning of the file if we also wish to protect the first block. Otherwise, to assure backward compatibility with the LZ'77 decoder, protection of the first block should be omitted. In the decoding process, the procedure is performed in the opposite order.
The first block is corrected (only in the case when the first block is protected as well) using the parity bits appended at the beginning of the file. Then it is decompressed using the LZS'77 decompression algorithm, which reconstructs the first part of the original string and also recovers the parity bits of the second block. The algorithm then corrects and decompresses the second block and continues in this manner until the end of the file. The desired maximum number of errors e to be effectively corrected in each block during the decoding process is given as an input parameter of the algorithm. This number is upward-limited by the ability to embed bits in the pointer selection, i.e., by the redundancy of the encoded data. In the LZRS'77 algorithm, e is constant over all blocks; thus its value is limited by the block with the lowest redundancy. Hence e can be any value between zero and the maximum allowable one.

3. The LZRSa'77 Algorithm with Adaptive Parity

A constant e over all encoding blocks, as in LZRS'77, is not optimal, since the redundancy in different parts of the data string can differ significantly. If there is just one part of the string that has very low redundancy, it will dictate the maximum value of e for the whole string. Such low-redundancy blocks are usually at the beginning of the encoded data, since there are not yet many previous matches that would contribute to the redundancy. Better utilization of the overall redundancy is possible with an adaptive e, changing from one block to another according to the availability of redundancy bits in each block. In that case, low-redundancy parts of the string affect the error protection performance of these parts only, whereas the rest of the string can be better protected according to its redundancy availability. As a result, the value of e is still upward-limited by the overall redundancy, but its average value can be higher, resulting in better resistance to errors.
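The argument above can be made concrete with a toy calculation. The block redundancies below are invented for illustration; the only assumption taken from the scheme is that 16 embeddable bits buy one unit of e (2e parity bytes, i.e., 16e bits).

```python
def parity_capability(block_bits, bits_per_e=16):
    """Compare the constant e of LZRS'77 (bounded by the worst block)
    with the adaptive e of LZRSa'77 (each block exploits its own
    redundancy). `block_bits` lists the embeddable bits per block."""
    caps = [b // bits_per_e for b in block_bits]   # max e of each block
    e_constant = min(caps)             # one e must fit every block
    e_average = sum(caps) / len(caps)  # average e the adaptive scheme reaches
    return e_constant, e_average

# A single low-redundancy block (16 bits) caps the constant e at 1,
# while the adaptive scheme still averages 5.5 over the same blocks.
```

This mirrors the measurements reported later in Table 1, where the constant e stays between 1 and 3 but the average adaptive e reaches 4.5 to 8.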
On the basis of the above-described assumptions, we propose an improved version of the LZRS'77 algorithm, named LZRSa'77, where 'a' refers to the adaptive e. The input string X is first encoded using the standard LZ'77 algorithm, during which the multiplicity Mk of each pointer is also recorded. The encoded data are then divided into blocks of different lengths, according to the locally available redundancy. First, 255-2e1 bytes are put in the first block B1, where e1 is given as an input parameter of the algorithm. Then, the number of parity bytes 2e2 of the second block B2 is calculated, where e2 is given as:

e2 = ⌊(Σk ⌊log2 Mk⌋) / 16⌋     (1)

where the sum runs over the pointers of the first block. If, for example, the number of additional bits that can be embedded in the pointer multiplicities of the first block (Σk ⌊log2 Mk⌋) is 43, then the number of parity bytes of the second block is 2e2 = 2⌊43/16⌋ = 4. The number 16 provides the proper bits-to-bytes recalculation, since the algorithm operates with an integer number of bytes, as the RS coding does. According to the obtained value, the second block length is 255-2e2 = 251 bytes. The process is then repeated until the end of the input data is reached, yielding b blocks of different lengths 255-2en. After dividing all the data into blocks of different lengths, the process of RS coding and embedding of parity bits is performed. Embedding of parity bits is realized by adjusting the pointer values. The blocks are processed in reverse order, from the very last to the first, as in the LZRS'77 algorithm. The number of parity bytes 2en for RS coding varies for each block. The sequence of operations of the encoder is illustrated in Fig. 3.

Figure 3: The sequence of operations on the compressed data as processed by the LZRSa'77 encoder. In the first phase, the data are divided into blocks Bn of 255-2en bytes, with each en calculated from the redundancy of the preceding block; in the second phase, the parity bytes RSn are computed and embedded by adjusting the pointers of the preceding block.
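The block-splitting pass of Eq. (1) can be sketched as follows, assuming the per-pointer multiplicities of each block are already known (a real encoder counts them while compressing); the function name and data layout are ours.

```python
from math import floor, log2

def split_into_blocks(total_bytes, block_multiplicities, e1):
    """Divide `total_bytes` of LZ'77 output into blocks of 255 - 2*e_n
    bytes. e_1 is given; each further e_n follows Eq. (1) from the
    pointer multiplicities M_k of the preceding block:
        e_n = floor(sum_k floor(log2(M_k)) / 16).
    Returns a list of (block_length, e_n) pairs."""
    blocks, e, consumed, n = [], e1, 0, 0
    while consumed < total_bytes:
        size = min(255 - 2 * e, total_bytes - consumed)
        blocks.append((size, e))
        consumed += size
        # Eq. (1): redundancy of this block pays for the next block's parity
        bits = sum(floor(log2(m)) for m in block_multiplicities[n] if m > 1)
        e = bits // 16
        n += 1
    return blocks

# 43 embeddable bits in block 1 give 2*e2 = 2*floor(43/16) = 4 parity
# bytes, so block 2 is 255 - 4 = 251 bytes long, as in the text.
```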
Here RSn are the parity bytes of the block Bn. As mentioned above, the desired error correction capability of the first block, e1, is given as an input parameter of the algorithm, whereas en for all the other blocks is obtained from the redundancy of the preceding blocks and is as high as the redundancy permits. As in the LZRS'77 algorithm, the parity bits of the first block are appended at the beginning of the encoded data, or omitted if we want to preserve backward compatibility with the standard LZ'77 decoder. In the latter case, e1 is equal to zero.

The decoding process is similar to that used in the LZRS'77 decoding algorithm. Each block Bn is first error-corrected using the 2en parity bytes known from the previous block Bn-1, then decoded using the LZS'77 decoder to decompress part of the original string and obtain the 2en+1 parity bytes of the next block. The amount of parity bits is used to determine the length of the next block Bn+1, whereas the parity bits themselves are used to correct the block. The process continues to the last block. A high-level description of the encoding and decoding algorithms is shown in Fig. 4.

LZRSa'77_ENCODER(X, e1)
    let b <- 1, j <- 0
    [P, p] <- LZ'77_COMPRESS(X)
    while j < |P| do
        append the (j+1)...(j + 255 - 2eb) bytes of P to Bb
        let j <- j + 255 - 2eb
        let b <- b + 1
        evaluate eb by counting the possible pointers in p for Bb
    for n <- b, ..., 2 do
        let RSn <- RS_ENCODER(Bn, en)
        embed bytes RSn in the block Bn-1 using LZS'77
    let RS1 <- RS_ENCODER(B1, e1)
    let B <- (B1, B2, ..., Bb)
    return e1, RS1, B

LZRSa'77_DECODER(e1, RS1, B)
    let D <- empty string
    let B1 <- first 255 - 2e1 bytes of B
    let j <- 255 - 2e1 + 1
    let n <- 2
    if RS_DECODER(B1 + RS1, e1) = errors then correct B1
    append LZ'77_DECOMPRESS(B1) to D
    while j < |B| do
        recover RSn from the pointers in Bn-1 using LZS'77
        let en <- half the number of RSn bytes
        let Bn <- next 255 - 2en bytes of B from index j on
        let j <- j + 255 - 2en
        let n <- n + 1
        if RS_DECODER(Bn + RSn, en) = errors then correct Bn
        append LZ'77_DECOMPRESS(Bn) to D
    return D

Figure 4: The error-resilient LZ'77 algorithm with adaptive parity 2en. Here X is the input string, e1 is the maximum number of errors that can be corrected in the first block, P is the LZ'77 encoded string of pointers, p is a vector of possible positions for each pointer, Bn are blocks of encoded data of variable length 255-2en, RSn are the RS parity bytes of the block Bn, and D is the recovered string.

4. Experimental Results

To evaluate the performance of the proposed algorithm, we performed several tests with different files from the Calgary corpus [11], a commonly used collection of text and binary data files for comparing data compression algorithms. We implemented the proposed algorithm in the Matlab 6.5.1 Release 13 programming tool. For the basic LZ'77 encoding, an LZ'77 algorithm with a sliding-window length of 32 kilobytes, also implemented in Matlab, was used. The maximum pointer length was chosen to be 255 bytes.

In the experiment, we first compared the maximal value of the constant e (emax) and the average value of the adaptive e (E(en)) on different test strings. For this purpose, we encoded different files from the Calgary corpus using the LZRS'77 and LZRSa'77 algorithms. For the observation of the maximal constant e, we performed tests only on strings of 10,000 bytes length, since the lowest-redundancy parts proved to be in the first blocks of the encoded strings, because there are not yet many past symbols there. Thus, different string lengths practically do not affect the maximal e, as long as the beginning of the string is the same. For this reason, we rather performed tests on different substrings of the same length within each file, starting at different positions.
The average maximal e (E(emax)), averaged over all tested substrings of each file, is given in the second column of Table 1, whereas the maximal e of the first substring of each file (and thus the one corresponding to the whole file) is given in the third column. Even if, in an unexpected case, the lowest-redundancy part of the whole file is not within the first 10,000 symbols, the obtained results are still relevant, since we made additional experiments on error-correction performance on the first 3,000 and 30,000 symbols with the same constant parity used. When observing the average adaptive e (E(en)), we performed measurements on two different lengths of source strings, i.e., 3,000 bytes and 30,000 bytes, and we again performed the tests on different substrings within each file for both lengths. The value of e1 was in all cases chosen to be equal to 1. The results are shown in the fourth and fifth columns of Table 1.

The experimental results showed that the maximal constant e that can be embedded in the redundancy of the encoded string is in the best case equal to 3 (the geo file), whereas the average adaptive e over a large number of blocks ranges from 4.5 up to 8. These results already justify the use of an adaptive e. To justify it further, we performed another experiment: we tested the ability of each algorithm to correct random errors.
Table 1: Values of the maximal constant e and the average adaptive e for different length (L) substrings of the Calgary corpus files

File     E(emax), L=10,000   emax, whole file   E[E(en)], L=3,000   E[E(en)], L=30,000
bib      2.00                2                  4.79                5.29
book1    2.38                2                  4.75                4.94
book2    2.18                1                  4.64                5.04
geo      2.40                3                  5.48                8.32
news     1.92                1                  5.05                5.93
obj1     2.50                2                  5.05                /
obj2     1.46                1                  4.68                6.77
paper1   2.00                1                  4.64                5.14
paper2   1.88                1                  4.65                4.80
paper3   1.75                1                  4.62                4.87
paper4   1.00                1                  4.70                /
paper5   1.00                1                  4.75                /
paper6   1.67                1                  4.81                5.14
progc    2.00                2                  4.65                5.70
progl    2.00                2                  4.48                6.21
progp    2.25                2                  4.96                5.69
trans    1.22                2                  4.82                6.26

When testing the error correction performance, we performed measurements on three different files from the Calgary corpus, i.e., news, progp, and geo, which allow maximal values of the constant e equal to 1, 2, and 3, respectively, as shown in Table 1. Measurements were made on the first 3,000 and 30,000 bytes of each file, respectively. When using the LZRSa'77 algorithm, e1 could be an arbitrary value; however, we chose values that approximately correspond to E(en) for each of the tested files. Thus, we chose e1 = 5 for the news and progp test strings, and e1 = 8 for the geo test string. We tested the resilience to errors by introducing different numbers of errors randomly distributed over the whole encoded string. For error generation, we used the built-in Matlab function randerr, which generates patterns of geometrically distributed bit errors. Results for the three test strings, each in two different length variations, and for both algorithms (LZRS'77 and LZRSa'77) are shown in the graphs in Fig. 5 to Fig. 7. Each combination of string type, string length and algorithm was tested with different numbers of injected errors. For each number of errors, 100 trials with different randomly distributed errors were performed and the number of successful data recoveries recorded. In the graphs in Fig. 5 to Fig.
7, the measured results are plotted with discrete points, whereas the continuous curves represent a polynomial-fitted approximation. The results show a considerable improvement in error correction capability when using the LZRSa'77 algorithm instead of LZRS'77, which is a direct consequence of the larger amount of parity used in the former. The performance improvement decreases as the constant e increases from 1 to 3, but is still noticeable in the last case, which is practically the best that can be achieved with the LZRS'77 algorithm. As can be seen from the results, the performance improvement also somewhat increases with increasing string length. This is probably due to the increase of E(en) with string length, as evident from Table 1, whereas the constant e remains the same. The performance of the LZRSa'77 algorithm could be slightly further improved by using a higher value of e1, which would, however, improve only the protection of the first block.

Figure 5: The number of successful recoveries among 100 trials for two different length (L) substrings of the file news, for an increasing number of bit errors geometrically distributed over the encoded strings, represented as Bit Error Rate (BER), and different algorithms used (LZRS'77 and LZRSa'77). a) L = 3,000 bytes; b) L = 30,000 bytes.

Figure 6: The number of successful recoveries among 100 trials for two different length (L) substrings of the file progp, for an increasing number of bit errors geometrically distributed over the encoded strings, represented as BER, and different algorithms used (LZRS'77 and LZRSa'77). a) L = 3,000 bytes; b) L = 30,000 bytes.
Figure 7: The number of successful recoveries among 100 trials for two different length (L) substrings of the file geo, for an increasing number of bit errors geometrically distributed over the encoded strings, represented as BER, and different algorithms used (LZRS'77 and LZRSa'77). a) L = 3,000 bytes; b) L = 30,000 bytes.

5. Conclusion

An improved version of the error-resilient LZ'77 data compression scheme was presented. It allows the use of an adaptive number of parity bits over different blocks of encoded data according to the redundancy available in the blocks. Compared to the recently proposed LZRS'77 scheme, which allows only a constant number of parity bits along the whole string, the new solution better utilizes the available redundancy in the string, resulting in a larger number of errors that can be effectively corrected. This improvement causes practically no degradation of the compression rate compared to the LZRS'77 algorithm. Even though the parity of each block has to be calculated each time from the redundancy of the previous block, the time complexity of the new algorithm remains of the order of that of the LZRS'77 algorithm. However, some legacy of the LZRS'77 algorithm still remains in the new algorithm and represents two unsolved problems. The first is the question of an online encoding process, which cannot be achieved due to the reverse order of block processing. The second is the protection of the first block while maintaining backward compatibility.

References

[1] J. Ziv and A. Lempel, "A universal algorithm for sequential data compression," IEEE Trans. Inf. Theory, vol. IT-23, no. 3, May 1977, pp. 337-343.
[2] M. E. Hellman, "On using natural redundancy for error detection," IEEE Trans. on Commun., vol. 22, October 1974, pp. 1690-1693.
[3] K. Sayood and J. C. Borkenhagen, "Use of residual redundancy in the design of joint source/channel coders," IEEE Trans. on Commun., vol. 39, June 1991, pp. 838-846.
[4] K.
Sayood, H. Otu, and N. Demir, "Joint source/channel coding for variable length codes," IEEE Trans. Commun., vol. 48, no. 5, May 2000, pp. 787-794.
[5] M. J. Atallah and S. Lonardi, "Authentication of LZ-77 compressed data," SAC 2003, Melbourne, FL, 2003, pp. 282-287.
[6] S. Lonardi and W. Szpankowski, "Joint source-channel LZ'77 coding," in Proc. IEEE Data Compression Conf., Snowbird, UT, 2003, pp. 273-282.
[7] Y. Wu, S. Lonardi, and W. Szpankowski, "Error-resilient LZW data compression," in IEEE Data Compression Conf., Snowbird, UT, 2006, pp. 193-202.
[8] S. Lonardi, W. Szpankowski, and M. D. Ward, "Error resilient LZ'77 data compression: algorithms, analysis, and experiments," IEEE Trans. Inf. Theory, vol. 53, no. 5, May 2007, pp. 1799-1813.
[9] P. Deutsch, RFC 1951: DEFLATE compressed data format specification version 1.3, Aladdin Enterprises, May 1996. Available: http://www.ietf.org/rfc/rfc1951.txt
[10] I. S. Reed and G. Solomon, "Polynomial codes over certain finite fields," J. SIAM, vol. 8, 1960, pp. 300-304.
[11] The Calgary corpus. Available: http://corpus.canterbury.ac.nz/descriptions/#calgary

Arrived: 25. 02. 2011
Accepted: 26. 1. 2012

Original paper

Informacije | Journal of Microelectronics, Electronic Components and Materials Vol. 42, No. 1 (2012), 36 - 42

Influence of parameters of the flanged open-ended coaxial probe measurement setup on permittivity measurement

Jure Koselj*, Vladimir B. Bregar

Nanotesla Institute Ljubljana, Stegne 29, 1000 Ljubljana, Slovenia

Abstract: The flanged open-ended coaxial probe is studied using a full-wave model. The influence of parameters such as the air gap, sample thickness, measurement set-up geometry, probe impedance, flange size and sample size is investigated and presented. The study is limited to dielectric materials with different characteristics (low loss, high loss).
The results showed that the air gap is the parameter that most affects the permittivity measurement accuracy, but several other parameters are also important and present considerable constraints regarding the application of the open-ended coaxial probe. We also identified the optimal measurement geometry in order to minimize the effect of these parameters.

Key words: full-wave model, dielectric materials, open-ended coaxial probe

Vpliv parametrov merilnega sistema z odprto koaksialno sondo na meritve dielektričnosti

Povzetek: Predstavljena je študija odprte koaksialne sonde s prirobnico z uporabo modela polnovalne analize. Raziskali in predstavili smo vpliv parametrov kot so reža, debelina vzorca, merilna geometrija, impedanca sonde, velikost prirobnice in velikost vzorca. Študija je omejena na dielektrične materiale z različnimi karakteristikami (nizko izgubne, visoko izgubne). Rezultati so pokazali, da ima največji vpliv na merilno točnost meritve dielektričnosti zračna reža, poleg tega pa so pomembni tudi ostali parametri, ki predstavljajo precejšne omejitve aplikacij z odprto koaksialno sondo. Prav tako smo poiskali optimalno merilno geometrijo, da bi zmanjšali efekt parametrov merilnega sistema z odprto koaksialno sondo.

Ključne besede: polnovalni model, dielektrični materiali, odprta koaksialna sonda

* Corresponding Author's e-mail: jure.koselj@nanotesla.si

1. Introduction

Each material has distinct dielectric properties, and knowing these properties enables engineers to use appropriate materials in specific applications. Measuring and understanding how the dielectric properties of a material vary at microwave frequencies is important in many fields such as wireless communication, radar detection and biomedical applications. Intensive studies have been devoted to the development of complex permittivity measurement.
Many factors, such as the frequency range, required measurement accuracy, sample size, surface topology, state of the material (liquid, solid, powder, thin film), and the destructive or non-destructive nature of the measurement, have to be considered when choosing an appropriate method for measuring permittivity. Commonly used methods include transmission/reflection and resonance methods. Transmission/reflection methods have an advantage over resonance methods because they cover a wide frequency range, are simple to use and can measure lossy materials, but they are less accurate than resonance methods /1-4/. On the other hand, resonance methods are limited to discrete frequencies, defined by the resonator dimensions, and to materials with low losses. For the transmission/reflection method there are several different approaches, using a coaxial waveguide /5, 6/, planar waveguide /7, 8/, rectangular waveguide /9, 10/ or the free-space method /11-13/. The latter two have frequency limitations due to the size of the tested sample, and planar waveguide methods are limited to solid and thick-film materials. Hence, the most widely used method among the transmission/reflection methods is the open-ended coaxial line, due to its simplicity and accuracy in broadband measurements.

An alternative characterization method based on reflection is the application of an open-ended coaxial probe /14/. The simple set-up and sample geometry present a significant advantage over other methods; however, for the determination of the sample material parameters a good model of wave propagation is needed. There are several models for the flanged open-ended coaxial probe, such as the capacitance model /5, 15/, radiation model /16/, virtual line model /17/, rational function model /18, 19/ and full-wave model /14, 20, 21, 24/, with increasing accuracy and also complexity.

Figure 1: Flanged open-ended coaxial probe measurement setup with a layered dielectric sample with termination.
The aim of this paper is to analyze the open-ended coaxial probe system in detail, determine which parameters affect the measured reflection coefficient for a given measurement geometry, and evaluate the effect of each individual parameter. The analysis is made with the full-wave model and thus presents the most accurate analytical representation of the open-ended coaxial probe. We focused the study on non-magnetic materials, as they are more commonly measured and require a simpler measurement set-up compared to magnetic materials. In addition, we analyzed the system for both liquid and solid samples and for materials with either low or high dielectric losses. Our results show the effect of different parameters on the permittivity (reflection coefficient) and therefore help researchers select an appropriate measurement geometry for measuring solid materials (low and high loss).

2. Methods

We focused on non-magnetic materials, where the tested sample has permittivity εs* = ε0(εs' - jεs'') and permeability μs* = μ0, where ε0 is the permittivity of vacuum and μ0 is the permeability of vacuum. The geometry of the problem, as shown in Fig. 1, consists of an internal and an external region. The internal region represents the interior of an open-ended coaxial line, while the external region is a layered medium. The coaxial line has inner diameter 2a, outer diameter 2b and is filled with a low-loss material of permittivity εc* = ε0εc' and permeability μc* = μ0. To determine the influence of the parameters on the measured permittivity, we calculate the reflection coefficient with the full-wave model /14/ and then use this value in an optimization algorithm (Matlab's fsolve trust-region-dogleg algorithm) as a substitute for a measured reflection coefficient. To eliminate measurement uncertainties and have a well defined geometry, we decided to use the model instead of actual measurements.
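The calculate-then-invert loop described above can be sketched as follows. The forward model here is a deliberately simplified stand-in (the real full-wave model sums TM_0n modes and is far more involved), and the Newton iteration replaces Matlab's fsolve; all names and values are illustrative only.

```python
def forward_model(eps: complex) -> complex:
    """Toy stand-in for the full-wave model: a reflection coefficient
    that varies with the permittivity loading the probe aperture.
    (Only for illustrating the inversion loop, not a probe model.)"""
    return (1 - eps) / (1 + eps)

def invert(gamma: complex, eps0: complex = 2 + 0j, tol: float = 1e-12) -> complex:
    """Recover eps from a 'measured' gamma by complex Newton iteration,
    mimicking the calculate-then-optimize procedure in the text."""
    eps = eps0
    for _ in range(100):
        f = forward_model(eps) - gamma
        if abs(f) < tol:
            break
        df = -2 / (1 + eps) ** 2   # analytic derivative of the toy model
        eps -= f / df
    return eps
```

Feeding the model's own output back through the inversion recovers the assumed permittivity, which is exactly how the authors obtain a noise-free reference before perturbing individual set-up parameters.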
Each parameter of interest was varied to get results (permittivity), and afterwards we compared the true value, which was used to calculate the true reflection coefficient, with the one obtained from optimization:

Δε = (εr − εi)/εr , (1)

Δd = dr − di , ΔL = Lr − Li , (2)

where εr is the true value of permittivity, εi is the value obtained with optimization, dr and di are the values of the true gap thickness and the gap thickness used in optimization, and Lr and Li are the values of the true sample thickness and the sample thickness used in the optimization algorithm. The true permittivity of each material was calculated for two probes with the following parameters: a = 1.51 mm, b = 4.90 mm, εc = 1.99 (realistic 50 Ω coaxial probe 1); a = 0.255 mm, b = 0.84 mm, εc = 2.04 (realistic 50 Ω coaxial probe 2); εt = 1, d = 50 µm, L = 500 µm, and 12 TM0n modes were used. In our study we used values for different materials such as Teflon (εr = 2−0.003j at 10 GHz), a mixture of titanium dioxide and wax (εr = 10−0.09j at 10 GHz) and a mixture of graphite and wax (εr = 40−25j at 10 GHz) to show the effect of low- and high-loss material on the relative error in permittivity. We also used different measurement set-up geometries to find out which would be optimal for dielectric materials. The full-wave model has three distinct measurement set-up geometries: semi-infinite (model 0), short-circuit (model 1) and dielectric-terminated (model 2) geometry. 3. Results and discussion Influence of parameters on permittivity The full-wave model used for the calculation of the reflection coefficient is, as mentioned earlier, exact, but in the case of a real measurement set-up it has some disadvantages due to the assumption that the flange and sample extend to infinity in the radial direction. These conditions are never satisfied in a real measurement; however, the effect of using a sample and flange of finite radial dimensions was investigated by De Langhe et al. /22/.
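Under one plausible reading of Eq. (1), the error plotted in the figures that follow is a per-component relative error in percent. A minimal helper, evaluated on one row of Table 2 (large probe, graphite-wax composite):

```python
def rel_perm_error(eps_true, eps_opt):
    """Per-component relative error in percent: one plausible reading of
    the error measure of Eq. (1), matching the percentages in Figs. 2-4."""
    d_real = 100 * abs(eps_true.real - eps_opt.real) / abs(eps_true.real)
    d_imag = 100 * abs(eps_true.imag - eps_opt.imag) / abs(eps_true.imag)
    return d_real, d_imag

# Table 2, large probe, graphite-wax composite: true 40-25j, obtained 25.31-9.85j
err_real, err_imag = rel_perm_error(40 - 25j, 25.31 - 9.85j)
```

For this row the helper gives roughly 37% and 61% error in the real and imaginary components, consistent with the air-gap sensitivity discussed for high-loss materials.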
It was found that if the ratio between the aperture size and the surface of the sample is greater than 2.5, the measured characteristics (amplitude and phase) are very close to those of the infinite sample. If the sample thickness decreases, larger differences are seen, and if the thickness increases, the differences get smaller. It was also found that if the flange radius is at least two times larger than the outer radius of the coaxial probe, only small differences are seen in amplitude and phase. Thus accurate measurements can be made with reasonable flange and sample dimensions despite the assumption of infinity. Influence of gap and sample thickness with different measurement set-up geometry One of the important uncertainties is the gap between the sample and the probe, as this gap is very difficult to measure. In reality the sample can also be concave or convex, and this effect was investigated by A.-K. A. Hassan et al. /23/. It was found that for a concave sample the reflection coefficient is strongly affected by both the flange diameter and the radius of the concave sample, whereas in the case of a convex sample the reflection coefficient is affected for small radii of the sample, but the flange diameter has a negligible effect on the reflection coefficient. It was concluded that an improved technique is required to achieve better accuracy in the measurement of concave samples, while measurements of convex samples are, in general, in good agreement with published data. In order to evaluate how variations of the air gap and sample thickness affect the complex permittivity, a number of calculations with different measurement set-up geometries were made. In Fig. 2 we compare how an error in the air gap influences the real and imaginary part of the permittivity of a high-loss sample (graphite-in-wax composite) at 10 GHz.
For the graphite composite, model 2 has the lowest dependence of permittivity on the air gap; nevertheless, the error in both components of permittivity is over 40% at an air gap value of Δd = 50 µm. One can see similar results with model 2 for the composite of titanium dioxide and wax (Fig. 3). Fig. 4 shows that model 0 and model 2 have a similar error in the real component of the permittivity of Teflon, while the error in the imaginary component clearly shows that model 2 produces better results when the air gap is varied. From Figs. 2-4 one can conclude both that model 2 has in general the least relative error, and that the permittivity of high-loss material is more affected by the air gap. Figure 3: Error in real and imaginary component of permittivity in percentage as a function of the air gap at a frequency of 10 GHz for titanium dioxide mixed with wax with different measurement set-up geometries. Figure 2: Error in real and imaginary component of permittivity in percentage as a function of the air gap at a frequency of 10 GHz for graphite mixed with wax with different measurement set-up geometries. Figure 4: Error in real and imaginary component of permittivity in percentage as a function of the air gap at a frequency of 10 GHz for Teflon with different measurement set-up geometries. Fig. 5 illustrates the dependence of permittivity on the sample thickness. The results show that in general model 2 produces the best results for both materials. It is also shown that an error in sample thickness produces a higher error in the imaginary component of the permittivity of low-loss material; this can be attributed to the low absolute value of the imaginary component of permittivity and therefore a larger relative error.
We obtained similar results for Teflon, which has a low relative error for the real component and the highest relative error for the imaginary component of permittivity; this can again be explained by the low absolute value of the imaginary component of permittivity and therefore a larger relative error. Figure 5: Error in real and imaginary component of permittivity in percentage as a function of the sample thickness at a frequency of 10 GHz for titanium dioxide-wax and graphite-wax composites with different measurement set-up geometries. Influence of probe size and frequency With a simple test we also examined the effect of probe dimensions and operating frequency on the required thickness of a sample that can be used as a semi-infinite sample. For reference we computed the reflection coefficient for the geometry of a semi-infinite sample. Then, for the finite-thickness geometry, we adapted the thickness of the sample so that the computed reflection coefficient had the same value (to the 6th decimal place) as the reference reflection coefficient. Results for different probe dimensions can be seen in Table 1.

Table 1: Influence of probe dimensions and frequency on electromagnetic field penetration for different materials

a [mm]  b [mm]  frequency [GHz]  thickness [mm]  ε of material
1.51    4.87    1                1120            10−0.01j
1.51    4.87    5                2070            10−0.01j
1.51    4.87    10               2080            10−0.01j
1.51    4.87    1                54              10−25j
1.51    4.87    5                18              10−25j
1.51    4.87    10               11              10−25j
0.225   0.84    1                20              10−25j
0.225   0.84    5                11              10−25j
0.225   0.84    10               7               10−25j
0.225   0.84    1                36              10−0.01j
0.225   0.84    5                182             10−0.01j
0.225   0.84    10               211             10−0.01j

As expected, for both probes a material with higher dielectric losses needs a lower thickness to be applicable as a semi-infinite sample. For the larger probe and relatively low-loss material the required thicknesses are substantial. The smaller probe shows the same dependence, but the required thicknesses are, as expected, much lower.
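The trend in Table 1 (the lossier the material, the thinner the sample that already behaves as semi-infinite) can be illustrated with a plane-wave attenuation sketch. This is not the paper's full-wave computation, and the tolerance and formula below are assumptions, but for the high-loss composite at 10 GHz it lands near the tabulated 11 mm:

```python
import numpy as np

C = 2.99792458e8  # speed of light [m/s]

def semi_infinite_thickness(eps, f, tol=1e-6):
    """Smallest thickness L [mm] at which a plane wave reflected from the
    sample's back face returns attenuated below `tol`, i.e. exp(-2*alpha*L)
    < tol. A plane-wave sketch of the penetration effect behind Table 1,
    not the paper's full-wave probe computation."""
    # attenuation constant alpha = (2*pi*f/c) * |Im sqrt(eps)| in Np/m
    alpha = -(2 * np.pi * f / C) * np.sqrt(complex(eps)).imag
    return 1e3 * np.log(1 / tol) / (2 * alpha)

L_low = semi_infinite_thickness(10 - 0.01j, 10e9)   # low-loss composite
L_high = semi_infinite_thickness(10 - 25j, 10e9)    # high-loss composite
```

The high-loss result comes out near 11 mm, while the low-loss one runs to meters, reproducing the qualitative gap between the 10−25j and 10−0.01j rows of Table 1 (the sketch cannot reproduce the probe-size dependence, which is a near-field effect).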
It is evident that operating frequency and probe dimensions are key factors when one wants to use the semi-infinite geometry for the measurement set-up. Table 2 comprises the probe dimensions, frequency of operation, and true and obtained values of permittivity. The true value is the permittivity used for the calculation of the reflection coefficient with parameters d = 50 µm, L = 500 µm, 12 modes, model 2 and a frequency of 10 GHz; the obtained value is the permittivity obtained by the optimization algorithm from the calculated reflection coefficient with parameters d = 0 µm, L = 500 µm, 12 modes, model 2 and a frequency of 10 GHz.

Table 2: Comparison of error in complex permittivity for the used probes at 10 GHz

a [mm]  b [mm]  frequency [GHz]  true value of ε  obtained value of ε
1.51    4.87    10               10−0.09j         7.96−0.049j
0.225   0.84    10               10−0.09j         3.73−0.012j
1.51    4.87    10               2−0.003j         1.89−0.0028j
0.225   0.84    10               2−0.003j         1.60−0.0016j
1.51    4.87    10               40−25j           25.31−9.85j
0.225   0.84    10               40−25j           24.66−9.10j

Table 2 shows the difference between the error in the obtained permittivity with the small and the large probe. It is also seen that for low-loss materials the small probe produces a higher error in permittivity than the large probe if the value of the air gap (d) is not at the correct value (the difference is 50 µm). We obtained similar results for other values of the air-gap uncertainty Δd. For high-loss materials both probes give a similar error in the obtained permittivity. The frequency has little effect on the error in permittivity for both probes, and the above conclusions are valid over the operating frequency range. Influence of TM0n modes In our study we also analyzed the influence of the number of used TM0n modes with different geometry set-ups. For the reference value we used 12 TM0n modes. We observed different behavior for the high-loss material, as it has the smallest relative error in the real part of permittivity with model 0 (Fig. 6) and the highest relative error with model 1 and model 2.
With model 1 and model 2, low-loss materials have the smallest relative error in the real part of permittivity. One observes the opposite in the relative error for the imaginary part of permittivity for all three models (model 2 has the lowest relative error), where the high-loss material has the lowest error among all three materials. Again, the higher relative error in the imaginary component can be explained by its low absolute value and therefore high relative error. Figure 6: Error in real and imaginary component of permittivity in percentage as a function of the number of TM0n modes at a frequency of 10 GHz for different materials with the measurement set-up geometry of model 0. From the data shown in Fig. 6 we can conclude that higher modes affect the permittivity and that as many modes as possible should be used. We used only 12 modes as the reference, and it is obvious that a low number of used TM0n modes contributes to the measurement error. Figure 7: Error in real component of permittivity in percentage as a function of the number of TM0n modes at a frequency of 10 GHz for titanium dioxide mixed with wax with different measurement set-up geometries. Figure 7 shows that the optimal geometry set-up for the titanium dioxide composite (a relatively low-loss material) is the dielectric-terminated geometry (model 2). The same result was obtained for the imaginary component. Similar results were obtained for the Teflon and graphite composites. This is not surprising, because in the geometry of model 2 strong electric fields interact with the sample material, thus giving good measurement results. We also confirm that model 1, which has a short-circuited termination, gives the worst results due to the boundary condition at the sample position: electric fields in the sample are weak (magnetic fields are strong) and approach zero at the termination. 4.
Conclusion In our study of the open-ended coaxial probe system we analyzed the effect of several key parameters of the measurement set-up on the measurement error of permittivity. The analysis showed that the error in the air gap (the difference between the actual air gap and the one assumed in the inversion) is clearly the single most important parameter, as it produces the highest error in permittivity among all studied parameters. Of the studied cases, the least effect on the error in permittivity was observed for low-loss materials with the dielectric-terminated measurement geometry. Among the other parameters, the size of the probe and the operating frequency affect the penetration of the field through the sample, as a large probe produces more field penetration into the sample material than a small probe. As expected, our calculations show that at high frequencies there is less penetration than at low frequencies in the case of high-loss materials; just the opposite is seen for low-loss materials. The uncertainty of the sample thickness also has some effect on the measured value of permittivity, but this parameter is much easier to control, especially for solid samples. We also analyzed the effect of the number of TM modes in the calculations and clearly showed that with a lower mode number the error can be significant. This is especially important since, to obtain the permittivity, one compares measured and calculated values of the reflection coefficient. The accuracy of the obtained permittivity values is inherently limited by the accuracy of the calculation, and this further strengthens the grounds for using the full-wave model with several TM modes over simpler models of the open-ended coaxial probe. Taken together, our results show that the open-ended coaxial probe system can be very problematic for solid samples, and special effort must be applied to the evaluation of the air gap in order to get relevant values. Otherwise the method is limited to liquid or deformable materials, where the gap can be eliminated.
Further, it can be concluded that the dielectric-terminated geometry (model 2) is the best option among the feasible measurement set-up geometries for permittivity measurements, and this is valid for both high- and low-loss materials. References 1 M.D. Janezic and J. Baker-Jarvis, Full-wave analysis of a split cylinder resonator for nondestructive permittivity measurements, IEEE Trans. on microwave theory and tech., Vol. 47 (10), pp. 2014-2020, 1999 2 W.E. Courtney, Analysis and evaluation of a method of measuring the complex permittivity and permeability of microwave insulators, IEEE Trans. on microwave theory and tech., Vol. MTT-18 (8), pp. 476-485, 1970 3 G. Kent, Nondestructive permittivity measurement of substrates, IEEE Trans. on Instrum. and measurem., Vol. 45 (1), pp. 102-106, 1996 4 J. Krupka and C. Weil, Recent advances in metrology for the electromagnetic characterization of materials at microwave frequencies, 12th Intern. Conf. on Microwaves and radar (MIKON '98), Vol. 4, pp. 243-253, 1998 5 M.A. Stuchly, T.W. Athey, G.M. Samaras and G.E. Taylor, Measurement of radio frequency permittivity of biological tissues with an open-ended coaxial line: Part II - experimental results, IEEE Trans. on microwave theory and tech., Vol. MTT-30 (1), pp. 87-92, 1982 6 V.K. Ivanov, A.O. Silin and A.M. Stadnik, Determination of dielectric permittivity of materials by an isolated coaxial probe, Radioelectronics and communications systems, Vol. 50 (7), pp. 367-374, 2007 7 P. Queffelec et al., A microstrip device for the broad band simultaneous measurement of complex permeability and permittivity, IEEE Transactions on magnetics, Vol. 30 (2), pp. 224-231, 1994 8 N. Berger et al., Broadband non-destructive determination of complex permittivity with coplanar waveguide fixture, Electronics letters, Vol. 39 (20), 2003 9 N.N. Al-Moayed et al., Nano ferrites microwave complex permeability and permittivity measurements by T/R technique in waveguide, IEEE Transactions on magnetics, Vol. 44 (7), pp.
1768-1772, 2008 10 K.J. Bois, A.D. Benally and R. Zoughi, Multimode solution for the reflection properties of an open-ended rectangular waveguide radiating into a dielectric half-space: The forward and inverse problem, IEEE Trans. on Instrum. and measurem., Vol. 48 (6), pp. 1131-1140, 1999 11 D.K. Ghodgaonkar, V.V. Varadan and V.K. Varadan, Free-space measurement of complex permittivity and complex permeability of magnetic materials at microwave frequencies, IEEE Trans. on Instrum. and measurem., Vol. 39 (2), pp. 387-394, 1990 12 I.S. Seo, W.S. Chin and D.G. Lee, Characterization of electromagnetic properties of polymeric composite materials with free-space method, Composite structures, Vol. 66, pp. 533-542, 2004 13 C.A. Grosvenor et al., Electrical material property measurements using a free-field, ultra-wideband system, 2004 Annual report conference on electrical insulation and dielectric phenomena, pp. 174-177, 2004 14 J. Baker-Jarvis, M.D. Janezic, P.D. Domich and R.G. Geyer, Analysis of an open-ended coaxial probe with lift-off for nondestructive testing, IEEE Trans. on Instrum. and measurem., Vol. 43 (5), pp. 711-718, 1994 15 T.W. Athey, M.A. Stuchly and S.S. Stuchly, Measurements of radio frequency permittivity of biological tissues with an open-ended coaxial line: Part I, IEEE Trans. on microwave theory and tech., Vol. MTT-30 (1), pp. 82-86, 1982 16 M.M. Brady, S.A. Symons and S.S. Stuchly, Dielectric behavior of selected animal tissues in vitro at frequencies from 2 to 4 GHz, IEEE Trans. on biomedical engineering, Vol. BME-28 (3), pp. 305-307, 1981 17 F.M. Ghannouchi and R.G. Bosisio, Measurement of microwave permittivity using a six-port reflectometer with an open-ended coaxial line, IEEE Trans. on Instrum. and measurem., Vol. 38 (2), pp. 505-508, 1989 18 J.M. Anderson, C.L. Sibbald and S.S. Stuchly, Dielectric measurements using a rational function model, IEEE Trans.
on microwave theory and tech., Vol. 42 (2), pp. 199-204, 1994 19 S.S. Stuchly, C.L. Sibbald and J.M. Anderson, A new aperture admittance model for open-ended waveguides, IEEE Trans. on microwave theory and tech., Vol. 42 (2), pp. 192-198, 1994 20 C.L. Li and K.M. Chen, Determination of electromagnetic properties of materials using flanged open-ended coaxial probe - full-wave analysis, IEEE Trans. on Instrum. and measurem., Vol. 44 (1), pp. 19-27, 1995 21 G. Panariello, L. Verolino and G. Vitolo, Efficient and accurate full-wave analysis of the open-ended coaxial cable, IEEE Trans. on microwave theory and tech., Vol. 49 (7), pp. 1304-1309, 2001 22 P. De Langhe, L. Martens and D. De Zutter, Design rules for an experimental setup using an open-ended coaxial probe based on theoretical modelling, IEEE Trans. on Instrum. and measurem., Vol. 43 (6), pp. 810-817, 1994 23 A.-K.A. Hassan, X. Deming, Z. Yujian, Analysis of open-ended coaxial probe for EM-properties of curved surfaces materials testing by FDTD method, 1999 International conference on Computational Electromagnetics and Its Applications, pp. 549-552, 1999 24 J.W. Stewart and M.J. Havrilla, Electromagnetic characterization of a magnetic material using an open-ended waveguide probe and a rigorous full-wave multimode model, J. of Electromagn. Waves and Appl., Vol. 20 (14), pp. 2037-2052, 2006 Arrived: 18. 04. 2011 Accepted: 26. 1. 2012 Original paper Informacije | Journal of Microelectronics, Electronic Components and Materials Vol. 42, No. 1 (2012), 43-55 Kinetics of discharging arc formation France Pavlovcic* University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia Abstract: The scope of this paper is to present the mechanism of discharging arc ignition in mechanically operated electric contacts in a gas-mixture medium, such as air.
By way of introduction, the electric contacts are classified according to their mechanical and electrical operation, with given examples and the corresponding most probable transient phenomena during their typical operation. In the first place, drawn arcs, being metal vapour arcs of the contact materials, are the most wearisome and destructive to electric contacts; but mostly the discharging arcs are just a preceding phenomenon to metal vapour arcs, and as such they indirectly have the same effect on the wear of the electric contacts as the drawn arcs, with an intensity proportional to the arc current. The phenomenon of discharging arc formation, which is the end result of throughout ionization and the formation of a throughout ionized path, is discussed in this paper. The author's mathematical model calculates the average kinetic energy of electrons in the non-homogeneous electric field due to a primary and secondary electron flow between two spherical electrodes. The exciting energy of gas molecules gained through electron impacts causes ionization of the molecules if the energy is high enough; at lower energy levels, dissociation of these molecules is carried out if they are at least two-atom molecules. Further on, the dissociated particles associate into other molecules, being also influenced by the electric field, and so result in other processes of ionization and dissociation, and further on, recombination and association. There is continuous kinesis within the gas mixture, which attains a steady-state mixture of the constituent gases, until the throughout ionization of at least one of the constituent gases is established by the increasing electric field throughout the space between the electrodes.
So far, the physics of this phenomenon deals with the electron kinetic energy and the energy of other energy carriers, such as photons and the displacement current, in the electric field, and its transfer to molecules as the exciting energy causing their ionization, the ion recombination and the molecule dissociation; but the dissociated particles are part of a chemical process which is, together with their association into the newly produced compounds, dealt with by the chemism of this phenomenon. The new gaseous compounds have their own physics of excitation in the electric field, and further on, the physics is followed by the chemism of the newly produced gases. Both of them, the physics and the chemism, result in the kinetics of the throughout ionization formation and hence the discharging arc formation. Keywords: discharging arc, gas throughout ionization, exciting energy, gas molecule dissociation, gas molecule ionization, gas chemism in electric field. Kinetika nastanka razelektritvenih oblokov Povzetek: Namen tega članka je predstavitev mehanizma vžiga razelektritvenega obloka v mehansko delujočih električnih kontaktih v zmesi plinov, kot je zrak. Uvodoma so električni kontakti razdeljeni glede na njihovo mehansko in električno delovanje s podanimi primeri in najbolj verjetnimi spremljajočimi tranzientnimi pojavi med njihovim tipičnim delovanjem. Na prvem mestu glede na obrabo in uničenjem električnih kontaktov so potegnjeni obloki, ki so obloki s kovinsko paro kontaktnih materialov. Toda večinoma so razelektritveni obloki predhodni pojav k oblokom s kovinsko paro in kot taki posredno enako učinkujejo na obrabo kontaktov kot potegnjeni obloki z jakostjo proporcionalno toku obloka. V tem članku je obravnavan pojav nastanka razelektritvenega obloka, ki je končni rezultat nastanka skoznje ionizacije in skoznje ionizacijske poti.
S pomočjo avtorjevega matematičnega modela se izračunava povprečna kinetična energija elektronov v nehomogenem električnem polju zaradi primarnega in sekundarnega elektronskega toka med dvema kroglastima elektrodama. Vzbujevalna energija plinskih molekul, pridobljena s trki elektronov, povzroča ionizacijo molekul, če je energija dovolj visoka. Z nižjimi nivoji vzbujevalne energije se vrši disociacija - razdruževanje molekul, če so le-te vsaj dvoatomske. Nadalje se razdruženi delci združujejo v molekule drugih spojin in tako preidejo v druge procese ionizacije in disociacije ter nadalje rekombinacije in združevanja. V plinski mešanici, ki doseže stalno mešanico sestavnih plinov, obstaja nepretrgana kineza, dokler se pri večanju električnega polja ne vzpostavi skoznja ionizacija vsaj enega sestavnega plina plinske mešanice preko prostora med elektrodama. Fizika tega pojava obravnava kinetično energijo elektronov in energije drugih nosilcev, kot so fotoni in poljski tok, v električnem polju in njen prenos na molekule v obliki vzbujevalne energije, ki povzoča njihovo ionizacijo in rekombinacijo ionov ter razdruževanje molekul. Vendar razdruženi delci so del kemičnega procesa, ki je, skupaj z njihovim združevanjem v novo nastale spojine, obravnavan kot kemizem pojava. Nove plinaste spojine imajo svojo fiziko vzbujanja v električnem polju, in nadalje, fiziki sledi kemizem novo nastalih plinov. Oboje, fizika in kemizem sestavljata kinetiko nastanka skoznje ionizacije in tako tudi nastanka razelektritvenega obloka. Ključne besede: razelektritveni oblok, skoznja ionizacija plinov, vzbujevalna energija, disociacija plinskih molekul, ionizacija plinskih molekul, kemizem plinov v električnem polju. * Corresponding Author's e-mail: france.pavlovcic@fe.uni-lj.si 43 L. Pavlovic et al; Informacije Midem, Vol. 42, No. 1 (2012), 56 - 59 1. 
Introduction When researching arcing between electric contacts, there are some differences between the transient phenomena due to the contacts' mechanical and electrical operation. When shifting contacts are making contact, bouncing occurs, and hence drawn arcs, which are metal vapour arcs - vapour of the contact materials, which usually are metals [1]. The drawn arcs also occur in the holding mode with sliding contacts, since in some kinds of design (sliders, trolleys, slip rings) they slip while holding electric contact. Furthermore, the drawn arcs are an accompanying phenomenon in the operation of breaking electric current, especially in heavy-duty operations, regardless of the kind of mechanical operation used with the electric contact. So far, these differences between the electric contacts are overviewed in Tab. 1. Table 1: The classification of electric contacts due to their mechanical and electrical operations in connection with the possible transient phenomena associated with their operations. On the other hand, discharging arcs ignite by electric breakdown of the throughout ionized surrounding gas medium between the contact members [2] when they are in a separated position: while they are closing in the making operation, opening in the breaking operation, or still opened in the switch-off position. The gaseous substance is the surrounding gas medium, physically and chemically changed by the electric field between the contact members.
The differences between the drawn arcs and the discharging arcs are:
• in the time-dependent electric current flow through the gaseous substance between the contact members: in the drawn arcs, the current flow continues without interruption; but in the discharging arcs, it starts at zero and increases up to the arc ignition, or it is interrupted, reduced to zero and re-established through the arc ignition;
• in the plasma particles, which depend on the gaseous substance between the contact members: in the drawn arcs, the ionized metal vapour of the contact materials; but in the discharging arcs, the ionized gas of the surrounding gas medium.
The discharging arcs would not be harmful by themselves, causing contact wear by the involved electrons and the ions of the surrounding gas constituents, if the discharging arc did not invoke the metal vaporization and the ionization of the contact material, followed by material deposition from one contact member to the other. In this paper the kinetics, and thereby the physics and the chemism, of the discharging arc are discussed. 2. The physics of the gas throughout ionization With the discharging arcs, the current between the separating contact members instantly falls towards zero. A transient voltage appears due to the time derivative of the current, which extends to the breakdown voltage value of the medium - Fig. 1. The medium of the discharging arc is the existing ionized gas from the surrounding space. Figure 1: The principle electric discharge UI characteristic [3] with the range hereafter dealt with in this discourse. With the increasing transient voltage, the dielectric breakdown of the insulating gas occurs, and due to it, the electric current increases. The kind of discharge which follows depends on the current through the gas: the dark discharge, the glow discharge or the discharging arc, either stable or unstable, the latter resulting in sparking.
The separating contact rivets at some opening distance are substituted by a spark gap of two spherical electrodes to research the discharging arc formation. Therefore a mathematical model of the spark gap was developed to study the electric field and the ionizing process in the gap. The electric load in this mathematical model is an air coil, replaced by a conceptual circuit equivalent in a very simplified way. Figure 2: The geometrical drawing of the cathode with the layer of spatially distributed electric charge of integrated values of +Q2 and -Q2 around the cathode, the anode and the distances between them in the r-φ coordinate system. The spark gap consists of two spherical electrodes with the same radius r0, separated by the distance dsur between their surfaces, and the gas medium around them. The anode is positively charged and the cathode has a negative charge of the same absolute amount in the first stage of the arc development, when there is no charge in the space near the cathode, so far without a cathode layer. The cathode is earthed so that a positive charge flows from the cathode to the earth. Due to the mutual influence of the anode and the cathode charges, the equivalent point charges of the anode and the cathode, +Q1 and -Q1, lie in eccentric positions in the relevant spheres - Fig. 2. In the cathode layer shown in Figs. 2 and 3, when it comes into existence, there are a positive-ions layer and an electrons layer. Since the positive-ions layer is closer to the cathode than the electrons layer, a distributed additional negative charge is induced just beneath the cathode surface. The integral of the distributed positive charge of the ions layer over volume, the integral of the distributed charge of the electrons layer over volume, and the integral of the distributed additional negative charge of the cathode are equal in their absolute values.
Due to the signs assigned to these charges, the sum of these integrals equals the integral of the distributed additional negative charge of the cathode when the radii of the ions layer and the electrons layer are limiting to the cathode radius, and so far the integrated additional negative charge of the cathode is substituted by the equivalent point charge -Q2 in the cathode centre. The mathematical model in spherical coordinates (r, φ, Θ) is simplified to the two-dimensional space (r, φ, Θ = π/2) → (r, φ) because the electric field is rotationally symmetrical. Hereafter, to avoid misunderstanding, the usage of Cartesian coordinates refers strictly to the plane (r, φ) and not to three-dimensional space. The eccentric positions of the equivalent point charges are defined by the eccentric radius recc, as shown in Fig. 2, and further on, the eccentric position of the cathode equivalent point charge -Q1 is the zero point of the coordinate system. The potential U of any point T(r, φ) = T(r, ra, rcent) is an algebraic sum of the partial potentials caused by the anode and the cathode charges, since the potential is a scalar value. It is defined by the following equation in the bi-radial coordinates r, ra plus the dependent coordinate rcent, as follows [4]:

U(r, ra, rcent) = Q1/(4πε) · ( -1/r + 1/ra - 1/recc + 1/(recc + dsur) ) + Q2/(4πε) · ( -1/rcent + 1/r0 ) (1)

where Q1 and Q2 are the absolute values of the point charges. The variables of this equation are, as already mentioned, the coordinates r, ra, rcent, as shown in Fig. 2. Figure 3: Potentials (of the cathode charges -Q1 and -Q2, the ions layer, the electrons layer, the anode charge +Q1, and the anode-cathode potential), the anode-cathode field intensity, and the geometry (the -Q2 distributed charge, the cathode circle, the ions layer, the electrons layer, the anode circle) as functions of the radial coordinate [mm].
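The superposition of point-charge potentials underlying Eq. (1) can be sketched directly; the charge magnitudes and positions below are illustrative assumptions, not the paper's values:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def potential(point, charges):
    """Scalar potential at `point` as the algebraic sum of point-charge
    terms Q/(4*pi*eps0*r) -- the superposition principle behind Eq. (1).
    `charges` is a list of (Q [C], (x, y) position [m]) pairs."""
    u = 0.0
    for q, (px, py) in charges:
        r = math.hypot(point[0] - px, point[1] - py)
        u += q / (4 * math.pi * EPS0 * r)
    return u

# assumed geometry: anode equivalent charge +Q1 eccentric near 6 mm,
# cathode charges -Q1 (eccentric) and -Q2 (centre); values are made up
Q1, Q2 = 1e-9, 2e-10
charges = [(Q1, (6e-3, 0.0)), (-Q1, (0.5e-3, 0.0)), (-Q2, (0.0, 0.0))]
u_mid = potential((3e-3, 0.0), charges)
```

Eq. (1) itself is this superposition with the constant reference terms (1/recc, 1/(recc + dsur), 1/r0) folded in so that the potential is measured against the electrode surfaces.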

with NA = 2 in the presented model:

Wk = e · E(r, φ) · λA (8)

whilst the electric field intensity and the average free path are collinear vectors. When the electron has collided with the gas molecule, its kinetic energy from Eq. (8) and the initial electron kinetic energy carried on from the previous collision, Wcar_on, are together transferred to the molecule by the impact as the exciting energy of the molecule:

Wexm = Wek + Wcar_on (9)

where the quantity dc is the sum of the molecule and electron radii, defined as a collision diameter:

dc = rm + re (7)

R is the gas constant, p and T are the gas pressure and temperature, and vm_avg and ve_avg are the average velocities of molecules and electrons respectively. A covalent radius is the nominal radius of the atoms of an element when covalently bound to other atoms, as deduced from the separation between the atomic nuclei in molecules. In principle, the distance between two atoms that are bound to each other in a molecule (the length of that covalent bond) should equal the sum of their covalent radii. In the previous papers [2] the oxygen atom covalent radius of 68 pm was taken into account [6], and hence a molecule radius of 133.4 pm; so far, the relevant value of the electron kinetic energy was estimated rather too low, which caused the breakdown voltage values to be estimated very high. Using Van der Waals radii would result in still lower values of the electron energy and higher breakdown voltage estimates. Therefore the empirical covalent radii (oxygen: 60.4 pm, nitrogen: 54.9 pm) and the relevant covalent bond lengths of diatomic molecules (oxygen: 121.0 pm, nitrogen: 110.0 pm) are used afterwards [7, 8]. The molecule of diatomic gases is approximated as a rod with rounded ends, and the collision area is obtained by this rod's projection in the direction of the electron movement, using the mean value of the collision area due to the molecule rotation.
The molecule collision radii (oxygen: 81.3 pm, nitrogen: 73.9 pm) are calculated from the mean value of the collision area. Since within a distance of less than λ · N_λ from the cathode there are no electron impacts, the average kinetic en- …

In this discourse the terms excitation, and hence exciting energy, are used, with respect to common scientific terminology, in an unconventional way. Excitation as a general term is an elevation in energy level above an arbitrary baseline energy state of an atom or a molecule without causing any change in its charge as a whole (ionization, electron attachment) or any chemical change (molecule dissociation). Hereafter, however, excitation, and hence exciting energy, means firstly an increase in energy level up to the levels of ionization and dissociation, if applicable; secondly their accomplishment; and thirdly the changes in energy of the newly begotten particles - the elevation in energy level of ions or dissociated atoms, or the changes in kinetic energy of free electrons, if any are involved in the excitation process. Namely, the excitation is obtained not only by electron collisions, but also by photon impacts and through a displacement current effect. The outcoming excited particles are, however, considered to have a short lifetime, after which they produce a photon, which further on causes the excitation of another atom or molecule. We will therefore not deal with the changes in energy level of the outcoming particles (ions, dissociated atoms); instead, the part of the exciting energy beyond the ionization or the dissociation energy is attributed to the involved electrons, if applicable, or otherwise to photons with the same end effect as in the case of the involved electrons, but with mass equal to zero. In both cases this energy is the carry-on energy.
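One plausible way to carry out the rod-with-rounded-ends averaging is Cauchy's theorem for convex bodies (mean projected area = surface/4). This is an assumed reading of the paper's averaging, not a confirmed reproduction: with the quoted covalent radii and bond lengths it lands a few pm above the paper's 81.3 pm (O2) and 73.9 pm (N2), so the authors evidently average somewhat differently.

```python
# Collision radius of a diatomic molecule modelled as a spherocylinder
# (rod with rounded ends).  Rotation averaging is done here with Cauchy's
# theorem (mean projected area of a convex body = surface/4).  This is an
# ASSUMED averaging scheme; it gives values a few pm above the paper's
# quoted 81.3 pm (O2) and 73.9 pm (N2).
import math

def collision_radius(r_cov, bond_len):
    """Radius [pm] of the circle with the molecule's mean projected area."""
    # spherocylinder: cap radius r_cov, cylinder length = bond length
    surface = 4 * math.pi * r_cov**2 + 2 * math.pi * r_cov * bond_len
    mean_area = surface / 4.0  # Cauchy's mean-projection formula
    return math.sqrt(mean_area / math.pi)

r_O = collision_radius(60.4, 121.0)  # ~85.5 pm (paper: 81.3 pm)
r_N = collision_radius(54.9, 110.0)  # ~77.7 pm (paper: 73.9 pm)
```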
Due to the dependence of the electric field intensity on the radius, there are three ranges between the electrodes: a highly ionized range, a partly ionized range and a non-ionized range - Fig. 4.

L. Pavlovic et al; Informacije Midem, Vol. 42, No. 1 (2012), 56 - 59

Figure 4: The ranges due to the ionization degrees of nitrogen (N) in the air medium in the electric field; the ionization degrees refer to nitrogen atoms only.

The highly ionized range is in the vicinity of each electrode because, due to the very high kinetic energy of the electrons, nearly every collision delivers such exciting energy to the molecule that its ionization occurs, and in one of the next moments the ionization is followed by the recombination. From the viewpoint of the kinetic energy of the colliding particles before and after an inelastic collision, the kinetic energy lost in the collision is mainly consumed by some other process - in the discussed case as the ionization energy, and further on as the dissociation energy - whereas the kinetic energy before the collision is the exciting energy stored as the kinetic energy of an electron. Taking this loss, defined by the difference (W_before − W_after), into account, the fraction of the kinetic energy lost at a totally inelastic collision is introduced as:

(W_before − W_after)/W_before = m_m/(m_m + m_e)

and since the kinetic energy before the collision is the exciting energy, the exciting energy of the molecule must be at least [9]:

W_exm ≥ W_ion · (1 + m_e/m_m)   (10)

to cause ionization. The exciting energy of the whole population of moving electrons, with their kinetic energy, is divided between the ionization as the P_ion part and the recombination as the (1 − P_ion) part. After the ionization collision, the average carry-on kinetic energy per electron, carried on by the one colliding electron and by the one emitted, is:

W_car_on = (W_exm − W_ion · (1 + m_e/m_m)) / 2   (11)

After the recombination collision - since the collision is totally inelastic and consumes the whole exciting energy received from the electron - one part ΔW_rec of the recombined molecule energy is emitted (for instance as radiation energy, with the photon as the energy carrier) and is further consumed in ionizing molecules in a continuous process:

ΔW_rec = W_ion · (1 − P_ion)   (12)

but the other part is transformed to thermal energy, of which the K-part is conveyed to the surroundings by thermal conduction, convection and/or radiation, while the (1 − K)-part causes the molecule temperature rise above the ambient temperature by the increment:

Δθ = (2 · (W_exm − W_ion) / (3 · k)) · (1 − P_ion) · (1 − K)   (13)

where k is the Boltzmann constant. Thus the average temperature of the gas in the neighbourhood of the cathode and the anode rises, and the temperature of each electrode increases too as the molecules bump into it. The parameter K = 99.8% in Eq. (13) defines the percentage of the energy in this equation conveyed and conducted to the cathode, the anode and, further on, to the ambient by natural or forced cooling. Due to the temperature increment of Eq. (13), the average kinetic energy of the gas molecules increases, and hence the average molecule velocity.

Next to this range, the partly ionized range extends up to the point where no ionization occurs. In this range, besides the ionization collisions, dissociation collisions happen, with the exciting energy of the molecule equal to:

W_ion · (1 + m_e/m_m) > W_exm ≥ W_diss · (1 + m_e/m_m)   (14)

In this case, the exciting energy of the gas molecule causes the dissociation of the two-atom molecule into two gas atoms. This collision is partly inelastic and consumes the dissociation energy. The remaining kinetic energy of the colliding electron is carried on by the same electron:

W_car_on = W_exm − W_diss · (1 + m_e/m_m)   (15)

and further on it increases, because the electron passes the next average free path before the next collision - Eq. (8). Towards the gap centre, the non-ionized range begins, where the dissociation collisions and also the unaffected collisions are present. If the exciting energy of the molecule is lower than the dissociation energy of the gas molecule:

W_diss · (1 + m_e/m_m) > W_exm   (16)

the colliding electron has no effect on the gas molecule. The kinetic energy of the electron after the collision is the same as before:

W_car_on = W_exm   (17)

Although there is a sequence of all the mentioned phenomena, no range can be considered pure - neither an ionization-recombination, nor a dissociation, nor an unaffected range.

Knowing the electric field intensity at the cathode surface at φ = 0 and its temperature, the following conductive current densities are calculated: the current density of the field emission and the current density of the thermionic emission. Both of them are active current densities. If an alternating electric field is applied - in this particular case caused by a sinusoidal anode-cathode voltage with an amplitude of 20 kV and a frequency of 5 kHz, interrupted by the breakdown after the throughout ionization voltage is reached - the displacement current occurs. The displacement current is defined by its density [4], which, like the electric field intensity, depends on time; therefore both of them can be represented by phasors in the complex plane. The angle δ between the displacement current density phasor and the phasor of the time derivative of the electric field intensity is defined by the complex value of the relative permittivity.
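Eqs. (10), (11) and (14)-(17) amount to a three-way classification of a collision by the exciting energy, together with the carry-on energy each outcome leaves to the electron. A direct transcription (energies in eV; the thresholds are arguments rather than hard-coded gas data):

```python
# Collision outcome per Eqs. (10)-(17): ionization, dissociation or an
# unaffected collision, plus the carry-on energy of the electron.
# Energies in eV; me_over_mm is the electron/molecule mass ratio.

def classify(W_exm, W_ion, W_diss, me_over_mm):
    f = 1.0 + me_over_mm
    if W_exm >= W_ion * f:
        # Eq. (11): carry-on shared by the colliding and the emitted electron
        return "ionization", (W_exm - W_ion * f) / 2.0
    if W_exm >= W_diss * f:
        # Eq. (15): the same electron keeps the remainder
        return "dissociation", W_exm - W_diss * f
    # Eq. (17): the unaffected collision leaves the energy unchanged
    return "unaffected", W_exm
```

Because the carry-on energy feeds back into Eq. (9) at the next free path, iterating this classifier along the gap reproduces the layered ranges of Fig. 4.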
The displacement current density is a vector, collinear with the time derivative of the electric field intensity vector multiplied by the absolute value of the relative permittivity ε_r and by the permittivity ε_0 of vacuum, and hence collinear with the electric field intensity vector. The relative permittivity is a complex scalar constant defined by its absolute value and by the loss angle δ; the dielectric losses of the gas are defined as the imaginary part of the complex relative permittivity, so they are associated with the sine of the loss angle. The active displacement current causes the excitation of the molecules, and the phasor of its density is in phase with the electric field intensity phasor in the complex plane. The reactive displacement current is a capacitive current; thus its phasor is perpendicular to the electric field intensity phasor, although the relevant vectors are collinear. Because the active displacement current causes the excitation of the molecules in the volume between the cathode and the anode, the active displacement power is the integral of the product of the phasors of the active displacement current and of the electric field intensity throughout the volume relevant to one molecule - the gas molecule itself and the hollow volume around it, which is V_mol(p, T)/N_A (the volume of one mol divided by the Avogadro number N_A) - under the relevant thermodynamic conditions of the gas. Further on, the active displacement energy of one molecule is defined by:

W_Dact ≈ ε_r · ε_0 · (dE(r, φ, t)/dt) · E(r, φ, t) · (V_mol(p, T)/N_A) · (λ(p, T)/v_e_avg) · sin δ   (18)

The loss angle δ is defined by the ratio of the volume of one molecule V_m and the relevant part of the empty space belonging to it:

tan δ = 1 / (V_mol(p, T)/(V_m · N_A) − 1)   (19)

The mathematical model of the electric discharge in gases has to take into account the kinetic energy of the electrons, the energy of the photons and the energy of the displacement current. The electron kinetic energy is partly transferred to the gas molecule by the electron impact and causes the ionization or the dissociation discussed heretofore. In this case the ionization is considered as impact ionization, although it is more probable that the ionization proceeds through the excitation of the gas molecule to a higher energy level [5]. The dissociation of the two-atom molecule just cannot be carried out directly by the electron impact, due to the large difference between the electron mass and the dissociated atom mass. The dissociation is completed by the exciting energy of the two-atom molecule when the impact energy raises it to such an extent that the dissociation energy level is achieved. This is the dissociation due to the conductive and the convective current. The displacement current energy also affects the gas molecules and likewise causes their ionization and their dissociation. Because it has no carriers, the ionization and the dissociation are caused by the excitation of the gas molecule with no impact, but only due to the displacement current. All these processes - the impact ionization and the dissociation due to the conductive and the convective current, and the ionization and the dissociation due to the displacement current - have the same mechanism of completion: raising the molecule energy to its higher energy level, and afterwards the accomplishment of the process. Therefore the kinetic energy and the displacement energy of Eq. (18) are summed up in the exciting energy of the gas molecule, which is the active energy:

W_exm(r, φ, t, p, T, W_ion, W_diss, m) = e · E(r, φ, t) · λ(p, T) + W_car_on + ε_r · ε_0 · (dE(r, φ, t)/dt) · E(r, φ, t) · (V_mol(p, T)/N_A) · (λ(p, T)/v_e_avg) · sin δ   (20)

where the mass m of the impacting particle has the value m_e if it is an electron, or the value of zero if it is a photon. Due to the carry-on energy, the exciting energy of the gas molecule has a minimal value and a maximal value, as follows:

W_exm_mn(r, φ, t, p, T, m) = e · E(r, φ, t) · λ(p, T) + ε_r · ε_0 · (dE(r, φ, t)/dt) · E(r, φ, t) · (V_mol(p, T)/N_A) · (λ(p, T)/v_e_avg) · sin δ   (21)

W_exm_mx(r, φ, t, p, T, W_ion, W_diss, m) = e · E(r, φ, t) · λ(p, T) + (W_ion − W_diss) · (1 + m/m_m) + ε_r · ε_0 · (dE(r, φ, t)/dt) · E(r, φ, t) · (V_mol(p, T)/N_A) · (λ(p, T)/v_e_avg) · sin δ   (22)

The minimal value is obvious, whereas the maximal value is determined numerically by the mathematical model. The exciting energy values at one and the same point (r, φ) lie between these two values according to the carry-on energy probability distribution, which is the square probability distribution. The probability distributions of the other quantities in Eq. (20) would contribute their shares to the exciting energy probability as a whole, but it is reasonable to treat these other contributions as a part of the uncertainty calculation.

With this phenomenon special curves and surfaces are defined - the equi-exciting energy curves and surfaces. The equi-exciting energy line (or surface) in the electric field within the medium of the particular gas is the set of points at which the gas molecules are exposed to the same exciting energy at the same exciting energy probability:

C_eqex = { T(r, φ) : W_exm(r, φ, t, p, T, W_ion, W_diss, m) = constant ∧ P_Wexm = constant }   (23)

For various constants a family of curves (surfaces) is obtained. Among these curves (surfaces) corresponding to Eq. (23) there is a most important one: the border curve between the ionized range and the non-ionized range in Fig. 4, and further on also in Fig. 10, considering that at least some ionization is carried out by photons:

C_eqex_ion = { T(r, φ) : W_exm_mx(r, φ, t, p, T, W_ion, W_diss, m) = W_ion }   (24)

Thus the ionization border line (or surface) is a particular equi-exciting energy line (surface) within the particular gas, distinguishing between the area (space) of the ionized gas on one side of this curve and the area (space) on the other side, predominantly occupied by its neutral molecules and atoms (also dissociated, if relevant) with no ionized particles. The whole exciting energy probability distribution domain is below the value of the ionization energy of the gas molecule, and the ionization occurs just at the upper domain border; hence the probability that the exciting energy is lower than the ionization energy of the particular gas is one (P_Wexm = 1) in the non-ionized range in Fig. 4:

R_non_ion = { T(r, φ) : W_exm_mx(r, φ, t, p, T, W_ion, W_diss, m) < W_ion }   (25)

The ionization border line is the outer contour of both the highly and the partly ionized ranges in Fig. 4. If the anode-cathode voltage increases, the non-ionized range in Fig. 4 narrows, the partly ionized range from the cathode side touches the partly ionized range from the anode side, an ionized path between the electrodes arises throughout the gas gap, the dielectric breakdown occurs and the electric discharge arc takes place.

Figure 5: The physics of the arising of the discharging arc in the two-atom molecule gas.
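On a discretised plane, Eqs. (24)-(25) reduce to thresholding the maximal exciting energy field against W_ion and extracting the border cells of the resulting mask. The energy field below is a toy stand-in for the model's W_exm_mx(r, φ), high near both "electrodes" and low in the gap middle:

```python
# Sketch of Eqs. (24)-(25): grid points are ionized where the maximal
# exciting energy reaches W_ion; border cells are ionized cells with a
# non-ionized 4-neighbour.  The energy field is a toy stand-in for the
# model's W_exm_mx, not a model output.

def ionized_mask(energy, W_ion):
    return [[e >= W_ion for e in row] for row in energy]

def border_cells(mask):
    rows, cols = len(mask), len(mask[0])
    border = []
    for i in range(rows):
        for j in range(cols):
            if not mask[i][j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and not mask[ni][nj]:
                    border.append((i, j))
                    break
    return border

# toy field: high near both ends (the electrodes), low in the gap middle
energy = [[max(10 - j, j - 4) for j in range(15)] for _ in range(3)]
mask = ionized_mask(energy, 8.0)
```

Raising the "voltage" (scaling the field up) makes the two ionized bands grow toward each other until they touch, which is exactly the narrowing of the non-ionized range described above.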
Hereafter, the mathematical algorithm describing the phenomenon of the discharging arc arising, in the stage of building the ionized path through the gas gap up to the throughout ionization, was computerised to obtain a dynamic model of this phenomenon, as shown in Fig. 5. There are three kinds of electric current density, beginning with the one already discussed:
• the displacement current density, as discussed;
• the conductive current density, which is in phase with the electric field intensity and appears as a consequence of the cold (field) electron emission, due to the electric field effect, and of the thermionic electron emission, whose relevant current density is, to a large degree, independent of time, whereas its time-dependent part is in phase with the electric field intensity;
• the convective current density, given by the Laplacian of the electric potential U(r, φ, t), which is a direct or alternating quantity; since the Laplacian is zero in the state before the electric breakdown, the convective current is zero too.

The total cathode current is obtained by integrating the vector sum of all contributing current densities over the cathode surface in spherical coordinates (r, φ, ϑ), taking into account the rotational symmetry of the electric field and of the system geometry:

I_K = 2 · ∫_{ϑ=0}^{π} ∫_{φ=0}^{π} [ j_D(dE/dt) + (j_F(|E|) + j_T(|E|)) · E(r, φ, t)/|E(r, φ, t)| + j_conv(E) ] · r_K²(φ) · dφ · sin ϑ · dϑ   (26)

bearing in mind that the quantities dependent on the electric field intensity depend indirectly on (r, φ) and on time, and moreover each of them depends on the cathode temperature. The shortest field line between the spherical electrodes of opposite charges is the shortest surface-to-surface distance between the spheres. In this case the angle φ is zero. The vector of the electric field intensity has only a radial component, since the angular component is zero. Hereafter the mathematical model deals with the electric field and the phenomena associated with it in this particular direction.
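A surface integral of the form of Eq. (26) is straightforward to approximate with a midpoint rule. With a constant current density over a unit sphere the result must equal j·4πr² (sphere area times density), which gives a convenient correctness check; the density and radius functions below are placeholders for the model's j_D, j_F, j_T, j_conv and r_K.

```python
# Midpoint-rule evaluation of a surface integral of the form of Eq. (26):
# I = 2 * int_{theta=0}^{pi} int_{phi=0}^{pi} j(phi)*r_K(phi)**2*sin(theta)
#     dphi dtheta.
# j(phi) and r_K(phi) are placeholders for the model's current densities
# and cathode radius function.
import math

def cathode_current(j, r_K, n=400):
    total = 0.0
    dphi = math.pi / n
    dtheta = math.pi / n
    for a in range(n):
        theta = (a + 0.5) * dtheta
        for b in range(n):
            phi = (b + 0.5) * dphi
            total += j(phi) * r_K(phi)**2 * math.sin(theta) * dphi * dtheta
    return 2.0 * total

# check case: constant j = 1 over a unit sphere must give 4*pi
I = cathode_current(lambda phi: 1.0, lambda phi: 1.0)
```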
Therefore, the radial coordinate in the direction φ = 0 is named r_x - the radius in the x-direction of the (r, φ) plane. When discussing the electrical breaking contact, its contact members are the electrodes. The distance between them increases from zero, while the breaking contact still holds the closed position, up to the maximum value. In the model, a distance of 1 mm is used, because the Paschen law minimum is thereby avoided.

3. The chemism of the gas throughout ionization

The whole kinetics of the discharging arc formation - and hence of the throughout ionization of a gas mixture medium, continuing to the electric breakdown and followed by the discharging arc ignition in that medium - consists of its physics, discussed heretofore, and its chemism: the chemism of the discharging arc formation in any gas mixture as well as in any two-atom molecule gas, even a chemically single-element one. In the process of the gas medium ionization, each of the gases is ionized and its ions are afterwards recombined, and multi-atom molecules are dissociated and their atoms later associate into other molecules, all happening fluently and continuously, forming the ionized ranges and ending finally in the throughout ionization of the gas in the gap between the electrodes, provided the electric field in the gap is sufficient for it. This part of the kinetics of the throughout ionization formation, which consequently leads to the electric breakdown and to the discharging arc ignition, is the chemism of this phenomenon. When dealing with air as a gas medium, all constituents must be taken into account: oxygen, nitrogen, argon, carbon dioxide etc. In this paper, only oxygen and nitrogen, as the major constituents of air, are considered when modelling the throughout ionization of the air gap.
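The remark on avoiding the Paschen minimum can be illustrated with the standard Paschen law V_b = B·pd / ln(A·pd / ln(1 + 1/γ)). The coefficients A, B and the secondary-emission coefficient γ below are common textbook values for air, not values from this paper:

```python
# Paschen curve for air with textbook coefficients (A, B, gamma are NOT
# taken from this paper): V_b = B*pd / ln(A*pd / ln(1 + 1/gamma)).
import math

A = 15.0      # [1/(Torr*cm)], textbook value for air
B = 365.0     # [V/(Torr*cm)], textbook value for air
GAMMA = 0.01  # assumed secondary-emission coefficient

def breakdown_voltage(pd):
    """Breakdown voltage [V] for pd in Torr*cm (valid right of the pole)."""
    c = math.log(1.0 + 1.0 / GAMMA)
    return B * pd / math.log(A * pd / c)

# locate the minimum on a grid; analytically pd_min = e*ln(1+1/gamma)/A
pds = [0.4 + 0.01 * i for i in range(300)]
v_min = min(breakdown_voltage(pd) for pd in pds)
```

At atmospheric pressure a 1 mm gap gives pd ≈ 76 Torr·cm, far to the right of the minimum, which is the sense in which the model's 1 mm distance avoids the Paschen minimum.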
In the case of the O2-O-O3 gas mixture, as well as in the case of the N2-N-NO2 gas mixture, there are three kinds of gas and their ions in each mixture, and hence the physics of the processes is defined in the following way:
• the ionization by the set of transfer functions I = { f_ion, g_ion, h_ion } for each gas in the mixture respectively;
• the dissociation by the set of transfer functions D = { f_dis, g_dis, h_dis } for each gas in the mixture respectively;
• the unaffected collisions by the set of transfer functions U = { f_unaff, g_unaff, h_unaff } for each gas in the mixture respectively.
The sum of the f-functions is one whenever and wherever in the space, and likewise the sum of the g-functions and the sum of the h-functions; this holds for each kind of compounds - the oxygen and the nitrogen compounds separately - and the sets are also different for each of them. The chemical process of the oxygen compounds mixture is shown in Fig. 6, and the chemical process of the nitrogen compounds mixture in Fig. 8, both followed by the relevant results and discussion.

Figure 6: The chemical process of oxygen compounds in the electric field.

The yields of the outcoming gases along the shortest path between the cathode and the anode are presented graphically in Fig. 7 at the moment of the throughout ionization of the O3 molecules, which is the end of the discharging arc formation in the air, followed by the electric breakdown and by the discharging arc itself.

Figure 7: The yields of the chemical process of oxygen compounds of the air in the electric field along the shortest path between the electrodes.

The ratios between the oxygen compound particle densities, which are the output of the chemical process, and the initially impacted oxygen molecule density, which is the input of the chemical process, are the yields of the chemical process output, indexed as in Fig. 7: y1(O2 → O2+), y2(O2 → O2), y3(O2 → O+), y4(O2 → O3+) and y5(O2 → O3).
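The sum-to-one constraint on each gas's transfer functions can be enforced by construction, normalizing arbitrary non-negative weights at every point. The weight profiles below are invented placeholders (strong ionization near the electrodes, a dissociation bump, a constant unaffected background); only the normalization is the point.

```python
# The ionization/dissociation/unaffected transfer functions of one gas
# must sum to one at every point in the gap (the f-, g-, h-function
# constraint).  The raw weight profiles are invented placeholders.
import math

def transfer_functions(x):
    """Return (f_ion, f_dis, f_unaff) at normalized gap position x in [0, 1]."""
    w_ion = math.exp(-8.0 * min(x, 1.0 - x))   # strong near both electrodes
    w_dis = 0.5 * math.exp(-3.0 * abs(x - 0.25))
    w_una = 0.8                                # constant background
    s = w_ion + w_dis + w_una
    return w_ion / s, w_dis / s, w_una / s
```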
The chemical process produces the mixture of the oxygen compound particles, which maintains its structure ratios despite its continuous kinesis - ionizing, recombining, dissociating and associating the molecules, the atoms and the ions respectively, whatever is appropriate for each kind of particle - within its steady-state structure. The model of the chemism of the discharging arc formation in oxygen presumes that the whole amount of O atoms, after the O2 molecule dissociation, is combined into ozone in one way or another. These atoms are primarily associated with the unaffected O2 molecules, but there can be some leftover amount either of O atoms or of O2 molecules. If there are O atoms left over, they combine among themselves to form ozone. On the other hand, if there are O2 molecules left over, they leave the process with the yield y2, unaffected and not combined into ozone; this is shown in Fig. 7 as the result of this (and only this) particular case, and it is zero throughout the gap. The chemical process in the O2-O-O3 gas mixture is the same from the standpoint of the oxygen compounds structure whether pure oxygen is involved or there is some other gas in the mixture, such as nitrogen in the air. It is not the same with nitrogen, though: the chemical process of pure nitrogen in the electric field gives the N2-N mixture, including their ions, but the chemical process of nitrogen in the air produces the N2-N-NO2 mixture, including their ions and dissociated particles. Hereafter the N2-N-NO2 mixture is dealt with. It is presumed that there are no volatile organic compounds in the standardized (unpolluted) atmosphere, and hence the dissociation of NO2 is defined as NO2 → NO + O; further, the oxygen atom reacts with an O2 molecule from the air, O2 + O → O3, producing ozone, which once again oxidizes nitrogen monoxide, NO + O3 → NO2 + O2, returning the O2 molecule into the air. Effectually, the O2 molecules do not need to be exchanged with the air, but are produced and consumed inside the chemical sub-process following the dissociation of the NO2 molecules - Fig. 8. The NO2 molecules resulting from this sub-process undergo the electric field and are ionized, dissociated or unaffected.

Figure 8: The chemical process of nitrogen compounds of the air in the electric field.

As in the previous case, the yields of the outcoming nitrogen gases along the shortest path between the cathode and the anode are shown in Fig. 9 at the moment of the throughout ionization of the O3 molecules - see Fig. 10 - which is followed by the electric breakdown in the air gap.

Figure 9: The yields of the chemical process of nitrogen compounds of the air in the electric field along the shortest path between the electrodes.

The ratios between the nitrogen compound particle densities, which are the output of the chemical process, and the initially impacted nitrogen molecule density, which is the input of the chemical process, are the yields of the chemical process output, indexed as in Fig. 9: y1(N2 → N2+), y2(N2 → N2), y3(N2 → N+), y4(N2 → NO2+) and y5(N2 → NO2). Because the discharging arc formation depends on the throughout ionization formation, during whose process the ionized range of each constituent gas is being formed as a part of its kinetics, the whole mixture kinetics is influenced by each gas in the mixture, although the chemical process of a particular constituent does not depend on the other gases in the mixture.
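The O-atom bookkeeping of the oxygen chemism (dissociated O atoms first attach to unaffected O2 giving O3; leftover O atoms combine among themselves to O3) can be written as a small balance. The input fractions below are placeholders, not model output; the invariant worth checking is that oxygen atoms are conserved.

```python
# O-atom bookkeeping of the oxygen chemism: each dissociated O2 yields two
# O atoms, which first attach to unaffected O2 (O2 + O -> O3); leftover O
# atoms combine among themselves (3 O -> O3).  Fractions refer to the
# initial O2 population; the example numbers are placeholders.

def oxygen_yields(f_ion, f_dis, f_una):
    assert abs(f_ion + f_dis + f_una - 1.0) < 1e-12
    o_atoms = 2.0 * f_dis                       # O atoms from dissociation
    if o_atoms <= f_una:                        # enough O2 to absorb all O
        o3 = o_atoms                            # one O3 per O2 + O pairing
        o2_left = f_una - o_atoms
    else:                                       # O2 exhausted first
        o3 = f_una + (o_atoms - f_una) / 3.0    # remainder: 3 O -> O3
        o2_left = 0.0
    return {"O2+": f_ion, "O2": o2_left, "O3": o3}

y = oxygen_yields(0.2, 0.3, 0.5)  # placeholder fractions
```

The two branches correspond exactly to the two leftover cases in the text; the y2 = 0 result reported for Fig. 7 is the second branch.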
The chemical process of the O2-O-O3 gas mixture in the electric field - and thus the chemism of the throughout ionization formation in this mixture - is unchanged whether it involves pure oxygen or the air. Nevertheless, the kinetics of the throughout ionization formation, and hence of the discharging arc ignition, is influenced by the particles of the other constituents of the air through its physics. Namely, the other constituents' particles, such as the nitrogen compound particles, intercept the electrons, reducing the electron collisions with the oxygen particles; besides that, they form their own ionized ranges, which contribute their part to the ionized ranges' growth, and thus to the throughout ionization formation. The N2-N-NO2 gas mixture is likewise influenced by the other air constituents through its physics; hence the kinetics of the throughout ionization formation is based on an equilibrium between all air constituents in every stage of the ionized ranges' growth. The kinetics, whether dealt with separately as the O2-O-O3 mixture and the N2-N-NO2 mixture or together as the air, is qualitatively the same, but it is quantitatively different in the transfer functions, weighted by the mixing ratio of oxygen (circa 20%) and nitrogen (circa 80%) in the air as modelled through the physics, and hence in the yield functions of the chemism, and further on in the concentrations and in the ionized ranges' growth, until one of the constituent gases is throughout ionized - Fig. 10.

Figure 10: The ionization border lines C_eqex_ion as the axial cross-section of the ionization border surfaces for the main constituents of the air just when the throughout ionized path is formed by ozone ions.

In this figure, the ionization border lines C_eqex_ion represent the ionization border surfaces - the geometric bodies of the electrodes being rotationally symmetrical - for each kind of constituent ion in the air, where the throughout ionization is achieved by ozone ions. These lines are not Cassini ovals. The ionization border lines (surfaces) C_eqex_ion, defined by Eq. (24), circumscribe the ionized range, which consists of the highly ionized and the partly ionized range in Fig. 4 and is the complement of the non-ionized range of Eq. (25):

R_ion = { T(r, φ) : W_exm_mx(r, φ, t, p, T, W_ion, W_diss, m) ≥ W_ion }   (27)

for each kind of constituent ion. If the ionized range of Eq. (27), or the union of all the ionized ranges - since the air is a gas mixture - includes at least one subset of points that is continuous between any point on the cathode and any point on the anode, the throughout ionization is established. Hereby, any subset of the ionized range, or of their union, that is continuous between any point on the cathode and any point on the anode is a throughout ionized path, and the discharging arc is formed by the electric breakdown along one of the throughout ionized paths. The voltage over the cathode-anode gap at which the throughout ionization is established is the throughout ionization voltage, and the relevant current is the throughout ionization current. If there is no throughout ionized path, the non-ionized gap exists, with its distance between the relevant ionized ranges around the cathode and the anode being the infimum of the distances between any two of their respective points. The non-ionized gap can be stated for the whole gas mixture (i.e. air) or separately for a particular constituent gas - Tab. 2.

Table 2: The characteristic parameters at the throughout ionization of the air gap.
parameter | unit | air | O2+ | O+ | O3+ | N2+ | N+ | NO2+ | NO2
throughout ionization voltage | kV | 2.23 | | | | | | |
throughout ionization current | μA | 26.6 | | | | | | |
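The existence test for a throughout ionized path - a continuous subset of the ionized union linking the cathode to the anode - is a plain connectivity search once the ranges are discretised. Below is a breadth-first search on a toy grid whose left and right columns stand for the electrode surfaces; the grid itself is an invented stand-in for the union of the per-gas ionized ranges of Eq. (27).

```python
# Throughout-ionization test: is there a 4-connected path of ionized cells
# from the cathode side (left column) to the anode side (right column)?
# The grid stands in for the union of the per-gas ionized ranges.
from collections import deque

def has_throughout_path(ionized):
    rows, cols = len(ionized), len(ionized[0])
    seen = set((i, 0) for i in range(rows) if ionized[i][0])
    queue = deque(seen)
    while queue:
        i, j = queue.popleft()
        if j == cols - 1:
            return True  # reached the anode column
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if (0 <= ni < rows and 0 <= nj < cols
                    and ionized[ni][nj] and (ni, nj) not in seen):
                seen.add((ni, nj))
                queue.append((ni, nj))
    return False
```

The anode-cathode voltage at which this test first returns True is, in the model's terms, the throughout ionization voltage.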