VI. A Comparison Between Manual and Automatic Indexing Methods

G. Salton and D. K. Williamson

Abstract

The effectiveness of conventional document indexing is compared with that achievable by fully-automatic text processing methods. Evaluation results are given for a comparison between the MEDLARS search system used at the National Library of Medicine and the experimental SMART system, and conclusions are reached concerning the design of future automatic information systems.

1. Introduction

The design and operation of large-scale information systems have become of concern to an ever-increasing segment of the scientific and professional world. Furthermore, as the amount and complexity of the available information have continued to grow, the use of mechanized or partly mechanized procedures for various information storage and retrieval tasks has also become more widespread. As a result, a number of large information systems are now in operation in which at least the search operations — that is, the comparison of incoming search requests with stored information — are carried out automatically. Typical examples in the United States are the NASA Scientific and Technical Information Facility, and the MEDLARS system at the National Library of Medicine.

While these operational information systems are thus able rapidly to search vast storage files, often containing many hundreds of thousands of items, most of the operations other than the search itself are performed manually with the help of human experts. In particular, all the content analysis and indexing operations, leading to the assignment of suitably chosen combinations of index terms to the stored documents and to incoming search requests, are normally performed by specialists who know the given subject area, as well as the performance characteristics of the retrieval environment within which they operate.
Many of the information systems which base their operations on manual indexing but largely automatic search methods are quite successful in isolating, from the large mass of largely irrelevant stored material, many of the items which prove pertinent to the users' information needs. Nevertheless, the feeling that manual systems and procedures should be replaced by suitably chosen automatic methods has continued to grow, and a number of fully-automatic information storage and retrieval systems have been designed and put into operation, at least on an experimental basis. The SMART system represents one such effort to replace the intellectual indexing by sophisticated automatic text analysis procedures, and thereby to produce a retrieval environment in which all document and query handling procedures are performed automatically [1,2,3].

In the next few paragraphs, some of the evaluation measures that have been widely used to determine the effectiveness of information systems are introduced, and typical evaluation results obtained with the SMART system are given. Thereafter, the design of the SMART-MEDLARS test is examined, and evaluation results are given for the comparison between SMART and MEDLARS searches, using a variety of different analysis and search methods. Suggestions are made for improving the performance of presently operating information systems, and for the design of future automatic retrieval services.

2. The Evaluation of Information Systems

Many different criteria may suggest themselves for measuring the performance of an information system. In the evaluation work carried out with the SMART system, the effectiveness of an information system is assumed to depend on its ability to satisfy the users' information needs by retrieving wanted material, while rejecting unwanted items.
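The two requirements just stated, retrieving wanted material and rejecting unwanted items, are quantified by the recall and precision measures taken up in the next paragraph. As a rough sketch in present-day terms (the document identifiers and relevance judgments below are invented for illustration):

```python
# Recall: proportion of the relevant material actually retrieved.
# Precision: proportion of the retrieved material actually relevant.
# Both are computed over a single search request; the identifiers
# "d1", "d3", etc. are hypothetical.

def recall_precision(retrieved, relevant):
    hits = len(set(retrieved) & set(relevant))
    return hits / len(relevant), hits / len(retrieved)

retrieved = ["d3", "d7", "d1", "d9", "d2"]  # items returned by the search
relevant = {"d1", "d3", "d8"}               # items judged relevant by the user

r, p = recall_precision(retrieved, relevant)
print(round(r, 2), round(p, 2))  # 0.67 0.4
```

Returning more items can only raise recall and usually lowers precision, which is the inverse relationship noted in the discussion that follows.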
Two measures have been widely used for this purpose, known as recall and precision, and representing respectively the proportion of relevant material actually retrieved, and the proportion of retrieved material actually relevant [4,5]. (Ideally, all relevant items should be retrieved, while at the same time, all nonrelevant items should be rejected, as reflected by perfect recall and precision values equal to 1.) It should be noted that both the recall and precision figures achievable by a given system are adjustable, in the sense that a relaxation of the search conditions often leads to high recall, while a tightening of the search criteria leads to high precision. Unhappily, experience has shown that on the average, recall and precision tend to vary inversely, since the retrieval of more relevant items normally also leads to the retrieval of more irrelevant ones. In practice, a compromise is usually made, and a performance level is chosen such that much of the relevant material is retrieved, while the number of nonrelevant items which are also retrieved is kept within tolerable limits.

In the SMART evaluation system, these various possible operating ranges are taken into account by computing for each search request, and for each processing method, a variety of different statistics related to recall and precision. Specifically, four global statistics are generated, known as rank recall, log precision, normalized recall, and normalized precision respectively, as well as ten local statistics, consisting of the standard precision at ten different recall levels. The global statistics are used to represent the overall performance of a given search, whereas the local statistics furnish individual recall-precision pairs for specific operating ranges of the system. Paired comparisons are normally presented, consisting of the average performance over many search requests of two given search and retrieval systems [5].
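The paired comparisons mentioned above are accompanied in the SMART output by significance estimates (see Fig. 1(c) below), computed with a standard t-test and a sign test. A minimal two-sided sign test can be sketched as follows; the per-query scores are invented, and dropping ties is one common convention rather than necessarily the exact SMART procedure:

```python
from math import comb

def sign_test(scores_a, scores_b):
    """Two-sided sign test: probability of a win/loss split at least
    this extreme if methods A and B were in fact equally effective."""
    diffs = [a - b for a, b in zip(scores_a, scores_b) if a != b]  # ties dropped
    n = len(diffs)
    wins = sum(d > 0 for d in diffs)
    k = min(wins, n - wins)
    # tail probability of k or fewer "losses" under Binomial(n, 1/2)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# hypothetical normalized precision per query for two methods
a = [0.62, 0.55, 0.71, 0.48, 0.66, 0.59, 0.73, 0.51]
b = [0.58, 0.57, 0.65, 0.44, 0.60, 0.52, 0.70, 0.47]
print(round(sign_test(a, b), 4))  # 0.0703: above 0.05, so not significant
```

With seven wins out of eight untied pairs, the probability of so lopsided a split under equal effectiveness still exceeds the customary 0.05 bound.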
One of the document collections used for evaluation purposes with the SMART system over the last few years is the set of 200 documents and 42 search requests in the field of aerodynamics used earlier as part of the well-known Aslib-Cranfield experiments [4]. This collection is attractive for test purposes since a number of actual user queries were available, as well as sets of relevance judgments obtained from the scientists constituting the user population. Furthermore, English abstracts were furnished with each document, and it thus became possible to compare the effectiveness of the conventional retrieval operations based on a matching of the index term sets — manually assigned by trained indexers at Cranfield — with the performance of the fully-automatic language processing devices based on the manipulation of document abstracts, used by the SMART programs. Such a comparison could then produce evidence to indicate whether document identifiers automatically generated by language analysis methods, such as suffix cut-off procedures, thesaurus look-up, phrase generation methods, statistical term associations, syntactic analysis, and others, would perform as well as manually assigned index terms.

A typical comparison between the Cranfield indexing and an automatic word-stem matching process, based on a matching of weighted word stems extracted from document abstracts and search requests respectively, is shown in Fig. 1, averaged over the 42 Cranfield queries. The recall-precision graph of Fig. 1(a), and the corresponding tables of Fig. 1(b), indicate that the manual indexing is slightly superior to the simple automatic word-stem process. However, the statistical significance computations, included in Fig. 1(c), show that the differences in performance between the two systems are not significant. Specifically, each of the values shown in Fig. 1(c) represents the probability — computed by using either a standard t-test, or a sign test — that if the performance of the two systems (manual indexing and automatic word-stem match) were, in fact, equally high, then a test value as large as the one actually observed would occur in practice [5]. A probability of 0.05 is usually taken as an upper bound in judging whether a deviation in test values is significant or not. The probability values included in Fig. 1(c) are seen to be much higher than 0.05, and the assumption that the two systems are approximately comparable in effectiveness cannot safely be rejected.

[Fig. 1: Recall-precision graph, tables, and significance output comparing the Cranfield manual indexing ("Index, Stem") with the SMART abstract word-stem process ("Abstract, Stem")]

[Fig. 3: Recall-Precision Data for Abstract-Title Comparisons (MEDLARS collection, 18 queries, 273 documents): recall-precision graph, tables for the word-stem abstract and word-stem title runs, and significance output]
[Fig. 4: Recall-Precision Comparisons for Original Queries and Altered Queries by Negative Phrase Deletion and Upweighting (MEDLARS collection, 18 queries, 273 documents, abstract stem process): recall-precision graphs, tables, and significance output]

A given thesaurus grouping may include a variety of different concepts, where some of these concepts occur in a negative phrase, while others occur in a positive sense within the same query. In that case, the deletion of the negative phrases produces a decrease in the weight of important terms, which may consequently reduce the search effectiveness. The upweighting process for important technical terms, on the other hand, generally produces an improvement in search effectiveness; the improvement may, however, be less uniform than expected.
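The two query alteration methods under discussion, negative phrase deletion and the upweighting of important terms, can be sketched roughly as follows. The cue-phrase rule and the term weights are invented simplifications, not the actual SMART procedures; the sample query is query 01 of Table 8:

```python
import re

def delete_negative_phrase(query):
    # drop a "but not ..." phrase, up to the end of its sentence
    # (a toy cue-word rule, not SMART's actual phrase recognition)
    return re.sub(r",?\s*but not[^.]*\.", ".", query)

def upweight(weights, important, factor=2.0):
    # multiply the weight of hand-picked important terms
    return {t: (w * factor if t in important else w) for t, w in weights.items()}

query = ("The crystalline lens in vertebrates, including humans, "
         "but not drug therapy or surgery.")
print(delete_negative_phrase(query))
# The crystalline lens in vertebrates, including humans.

weights = {"lens": 1.0, "vertebrate": 1.0, "human": 1.0}
print(upweight(weights, {"lens"}))  # "lens" doubled, the rest unchanged
```

Doubling a term's weight here plays the same role as repeating the term in the query statement, which is how the upweighted queries of Table 8 are formed.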
For some queries, it is easy to pick appropriate terms whose weight should be increased; for example, in query 01, listed in Table 8, the term "lens" may be expected to be much more essential for the subject description than, for example, the term "vertebrate". Other query statements may, however, occur for which the important terms are much more difficult to locate; in such cases, the search improvements due to upweighting may remain small, or may be nonexistent.

Table 8. Samples of Query Modification by Negative Phrase Deletion and Upweighting

Query 01
  Original query:           The crystalline lens in vertebrates, including humans, but not drug therapy or surgery.
  Negative phrase deletion: The crystalline lens in vertebrates, including humans.
  With term upweighting:    [Original query], crystalline lens, crystalline lens.

Query 03
  Original query:           Electron microscopy of lung or bronchi. Pleura or pleural diseases may be excluded.
  Negative phrase deletion: Electron microscopy of lung or bronchi.
  With term upweighting:    [Original query], electron, electron, lung, bronchi.

Query 13
  Original query:           Blood or urinary steroids in human breast or prostatic neoplasms. Drug therapy, toxicology, etc., to be excluded.
  Negative phrase deletion: Blood or urinary steroids in human breast or prostatic neoplasm.
  With term upweighting:    [Original query], steroids, steroids, breast, prostate.

The two query modification procedures incorporated into the SMART system are only two possible methods which may improve the results of the automatic searches. Similar methods can, of course, also be used for the semi-manual MEDLARS searches. The prospects for such potential improvements in retrieval effectiveness are taken up in the concluding section.

6. Conclusions

The MEDLARS test comparisons which are described in this study lead to the same conclusions previously reached in other test environments with the SMART evaluation system [5]: Fully-automatic text analysis and search systems do not appear to produce a retrieval performance which is inferior to that obtained by conventional systems using manual document indexing and manual search formulations. While the manual indexing and search formulations can lead to exceptionally fine results when the indexer and/or the searcher are completely aware of the relationships between the stored collection and the user needs, the search results may also be very poor when these conditions are not met. The automatic process, on the other hand, with its exhaustive input data and complex analysis methods, rarely performs very poorly, and may often produce completely satisfactory retrieval action.

Two important questions may be asked concerning the practical implications of the foregoing test results: first, is it reasonable to expect that identical results would hold if the automatic text processing methods were applied to the operational MEDLARS environment comprising half a million or more documents; and, second, can anything be done to improve the search effectiveness of presently existing automatic and manual information systems beyond the levels reflected in the recall-precision graphs of Figs. 1 to 4?

The first question cannot be answered with full certainty, since it is obviously not likely that keypunched abstracts will ever become available for the full MEDLARS collection. To what extent the present results can safely be extrapolated to searches performed with the full MEDLARS collection depends to a large extent on whether the set of properly rejected nonrelevant documents included in the MEDLARS collection falls into subject categories which are clearly far away from the query subjects.
Obviously, if the nonrelevant documents not included in the SMART subset but included in the full collection could be assumed to be easier to reject than the nonrelevant items actually included in the subset, then the SMART results for the full collection should be the same as those obtained for the subset alone. If, on the other hand, there are many more hard-to-reject nonrelevant items in the full collection than in the subset, the results obtained by SMART on the subcollection may not be directly transferable to those obtainable on the full collection. An estimate for the amount of degradation to be expected in such a case may be obtained by adding to the SMART subset new documents which are nearly — but not quite — relevant to the search requests, and repeating the searches with the augmented collection. Based on the previous test results obtained with the SMART system in other subject areas, it is this writer's guess that the degradation, if any, will be small. This assertion remains, however, to be tested.

The problem relating to the fundamental improvements of both the SMART and MEDLARS searches is easier to treat. The originators of the internal MEDLARS test have, in fact, some pertinent suggestions to make concerning possible changes to be implemented in the search formulations, indexing language, and user-system interaction:

a) Concerning an appropriate query formulation: "...the prime requirement is a complete statement of what the requester is looking for in the requester's own natural language, narrative form; [the query formulation must not] be deliberately phrased ... in a form that the requester believes will approximate a MEDLARS search strategy" [9, p. 117];

b) Concerning the indexing language to be used: "we recommend a shift in emphasis away from the external advisory committee on terminology and towards the continued analysis of the terminological requirements of MEDLARS users as reflected in the demands placed upon the system" [9, p.
193];

c) Concerning user-system interaction during the search: "the greatest potential for improvement in MEDLARS exists at the interface between user and system; a significant improvement in the statement of requests can raise both the recall and the precision ..." [9, p. 193].

That these suggestions are all well taken has been shown by the retrieval comparisons previously made with the SMART system [5]. Indeed, the search formulations suggested as ideal for MEDLARS are exactly the ones already used for all SMART searches. Furthermore, the dictionary construction principles derived for the SMART system also point in the direction of greater responsiveness to collection makeup and user needs, and away from committee control [12]. Finally, user-controlled iterative searches have been implemented successfully with the SMART system for several years [13,14,15].

It is difficult to predict exactly how much improvement in search effectiveness may result from the introduction of these various search and retrieval aids. The test results obtained under experimental conditions with the SMART system appear to indicate that the potential improvement will not exceed ten to fifteen percent, leading to a recall and precision performance of 0.70 or 0.75, instead of the present 0.50 to 0.60. Such a performance would still be far short of what is desirable. However, it is encouraging to note that the present situations are well enough understood to make it reasonable to suggest avenues for the design of future improved systems, including viable automatic search and analysis procedures in place of some of the uncertain manual ones now in use.

References

[1] G. Salton and M. E. Lesk, The SMART Automatic Document Retrieval System — An Illustration, Communications of the ACM, Vol. 8, No. 6, June 1965.

[2] M. E. Lesk, Operating Instructions for the SMART Text Processing and Document Retrieval System, Scientific Report No.
ISR-11 to the National Science Foundation, Section II, Department of Computer Science, Cornell University, June 1966.

[3] G. Salton, et al., Information Storage and Retrieval, Scientific Reports to the National Science Foundation, Nos. ISR-11, ISR-12, and ISR-13, Department of Computer Science, Cornell University, June 1966, June 1967, and January 1968.

[4] C. W. Cleverdon and E. M. Keen, Factors Determining the Performance of Indexing Systems, Vol. 2 — Test Results, Aslib-Cranfield Research Project, Cranfield, 1966.

[5] G. Salton and M. E. Lesk, Computer Evaluation of Indexing and Text Processing, Journal of the ACM, Vol. 15, No. 1, January 1968.

[6] A. M. Rees, Evaluation of Information Systems and Services, in Annual Review of Information Science and Technology, Vol. 2, C. Cuadra, editor, Interscience Publishers, New York, 1967.

[7] A. M. Rees and D. G. Schultz, A Field Experimental Approach to the Study of Relevance Assessments in Relation to Document Searching, Final Report to the National Science Foundation, Center for Documentation and Communication Research, Case Western Reserve University, October 1967.

[8] F. W. Lancaster, Evaluating the Performance of a Large Operating Information Retrieval System, Proceedings of the Second Electronic Information Handling Conference, Thompson Book Company, Washington, 1967.

[9] F. W. Lancaster, Evaluation of the Operating Efficiency of MEDLARS, Final Report, National Library of Medicine, January 1968.

[10] G. Salton, The Evaluation of Computer-based Information Retrieval Systems, Proceedings of the FID 1965 Congress, Spartan Books, Washington, 1966.

[11] E. M. Keen, Suffix Dictionaries and Thesaurus, Phrase, and Hierarchy Dictionaries, Information Storage and Retrieval, Report No. ISR-13 to the National Science Foundation, Sections VI and VII, Department of Computer Science, Cornell University, January 1968.

[12] G. Salton, Information Dissemination and Automatic Information Systems, Proceedings of the IEEE, Vol. 54, No. 12, December 1966.

[13] J. J. Rocchio, Jr., Document Retrieval Systems — Optimization and Evaluation, Harvard University Doctoral Thesis, Scientific Report No. ISR-10 to the National Science Foundation, Harvard Computation Laboratory, March 1966.

[14] J. J. Rocchio and G. Salton, Information Search Optimization and Iterative Retrieval Techniques, Proceedings of the AFIPS Fall Joint Computer Conference, Vol. 27, Spartan Books, November 1965.

[15] G. Salton, Search and Retrieval Experiments in Real-Time Information Retrieval, Proceedings of the IFIP Congress '68, Edinburgh, August 1968 (also Technical Report 68-8, Department of Computer Science, Cornell University, Ithaca, New York, February 1968).

Appendix A

Recall-Precision Comparisons for Individual Queries

[Per-query recall-precision tables not legibly preserved]