萨师煊国际大数据分析与研究中心.ppt


Large-Scale Distributed Information Retrieval on the Web
萨师煊国际大数据分析与研究中心 Summer Research Camp Seminar
Weiyi Meng (孟卫一), Department of Computer Science, State University of New York at Binghamton
July 9, 2012

About SUNY Binghamton
- Founded in 1946, after WWII.
- Located in Binghamton, a city in the Southern Tier of New York State.
- About 15,000 students (3,000 graduate students).
- IBM was founded in Binghamton.
- One of the four University Centers of the SUNY system, along with SUNY at Stony Brook, SUNY at Buffalo, and SUNY at Albany.
- For more information, see http://www2.binghamton.edu/features/premier/index.html

What is Information Retrieval?
- Information retrieval (IR) is a computer science discipline for finding unstructured data (usually text documents) that satisfy an information need from within large collections stored on computers.
- In this seminar, we extend this definition to include both unstructured and structured data.

What is Distributed Information Retrieval (DIR)?
- A special branch of information retrieval in which the data of the IR system are stored in multiple distributed locations/collections.
- In the Web environment, DIR deals with data that are distributed across many websites or web servers.
- Related terms for DIR: metasearch engine, federated search, web database integration system.

The Scale: How Large?
- It can be as large as the number of data sources on the Web.
- A 2007 survey (Madhavan et al. 2007) indicates there were about 50 million searchable Web data sources in 2007:
  - 25 million for unstructured or less structured data (web pages, weibo, ...)
  - 25 million for structured data (web databases)

Where do Web data reside?
- Iceberg structure: a small fraction is on the Surface Web, mostly static web pages that are crawlable by following hyperlinks; the publicly indexable portion is 40-60 billion pages.
- Most are in the Deep Web, with both structured data and less structured text documents hidden behind numerous search interfaces: about 1 trillion pages/records.

Two paradigms to provide integrated access to Web data
- Crawling-based: gather Web data from various Web servers and/or search engines and build a search index for the gathered data.
  - Surface Web crawling
  - Deep Web crawling
- Metasearching-based (DIR-based): integrate existing search engines into federated systems.
  - Metasearching text documents
  - Metasearching structured data by domain

Advantages of each approach
- Crawling-based:
  - Complete control over the crawled data: can add metadata, can link data from different sources in advance, and can create an archive gradually.
  - Complete control over retrieval techniques and ranking functions.
  - Fast response time.
- Metasearching-based:
  - Capabilities of the component search engines can be leveraged.
  - Natural clustering of the data by individual search engines can be utilized.
  - The three-level query evaluation process (search engine selection, search engine retrieval, result merging) can lead to better effectiveness.
  - More likely to obtain fresher results.

Disadvantages of each approach
- Crawling-based:
  - Deep Web crawling is difficult; crawls are often incomplete and many sites are not crawlable.
  - Loses the semantics/structure of the data.
  - Cannot leverage the search engines' capabilities.
  - Crawling delay leads to less up-to-date results.
  - Copyright and privacy issues.
- Metasearching-based:
  - Performance depends on the quality of the search engines used.
  - May cause search engines to crash; access could be blocked by search engines.
  - No direct control of the data.
  - Slower response time.

Conclusions?
- Both technologies (crawling-based and metasearching-based) have unique value and should co-exist; they actually complement each other!
- Question: Is there an effective way to combine both technologies into a single platform?
- Our seminar will focus on the metasearching (DIR)-based approach.

Two types of metasearching systems
- Because structured and unstructured data have very different characteristics, they are often handled separately with different technologies:
  - Metasearching systems for text documents (metasearch engines or DIR systems).
  - Metasearching systems for structured data, each for a given domain (Web database integration systems).
- We will first introduce large-scale metasearch engines and then large-scale Web database integration systems.
- Due to limited time, we will focus on challenges and remaining challenges, not on current solutions.

Large-Scale Metasearch Engines (MSE)

A simple MSE architecture
[Figure: a user interface passes the query to a query dispatcher, which forwards it to component search engines 1..n, each searching its own text source; a result merger combines their results and returns the merged result to the user.]

What is a large-scale MSE?
- A large-scale metasearch engine needs to satisfy ALL of the following requirements:
  - It is a metasearch engine.
  - It is connected to a large number (thousands or more) of component search engines.
  - The component search engines are special-purpose search engines, covering a specific domain (news, sports, medicine, ...) or a specific organization (RenDa, IBM, ACM, ...).
- Why the third requirement? To retain the advantages in freshness and in searching the Deep Web.

Technical challenges with large-scale MSE
- Scalable and accurate search engine selection:
  - Most search engines are useless for a given user query: if the best 10 results are wanted and there are 10,000 search engines, at least 9,990 of them are useless.
  - Using useless search engines is bad: unnecessary network traffic, wasted resources at the local search engines, higher cost at the metasearch engine, and poorer effectiveness.
  - How to identify the most appropriate search engines for any given query accurately and in a timely manner?
  - How to summarize a search engine's content (its representative)? How to collect the representative? How to use the representatives to perform selection?

Technical challenges (cont.)
- Automatic search engine inclusion into the metasearch engine:
  - Automatic connection to search engines (automatic connection wrapper generation): submit queries and receive result pages via program.
  - Automatic search result record (SRR) extraction (automatic extraction wrapper generation).
  - Automatic wrapper maintenance: search engines may change their connection parameters and result presentation at any time.

Technical challenges (cont.)
- Effective and efficient result merging:
  - Autonomous component search engines likely employ different matching techniques between queries and documents (index techniques, weighting schemes, similarity functions, link-based ranking, etc.).
  - Local scores and ranks are generally not comparable.
  - How to re-rank the results returned from different search engines into a single ranked list such that high effectiveness can be achieved in a speedy manner?

Large-scale MSE architecture
[Figure: the Metasearch Engine Construction Module performs Search Engine Discovery (producing an SE List), SE Incorporation (automatic connection and result extraction), and Search Engine Representatives Generation; the Query Processing Module takes a user query through the Search Engine Selector (using the Search Engine Representatives), Query Dispatcher, Result Collector and Extractor, and Result Merger, over component Search Engines 1..m on the World Wide Web.]
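To make the query-processing side of this architecture concrete, the following Python sketch wires the three stages together: search engine selection, query dispatch and result collection, and result merging. The SearchResult class, the process_query function, and the simple score and rank arithmetic are illustrative assumptions, not the actual implementation of any system described in the slides.

# Illustrative sketch of the query-processing module of a metasearch engine:
# select a few component search engines, dispatch the query, collect results,
# and merge them. All names and the scoring/merging logic are assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class SearchResult:
    url: str
    title: str
    snippet: str
    local_rank: int   # rank assigned by the component search engine
    engine: str       # which engine returned it


def process_query(query: str,
                  representatives: Dict[str, Dict[str, float]],
                  searchers: Dict[str, Callable[[str], List[SearchResult]]],
                  k_engines: int = 3) -> List[SearchResult]:
    # 1. Search engine selection: score each engine's representative against the query.
    terms = query.lower().split()
    engine_scores = {
        name: sum(rep.get(t, 0.0) for t in terms)
        for name, rep in representatives.items()
    }
    selected = sorted(engine_scores, key=engine_scores.get, reverse=True)[:k_engines]

    # 2. Query dispatch and result collection from the selected engines only.
    collected: List[SearchResult] = []
    for name in selected:
        collected.extend(searchers[name](query))

    # 3. Result merging: a naive merge by local rank weighted by the engine score.
    def merged_score(r: SearchResult) -> float:
        return engine_scores[r.engine] / (r.local_rank + 1)

    return sorted(collected, key=merged_score, reverse=True)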

Two Recent Books (Monographs)
- W. Meng and C. Yu. Advanced Metasearch Engine Technology. Morgan & Claypool Publishers, December 2010. http:/
  Table of contents: Introduction; Metasearch engine architecture; Search engine selection; Search engine incorporation; Result merging; Summary and Future Research.
- M. Shokouhi and L. Si. Federated Search. Foundations and Trends in Information Retrieval, 5(1), pp. 1-102, 2011.
  Table of contents: Introduction; Collection representation; Collection selection; Result merging; Federated search testbeds; Conclusion and Future Research Challenges.

Search Engine Selection (1)
- Problem: given any user query and a set of search engines (or document collections), determine the search engines that match the user query best.
- Basic solution (sketched below):
  - Summarize the content of each search engine in advance.
  - For each user query, compare it with the search engine summaries and compute a matching score for each search engine.
  - Rank the search engines in descending order of their matching scores with the query and select the top-ranked search engines.
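One well-known instance of this summarize-score-rank scheme in the collection-selection literature is CORI-style scoring. The sketch below is an approximation from memory, not code from the slides: it assumes each engine's summary is a term-to-document-frequency map plus a total term count, and the function name cori_scores and its default parameters are illustrative.

# A sketch of the "summarize, score, rank" selection scheme using a CORI-style
# formula. The summary statistics assumed per engine are:
#   df[t] - number of documents in the engine (or its sample) containing term t
#   cw    - total term count of the engine's summary
# and, across engines, cf[t] - number of engine summaries containing t.
import math
from typing import Dict, List


def cori_scores(query_terms: List[str],
                summaries: Dict[str, Dict[str, int]],   # engine -> {term: df}
                engine_sizes: Dict[str, int],           # engine -> cw
                b: float = 0.4) -> Dict[str, float]:
    n_engines = len(summaries)
    avg_cw = sum(engine_sizes.values()) / n_engines
    cf = {t: sum(1 for s in summaries.values() if t in s) for t in query_terms}

    scores = {}
    for engine, df in summaries.items():
        belief = 0.0
        for t in query_terms:
            if cf[t] == 0:
                continue
            T = df.get(t, 0) / (df.get(t, 0) + 50 + 150 * engine_sizes[engine] / avg_cw)
            I = math.log((n_engines + 0.5) / cf[t]) / math.log(n_engines + 1.0)
            belief += b + (1 - b) * T * I
        scores[engine] = belief / len(query_terms)
    return scores


# Engines would then be ranked by score and the top-ranked ones selected.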

Search Engine Selection (2)
- Question 1: How to summarize the content of each search engine?
- Advanced solutions are statistics-based: they keep one or more statistics for each term appearing in the documents of a search engine. Some statistics used for a term t (a computation sketch follows below):
  - document frequency (df): the number of documents in the search engine that contain t.
  - collection frequency (cf): the number of search engines in the metasearch engine that contain t.
  - average normalized weight (anw): the average of the weights of t over all documents in the search engine that contain t.
  - maximum normalized weight (mnw): the maximum of the weights of t over all documents in the search engine.
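As a minimal illustration of how such summaries could be built when an engine's documents are available, the sketch below computes df, anw, and mnw for one engine and cf across engines. It assumes "normalized weight" means term frequency divided by document length; the slides do not fix a particular weighting scheme, so that choice is an assumption.

# Per-term summary statistics for one search engine, given (a sample of) its
# documents, plus the cross-engine collection frequency.
from collections import Counter, defaultdict
from typing import Dict, List


def engine_summary(docs: List[str]) -> Dict[str, Dict[str, float]]:
    df = Counter()                # document frequency of each term
    weights = defaultdict(list)   # all normalized weights observed for each term

    for doc in docs:
        tokens = doc.lower().split()
        counts = Counter(tokens)
        for term, tf in counts.items():
            df[term] += 1
            weights[term].append(tf / len(tokens))   # assumed normalized weight

    return {
        term: {
            "df": df[term],
            "anw": sum(ws) / len(ws),   # average normalized weight
            "mnw": max(ws),             # maximum normalized weight
        }
        for term, ws in weights.items()
    }


def collection_frequency(summaries: Dict[str, Dict[str, Dict[str, float]]]) -> Dict[str, int]:
    # cf: number of search engines whose summary contains the term
    cf = Counter()
    for summary in summaries.values():
        for term in summary:
            cf[term] += 1
    return dict(cf)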

Search Engine Selection (3)
- Question 2: How to obtain the summaries of search engines? Two general scenarios:
  - Straightforward computation if the documents of the search engine are available.
  - Query-based sampling if the documents of the search engine are not directly available (i.e., a Deep Web search engine); a sampling sketch follows below.
- Many published solutions exist, but they are still not scalable to large-scale metasearch engines.
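A rough sketch of query-based sampling, assuming a search(query) callable that wraps the engine's search interface and returns document texts. The probe-query selection policy, the default sample size, and the stopping criterion here are simple placeholders rather than any particular published method.

# Build a document sample of a deep-web search engine by probing it with
# single-term queries drawn from the vocabulary seen so far.
import random
from collections import Counter
from typing import Callable, List, Set


def query_based_sample(search: Callable[[str], List[str]],
                       seed_terms: List[str],
                       sample_size: int = 300,
                       docs_per_query: int = 4) -> List[str]:
    sample: List[str] = []
    seen_docs: Set[str] = set()
    used_queries: Set[str] = set()
    vocabulary = Counter(seed_terms)

    while len(sample) < sample_size:
        # Choose the next probe query from terms observed so far.
        candidates = [t for t, _ in vocabulary.most_common(200) if t not in used_queries]
        if not candidates:
            break
        term = random.choice(candidates)
        used_queries.add(term)

        for doc in search(term)[:docs_per_query]:
            if doc in seen_docs:
                continue
            seen_docs.add(doc)
            sample.append(doc)
            vocabulary.update(doc.lower().split())

    return sample  # summary statistics (df, anw, mnw, ...) are then computed from this sample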

Search Engine Selection (4)
- Question 3: How to rank search engines for each user query? Sub-questions:
  - How to define a measure of the usefulness of a search engine with respect to a query?
  - How to compute the measure very quickly (highly efficiently) in a large-scale metasearch engine?
- A large number of search engine selection algorithms have been proposed; most are not very scalable.

Automatic Search Engine Incorporation
- Automatic connection to any search engine given its URL:
  - Pass queries to the search engine programmatically.
  - Receive results from the search engine programmatically.
- Automatic extraction of retrieved search results:
  - Extract the URLs and snippets of the retrieved pages.
  - Extract the number of hits.
  - Extract the URL pattern of the "next page" button.
- Automatic connection and extraction maintenance; automatic failure detection.

Automatic Search Engine Connection
- Extract the connection parameters from the HTML form tag of each search engine.
- Apply the HTTP request method (GET or POST) to perform the connection (a connection sketch follows below).
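A minimal sketch of such a connection wrapper, assuming the third-party requests and beautifulsoup4 packages, a well-formed page with a single search form, and a known query field name; it ignores the JavaScript, cookie, and session complications listed on the next slide.

# Derive connection parameters from a search engine's HTML form, then submit a
# query via HTTP GET or POST as the form specifies.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def extract_connection_params(search_page_url: str, query_field_hint: str = "q"):
    html = requests.get(search_page_url, timeout=10).text
    form = BeautifulSoup(html, "html.parser").find("form")   # naive: first form only
    params = {}
    for inp in form.find_all("input"):
        name = inp.get("name")
        if name:
            params[name] = inp.get("value", "")               # keep default/hidden values
    return {
        "action": urljoin(search_page_url, form.get("action", "")),
        "method": form.get("method", "get").lower(),
        "params": params,
        "query_field": query_field_hint,                      # which input carries the query
    }


def submit_query(conn, query: str) -> str:
    data = dict(conn["params"], **{conn["query_field"]: query})
    if conn["method"] == "post":
        resp = requests.post(conn["action"], data=data, timeout=10)
    else:
        resp = requests.get(conn["action"], params=data, timeout=10)
    return resp.text   # the result page, to be handed to the SRR extractor

In practice the extracted parameters would be stored as the engine's connection wrapper and re-extracted only when automatic maintenance detects a failure.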

Search form extraction: Difficulties
- Complex search forms with many control elements.
- Ill-formatted HTML search forms.
- Multiple search forms on the same page.
- Search forms with JavaScript and/or CSS (Cascading Style Sheets).
- Search forms that have action redirections.
- Search forms that utilize sessions/cookies.
- Search engines that do not allow metasearching.

Automatic Search Result Records (SRRs) Extraction (1)
- A search result record (SRR) consists of the returned information associated with a retrieved Web page: the URL of the page, the title of the page, a short summary of the page, and other miscellaneous items (size, date, category, ...).
- Result pages often contain irrelevant information, such as content related to advertisements and the hosting organization, in addition to the SRRs.
[Figure: WebScales wrapper generation - a sample result page with individual SRRs marked.]

Automatic SRR Extraction (2)
- Extract the correct SRRs from the returned response pages while discarding irrelevant information.
- The problem is to identify the rules (often called a wrapper) that can extract the correct SRRs.

Automatic SRR Extraction (3)
- General methodology (a rough sketch follows below):
  - Utilize the tag strings/DOM trees/visual information on one or more result pages from the same search engine to mine extraction patterns.
  - Identify the minimal data-rich region/subtree that likely contains the SRRs.
  - Identify the separator(s) that separate different SRRs.
- More recent solutions use more visual information on the result pages.
- Still cannot handle complex result pages well (JavaScript, multiple columns, multiple sections, multiple SRR formats).
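A crude, DOM-only approximation of this methodology: it treats the parent whose children most often repeat the same tag-and-class signature as the data-rich region and each repeated child as one SRR. Real wrapper generators also use tag paths, visual layout, and multiple result pages; the function name, the scoring heuristic, and the thresholds are all assumptions.

# Heuristically locate the repeated data-rich region of a result page and
# extract one record per repeated child element.
from collections import Counter

from bs4 import BeautifulSoup


def extract_srrs(result_page_html: str):
    soup = BeautifulSoup(result_page_html, "html.parser")
    best_children, best_score = [], 0

    for parent in soup.find_all(True):                        # every element node
        children = parent.find_all(True, recursive=False)
        signatures = Counter((c.name, tuple(c.get("class") or [])) for c in children)
        if not signatures:
            continue
        sig, count = signatures.most_common(1)[0]
        same_sig = [c for c in children
                    if (c.name, tuple(c.get("class") or [])) == sig]
        # Score a candidate region by how many same-signature, text-rich children it has.
        text_len = sum(len(c.get_text(strip=True)) for c in same_sig)
        if count >= 3 and count * text_len > best_score:
            best_score = count * text_len
            best_children = same_sig

    # Each selected child is treated as one search result record (SRR).
    srrs = []
    for c in best_children:
        link = c.find("a")
        srrs.append({
            "title": link.get_text(strip=True) if link else "",
            "url": link.get("href", "") if link else "",
            "snippet": c.get_text(" ", strip=True),
        })
    return srrs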

Result Merging (1)
- Problem: merge the returned documents from multiple sources into a single ranked list.
- Difficulties:
  - The full documents of the search results are not available, or are too expensive to download and analyze on the fly.
  - Local similarities (and thus local ranks) are usually not comparable, due to different similarity functions, different term weighting schemes, and different statistical values (e.g., global idf vs. local idf).

Result Merging (2)
- A large number of solutions have been proposed to perform result merging:
  - Some use the local similarities associated with each result (modern search engines no longer provide this information).
  - Some use the local ranks of the search results.
  - Some analyze downloaded full documents.
  - Some use the titles and snippets of the search results.
  - Some consider the quality of the search engine used.
  - Some consider whether a result is retrieved from multiple search engines.
  - Some use a sample set of documents from each search engine.

Result Merging (3)
- Information that could be utilized for result merging (a merging sketch follows below):
  - The local similarity or local rank of each result.
  - The title of each result.
  - The snippet of each result.
  - The publication time of each result.
  - The organization/person who published the result (from the URL).
  - The size of each result.
  - The number of search engines that returned the result.
  - The ranking scores of the search engines that returned the result.
  - The full content of each result (or of some of the results).
  - The PageRank or number of backlinks of each result.
  - A sample set of documents from each search engine.
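As an illustration, the sketch below merges ranked URL lists using three of the signals listed above: the local rank of each result, the ranking score of the engine that returned it, and the number of engines that returned the same URL. The combination formula and the 0.2 multi-engine boost are assumptions for demonstration, not a published merging algorithm.

# Rank-based result merging over the lists returned by the selected engines.
from collections import defaultdict
from typing import Dict, List, Tuple


def merge_results(result_lists: Dict[str, List[str]],      # engine -> ranked list of URLs
                  engine_scores: Dict[str, float]) -> List[Tuple[str, float]]:
    merged: Dict[str, float] = defaultdict(float)
    returned_by: Dict[str, int] = defaultdict(int)

    for engine, urls in result_lists.items():
        for rank, url in enumerate(urls, start=1):
            # Higher engine score and better (smaller) local rank -> larger contribution.
            merged[url] += engine_scores.get(engine, 1.0) / rank
            returned_by[url] += 1

    # Boost results retrieved from multiple search engines.
    for url in merged:
        merged[url] *= 1.0 + 0.2 * (returned_by[url] - 1)

    return sorted(merged.items(), key=lambda item: item[1], reverse=True)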

Remaining Research Challenges (1)
- Search engine summary generation and maintenance:
  - Query-based sampling methods have not been shown to be practically viable for a large number of truly autonomous search engines.
  - Certain statistics used by some search engine selection algorithms, such as the maximum normalized weight, are still too expensive to collect, as they may require submitting a substantial number of queries to cover a significant portion of the vocabulary of a search engine.
  - The important issue of how to effectively maintain the quality of the summaries for search engines whose contents may change over time has started to receive attention only recently, and more investigation into this issue is needed.

Remaining Research Challenges (2)
- Automatic search engine connection with complex search forms:
  - More and more search engines are employing advanced tools to program their search forms; for example, more and more search forms now use JavaScript.
  - Some search engines also include cookies and session ids in their connection mechanism.
  - These complexities make it significantly more difficult to automatically extract all of the needed connection information.

Remaining Research Challenges (3)
- Automatic maintenance:
  - Search engines used by metasearch engines may make various changes due to upgrades or other reasons; possible changes include search form changes, query format changes, and result display format changes.
  - These changes can make the search engines unusable in the metasearch engine unless the necessary adjustments are made automatically.
  - Automatic metasearch engine maintenance is critical for the smooth operation of a large-scale metasearch engine, but this important problem remains largely unsolved. There are mainly two issues: detect and differentiate vario
