Digital Image Processing and Edge Detection

Digital Image Processing

Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception.

An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer.
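The definition above can be made concrete with a small sketch: a digital image is just a finite two-dimensional grid of discrete gray levels, and evaluating f(x, y) is an array lookup. The array size and sample values below are arbitrary illustrations, not taken from the text.

```python
# A digital image as a finite 2-D grid of discrete gray levels
# (8-bit values, 0-255). The values below are arbitrary sample data.
image = [
    [0,   50, 100],
    [150, 200, 250],
    [30,  60,  90],
]

def f(x, y):
    """Gray level (intensity) of the image at spatial coordinates (x, y)."""
    return image[y][x]  # row index is y, column index is x

print(f(1, 0))  # prints 50: the intensity at column 1, row 0
```

Note that when x, y, or the stored amplitudes become continuous the structure above no longer suffices; discretizing all three is exactly what makes the image "digital."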

Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI), whose objective is to emulate human intelligence. The field of AI is in its earliest stages
of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing
involves “making sense” of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.

Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are all within the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis, and even computer vision, depending on the level of complexity implied by the statement “making sense.” As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value.

The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In
this section we discuss briefly how images are generated in these various categories and the areas in which they are applied.

Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy, and each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in Fig. 1, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The bands are shown shaded to convey the fact that bands of the EM spectrum are not distinct but rather transition smoothly from one to the other. (Fig. 1)

Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling.

Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is when we increase the contrast of an
image because “it looks better.” It is important to keep in mind that enhancement is a very subjective area of image processing. Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a “good” enhancement result.

Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is also used in later chapters as the basis for extracting features of interest in an image.

Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used in this book for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions. (Fig. 2)

Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet,
which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the .jpg extension used in the JPEG (Joint Photographic Experts Group) image compression standard.

Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.

Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward the successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.

Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.

Recognition is the process that assigns a label (e.g., “vehicle”) to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing with the development of methods for the recognition of individual objects.

So far we have said nothing about the need for prior knowledge or about the interaction between the knowledge base and the processing modules in Fig. 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications.
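The simple case above, a knowledge base that records where the information of interest is located so that the search can be limited, can be sketched as follows. The region names, coordinates, and image size are all hypothetical, invented purely for illustration.

```python
# Hypothetical sketch: a "knowledge base" recording where the information
# of interest is known to be located, so that later processing modules
# only search those sub-regions instead of the whole image.
knowledge_base = {
    # region name -> (x, y, width, height); all values illustrative
    "license_plate": (60, 80, 40, 12),
    "headlights": (10, 70, 100, 10),
}

def restrict_search(image, region_name):
    """Return only the sub-image a processing module needs to examine."""
    x, y, w, h = knowledge_base[region_name]
    return [row[x:x + w] for row in image[y:y + h]]

# A dummy 100 x 200 all-zero "image" stands in for real pixel data.
image = [[0] * 200 for _ in range(100)]
roi = restrict_search(image, "license_plate")
print(len(roi), len(roi[0]))  # prints "12 40": the search shrinks to a 12 x 40 window
```

A real system would of course store far richer domain knowledge, but the effect is the same: each module operates on a constrained portion of the data rather than the full image.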

In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in Fig. 2 above by the use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.

Edge Detection

Edge detection is a term used in image processing and computer vision, particularly in the areas of feature detection and feature extraction, for algorithms that aim to identify points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. Although point and line detection are certainly important in any discussion of segmentation, edge detection is by far the most common approach for detecting meaningful discontinuities in gray level.

Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not ideal step edges at all. Instead, they are normally affected by one or several of the following effects:

1. focal blur caused by a finite depth of field and a finite point spread function;
2. penumbral blur caused by shadows created by light sources of non-zero radius;
3. shading at a smooth object edge;
4. local specularities or inter-reflections in the vicinity of object edges.

A typical edge might, for instance, be the border between a block of red color and a block of yellow. In contrast, a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background. For a line, there may therefore usually be one edge on each side of the line.
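As a concrete illustration of “brightness changing sharply,” here is a minimal gradient-based edge detector using the well-known Sobel kernels. The synthetic step-edge image and the threshold value are assumptions made for this sketch, not taken from the text.

```python
# Sobel kernels approximating the horizontal (Gx) and vertical (Gy)
# brightness gradients of a grayscale image.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, x, y):
    """Approximate gradient magnitude |Gx| + |Gy| at interior pixel (x, y)."""
    gx = sum(GX[j][i] * img[y + j - 1][x + i - 1] for j in range(3) for i in range(3))
    gy = sum(GY[j][i] * img[y + j - 1][x + i - 1] for j in range(3) for i in range(3))
    return abs(gx) + abs(gy)

# A synthetic ideal step edge: a dark region (0) next to a bright region (255).
img = [[0, 0, 255, 255] for _ in range(4)]

# Mark interior pixels whose gradient magnitude exceeds a threshold.
edges = [[int(sobel_magnitude(img, x, y) > 128) for x in range(1, 3)]
         for y in range(1, 3)]
print(edges)  # prints [[1, 1], [1, 1]]: pixels adjacent to the jump are marked as edges
```

On natural images, the effects listed above (focal blur, penumbral blur, shading) smear such a step over several pixels, which is why practical detectors combine gradient operators with smoothing and careful threshold selection.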
