Vol. 33, No. 2, ACTA AUTOMATICA SINICA, February, 2007

Perceptual Contrast-Based Image Fusion: A Variational Approach

WANG Chao, YE Zhong-Fu

Abstract: Local contrast, or variation, plays an important role in image fusion, whose main task is to preserve the important information from the source images in the fusion result. Weber's law tells us that the same variation under different backgrounds causes different perceptual feelings, so an ideal image processor has to take into account the effects of visual psychology and psychophysics. This paper considers the properties of the human visual system (HVS) and transfers the quantitative perceptual variations from each source image to the result. Using the just-noticeable difference (JND) as the unit of measurement, the multiband image's perceptual contrast is obtained as a target. We construct a functional extremum problem to find a single-band image, the fusion result, whose perceptual contrast is closest to the target. Via the variational approach, the Euler-Lagrange equation is derived, and a gradient descent iteration is employed. Experimental results show that this method is perceptually good.

Key words: Image fusion, variational approach, perceptual contrast, Weber's law, human visual system (HVS), partial differential equations (PDEs)

1 Introduction

The purpose of image fusion is to extract and synthesize information from multiple images in order to produce a more accurate, complete, and reliable composite image of the same scene or target, so that the fused image is more suitable for human or machine interpretation. It is useful for analysis, detection, recognition, and tracking of targets of interest. Image fusion can be performed at three levels, named pixel level, feature level, and decision level, respectively. In this paper
we only discuss pixel-level image fusion, and we assume all the input channels have been well registered.

Up to now, plenty of image fusion algorithms have been based on multiscale decomposition; see [1] for a review. Among these decomposition methods, the discrete wavelet transform (DWT) is probably the most typical one [2]. Recently, Socolinsky et al. proposed a novel variational paradigm for image fusion, which has shown a promising future [3-6]. In Socolinsky's framework, the variation of the intensity is the only thing considered. However, the same variation under different backgrounds causes different perceptions [7], as Weber's law tells us. The human visual system (HVS) is a necessary ingredient to take into consideration in all image processing tasks, including image fusion.

In the next section, previous work is briefly reviewed, including the wavelet fusion method, Socolinsky's method, and the HVS. In Section 3, we generalize Socolinsky's framework by Weber's law, present the corresponding model based on perceptual contrast, and give the discretization of the proposed scheme. Experiments are presented in Section 4. In the last section, we give some concluding remarks and future directions for study.

2 Previous work and our motivation

2.1 Wavelet fusion framework [2]

(Received July 13, 2005; in revised form August 10, 2006. Supported by the Graduates Innovation Fund of the University of Science and Technology of China (KD2005043). 1. Institute of Statistical Signal Processing, Department of Electrical Engineering and Information Science, University of Science and Technology of China, Hefei 230027, P. R. China; 2. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100080, P. R. China. DOI: 10.1360/aas-007-0132)

Fig. 1 DWT fusion framework [2]

Here, we will not restate wavelet transform theory, but only give the existing fusion scheme based on the DWT [2]. The typical diagram of DWT-based image fusion is shown in Fig. 1. The "selection rule" is the key issue of the whole task. It embodies what feature our choice is, or what we consider to be the important information that should be preserved in the fused image. Different DWT-based algorithms often use different rules. For the smooth component (LL) of the coarsest-level DWT coefficients, Li et al. [2] simply used the average value
of each source image's coefficients at the exact point. A maximum absolute amplitude (MAA) rule with consistency verification is used for the coefficients of the other three components (LH, HL, HH) at each level. However, as many researchers have noticed, any nonlinear operation following the DWT, such as MAA-based selection, will cause many fluctuations near strong edges. Furthermore, the fluctuations become much more serious as the decomposition level increases, and are thus an insurmountable difficulty [8, 9]. That greatly limits the application of the DWT in image fusion.
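As a concrete illustration of the selection rules above, here is a minimal one-level Haar sketch (our own construction, not the authors' code, and with the consistency-verification step omitted): the smooth band is averaged, and the detail bands are fused by the MAA rule.

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar decomposition (unnormalized averages/differences)."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row averages
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row differences
    LL = (lo[0::2] + lo[1::2]) / 2.0
    LH = (lo[0::2] - lo[1::2]) / 2.0
    HL = (hi[0::2] + hi[1::2]) / 2.0
    HH = (hi[0::2] - hi[1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Inverse of haar2 (perfect reconstruction)."""
    h, w = LL.shape
    lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
    lo[0::2], lo[1::2] = LL + LH, LL - LH
    hi[0::2], hi[1::2] = HL + HH, HL - HH
    x = np.empty((2 * h, 2 * w))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def maa(x, y):
    # maximum-absolute-amplitude selection, coefficient by coefficient
    return np.where(np.abs(x) >= np.abs(y), x, y)

def dwt_fuse(a, b):
    """Average the smooth (LL) band; keep the MAA coefficient in LH, HL, HH."""
    (LLa, LHa, HLa, HHa), (LLb, LHb, HLb, HHb) = haar2(a), haar2(b)
    return ihaar2((LLa + LLb) / 2.0, maa(LHa, LHb),
                  maa(HLa, HLb), maa(HHa, HHb))
```

In [2] the decomposition runs over several levels with consistency verification; the one-level sketch only shows the two selection rules.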

2.2 Contrast-based fusion

Socolinsky et al. have advanced a novel paradigm for image fusion. They defined the contrast (i.e., the first fundamental form) of a multiband image, which agrees with the gradient definition in the special case of a single band. They used the variational approach to find the solution, i.e., the fused image, whose gradient is closest to the multiband contrast. The objective contrast (or the gradient of a single-band image) is the only quantity used in this framework. We do not review the procedure here; instead, we describe the HVS-generalized contrast-based scheme in Section 3. For a detailed description of the contrast fusion method, please refer to [3-6].

2.3 Human visual system (HVS)

Fig. 2 The approximation of Weber's law

The human visual system has already been
investigated for a long time [10-12]. Among the many results in the literature, maybe the most important one is Weber's law, which says that the ratio ΔI_J/I is constant within wide limits of intensity, where I is the intensity and ΔI_J is the just-noticeable difference (JND) [10, 12]. According to [7], the relation between ΔI_J/I and I approximately satisfies the empirical relation in Fig. 2. Unfortunately, as stated in [13], the application of the JND has some main difficulties:

1) The JND is only defined at the exact threshold; what will the perception be when the stimulus is well above the threshold?
2) The empirical relation is obtained by measuring physical illumination. When we use it for digital images, we mainly consider the intensity shown on a monitor or printed on paper; what is the relation between the monitor luminance and the measure in the empirical curve?
3) The curve is obtained with simple test patterns such as Fig. 3. When a natural image, a complex pattern, is considered, what is the corresponding perception by the HVS?

For the problems listed above, in this paper we make the following assumptions, respectively:

1) We do not use the JND as a threshold; instead we assume it to be a unit for measuring variations. A similar hypothesis has been made in [11].
2) Because the gray level shown on a monitor has the same order as the physical illumination, we take the horizontal axis (I) of Fig. 2 to be the gray level (0-255) of digital images. Following [14], we set the parameters as P3 = 0.575, P2 = 0.09, P1 = 0.035, I0 = 0, I1 = 60, I2 = 200, I3 = 255.
3) The empirical curve still holds approximately for complex patterns.

Then, according to Fig. 2, we can obtain the quantitative relation between the gray-level variation ΔI and the perceptual variation ΔIp, measured in JNDs. We denote this relation by the perceptual ratio c(I) = ΔIp/ΔI. Thus c is a factor between the objective intensity variation (ΔI) and the perceptual one (ΔIp). Here, to avoid division by zero, I is modified to (I + 1) in the denominators. Thus

ΔIp/ΔI = c(I) ≈
  1/((0.575 − 0.009 I)(I + 1)),             0 ≤ I ≤ 60
  1/(0.035 (I + 1)),                        60 < I ≤ 200
  1/((0.035 + 0.001 (I − 200))(I + 1)),     200 < I ≤ 255     (1)
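Under one consistent reading of the parameters above (the Weber fraction falling linearly from P3 = 0.575 at I0 = 0 to P1 = 0.035 at I1 = 60, staying flat to I2 = 200, then rising to P2 = 0.09 at I3 = 255), the ratio (1) can be tabulated as follows; this is our sketch, not the authors' code.

```python
def perceptual_ratio(I):
    """c(I) = dIp/dI from (1): the reciprocal of the JND at gray level I,
    with the denominator regularized as (I + 1)."""
    if not 0 <= I <= 255:
        raise ValueError("gray level must lie in [0, 255]")
    if I <= 60:
        weber = 0.575 - 0.009 * I            # falls from 0.575 to 0.035
    elif I <= 200:
        weber = 0.035                        # flat mid-range
    else:
        weber = 0.035 + 0.001 * (I - 200)    # rises back to 0.09 at 255
    return 1.0 / (weber * (I + 1))
```

The branches join continuously at I = 60 and I = 200, and the same gray-level step counts for more JNDs in dark regions than in bright ones.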

For a single-band image s, its perceptual contrast is accordingly defined as

∇pC_s(x, y) = c(s(x, y)) ∇s(x, y)    (3)

3.2 Multiband image's perceptual contrast

Let s = (s1, …, sn): Ω ⊂ R² → Rⁿ be a multiband image, let p ∈ Ω, and let v be a direction at p. Consider a curve γ: (−ε1, ε2) → Ω, ε1 > 0, ε2 > 0, such that γ(0) = p and γ′(0) = v. The rate of variation of s at p in the direction of v is given by the magnitude of the vector ∇s(v) ≡ (d/dl)(s∘γ)(l)|_{l=0}, where the composition operator ∘ denotes (s∘γ)(l) ≡ s(γ(l)), l ∈ (−ε1, ε2). Thus we easily get

∇s(v) = J_p v    (4)

where J_p is the Jacobian matrix of s at the point p. Then the perceptual contrast at p in the direction v is given by the quantity

√((J_p v)ᵀ g_s(p) (J_p v)) = √(vᵀ (J_pᵀ g_s(p) J_p) v)    (5)

Here, since we want to evaluate the perceptual variations, the metric g should carry the perceptual meaning, and we choose it as

g_s(p) = diag[c²(s_k(p))], k = 1, 2, …, n    (6)

Let Λ² = (λ²_{ij}) = J_pᵀ g_s(p) J_p; then we have

λ²_{ij}(p) = Σ_{k=1}^{n} c²(s_k(p)) (∂s_k/∂x_i)(∂s_k/∂x_j),  0 ≤ i, j ≤ 1    (7)

Λ² is the image contrast form of the objective variations evaluated by the perceptual metric g_s(p).

From another viewpoint, one can regard Λ² as the contrast form of the perceptual variations under a Euclidean metric, since ∇pC_s(x, y) = c(s(x, y)) ∇s(x, y) holds, as defined in (3). The contrast form Λ² is the first fundamental form of the perceptual contrast of the multichannel source. Λ² is a nonnegative matrix with two nonnegative eigenvalues λ+, λ− (λ+ ≥ λ− ≥ 0), and the corresponding normalized eigenvectors are E+, E−. Then the perceptual contrast at point p is defined as the 2-dimensional vector Ṽ(p):

Ṽ(p) = √(λ+(p)) E+(p)    (8)

where √(λ+(p)) is the maximum perceptual variation amplitude over all 2-D directions, and the corresponding direction of variation is exactly E+(p). Because E+ and −E+ span the same eigenspace and neither has any priority, we must choose one of them, or there will be a direction ambiguity. Here we select the contrast of the average band as an auxiliary function:

C_aux(p) = ∇((1/n) Σ_{i=1}^{n} s_i)(p)    (9)

Because the multiband image's perceptual contrast must represent the general variations of each band, intuitively it should be close to C_aux in direction; then a further modification to determine the sign of the perceptual contrast can be made as

V(p) = Ṽ(p) sign(C_aux(p) · Ṽ(p))    (10)

where

sign(t) = 1 if t ≥ 0, −1 otherwise    (11)

Equation (10) is the final definition of the multiband image's perceptual contrast in this paper. For the special case n = 1, the single-band gray-level image, we can easily verify that (10) is exactly the same as (3), which shows the reasonableness of such a definition.

Here we give an example of the multiband image's perceptual contrast. The amplitude |V| for T1- and T2-weighted MRI is shown in Fig. 4. According to this illustration, the main contrasts in the sources, such as the contours, are well preserved in the perceptual contrast |V|.

Fig. 4 Amplitude of the perceptual contrast
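Equations (7)-(11) can be sketched per pixel as follows (our own code and naming; forward differences stand in for ∂/∂x_i):

```python
import numpy as np

def c_ratio(I):
    # perceptual ratio c(I) of (1), vectorized over an array of gray levels
    w = np.where(I <= 60, 0.575 - 0.009 * I,
                 np.where(I <= 200, 0.035, 0.035 + 0.001 * (I - 200)))
    return 1.0 / (w * (I + 1.0))

def fwd_diff(u):
    # forward differences; zero at the far border (symmetric extension)
    dx = np.zeros_like(u); dy = np.zeros_like(u)
    dx[:, :-1] = u[:, 1:] - u[:, :-1]
    dy[:-1, :] = u[1:, :] - u[:-1, :]
    return dx, dy

def perceptual_contrast(bands):
    """Perceptual contrast V of (10) for a list of 2-D bands s_k."""
    form = np.zeros(bands[0].shape + (2, 2))        # Lambda^2 of (7)
    for s in bands:
        dx, dy = fwd_diff(s)
        c2 = c_ratio(s) ** 2
        form[..., 0, 0] += c2 * dx * dx
        form[..., 0, 1] += c2 * dx * dy
        form[..., 1, 0] += c2 * dy * dx
        form[..., 1, 1] += c2 * dy * dy
    lam, vec = np.linalg.eigh(form)                 # eigenvalues, ascending
    V = np.sqrt(np.maximum(lam[..., 1], 0.0))[..., None] * vec[..., :, 1]  # (8)
    ax, ay = fwd_diff(sum(bands) / len(bands))      # auxiliary contrast, (9)
    dot = ax * V[..., 0] + ay * V[..., 1]
    return V * np.where(dot >= 0, 1.0, -1.0)[..., None]   # sign fix, (10)-(11)
```

For n = 1 this returns c(s)∇s away from the border, matching (3); with a constant c it would reduce to Socolinsky's objective contrast.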

3.3 Variational formula and gradient descent

In Section 3.2, the multiband image's perceptual contrast was constructed, and it agrees with the single-band one very well. It depicts the dominant perceptual variations of the sources. The next task is to visualize V(p). An intuitive way is to solve, for f, the equation

c(f(p)) ∇f(p) = V(p),  ∀p ∈ Ω    (12)

However, this equation generally has no solution. The substitute is to find a 2-D function f(p), 0 ≤ f(p) ≤ 255, p ∈ Ω, that minimizes the functional

Q(f) = ∫∫_Ω |c(f)∇f − V|² dx0 dx1    (13)

where the notation |·| denotes the length of a vector.

The first fundamental form is very sensitive to noise, and so is the target perceptual contrast V, since it considers only the local variation. To smooth such a noise-prone quantity, the total variation (TV) model [15] is employed; a very similar case has been used in image enhancement [16]. Adopting the TV energy, we minimize the following energy instead of (13):

Q(f) = α ∫∫_Ω |∇f| dx0 dx1 + β ∫∫_Ω |c(f)∇f − V|² dx0 dx1    (14)

where α and β are positive parameters weighting the smoothness and the contrast fidelity.

Fig. 5 Image fusion experiment on brain MRI images: (a) MRI-T1; (b) MRI-T2; (c) DWT fusion [2]; (d) perceptual fusion

Fig. 6 Image fusion experiment on the "multifocused" baboon: (a) left-blurred baboon; (b) right-blurred baboon; (c) contrast fusion [5]; (d) perceptual fusion

Using the variational approach [17], we can derive the corresponding partial differential equation (PDE) with a Neumann boundary condition, as in (15):

2β(c′(f)|∇f|² + c(f)∇²f − div V) + α div(∇f/|∇f|) = 0  on Ω
∂f/∂n = 0  on ∂Ω    (15)

Here, the Neumann condition is natural because in image processing the boundaries are usually extended symmetrically. The minimizer of (14) can then be found by a gradient descent procedure with iterations like (16), where the step size k is positive; to ensure convergence, k generally takes a small value.

f^{t+1} = f^t + k (2β(c′(f^t)|∇f^t|² + c(f^t)∇²f^t − div V) + α div(∇f^t/|∇f^t|))    (16)
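A single descent step of (16), with the gray-range re-constraint applied after each update, can be sketched as below (our code; the ε guarding |∇f|, the numerical c′, and the values of α, β, and k are illustrative assumptions):

```python
import numpy as np

EPS = 1e-6  # guards |grad f| in the TV term

def c_ratio(I):
    # perceptual ratio c(I) of (1), vectorized
    w = np.where(I <= 60, 0.575 - 0.009 * I,
                 np.where(I <= 200, 0.035, 0.035 + 0.001 * (I - 200)))
    return 1.0 / (w * (I + 1.0))

def dc_ratio(I, h=0.5):
    # numerical derivative c'(I); the closed form could be used instead
    return (c_ratio(I + h) - c_ratio(I - h)) / (2.0 * h)

def fwd_diff(u):
    dx = np.zeros_like(u); dy = np.zeros_like(u)
    dx[:, :-1] = u[:, 1:] - u[:, :-1]
    dy[:-1, :] = u[1:, :] - u[:-1, :]
    return dx, dy

def bwd_div(px, py):
    # backward-difference divergence, pairing with the forward gradient
    d = np.zeros_like(px)
    d[:, 1:] += px[:, 1:] - px[:, :-1]; d[:, 0] += px[:, 0]
    d[1:, :] += py[1:, :] - py[:-1, :]; d[0, :] += py[0, :]
    return d

def laplace5(u):
    # 5-point Laplacian with symmetric extension
    p = np.pad(u, 1, mode="symmetric")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u

def fusion_step(f, Vx, Vy, k=0.1, alpha=0.001, beta=1.0):
    dx, dy = fwd_diff(f)
    mag = np.sqrt(dx * dx + dy * dy) + EPS
    tv = bwd_div(dx / mag, dy / mag)                 # div(grad f / |grad f|)
    fidelity = dc_ratio(f) * (dx * dx + dy * dy) \
             + c_ratio(f) * laplace5(f) - bwd_div(Vx, Vy)
    f_half = f + k * (2.0 * beta * fidelity + alpha * tv)
    return np.clip(f_half, 0.0, 255.0)               # re-constrain to [0, 255]
```

A stationary point of the clipped iteration is exactly a point where the bracketed update of (16) vanishes inside the gray range.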

Considering that the curve c in (1) has finite support, (16) must be re-constrained after each iteration, as in (17):

f^{t+1/2} = f^t + k (2β(c′(f^t)|∇f^t|² + c(f^t)∇²f^t − div V) + α div(∇f^t/|∇f^t|))
f^{t+1} = max(0, min(f^{t+1/2}, 255))    (17)

3.4 Discretization scheme

In this section, we discuss the discretization of iteration (17). Because the proposed method works at the pixel level, we would preferably use central differences to realize the derivative operators, ensuring their symmetric spatial support so that a visible shift is avoided. However, we do not do so. Instead, we use the forward difference for all first-order derivative operators on f and s_k, including ∂/∂x_i in generating V and ∇ in the iterations; where V is concerned, "div" is realized by a backward difference. The Laplace operator ∇² is simply realized by the 5-point discretization scheme. With these operators, we avoid the computational load that central differences would incur at half-pixel positions and, at the same time, ensure the absence of a visible pixel shift. As for the boundary region, analogously to many other tasks in image processing, we extend the original images (f and s_k) symmetrically. Meanwhile, because the original image is extended symmetrically, when the "div" operator is applied, V should be extended by zero-padding. The other parameters in (17) are chosen as k = 0.1 and α = 0.001, and the iteration stops when t reaches 600.
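The two boundary conventions above can be made concrete as follows (our illustration, x-direction only): derivatives of f and s_k use symmetric extension, so the forward difference vanishes at the replicated border, while "div" acting on V uses zero-padding.

```python
import numpy as np

def fwd_x(u):
    # forward x-difference of f or s_k: symmetric extension duplicates the
    # last column, so the difference at the right border is exactly zero
    p = np.pad(u, ((0, 0), (0, 1)), mode="symmetric")
    return p[:, 1:] - p[:, :-1]

def bwd_div_x(vx):
    # backward x-difference of the x-component of V, zero-padded on the left
    p = np.pad(vx, ((0, 0), (1, 0)), mode="constant")
    return p[:, 1:] - p[:, :-1]
```

The y-direction is handled the same way.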

In theory, f0 can be any randomly chosen initial value. However, the model (13) is not convex, so we cannot guarantee that an iterative gradient descent procedure such as (17) finds the global minimum; it may instead settle in a "good" local minimum of (14). There is reason to believe that such a local extremum is even more relevant to the image fusion task if we choose the initial value to be the average of the input bands, since the fusion result should represent the input bands' information.

Fig. 7 Image fusion experiment on CT and MR images: (a) CT; (b) MR; (c) DWT fusion [2]; (d) contrast fusion [5]; (e) perceptual fusion; (f) LPT fusion [20]
