[0001]
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an image processing apparatus, an image processing method, and a recording medium. More particularly, it relates to an image processing apparatus that performs predetermined image processing using visible image information, obtained by irradiating an image recording material with visible light, and invisible image information, obtained by irradiating the material with invisible light; to an image processing method applicable to the apparatus; and to a recording medium storing a program that causes a computer to function as the apparatus.
[0002]
2. Description of the Related Art
A photographic film may be scratched on its emulsion surface or on its back surface (the reverse of the emulsion surface), depending on how it is handled. If a scratch lies in a portion corresponding to the image recording area of the film, then when the image recorded on the film is output (recorded on an image recording material such as photographic paper, or displayed on display means such as a display), the scratch is often clearly visible in the output image as a defective portion such as a low-density streak or a white streak, depending on its severity. Likewise, when foreign matter such as dust adheres to the surface of the film, the foreign matter is clearly visible as a defective portion.
[0003] In a surface-exposure type photographic printer, which irradiates a photographic film with light and directs the light transmitted through the film onto photographic paper to expose and record an image, a known countermeasure against film scratches is to place a diffusion plate between the light source and the film so that the film is illuminated with light scattered by the plate. With this technique, however, it is difficult to erase a defective portion from the output image (the image exposed and recorded on the photographic paper); the defect is at best slightly reduced (made less noticeable).
[0004] As a technique applicable to an image reading apparatus that reads the image recorded on a photographic film with a reading sensor such as a CCD, Japanese Patent Application Laid-Open No. 11-75039 discloses reading the film in at least four wavelength ranges, comprising three wavelengths in the visible range and at least one wavelength in the invisible range (for example, the infrared or ultraviolet range), and correcting the image information obtained by reading in the visible range on the basis of the information obtained by reading in the invisible range.
[0005] The amount of light in the visible range (hereinafter, visible light) transmitted through a photographic film varies with the density of the image recorded on the film; in addition, at locations where the film bears a scratch or foreign matter, the transmitted amount also varies because the scratch or foreign matter partially refracts or reflects the light. Light in the invisible range (hereinafter, invisible light), on the other hand, varies in transmitted amount where the film bears a scratch or foreign matter, but is unaffected by the density of the recorded image.
[0006] According to the technique described in the above publication, therefore, scratches and foreign matter on the photographic film can be detected from variations in the amount of transmitted invisible light, and the variations in the amount of transmitted visible light caused by them can be compensated. In other words, it becomes possible to correct defective portions, caused by scratches or foreign matter on the film, in the image represented by the image information obtained by detecting the amount of transmitted visible light.
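The scheme described above — flagging pixels where the invisible (infrared) transmittance drops, then repairing the corresponding visible pixels — can be sketched roughly as follows. This is a minimal illustration, not the publication's method: the fixed threshold, the median used as the intact-film baseline, and the neighbour-mean repair are all assumptions made for brevity.

```python
import numpy as np

def detect_defects(ir, threshold=0.1):
    """Flag pixels whose IR transmittance drops below the film-base level.

    `ir` is a 2-D array of IR transmittance in [0, 1].  Scratches and dust
    attenuate IR regardless of image density, so a drop below the nominal
    base transmittance marks a defect.  `threshold` is an assumed margin.
    """
    base = np.median(ir)  # crude estimate of the intact film's transmittance
    return ir < (base - threshold)

def repair(visible, mask):
    """Replace each defective pixel with the mean of its intact neighbours."""
    out = visible.astype(float).copy()
    h, w = out.shape
    for y, x in zip(*np.nonzero(mask)):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        patch = out[y0:y1, x0:x1]
        good = ~mask[y0:y1, x0:x1]
        if good.any():
            out[y, x] = patch[good].mean()
    return out
```

A real implementation would use a more robust baseline and an inpainting method that preserves image structure, but the division of labour — detect in the invisible channel, repair in the visible one — is the same.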
[0007] However, when visible image information is acquired by detecting the amount of transmitted visible light, invisible image information is acquired by detecting the amount of transmitted invisible light, and both are used to detect and correct defective portions, a problem arises in that discrepancies occur between the visible image information and the invisible image information owing to various optical characteristics.
[0008] Specifically, an existing film scanner that reads the image recorded on a photographic film with a reading sensor has an optical system originally designed on the assumption that it handles only light of visible wavelengths. In particular, the imaging lens of such an optical system, which forms an image from the light transmitted through the film, has conventionally been designed with the goal of uniform optical characteristics for three visible wavelengths (for example, R, G, and B); its optical characteristics in the invisible range have received little consideration.
[0009] As a result, the distortion and lateral chromatic aberration of the imaging lens (both optical characteristics) are often pronounced in the invisible range, so that the visible image information, obtained by detecting the amount of transmitted visible light, and the invisible image information, obtained by detecting the amount of transmitted invisible light, frequently exhibit a so-called pixel shift, in which the same pixel corresponds to different positions on the actual image. Furthermore, when a sensor for detecting invisible light is provided separately from the sensor for detecting visible light, a pixel shift may also arise from the difference in the mounting positions of the two sensors (likewise an optical characteristic).
[0010] Moreover, because the optical characteristics of the imaging lens in the invisible range are hardly considered, as noted above, the focal length of the lens (another optical characteristic) often differs considerably between the visible and invisible ranges. Consequently, when the amounts of transmitted visible light and transmitted invisible light are detected simultaneously, the sharpness of the image represented by the visible image information often differs from that of the image represented by the invisible image information.
[0011] To resolve these problems, one might consider using an imaging lens designed to have uniform characteristics in the invisible range as well as the visible range. It is difficult, however, to design a lens with uniform characteristics extending to longer wavelengths (the infrared side), and such a lens would be prohibitively expensive and thus impractical. Even if such a lens were used, the film base of a photographic film itself attenuates transmitted light to a degree (an optical characteristic) that depends on the wavelength of the light, so it would still be difficult to determine accurately, from the detected amount of transmitted invisible light, the variation in transmitted visible light caused by scratches or foreign matter on the film.
[0012] Furthermore, the refractive index of the film base of a photographic film generally decreases as the wavelength of light increases. For this reason, when light in the infrared wavelength range is used as the invisible light, one means of improving the detection accuracy for scratches on the film is to shift the invisible wavelength range (also an optical characteristic) slightly toward shorter wavelengths. Part of the invisible wavelength range then enters the visible range, however, so that absorption by the cyan dye component of the film's emulsion layer occurs, and the transmitted amount of the light irradiated as invisible light varies slightly with the R density of the image recorded on the film.
[0013] The pixel shift, the difference in sharpness, the difference in attenuation, and the variation in transmitted light amount caused by shifting the invisible wavelength range, described above, all lead to a problem: when defective portions are detected and corrected using the visible image information and the invisible image information, the accuracy of the correction is degraded.
[0014]
SUMMARY OF THE INVENTION
The present invention has been made in view of the above circumstances, and its object is to provide an image processing apparatus, an image processing method, and a recording medium capable of improving the accuracy of correcting defective portions with a low-cost configuration.
[0015] To achieve the above object, an image processing apparatus according to a first aspect of the invention comprises: acquisition means for acquiring visible image information, obtained by irradiating an image recording area of an image recording material with visible light and detecting, with first detection means, the visible light transmitted through or reflected by the image recording area, and invisible image information, obtained by irradiating the image recording area with invisible light and detecting, with second detection means, the invisible light transmitted through or reflected by the image recording area; and correction means for correcting, in at least one of the visible image information and the invisible image information, the discrepancy between the two pieces of information caused by optical characteristics.
[0016] In the first aspect of the invention, the acquisition means acquires visible image information, obtained by irradiating the image recording area of the image recording material with visible light and detecting with the first detection means the visible light transmitted through or reflected by that area, and invisible image information, obtained by irradiating the image recording area with invisible light and detecting with the second detection means the invisible light transmitted through or reflected by that area. The first and second detection means may be included in the image processing apparatus according to the invention, or in a separate device (such as an image reading device). In the latter case, the acquisition means can acquire the visible and invisible image information by receiving it from that separate device.
[0017] The correction means according to the first aspect corrects, in at least one of the visible image information and the invisible image information, the discrepancy between the two pieces of information (the visible image information and the invisible image information) caused by optical characteristics. For an optical characteristic related to the optical system (such as a lens or sensor) or to the wavelength range of the invisible light, this correction can be performed by measuring in advance either the characteristic itself or the discrepancy it causes between the two pieces of information, storing the measurement, and reading out and using the stored result. For an optical characteristic related to the image recording material, the correction can likewise be performed by measuring the characteristic itself, or the discrepancy it causes, in advance for each type of image recording material, storing the results by type, and reading out and using the measurement corresponding to the type of material being processed.
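The per-type storage and read-out just described amounts to a lookup table keyed by material type. A minimal sketch follows; the type names and correction values are hypothetical placeholders, not real measurements.

```python
# Hypothetical pre-measured corrections, stored per film type.  The fields
# (an IR gain and a sub-pixel shift) are placeholders for whatever
# characteristics were actually measured in advance.
CORRECTIONS = {
    "negative_type_A": {"ir_gain": 1.04, "shift_px": (0.6, -0.3)},
    "reversal_type_B": {"ir_gain": 1.10, "shift_px": (0.2, 0.1)},
}

def correction_for(film_type, default_type="negative_type_A"):
    """Read out the stored measurement for the material being processed,
    falling back to an assumed default when the type is unknown."""
    return CORRECTIONS.get(film_type, CORRECTIONS[default_type])
```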
[0018] The above correction yields visible image information and invisible image information in which the discrepancy caused by optical characteristics has been corrected, without using expensive optical components such as an imaging lens whose optical characteristics are uniform up to the invisible range. Accordingly, by using the visible and invisible image information corrected by the correction means when detecting defective portions in the image represented by the visible image information and when correcting the detected defective portions, as described in an eighth aspect, the accuracy of correcting defective portions can be improved with a low-cost configuration.
[0019] When a color image is recorded on the image recording material, the visible light transmitted through or reflected by the image recording area is generally detected separately in each of a plurality of wavelength ranges within the visible range (preferably three or more mutually different ranges), yielding, as the visible image information, information representing the transmitted or reflected light amount (or transmitted or reflected density) in each of those ranges. Such visible image information can be obtained, for example, by irradiating the image recording area sequentially with light of each wavelength range and having the first detection means detect, in turn, the light transmitted through or reflected by the area, or by irradiating the area with visible light containing all of the wavelength ranges and having the first detection means separate the transmitted or reflected visible light into the individual wavelength ranges for detection.
[0020] However, the pieces of visible image information obtained for the respective wavelength ranges in this way may also differ from one another owing to optical characteristics of the optical system (lens, sensor, and so on) and of the image recording material. Therefore, when the acquisition means acquires, as the visible image information, a piece of visible image information for each of a plurality of wavelength ranges, obtained by having the first detection means detect the transmitted or reflected visible light separately in each range, it is preferable, as described in a second aspect, that the correction means correct the mutual discrepancies, caused by optical characteristics, among the pieces of visible image information for the respective wavelength ranges and the invisible image information. This makes it possible to improve the accuracy of correcting defective portions still further.
[0021] When the first and second detection means detect visible or invisible light that has been transmitted through or reflected by the image recording area of the image recording material and then focused by an imaging lens, it is preferable, as described in a third aspect, that the correction means correct, as the discrepancy among the pieces of image information (that is, between the visible image information and the invisible image information, or among the pieces of visible image information for the respective wavelength ranges and the invisible image information; the same applies hereinafter), the pixel position shift caused by the lateral chromatic aberration or distortion of the imaging lens.
[0022] This correction can be realized, for example, by measuring in advance the lateral chromatic aberration or distortion, which are optical characteristics of the imaging lens, or the pixel position shift between the visible and invisible image information that they cause, and using the measurement to correct the pixel positions. This eliminates the pixel position shift caused by the lateral chromatic aberration or distortion of the imaging lens without requiring an expensive imaging lens in which these aberrations are suppressed up to the invisible range.
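Applying such a pre-measured displacement to one channel reduces to resampling it at shifted coordinates. The sketch below uses bilinear interpolation with a single scalar shift standing in, for brevity, for the per-pixel aberration map that a real measurement would provide.

```python
import numpy as np

def shift_channel(img, dy, dx):
    """Resample `img` at (y + dy, x + dx) with bilinear interpolation.

    `dy` and `dx` are pre-measured displacements.  Scalars are used here
    for simplicity; a per-pixel displacement field measured from the
    lens's aberration would be applied the same way.
    """
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    ys = np.clip(yy + dy, 0, h - 1)
    xs = np.clip(xx + dx, 0, w - 1)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    fy, fx = ys - y0, xs - x0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```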
[0023] According to the third aspect, therefore, the pixel position shift among the pieces of image information caused by the lateral chromatic aberration or distortion of the imaging lens (that is, the displacement, on the actual image, of the positions of corresponding pixels) can be prevented from causing the extent of a defective portion to be misdetected during defect detection, or a region offset from the actual defective portion to be erroneously corrected during defect correction.
[0024] When the first detection means detects the visible light with a first photoelectric conversion element and the second detection means detects the invisible light with a second photoelectric conversion element different from the first, that is, when the first and second detection means detect the visible and invisible light with different photoelectric conversion elements, it is preferable, as described in a fourth aspect, that the correction means correct, as the discrepancy among the pieces of image information, the pixel position shift caused by the difference in the mounting positions of the first and second photoelectric conversion elements.
[0025] This correction can be realized, for example, by measuring in advance the difference in the mounting positions of the first and second photoelectric conversion elements, which is an optical characteristic of the sensors, or the pixel position shift among the pieces of image information that it causes, and using the measurement to correct the pixel positions. This eliminates the pixel position shift caused by the difference in the mounting positions of the two photoelectric conversion elements.
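When the two elements are line sensors placed apart along the film transport direction, the simplest form of this correction is an integer row offset between the two scans; a sketch under that assumption follows (a sub-pixel component of the offset would additionally require interpolation).

```python
import numpy as np

def align_rows(visible, invisible, offset):
    """Crop two scans to their overlap, given a pre-measured sensor offset.

    Assumed convention: visible row i images the same film line as
    invisible row i + offset, with offset >= 0.  The offset comes from
    measuring the mounting positions of the two elements in advance.
    """
    n = min(visible.shape[0], invisible.shape[0] - offset)
    return visible[:n], invisible[offset:offset + n]
```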
[0026] According to the fourth aspect, therefore, the pixel position shift caused by the difference in the mounting positions of the first and second photoelectric conversion elements can be prevented from causing the extent of a defective portion to be misdetected during defect detection, or a region offset from the actual defective portion to be erroneously corrected during defect correction. In the fourth aspect, the first and second photoelectric conversion elements may be separate bodies or may be integrated.
[0027] When the first and second detection means detect visible or invisible light that has been transmitted through or reflected by the image recording area of the image recording material and then focused by an imaging lens, it is also preferable, as described in a fifth aspect, that the correction means correct, as the discrepancy among the pieces of image information, the difference in sharpness among the images they represent caused by the wavelength dependence of the focal length of the imaging lens.
[0028] This correction can be realized, for example, by measuring in advance the focal length at each wavelength, which is an optical characteristic of the imaging lens, or the difference in sharpness, among the images represented by the pieces of image information, that the wavelength dependence of the focal length causes, and using the measurement to correct the sharpness. This eliminates the difference in sharpness caused by the wavelength dependence of the focal length of the imaging lens. According to the fifth aspect, therefore, that difference in sharpness can be prevented from causing the extent of a defective portion to be misdetected during defect detection, or the correction strength to be set improperly during defect correction, which would make the correction itself improper, for example by producing unnecessary edges in the image.
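One way to equalize the two channels is to soften the sharper one toward the measured sharpness of the other, for instance with a Gaussian blur whose width is derived from the pre-measured focal-length gap. The sketch below is a minimal separable blur; deriving sigma from the measurement, and blurring rather than sharpening, are illustrative choices.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur.  `sigma` would be chosen from the
    pre-measured sharpness difference between the visible and invisible
    channels (an assumption of this sketch)."""
    r = max(int(3 * sigma), 1)
    x = np.arange(-r, r + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    padded = np.pad(img, r, mode="edge")
    # Convolve along rows, then along columns.
    rows = np.apply_along_axis(
        lambda m: np.convolve(m, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda m: np.convolve(m, kernel, mode="valid"), 0, rows)
```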
[0029] When the second detection means irradiates the image recording area, as the invisible light, with light of a wavelength range that partly includes the visible range, it is preferable, as described in a sixth aspect, that the correction means correct, as the discrepancy among the pieces of image information, the variation in the amount of invisible light detected by the second detection means caused by light whose wavelength range partly includes the visible range being irradiated as invisible light.
[0030] This correction can be realized, for example, by measuring in advance the spectral light-quantity distribution of the invisible light (or the proportion of visible-wavelength light contained in the invisible light), which is an optical characteristic related to the wavelength range of the invisible light, or the change in the amount of invisible light detected by the second detecting means caused by its wavelength range partially including the visible range, and using the measurement to correct the detected amount of invisible light. This makes it possible to prevent the change in the detected amount of invisible light, caused by the wavelength range of the invisible light partially including the visible range, from leading to erroneous detection of the range of a defective portion when detecting defects in the image, or to an improperly set correction strength when correcting them, which would result in improper correction of the defective portion.
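As a rough illustration of this correction, suppose a pre-measured fraction of the light detected through the IR filter is actually visible light; that contribution (estimated here from the R channel, the visible band closest to IR) can then be removed from the detected IR amount. The function name and the leak value are assumptions for illustration only.

```python
# Hypothetical sketch of the correction described above: a pre-measured
# fraction VISIBLE_LEAK of the light detected through the IR filter is
# visible light, so its contribution is removed before the IR data is
# used for defect detection. All names and values are illustrative.

VISIBLE_LEAK = 0.10  # assumed pre-measured visible share of the "IR" band

def correct_ir_amount(ir_detected, visible_estimate, leak=VISIBLE_LEAK):
    """Remove the visible-light contribution from the detected IR amount."""
    return (ir_detected - leak * visible_estimate) / (1.0 - leak)

# A pixel over a dense image area: the raw IR reading dips slightly
# because of the visible leak, which could be mistaken for a defect.
raw_ir = 0.95
r_channel = 0.50
print(correct_ir_amount(raw_ir, r_channel))
```

After the correction, the IR reading at this pixel is restored to the full-transmittance level, so the density of the recorded image no longer masquerades as a defect.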
[0031] Further, as described in claim 7, it is preferable that the correcting means correct, as the difference between the pieces of image information, the difference in the amounts of light detected by the respective detecting means that is caused by the wavelength dependence of the attenuation, by the image recording material, of the light transmitted through or reflected by it. This correction can be realized by measuring in advance, as an optical characteristic related to the image recording material, the wavelength dependence of the attenuation of the transmitted or reflected light by the image recording material, or the difference in the detected light amounts caused by that wavelength dependence, and using the measurement to correct the difference in the detected light amounts.
[0032] This makes it possible to prevent the difference in detected light amounts, caused by the wavelength dependence of the attenuation of the transmitted or reflected light by the image recording material, from leading to erroneous detection of the range of a defective portion when detecting defects in the image, or to an improperly set correction strength when correcting them, which would result in improper correction of the defective portion.
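The attenuation correction can be sketched with per-channel gains measured in advance, for example from a bare (unexposed) piece of film base, so that the base's wavelength-dependent attenuation is normalized out of every band. The transmittance values below are illustrative placeholders, not measured data.

```python
# Hypothetical sketch: gains measured in advance from film base normalize
# out the wavelength dependence of the base's attenuation, so remaining
# inter-channel differences reflect only the image and any defects.
# The transmittance values are illustrative assumptions.

BASE_TRANSMITTANCE = {"R": 0.80, "G": 0.75, "B": 0.70, "IR": 0.90}

def attenuation_gains(base=BASE_TRANSMITTANCE):
    """Per-channel gains that equalize the base attenuation across bands."""
    return {band: 1.0 / t for band, t in base.items()}

def correct_channels(detected, gains):
    return {band: detected[band] * gains[band] for band in detected}

gains = attenuation_gains()
# Light that passed through bare film base only: after correction all
# channels agree, so no spurious inter-channel difference remains.
print(correct_channels(dict(BASE_TRANSMITTANCE), gains))
```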
[0033] In the image processing method according to claim 9, visible image information is acquired by irradiating the image recording area of an image recording material with visible light and detecting, with first detecting means, the visible light transmitted through or reflected by the image recording area, and invisible image information is acquired by irradiating the image recording area with invisible light and detecting, with second detecting means, the invisible light transmitted through or reflected by the image recording area; a difference between the two pieces of information caused by optical characteristics is then corrected for at least one of the visible image information and the invisible image information. As in the invention of claim 1, this makes it possible to improve the accuracy of correcting defective portions without incurring a high apparatus cost.
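The two steps of the method can be sketched end to end: acquire the visible (R, G, B) and invisible (IR) information for one image, then correct the optically caused difference before the IR data is used. The read function, the leak constant, and all values are illustrative assumptions.

```python
# Illustrative end-to-end sketch of the claimed two steps. The correction
# shown removes an assumed visible-light contribution from the IR reading;
# in the invention the corrected difference depends on which optical
# characteristic is at issue. All names and numbers are assumptions.

def acquire(read_band):
    """Step 1: read visible and invisible information for one image."""
    return {band: read_band(band) for band in ("R", "G", "B", "IR")}

def correct(info, visible_leak=0.1):
    """Step 2: correct the optically caused difference (here, IR leak)."""
    corrected = dict(info)
    corrected["IR"] = (info["IR"] - visible_leak * info["R"]) / (1 - visible_leak)
    return corrected

# A fake per-band reader standing in for the scanner.
fake_read = {"R": 0.5, "G": 0.6, "B": 0.7, "IR": 0.95}.get
info = correct(acquire(fake_read))
print(round(info["IR"], 6))
```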
[0034] The recording medium according to claim 10 stores a program for causing a computer to execute processing that includes: a first step of acquiring visible image information, obtained by irradiating the image recording area of an image recording material with visible light and detecting, with first detecting means, the visible light transmitted through or reflected by the image recording area, and invisible image information, obtained by irradiating the image recording area with invisible light and detecting, with second detecting means, the invisible light transmitted through or reflected by the image recording area; and a second step of correcting, for at least one of the visible image information and the invisible image information, a difference between the two pieces of information caused by optical characteristics.
[0035] Since the recording medium according to claim 10 stores a program for causing a computer to execute the processing including the first and second steps described above, that is, to function as the image processing apparatus according to claim 1, the accuracy of correcting defective portions can be improved with a low-cost configuration, as in the invention of claim 1, by having the computer read and execute the program recorded on the recording medium.
[0036] DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
An example of an embodiment of the present invention will now be described in detail with reference to the drawings. In the following, as an example, a case is described in which defective portions caused by scratches on, or foreign matter adhering to, a photographic film are corrected.
[0037] FIG. 1 shows an image processing system 10 according to the present embodiment. The image processing system 10 comprises a film scanner 12, an image processing device 14, and a printer 16 connected in series. The film scanner 12 and the image processing device 14 correspond to the image processing apparatus according to the present invention.
[0038] The film scanner 12 reads an image recorded on a photographic film such as a negative film or a reversal film (a negative or positive image made visible by photographing a subject and then developing the film; hereinafter the photographic photosensitive material is referred to simply as a photographic film) and outputs the image data obtained by the reading. As shown in FIG. 2, the scanner includes a light source 20, consisting of a halogen lamp or the like, that irradiates the photographic film 26 with light. The light emitted from the light source includes light of visible wavelengths and light of infrared wavelengths.
[0039] On the light-exit side of the light source 20, a stop 21 for adjusting the quantity of light irradiating the photographic film 26, a filter unit 23, and a light diffusion box 22 for diffusing the light irradiating the photographic film 26 are arranged in this order. The filter unit 23 consists of four filters fitted into a turret 23A that is rotatable in the direction of arrow A in FIG. 2: a filter 23C that transmits only light in the wavelength range corresponding to R (R light) of the incident light, a filter 23M that transmits only light in the wavelength range corresponding to G (G light), a filter 23Y that transmits only light in the wavelength range corresponding to B (B light), and a filter 23IR that transmits infrared light (IR light).
[0040] On the side of the photographic film 26 opposite the light source 20, an imaging lens 28 that focuses the light transmitted through the photographic film 26 and an area CCD 30 are arranged in this order along the optical axis L. The area CCD 30 is a monochrome CCD in which a large number of CCD cells, each sensitive to both the visible and infrared ranges, are arranged in a matrix; its light-receiving surface is positioned to approximately coincide with the focal point of the imaging lens 28. A shutter (not shown) is provided between the area CCD 30 and the imaging lens 28.
[0041] The area CCD 30 is connected to a scanner control unit 33 via a CCD driver 31. The scanner control unit 33 comprises a CPU, a ROM (for example, a ROM whose stored contents can be rewritten), a RAM, and input/output ports, interconnected via a bus or the like, and controls the operation of each part of the film scanner 12. The CCD driver 31 generates drive signals for driving the area CCD 30 and controls its driving.
[0042] The photographic film 26 is conveyed by a film carrier 24 (see FIG. 1; not shown in FIG. 2) and positioned at a position (the reading position) where the center of an image frame coincides with the optical axis L. With an image positioned at the reading position, the scanner control unit 33 rotates the turret 23A of the filter unit 23 so that all the filters 23, including the filter 23IR, are positioned on the optical axis L in sequence, sets in the CCD driver 31 the charge accumulation time of the area CCD 30 corresponding to predetermined reading conditions, and moves the stop 21 to a position corresponding to those reading conditions.
[0043] As a result, the image recording area on the photographic film 26 is irradiated in sequence with light in the wavelength range corresponding to each filter 23 (R, G, B, or IR). The light transmitted through the image recording area on the photographic film 26 is detected (more precisely, photoelectrically converted) by the area CCD 30 and output from the area CCD 30 as a signal representing the amount of transmitted light. The signal output from the area CCD 30 is converted by an A/D converter 32 into digital data representing the amount of transmitted light and input to the image processing device 14. The film scanner 12 thus serves as both the first detecting means and the second detecting means according to the present invention.
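The read sequence above can be sketched in outline: the turret places each filter (R, G, B, IR) on the optical axis in turn, one exposure is taken per filter, and the digitized frames are collected by band. The hardware calls here are stand-ins for the scanner's actual interfaces, which the specification does not define in code.

```python
# Illustrative sketch of the per-filter read sequence. `expose` stands in
# for the turret positioning, CCD exposure, and A/D conversion; it is an
# assumption for illustration, not the scanner's real interface.

FILTERS = ("R", "G", "B", "IR")

def read_frame(filter_name, expose):
    """Position one filter on the optical axis, expose, return the frame."""
    return expose(filter_name)

def scan_image(expose):
    """Read the image once per filter and return data keyed by band."""
    return {f: read_frame(f, expose) for f in FILTERS}

# A fake exposure function for demonstration: returns a tiny 2x2 "frame".
fake_expose = lambda band: [[len(band)] * 2] * 2
frames = scan_image(fake_expose)
print(sorted(frames))
```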
[0044] The amount of transmitted light in each of the R, G, and B wavelength ranges varies with the R, G, and B densities of the image recorded in the image recording area (and, if scratches or foreign matter are present on the photographic film 26, with these as well), whereas the amount of transmitted IR light is unaffected by image density and varies only with scratches, foreign matter, and the like. Photoelectrically converting the transmitted light in each of the R, G, and B wavelength ranges therefore corresponds to reading the image. In the following, of the R, G, B, and IR data input to the image processing device 14, the R, G, and B data excluding IR are referred to as image data. The R, G, and B image data correspond to the visible image information according to the present invention (more specifically, the visible image information for each of a plurality of wavelength ranges recited in claim 2), and the IR data corresponds to the invisible image information according to the present invention.
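The principle stated above admits a compact sketch: because IR transmittance is insensitive to image density, any pixel where the IR reading drops well below the film-base level can be flagged as a defect (scratch or foreign matter), independently of the R, G, B content. The threshold value here is an illustrative assumption.

```python
# Hypothetical sketch of IR-based defect detection: flag pixels whose IR
# transmittance falls clearly below the film-base level. The base level
# and threshold are illustrative assumptions.

def defect_mask(ir_frame, base_level=1.0, threshold=0.85):
    """Flag pixels whose IR transmittance is below threshold * base."""
    limit = threshold * base_level
    return [[pixel < limit for pixel in row] for row in ir_frame]

ir = [[1.00, 0.98, 0.40],   # 0.40: a scratch blocking the IR light
      [0.99, 0.97, 0.96]]
print(defect_mask(ir))
```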
[0045] In general, the film base of a photographic film has the characteristic that its refractive index decreases as the wavelength of the transmitted light increases, so the accuracy of detecting defective portions caused by scratches or foreign matter on the photographic film decreases as the wavelength of the IR light shifts toward the longer-wavelength side. In this embodiment, to ensure the accuracy of defect detection by IR light, the filter 23IR is a filter whose spectral characteristic, as shown by way of example in FIG. 3, shifts the wavelength range of the IR light transmitted through the filter 23IR and irradiated onto the photographic film toward the visible side (shorter wavelengths); consequently, the light irradiated onto the photographic film as IR light includes light of visible wavelengths.
[0046] Meanwhile, the scanner correction unit 36 of the image processing device 14 sequentially applies various correction processes, such as dark correction, density conversion, and shading correction, to the input image data (and IR data). The output terminal of the scanner correction unit 36 is connected to an input terminal of an I/O controller 38, and the image data processed by the scanner correction unit 36 is input to the I/O controller 38. An input terminal of the I/O controller 38 is also connected to the data output terminal of an image processor 40 and receives image data that has undergone image processing (described in detail later) from the image processor 40.
[0047] An input terminal of the I/O controller 38 is also connected to a control unit 42. The control unit 42 has an expansion slot (not shown), to which are connected drivers (not shown) for reading and writing data (or programs) to and from information storage media such as PC cards and IC cards loadable into digital still cameras (hereinafter collectively referred to as digital camera cards), CD-ROMs, MOs, and CD-Rs, as well as a communication control device for communicating with other information processing equipment. Image data input from outside via the expansion slot is input to the I/O controller 38.
[0048] The output terminals of the I/O controller 38 are connected to the data input terminal of the image processor 40 and to the control unit 42, and further to the printer 16 via an I/F circuit 54. The I/O controller 38 selectively outputs the input image data to each of the devices connected to its output terminals.
[0049] In this embodiment, each image recorded on the photographic film 26 is read twice by the film scanner 12 at different resolutions. In an embodiment that, like this one, uses an area sensor (the area CCD 30) as the reading sensor, switching the reading resolution (obtaining image data of different resolutions in each reading) can be realized, for example, by reading during prescan at the same high resolution as during fine scan and applying post-processing such as pixel thinning or pixel integration to the resulting image data, or by reading multiple times with the area sensor during fine scan while moving the area sensor between readings, by an actuator such as a piezo element, through a distance equal to an integer fraction of the pixel pitch.
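The pixel-integration option mentioned above can be sketched simply: a high-resolution frame read by the area sensor is reduced to prescan resolution by averaging non-overlapping 2x2 blocks. The function and data are illustrative only.

```python
# Illustrative sketch of "pixel integration": downsample a frame with
# even dimensions by averaging non-overlapping 2x2 blocks. This is one
# assumed realization of the post-processing described in the text.

def integrate_2x2(frame):
    """Downsample a frame (even height and width) by 2x2 block averaging."""
    h, w = len(frame), len(frame[0])
    return [[(frame[y][x] + frame[y][x + 1] +
              frame[y + 1][x] + frame[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

fine = [[0, 2, 4, 6],
        [2, 0, 6, 4],
        [8, 8, 1, 1],
        [8, 8, 1, 1]]
print(integrate_2x2(fine))   # [[1.0, 5.0], [8.0, 1.0]]
```

Pixel thinning (keeping every n-th pixel) would be the cheaper alternative; integration preserves more of the collected signal.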
[0050] In the first, relatively low-resolution reading (prescan), each image is read under reading conditions (the quantity of light in each of the R, G, and B wavelength ranges irradiating the photographic film 26, and the charge accumulation time of the area CCD 30) determined so that saturation of the accumulated charge does not occur in the area CCD 30 even when the image density is very low. In this embodiment, no IR reading is performed during prescan. The data obtained by this prescan (prescan image data) is input from the I/O controller 38 to the control unit 42.
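Choosing the prescan conditions can be sketched as a simple headroom calculation: the accumulation time is set so that even a near-clear (lowest-density, highest-transmittance) film cannot push the CCD past its saturation charge. All constants below are illustrative placeholders, not scanner specifications.

```python
# Hypothetical sketch of prescan condition selection. The full-well
# capacity, charge rate, and headroom factor are assumptions for
# illustration; a real scanner would use calibrated values.

SATURATION_CHARGE = 100_000          # electrons, assumed full-well capacity
CHARGE_RATE_AT_MIN_DENSITY = 2_000   # electrons/ms through near-clear film
HEADROOM = 0.8                       # keep a 20% margin below saturation

def prescan_accumulation_time():
    """Longest accumulation time (ms) that cannot saturate the CCD."""
    return HEADROOM * SATURATION_CHARGE / CHARGE_RATE_AT_MIN_DENSITY

print(prescan_accumulation_time())   # 40.0
```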
[0051] The control unit 42 comprises a CPU 46, a RAM 48, a ROM 50 (for example, a ROM whose stored contents can be rewritten), and an input/output port 52, interconnected via a bus. Based on the prescan image data input from the I/O controller 38, the control unit 42 computes image feature quantities such as image density, determines for each image the reading conditions under which the film scanner 12 will read it again at a relatively high resolution (fine scan), and outputs the determined reading conditions to the film scanner 12.
[0052] Based on the prescan image data, the control unit 42 also computes image feature quantities, including extraction of the main image region (for example, the region corresponding to a person's face (face region)), automatically determines by calculation (setup calculation) the processing conditions for the various image processes to be applied to the image data obtained by the fine scan (fine scan image data), and outputs the determined processing conditions to the image processor 40.
[0053] The control unit 42 further has a function of searching, based on the IR data input from the film scanner 12, for defective portions caused by scratches on, or foreign matter such as dust adhering to, the photographic film 26 in the image represented by the image data, and a function of setting the parameters with which the image processor 40 performs the defect correction process. A display 43, a keyboard 44, and a mouse (not shown) are connected to the bus of the control unit 42.
[0054] Based on the computed image processing conditions, the control unit 42 applies to the prescan image data image processing equivalent to that which the image processor 40 will apply to the fine scan image data, thereby generating simulation image data. The generated simulation image data is converted into a signal for displaying an image on the display 43, and a simulation image is displayed on the display 43 based on that signal. When an operator inspects the displayed simulation image for image quality and the like and, as a result of the inspection, enters information instructing correction of the processing conditions via the keyboard 44 or the mouse, the control unit 42 recalculates the image processing conditions based on the entered information.
[0055] Meanwhile, the image data input to the I/O controller 38 as a result of the film scanner 12 fine-scanning an image (fine scan image data) is input from the I/O controller 38 to the image processor 40.
[0056] The image processor 40 includes image processing circuits that perform various kinds of image processing: color and density correction processing including gradation conversion and color conversion, pixel density conversion processing, hypertone processing that compresses the gradation of the ultra-low-frequency luminance component of the image, hypersharpness processing that enhances sharpness while suppressing graininess, and so on. These processes are applied to the input image data according to the processing conditions determined and communicated for each image by the control unit 42. The image processor 40 also has a function of performing the defect correction process according to the parameters set by the control unit 42.
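The hypertone idea described above can be illustrated in one dimension: extract the ultra-low-frequency component of the luminance with a wide blur, compress only that component, and keep the residual detail, so large-scale lighting is flattened without dulling fine structure. The signal, radius, and gain below are illustrative assumptions, not the processor's actual algorithm.

```python
# Hypothetical 1-D sketch of hypertone processing: compress only the
# low-frequency luminance component extracted by a box blur, and add the
# unmodified high-frequency residual back. All parameters are assumptions.

def box_blur(signal, radius):
    out = []
    n = len(signal)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def hypertone(signal, radius=2, low_freq_gain=0.5):
    """Compress only the low-frequency component of a luminance signal."""
    low = box_blur(signal, radius)
    return [low_freq_gain * l + (s - l) for s, l in zip(signal, low)]

ramp = [0.0, 1.0, 2.0, 3.0, 4.0]   # a large-scale luminance gradient
print(hypertone(ramp))             # large-scale contrast reduced
```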
[0057] When the image data processed by the image processor 40 is used to record an image on photographic paper, it is output from the I/O controller 38 via the I/F circuit 54 to the printer 16 as recording image data. When the processed image data is to be output externally as an image file, it is output from the I/O controller 38 to the control unit 42, and the control unit 42 outputs the image data received from the I/O controller 38 for external output to the outside (the aforementioned drivers, communication control device, or the like) as an image file via the expansion slot.
[0058] The printer 16 includes an image memory 58, R, G, and B laser light sources 60, and a laser driver 62 that controls the operation of the laser light sources 60. The recording image data input from the image processing device 14 is temporarily stored in the image memory 58, then read out and used to modulate the R, G, and B laser light emitted from the laser light sources 60. The laser light emitted from the laser light sources 60 is scanned over photographic paper 68 via a polygon mirror 64 and an fθ lens 66, exposing and recording an image on the photographic paper 68. The photographic paper 68 on which the image has been recorded by exposure is sent to a processor section 18 and subjected to color development, bleach-fixing, washing, and drying, whereby the image exposed and recorded on the photographic paper 68 is made visible.
[0059] Next, as the operation of the present embodiment, the defect correction value determination process executed by the control unit 42 when fine scan image data is input from the film scanner 12 to the image processing device 14 will be described. This defect correction value determination process is a process to which the image processing method according to claim 9 is applied, and is realized by the CPU 46 of the control unit 42 executing a defect correction value determination program. The defect correction value determination program is initially stored, together with the programs for the other processes executed by the CPU 46, in an information storage medium 72 (see FIG. 1). Although FIG. 1 shows the information storage medium 72 as a floppy disk, it may instead be a CD-ROM, a memory card, or the like.
When the information storage medium 72 is loaded into an information reading device (not shown) connected to the control unit 42 and the transfer (installation) of the programs from the information storage medium 72 to the image processing apparatus 14 is instructed, the information reading device reads the defective portion correction value determination program and the like from the information storage medium 72, and the programs are stored in the ROM 50, whose stored contents are rewritable. Then, when the timing at which the defective portion correction value determination processing is to be executed arrives, the defective portion correction value determination program is read from the ROM 50 and executed by the CPU 46. The image processing apparatus 14 thereby functions as the image processing apparatus according to claim 1. In this manner, the information storage medium 72 storing the defective portion correction value determination program and the like corresponds to the recording medium according to claim 10.
Hereinafter, the defective portion correction value determination processing will be described with reference to the flowchart of FIG. In step 100, the R, G, and B image data and the IR data of a single image (the image to be processed) input to the control unit 42 are loaded into the RAM 48 or the like. Step 100 corresponds to the acquisition means of the present invention, and the following steps 102 to 118 correspond to the correcting means of the present invention.
That is, in step 102, correction data for correcting the distortion and the chromatic aberration of magnification of the imaging lens 28 of the film scanner 12 is fetched from the ROM 50. The distortion correction data according to the present embodiment is data used for distortion correction, which corrects the geometric distortion of an image caused by the distortion of the imaging lens 28, and is set based on the results of measuring the direction and amount of change in pixel position, at each position on the image, caused by the distortion of the imaging lens 28.
In the present embodiment, G is adopted as the reference wavelength range. As the distortion correction data, the measurement results of the amount of change in the G pixel position (the distortion amount) at each position on the image caused by the distortion of the imaging lens 28 are decomposed into the x direction and the y direction, and the distortion amount at each position on the image is expressed, with reference to an xP-yP coordinate system in which the center position of the image (xP0, yP0) is the origin (= (0, 0)) and an arbitrary pixel on the image is represented by coordinate values (xP, yP) (see FIG. 5(B)), as a distortion amount Dx(xP, yP) in the x direction and a distortion amount Dy(xP, yP) in the y direction.
The magnification chromatic aberration correction data is data used for magnification chromatic aberration correction, which corrects the color shift of an image caused by the chromatic aberration of magnification of the imaging lens 28, and is set based on the results of measuring the direction and amount of change in the pixel position of the image represented by the data of the non-reference wavelength ranges, relative to the pixel position of the image represented by the data of the reference wavelength range, at each position on the image caused by the chromatic aberration of magnification of the imaging lens 28.
In the present embodiment, R, B, and IR are adopted as the non-reference wavelength ranges. As the magnification chromatic aberration correction data for R, the measurement results of the amount of change in the R pixel position relative to G (the magnification chromatic aberration amount) at each position on the image caused by the chromatic aberration of magnification of the imaging lens 28 are decomposed into the x direction and the y direction, and the magnification chromatic aberration amount of R at each position on the image is expressed, with reference to the xP-yP coordinate system, as a magnification chromatic aberration amount ΔRx(xP, yP) of R in the x direction and a magnification chromatic aberration amount ΔRy(xP, yP) of R in the y direction.
Similarly, as the magnification chromatic aberration correction data for B, the measurement results of the amount of change in the B pixel position relative to G (the magnification chromatic aberration amount) at each position on the image caused by the chromatic aberration of magnification of the imaging lens 28 are decomposed into the x direction and the y direction, and the magnification chromatic aberration amount of B at each position on the image is expressed, with reference to the xP-yP coordinate system, as a magnification chromatic aberration amount ΔBx(xP, yP) of B in the x direction and a magnification chromatic aberration amount ΔBy(xP, yP) of B in the y direction.
Further, as the magnification chromatic aberration correction data for IR, the measurement results of the amount of change in the IR pixel position relative to G (the magnification chromatic aberration amount) at each position on the image caused by the chromatic aberration of magnification of the imaging lens 28 are decomposed into the x direction and the y direction, and the magnification chromatic aberration amount of IR at each position on the image is expressed, with reference to the xP-yP coordinate system, as a magnification chromatic aberration amount ΔIRx(xP, yP) of IR in the x direction and a magnification chromatic aberration amount ΔIRy(xP, yP) of IR in the y direction.
In the next step 104, distortion correction and magnification chromatic aberration correction are performed, in each of the y direction and the x direction, on each of the R, G, B, and IR data, based on the distortion correction data and the magnification chromatic aberration correction data fetched from the ROM 50.
That is, first, with reference to the center position of the image, the coordinate values (x, y) of each pixel of the image data are converted into coordinate values (xP, yP) in the xP-yP coordinate system (see FIG. 5(B)) (xP = x - xP0, yP = y - yP0: that is, normalized). Then, for the pixel whose normalized coordinate values are (xP, yP), the corresponding distortion amount Dy(xP, yP) in the y direction is retrieved from the distortion correction data fetched in step 102 using the coordinates (xP, yP) as a key, and the coordinates of the data R(xP, yP), G(xP, yP), B(xP, yP), and IR(xP, yP) of the pixel at coordinates (xP, yP) are converted according to the following equations. This is performed for all pixels.
R(xP, yPR') ← R(xP, yP)
G(xP, yPG) ← G(xP, yP)
B(xP, yPB') ← B(xP, yP)
IR(xP, yPIR') ← IR(xP, yP)
where yPR' = yPG = yPB' = yPIR' = yP + Dy(xP, yP)
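As a rough sketch, the normalization and the y-direction shifts above might look like the following (the function names and the callable form of the correction tables are assumptions for illustration, not the patent's implementation):

```python
def normalize_coords(x, y, x_p0, y_p0):
    """Convert image coordinates (x, y) to the xP-yP system whose
    origin is the image centre (x_p0, y_p0)."""
    return x - x_p0, y - y_p0

def shifted_y(x_p, y_p, Dy, delta_y=None):
    """Target y coordinate of a pixel after the y-direction shifts.

    Dy is a callable returning the tabulated distortion amount
    Dy(xP, yP); delta_y, when given, returns the magnification
    chromatic aberration amount of a non-reference channel (for
    example, deltaRy(xP, yP) for R). Both callables are hypothetical
    stand-ins for the correction tables held in the ROM 50.
    """
    y_out = y_p + Dy(x_p, y_p)      # distortion shift (all channels)
    if delta_y is not None:         # extra shift for R, B or IR only
        y_out += delta_y(x_p, y_p)
    return y_out
```

For the reference channel G only the distortion shift applies; R, B, and IR each receive their own aberration term in addition.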
Next, for the R data of the pixel whose normalized coordinate values are (xP, yP) (the pixel whose coordinate values after the distortion correction in the y direction are (xP, yPR')), the corresponding magnification chromatic aberration amount ΔRy(xP, yP) of R in the y direction is retrieved from the magnification chromatic aberration correction data of R fetched in step 102, using the coordinates (xP, yP) as a key, and the coordinates of the R data R(xP, yPR') of the pixel whose coordinate values after the distortion correction in the y direction are (xP, yPR') are converted according to the following equation. This is performed for all pixels.
R(xP, yPR) ← R(xP, yPR')
where yPR = yPR' + ΔRy(xP, yP) = yP + Dy(xP, yP) + ΔRy(xP, yP)
Similarly, for the B data of the pixel whose normalized coordinate values are (xP, yP), the corresponding magnification chromatic aberration amount ΔBy(xP, yP) of B in the y direction is retrieved from the magnification chromatic aberration correction data of B fetched in step 102, using the coordinates (xP, yP) as a key, and the coordinates of the B data B(xP, yPB') of the pixel whose coordinate values after the distortion correction in the y direction are (xP, yPB') are converted according to the following equation. This is performed for all pixels.
B(xP, yPB) ← B(xP, yPB')
where yPB = yPB' + ΔBy(xP, yP) = yP + Dy(xP, yP) + ΔBy(xP, yP)
Further, for the IR data of the pixel whose normalized coordinate values are (xP, yP), the corresponding magnification chromatic aberration amount ΔIRy(xP, yP) of IR in the y direction is retrieved from the magnification chromatic aberration correction data of IR fetched in step 102, using the coordinates (xP, yP) as a key, and the coordinates of the IR data IR(xP, yPIR') of the pixel whose coordinate values after the distortion correction in the y direction are (xP, yPIR') are converted according to the following equation. This is performed for all pixels.
IR(xP, yPIR) ← IR(xP, yPIR')
where yPIR = yPIR' + ΔIRy(xP, yP) = yP + Dy(xP, yP) + ΔIRy(xP, yP)
By the above, the distortion correction in the y direction and the magnification chromatic aberration correction of R, B, and IR in the y direction are performed, whereby the position of each pixel of the images represented by the R, G, B, and IR data is moved independently in the y direction.
Next, the original position of each pixel of the image in the y direction (hereinafter, this position is represented by the coordinate values (xP, yP0)) is obtained. The value of R at the position of the coordinate values (xP, yP0) is then obtained by interpolation based on the R data at the two positions of the data R(xP, yPR), which has undergone the distortion correction and the magnification chromatic aberration correction, that are adjacent to each other across the coordinate values (xP, yP0) along the y direction. Similarly, the value of G at the position of the coordinate values (xP, yP0) is obtained by interpolation based on the G data at the two positions of the data G(xP, yPG) that are adjacent to each other across the coordinate values (xP, yP0) along the y direction; the value of B at the position of the coordinate values (xP, yP0) is obtained by interpolation based on the B data at the two positions of the data B(xP, yPB) that are adjacent to each other across the coordinate values (xP, yP0) along the y direction; and the value of IR at the position of the coordinate values (xP, yP0) is obtained by interpolation based on the IR data at the two positions of the data IR(xP, yPIR) that are adjacent to each other across the coordinate values (xP, yP0) along the y direction. By performing the above processing for all pixels of the image, the distortion correction and the magnification chromatic aberration correction in the y direction are completed.
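The interpolation step above, which recovers each channel at its original y positions from the two adjacent shifted samples, can be sketched with a 1-D linear interpolation (a minimal illustration; the patent specifies only that two adjacent positions are used, so linear interpolation is an assumption):

```python
import numpy as np

def resample_channel_to_grid(shifted_y, values, grid_y):
    """Recover channel values at the original y positions.

    shifted_y : monotonically increasing y coordinates after the
                distortion / aberration shifts
    values    : channel data located at those shifted coordinates
    grid_y    : the original positions (yP0) to interpolate back to

    np.interp linearly interpolates between the two shifted samples
    straddling each original position.
    """
    return np.interp(grid_y, shifted_y, values)
```

The same routine serves all four channels, each with its own set of shifted coordinates.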
Subsequently, the distortion correction and the magnification chromatic aberration correction are performed in the x direction. That is, with reference to the center position of the image, for the pixel whose image data coordinate values are (xP, yP0), the corresponding distortion amount Dx(xP, yP0) in the x direction is retrieved from the distortion correction data using the coordinates (xP, yP0) as a key (if the distortion amount at the coordinates (xP, yP0) is not stored as data, the distortion amount at the coordinates (xP, yP0) is obtained by interpolation based on the distortion amounts at positions surrounding those coordinates), and the coordinates of the data R(xP, yP0), G(xP, yP0), B(xP, yP0), and IR(xP, yP0) of the pixel at coordinates (xP, yP0) are converted according to the following equations. This is performed for all pixels.
R(xPR', yP0) ← R(xP, yP0)
G(xPG, yP0) ← G(xP, yP0)
B(xPB', yP0) ← B(xP, yP0)
IR(xPIR', yP0) ← IR(xP, yP0)
where xPR' = xPG = xPB' = xPIR' = xP + Dx(xP, yP0)
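The parenthetical above notes that a correction amount not stored for a given coordinate is interpolated from surrounding positions. A minimal sketch of such a lookup, assuming the amounts are tabulated on a coarse regular grid (the grid layout and names are invented for illustration):

```python
import numpy as np

def distortion_at(table, xs, ys, x_p, y_p):
    """Bilinear interpolation of a tabulated distortion amount.

    table[i, j] holds the amount at grid point (ys[i], xs[j]); when
    (x_p, y_p) falls between grid points, the amount is interpolated
    from the four surrounding entries.
    """
    j = int(np.clip(np.searchsorted(xs, x_p) - 1, 0, len(xs) - 2))
    i = int(np.clip(np.searchsorted(ys, y_p) - 1, 0, len(ys) - 2))
    tx = (x_p - xs[j]) / (xs[j + 1] - xs[j])
    ty = (y_p - ys[i]) / (ys[i + 1] - ys[i])
    top = table[i, j] * (1 - tx) + table[i, j + 1] * tx
    bot = table[i + 1, j] * (1 - tx) + table[i + 1, j + 1] * tx
    return top * (1 - ty) + bot * ty
```

The same lookup applies equally to the magnification chromatic aberration tables, which the text says are interpolated in the same manner.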
Next, for the R data of the pixel whose coordinate values before the distortion correction in the x direction are (xP, yP0) (the pixel whose coordinate values after the distortion correction in the x direction are (xPR', yP0)), the corresponding magnification chromatic aberration amount ΔRx(xP, yP0) of R in the x direction is retrieved from the magnification chromatic aberration correction data of R, using the coordinates (xP, yP0) as a key (if the magnification chromatic aberration amount at the coordinates (xP, yP0) is not stored as data, it is obtained by interpolation as described above), and the coordinates of the R data R(xPR', yP0) of the pixel whose coordinate values after the distortion correction in the x direction are (xPR', yP0) are converted according to the following equation. This is performed for all pixels.
R(xPR, yP0) ← R(xPR', yP0)
where xPR = xPR' + ΔRx(xP, yP0) = xP + Dx(xP, yP0) + ΔRx(xP, yP0)
Similarly, for the B data of the pixel whose coordinate values before the distortion correction in the x direction are (xP, yP0), the corresponding magnification chromatic aberration amount ΔBx(xP, yP0) of B in the x direction is retrieved from the magnification chromatic aberration correction data of B, using the coordinates (xP, yP0) as a key, and the coordinates of the B data B(xPB', yP0) of the pixel whose coordinate values after the distortion correction in the x direction are (xPB', yP0) are converted according to the following equation. This is performed for all pixels.
B(xPB, yP0) ← B(xPB', yP0)
where xPB = xPB' + ΔBx(xP, yP0) = xP + Dx(xP, yP0) + ΔBx(xP, yP0)
Further, for the IR data of the pixel whose coordinate values before the distortion correction in the x direction are (xP, yP0), the corresponding magnification chromatic aberration amount ΔIRx(xP, yP0) of IR in the x direction is retrieved from the magnification chromatic aberration correction data of IR, using the coordinates (xP, yP0) as a key, and the coordinates of the IR data IR(xPIR', yP0) of the pixel whose coordinate values after the distortion correction in the x direction are (xPIR', yP0) are converted according to the following equation. This is performed for all pixels.
IR(xPIR, yP0) ← IR(xPIR', yP0)
where xPIR = xPIR' + ΔIRx(xP, yP0) = xP + Dx(xP, yP0) + ΔIRx(xP, yP0)
By the above, the distortion correction in the x direction and the magnification chromatic aberration correction of R, B, and IR in the x direction are performed, whereby the position of each pixel of the images represented by the R, G, B, and IR data is moved independently in the x direction.
Next, the original position of each pixel of the image in the x direction (hereinafter, this position is represented by the coordinate values (xP0, yP0)) is obtained. The value of R at the position of the coordinate values (xP0, yP0) is then obtained by interpolation based on the R data at the two positions of the data R(xPR, yP0), which has undergone the distortion correction and the magnification chromatic aberration correction, that are adjacent to each other across the coordinate values (xP0, yP0) along the x direction. Similarly, the value of G at the position of the coordinate values (xP0, yP0) is obtained by interpolation based on the G data at the two positions of the data G(xPG, yP0) that are adjacent to each other across the coordinate values (xP0, yP0) along the x direction; the value of B at the position of the coordinate values (xP0, yP0) is obtained by interpolation based on the B data at the two positions of the data B(xPB, yP0) that are adjacent to each other across the coordinate values (xP0, yP0) along the x direction; and the value of IR at the position of the coordinate values (xP0, yP0) is obtained by interpolation based on the IR data at the two positions of the data IR(xPIR, yP0) that are adjacent to each other across the coordinate values (xP0, yP0) along the x direction.
By performing the above processing for all pixels of the image, the distortion correction and the magnification chromatic aberration correction in the x direction are performed, and the distortion correction processing and the magnification chromatic aberration correction processing are completed. The above distortion correction and magnification chromatic aberration correction correct the pixel misregistration of the images represented by the R, G, B, and IR data caused by the distortion and the chromatic aberration of magnification of the imaging lens 28. Steps 102 and 104 correspond to the correcting means described in claim 3.
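Putting the shift and resampling stages together for a single image column, a toy end-to-end example (the correction amounts are invented constants; the real tables vary per position):

```python
import numpy as np

# Normalized y coordinates of one column and its R data.
y_p = np.arange(5, dtype=float)
r = np.array([0.0, 10.0, 20.0, 30.0, 40.0])

# Shift each sample by invented constant amounts: Dy = 0.4 (distortion)
# plus deltaRy = 0.1 (magnification chromatic aberration), 0.5 in total.
y_shifted = y_p + 0.4 + 0.1

# Interpolate back onto the original positions yP0 from the two
# adjacent shifted samples (values outside the range are clamped).
r_corrected = np.interp(y_p, y_shifted, r)
```

The x-direction pass is identical in form, operating on rows with Dx and the per-channel ΔRx, ΔBx, ΔIRx amounts.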
In the next step 106, correction parameters are fetched for correcting the variation in the detected value of the transmitted amount of IR light caused by the visible light included in the light irradiated onto the photographic film as IR light (hereinafter referred to as wavelength separation correction). Then, in step 108, wavelength separation correction is performed on the IR data based on the fetched correction parameters. This wavelength separation correction can be realized by performing the following calculation for each pixel, based on the R, G, B, and IR data that have undergone the distortion correction and the magnification chromatic aberration correction, and replacing the IR data of all pixels with the data IR' obtained from the following equation.
[0083]
(Equation 1)
IR' = a0·B + a1·G + a2·R + a3·IR
In the above equation, B, G, R, and IR represent the B, G, R, and IR values of the pixel being calculated, and IR' represents the IR value after the wavelength separation correction. Further, a0, a1, a2, and a3 are the correction parameters for performing the wavelength separation correction, being the coefficients for the B, G, R, and IR values, respectively, of the pixel being calculated. In the present embodiment, the proportion of visible light in the light irradiated onto the photographic film as IR light, and its wavelength range, are obtained based on the characteristics of the filter 23IR; the value of each coefficient of the correction parameters is set based on the calculation results and stored in the ROM 50.
FIG. 3 shows an example of the spectral characteristics of the light irradiated onto the photographic film as IR light (the characteristic labeled "IR" in FIG. 3). In this characteristic, the proportion of visible light in the light irradiated as IR light is on the order of a few percent, and its wavelength range is distributed only in the wavelength range corresponding to R (in this case, the transmitted amount of the visible light included in the light irradiated as IR light changes according to the R density of the image). As an example, values such as a0 = 0, a1 = 0, a2 = -0.03, and a3 = 1 can therefore be set.
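Per the coefficient description, the correction is a weighted sum of the four channel values of each pixel. A one-line sketch using the example coefficients quoted above (a0 = 0, a1 = 0, a2 = -0.03, a3 = 1), with the function name invented for illustration:

```python
def wavelength_separation_correction(b, g, r, ir,
                                     a0=0.0, a1=0.0, a2=-0.03, a3=1.0):
    """Per-pixel wavelength separation correction.

    Removes the visible-light contribution (here leaking only into the
    R wavelength range, hence the single negative R coefficient) from
    the detected IR value. Default coefficients are the example values
    given in the text.
    """
    return a0 * b + a1 * g + a2 * r + a3 * ir
```

With these defaults, a pixel whose IR reading is inflated by R-range visible light has 3% of its R value subtracted.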
By performing the above wavelength separation correction, the variation in the detected value of the transmitted amount of IR light caused by the visible light included in the light irradiated as IR light is corrected. Steps 106 and 108 correspond to the correcting means described in claim 6.
Incidentally, even when the light transmitted through the photographic film passes through a portion to which no scratch or foreign matter is attached and on which no image is recorded (a so-called plain portion), the transmitted light amount is attenuated by refraction at the film base. The refraction of light at the film base (the degree of attenuation of the transmitted light amount) differs depending on the wavelength of the transmitted light (wavelength dependence of the attenuation); as shown by way of example in FIG. 6, it is common for the attenuation of the transmitted light to decrease as the wavelength of the transmitted light increases. Moreover, this wavelength-attenuation characteristic differs depending on the material of the film base (the type of photographic film).
For this reason, in the present embodiment, the wavelength-attenuation characteristic of the photographic film is measured for each type of photographic film. Based on the measurement results, attenuation correction amounts for correcting the variations in the transmitted light amounts of R, G, B, and IR caused by the wavelength dependence of the attenuation of the photographic film are determined for each film type, and the determined attenuation correction amounts are stored in advance in the ROM 50 as attenuation correction data, associated with information representing the type of photographic film.
In step 110, information representing the type of the photographic film to be read, detected when the film scanner 12 reads the photographic film, is fetched. The film type can be detected by reading the DX code, which is recorded on the side edge of the photographic film in the form of a bar code or the like at the time of manufacture. In the next step 112, the attenuation correction data corresponding to the type of the photographic film to be read is fetched from the ROM 50 based on the information fetched in step 110. Then, in step 114, in accordance with the attenuation correction data fetched in step 112, the variation in the transmitted light amounts of R, G, B, and IR caused by the wavelength dependence of the attenuation of the photographic film is corrected, pixel by pixel, for each of the R, G, B, and IR data.
As a result, the values of the R, G, B, and IR data are corrected to values corresponding to the transmitted light amounts of R, G, B, and IR that would be obtained if the attenuation of transmitted light due to refraction at the film base were constant regardless of wavelength (for example, attenuation = 0). The above steps 110 to 114 correspond to the correcting means described in claim 7.
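As a concrete illustration of steps 110 to 114, the per-film-type attenuation correction can be sketched as a table lookup followed by a per-pixel gain. The film-type names and gain values below are hypothetical, not values from the embodiment:

```python
import numpy as np

# Hypothetical attenuation-correction table: for each film type, a
# multiplicative gain per channel that maps the measured transmitted-light
# values to the values expected if the film-base attenuation were
# wavelength-independent (e.g. attenuation = 0).
ATTENUATION_CORRECTION = {
    "film_type_A": {"R": 1.04, "G": 1.06, "B": 1.09, "IR": 1.00},
    "film_type_B": {"R": 1.02, "G": 1.05, "B": 1.07, "IR": 1.00},
}

def correct_attenuation(channels, film_type):
    """Apply the film-type-specific gain to every pixel of each channel."""
    gains = ATTENUATION_CORRECTION[film_type]
    return {name: data * gains[name] for name, data in channels.items()}

# Flat test images standing in for the scanned R, G, B, IR data.
channels = {name: np.full((2, 2), 100.0) for name in ("R", "G", "B", "IR")}
corrected = correct_attenuation(channels, "film_type_A")
```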
Incidentally, the focal length of a lens changes according to the wavelength of the light passing through it (wavelength dependence of the focal length). Therefore, when a single image is read in each of the R, G, B, and IR wavelength ranges, the image forming position of the image formed by the imaging lens 28 shifts relative to the light receiving surface of the area CCD 30 for each wavelength range of the transmitted light, and the sharpness of the images represented by the R, G, B, and IR data obtained by reading each wavelength range (detecting the transmitted light amount for each wavelength range) also differs.
When reading in each wavelength range is performed sequentially as in the present embodiment, this difference in sharpness could be eliminated by, for example, performing an auto focus (AF) process before reading each wavelength range, automatically matching the image forming position of the imaging lens 28 of the film scanner 12 for the wavelength range to be read to the light receiving surface position of the area CCD 30. However, this raises the new problem that reading the image takes a long time.
In the present embodiment, therefore, when reading of a single image is started, the AF process is performed only for a predetermined reference wavelength range (for example, G). Instead of performing the AF process for every wavelength range, a sharpness correction process is performed in the following steps 116 and 118 to correct each data set so that the sharpness of the images represented by the R, G, B, and IR data coincides.
That is, in step 116, sharpness correction values for correcting the variation in sharpness among the R, G, B, and IR data caused by the wavelength dependence of the focal length of the imaging lens 28 are fetched from the ROM 50. Each sharpness correction value represents the degree of emphasis of the high-frequency components of the image represented by the corresponding data (a decrease in sharpness appears as an attenuation of the high-frequency components in the spatial frequency distribution). The values are set so that the degree of emphasis increases as the degree of attenuation of the high-frequency components in the image increases, and are stored in the ROM 50 in advance.
More specifically, from the focal lengths of the imaging lens 28 for the R, G, B, and IR wavelength ranges, the deviation between the focal length of the imaging lens 28 for light in the reference wavelength range and that for light in each non-reference wavelength range (for example, R, B, and IR) is obtained for each non-reference wavelength range. The degree of attenuation of the high-frequency components is estimated based on each focal length deviation, and a sharpness correction value is set for each non-reference wavelength range in accordance with the estimated degree of attenuation of the high-frequency components.
In the next step 118, sharpness correction is performed on the data of each non-reference wavelength range based on the sharpness correction values fetched in step 116. This sharpness correction can be realized, for example, by performing the calculation of the following equation (1):

QL = Q + β(Q − QUS) …(1)

where Q is the image data to be corrected (the data of a non-reference wavelength range, for example R, B, or IR), QL is the image data after sharpness correction, QUS is the unsharp mask image data, and β is the degree of emphasis of the high-frequency components (the sharpness correction value).
The unsharp mask image data QUS can be obtained from the image data Q to be corrected by, for example, taking the average value of an n × n pixel block (a value of about n = 5 can be used) initially located at a corner of the image, using it as the unsharp mask signal of the pixel at the center of that n × n region, and repeating this while shifting the n × n region to be processed one pixel at a time. As a result, unsharp mask image data QUS is obtained whose response characteristic (spatial frequency distribution) has a lower response in the high-frequency band, as indicated by the broken line in FIG. 7A, than that of the image data Q to be processed, indicated by the solid line in FIG. 7A.
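A minimal sketch of the unsharp-mask correction of equation (1), assuming replicate padding at the image borders (the embodiment does not specify how the borders are handled):

```python
import numpy as np

def unsharp_mask(q, n=5):
    """QUS: mean of the n x n neighborhood around each pixel, computed by
    sliding the n x n window one pixel at a time, as described in the text.
    Border handling by edge replication is an assumption."""
    pad = n // 2
    padded = np.pad(q, pad, mode="edge")
    qus = np.empty_like(q, dtype=float)
    h, w = q.shape
    for i in range(h):
        for j in range(w):
            qus[i, j] = padded[i:i + n, j:j + n].mean()
    return qus

def sharpness_correct(q, beta, n=5):
    """Equation (1): QL = Q + beta * (Q - QUS)."""
    return q + beta * (q - unsharp_mask(q, n))
```

Note that for a flat (constant) image Q equals QUS everywhere, so the correction leaves it unchanged regardless of β, which is the expected behavior of an unsharp mask.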
Since (Q − QUS) in equation (1) is the difference between the image data Q and the unsharp mask image data QUS, its response characteristic has a peak in the high-frequency band, as shown in FIG. 7B (that is, a response characteristic corresponding to the difference between the solid-line and broken-line response characteristics in FIG. 7A). Therefore, the response characteristic of the image data QL after sharpness correction obtained by equation (1) has a higher response only in the high-frequency band, as indicated by the broken line in FIG. 7C, compared with the response characteristic of the image data Q indicated by the solid line in FIG. 7C. The response height h in the high-frequency band depends on the value of the degree of emphasis of the high-frequency components (the sharpness correction value) β: as the value of β increases, the response height h (the degree of emphasis of the high-frequency components) also increases.
By performing the above sharpness correction on the data of each non-reference wavelength range, the sharpness of the images represented by the R, G, B, and IR data, which varies due to the wavelength dependence of the focal length of the imaging lens 28, can be corrected. As described above, steps 116 and 118 correspond to the correcting means described in claim 5.
In the above description, the variation in sharpness caused by the wavelength dependence of the focal length of the imaging lens 28 is corrected; however, the variation in sharpness within a single image caused by the curvature of field of the imaging lens 28 may be corrected at the same time. This correction can be realized by estimating the degree of attenuation of the high-frequency components at each part of the image represented by the R, G, B, and IR data, setting up a sharpness correction table in which a sharpness correction value for correcting the attenuation of the high-frequency components at each part, according to its degree of attenuation, is stored at the address corresponding to the position of that part in a two-dimensional table, and then, when correcting the sharpness, performing the calculation of equation (1) described above for each pixel using the sharpness correction value stored at the corresponding address of the sharpness correction table.
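The field-curvature variant amounts to making β a two-dimensional table indexed by pixel position and applying equation (1) elementwise. The radial β profile below is purely illustrative; the embodiment derives it from measured attenuation:

```python
import numpy as np

def sharpness_correct_with_table(q, qus, beta_table):
    """Per-pixel form of equation (1):
    QL[i, j] = Q[i, j] + beta_table[i, j] * (Q[i, j] - QUS[i, j])."""
    return q + beta_table * (q - qus)

# Illustrative two-dimensional correction table: emphasis grows toward the
# image periphery, where curvature of field attenuates high frequencies more.
h, w = 4, 4
yy, xx = np.mgrid[0:h, 0:w]
radius = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)
beta_table = 0.3 + 0.1 * radius / radius.max()
```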
In the next step 120, a defective portion detection process is performed to detect defective portions of the image to be processed represented by the R, G, and B image data, based on the R, G, and B image data and the IR data that have undergone the various corrections described above. Before describing the defective portion detection process, the principle of detecting portions of a photographic film bearing scratches or foreign matter using IR light will be explained.
As shown in FIG. 8A, when light is irradiated onto a portion of a photographic film whose surface bears neither scratches nor foreign matter, the amount of transmitted light is attenuated, relative to the amount of light incident on the photographic film, only by the amount corresponding to the absorption of light by the photographic film. Since the wavelength range in which light is absorbed by the photographic film is roughly the visible range, and IR light in the infrared range is hardly absorbed, the amount of transmitted light when IR light is irradiated onto such a portion changes only slightly from the incident light amount.
On the other hand, when light is irradiated onto a scratched portion of a photographic film, part of the irradiated light is refracted by the scratch. Therefore, the amount of transmitted light when a scratched portion is irradiated (the amount of light passing straight through that portion) is attenuated, relative to the amount of light incident on the photographic film, by the attenuation due to absorption by the photographic film described above plus the attenuation due to refraction of the light by the scratch. Note that FIG. 8A shows the case where the scratch is on the light incident side, but the same applies when the scratch is on the light exit side.
Since refraction of light by a scratch also occurs for IR light, the amount of transmitted IR light when a scratched portion is irradiated with IR light is attenuated by the amount corresponding to the refraction of the light caused by the scratch. As shown in FIG. 8B as an example, the refraction of light by a scratch becomes more pronounced as the scale of the scratch (its depth, etc.) increases (this holds for visible light and IR light alike), so the amount of transmitted light when a scratched portion is irradiated with IR light decreases as the scale of the scratch increases. Therefore, the scale of a scratch on a photographic film can be detected based on the attenuation of the transmitted IR light amount.
Further, when light is irradiated onto a portion of a photographic film to which foreign matter such as dust adheres, the irradiated light is reflected by the foreign matter; therefore, although it also depends on the size and type (light transmittance) of the foreign matter, the amount of transmitted light at a portion bearing foreign matter is greatly attenuated by the foreign matter. The attenuation of the transmitted light amount at a portion bearing foreign matter is the same when that portion is irradiated with IR light.
As described above, when IR light is transmitted through a photographic film, the amount of transmitted light changes only at portions of the film bearing scratches or foreign matter and, even if an image is recorded on the photographic film, is unaffected by variations in the transmission density of the image. Therefore, scratches and foreign matter on a photographic film can be detected by irradiating the film with IR light and detecting the transmitted light amount.
Based on the above, the defective portion detection process in step 120 is performed as follows. As described above, the amount of transmitted light when a photographic film is irradiated with IR light is normally roughly constant regardless of position on the image, and decreases only at portions bearing scratches or foreign matter (see FIG. 9). Since the IR data represents the transmitted IR light amount at each position on the image to be processed, the transmitted IR light amount at positions free of scratches and foreign matter (for example, the maximum value of the transmitted light amount) is taken as a reference value. Then, for each pixel, the transmitted IR light amount is compared with the reference value, and all pixels whose change (decrease) in transmitted light amount relative to the reference value is equal to or greater than a predetermined value (a value determined in consideration of the slight variation in the transmitted IR light amount at positions free of scratches and foreign matter) are detected as defective pixels belonging to defective portions to be corrected.
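The detection step above can be sketched as a threshold on the drop of the transmitted IR amount from a reference value. Taking the maximum as the reference follows the example in the text; the margin value below is an assumption:

```python
import numpy as np

def detect_defect_pixels(ir, margin):
    """Flag pixels whose transmitted IR amount has dropped from the
    reference value by at least `margin`. The reference is taken as the
    maximum transmitted IR amount, as suggested in the text."""
    reference = ir.max()
    drop = reference - ir
    return drop >= margin

# Synthetic IR data: roughly constant, with one scratched pixel at (1, 1).
ir = np.array([[100.0,  99.0, 98.5],
               [100.0,  60.0, 99.0],
               [ 99.5, 100.0, 97.0]])
mask = detect_defect_pixels(ir, margin=5.0)
```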
Also in step 120, the detected defective pixels are classified into groups belonging to the same defective portion, based on the positional relationship between the defective pixels (for example, whether or not they are adjacent), and information on each defective portion (for example, information identifying the defective pixels belonging to each defective portion and information such as the decrease in the transmitted IR light amount at each defective pixel) is stored in the RAM 48 or the like.
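Grouping the detected defective pixels by adjacency, as described above, amounts to connected-component labelling; a minimal 4-neighbour sketch (the adjacency criterion is the example given in the text, the implementation is an assumption):

```python
def group_defect_pixels(pixels):
    """Group defective pixel coordinates into defective portions by
    4-neighbour adjacency (a simple flood-fill connected-component pass)."""
    remaining = set(pixels)
    portions = []
    while remaining:
        seed = remaining.pop()
        stack, portion = [seed], {seed}
        while stack:
            y, x = stack.pop()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (ny, nx) in remaining:
                    remaining.remove((ny, nx))
                    portion.add((ny, nx))
                    stack.append((ny, nx))
        portions.append(portion)
    return portions

# Two adjacent pixels form one defective portion; the isolated pixel another.
portions = group_defect_pixels([(0, 0), (0, 1), (3, 3)])
```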
Since the above defective portion detection process is performed using R, G, B, and IR data that have undergone the various corrections, only defective portions corresponding to scratches or foreign matter on the photographic film can be detected reliably and with high accuracy, and erroneous detection of the extent of a defective portion can be prevented. Step 120 corresponds to the defective portion detecting means.
In the next step 122, for each defective portion to be corrected detected by the defective portion detection process, a correction value for correcting that defective portion is calculated. First, the principle of defective portion correction will be explained.
As shown in FIG. 8B, the emulsion layer of a photographic film comprises R, G, and B photosensitive layers. In a photographic film (negative film) on which an image has been exposed and recorded and which has undergone processing such as development, a negative C image is formed in the R photosensitive layer, a negative M image in the G photosensitive layer, and a negative Y image in the B photosensitive layer. Of the visible light transmitted through the photographic film, the R light is attenuated (absorbed) in the R photosensitive layer by an amount corresponding to the transmission density of the negative C image, the G light is attenuated (absorbed) in the G photosensitive layer by an amount corresponding to the transmission density of the negative M image, and the B light is attenuated (absorbed) in the B photosensitive layer by an amount corresponding to the transmission density of the negative Y image.
Here, as shown in FIG. 8B as an example, when the scratch is on the back surface opposite to the emulsion surface, the ratio of the light absorbed in each of the R, G, and B photosensitive layers to the transmitted light is the same as when there is no scratch. That is, in FIG. 8B, let I0 be the amount of light incident on the photographic film; let I0R, I0G, and I0B be the transmitted amounts of R, G, and B light, respectively, when there is no scratch; let I1 be the amount of light that passes straight through the scratched portion and enters the emulsion layer when there is a scratch (I1 < I0, where I0 − I1 is the attenuation of the light due to the scratch); and let I1R, I1G, and I1B be the transmitted amounts of R, G, and B light, respectively, when there is a scratch. Then the following relationship (2) holds:

I0R/I0 ≈ I1R/I1
I0G/I0 ≈ I1G/I1
I0B/I0 ≈ I1B/I1 …(2)
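Relationship (2) can be checked with a small numeric example: a back-surface scratch scales the light reaching every photosensitive layer by one common factor, so the per-channel transmission ratios are preserved. The absorption fractions below are illustrative assumptions:

```python
# Illustrative absorption: each photosensitive layer passes a fixed fraction
# of the light that reaches the emulsion layer (fractions are assumptions).
pass_fraction = {"R": 0.5, "G": 0.4, "B": 0.3}

I0 = 100.0   # incident light amount, no scratch
I1 = 70.0    # light entering the emulsion past a back-surface scratch

for ch, f in pass_fraction.items():
    I0_ch = f * I0   # transmitted amount of this channel without the scratch
    I1_ch = f * I1   # transmitted amount of this channel with the scratch
    # Equation (2): ratios agree because the scratch scales all channels equally.
    assert abs(I0_ch / I0 - I1_ch / I1) < 1e-12
```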
Accordingly, a defective portion corresponding to a scratch on the back surface changes only in luminance compared with the unscratched case, and the color information of the image recorded on the photographic film is preserved. The defective portion of the image represented by the image data can therefore be corrected by applying a luminance adjustment method that adjusts the luminance of the defective region.
On the other hand, as shown in FIG. 8C as an example, when the emulsion surface is scratched, a shallow scratch scrapes away part of some of the photosensitive layers, so the ratio of the light absorbed in each of the R, G, and B photosensitive layers to the transmitted light changes from the unscratched case. With a very deep scratch in which all the photosensitive layers are scraped away, no absorption of the transmitted light occurs in any photosensitive layer. In either case, therefore, the relationship of equation (2) does not hold.
Thus, a defective portion corresponding to a scratch on the emulsion surface changes in both luminance and color compared with the unscratched case, regardless of the depth of the scratch, and the color information of the image recorded on the photographic film is lost; it is therefore difficult to correct such a defective portion accurately by adjusting the luminance. For this reason, a correction method (interpolation method) that determines the luminance and density of the defective portion by interpolation from information on the area surrounding it is suited to correcting defective portions corresponding to scratches on the emulsion surface. Since the luminance and color of a defective portion caused by foreign matter on the photographic film also change compared with the foreign-matter-free case, the interpolation method is also suited to correcting such defective portions.
In step 122, first, for each defective portion to be corrected, a predetermined feature quantity is calculated for deciding whether that portion should be corrected by applying the interpolation method or the luminance adjustment method. In the present embodiment, a feature quantity representing the correlation between the changes in the transmitted amounts of R, G, and B light within the defective portion is used as an example of the predetermined feature quantity.
For example, when the back surface of a photographic film is scratched, the transmitted amounts of R, G, and B light show roughly similar changes at the scratched portion, as shown in FIG. 9A as an example, so the correlation between the changes in the transmitted amounts of R, G, and B light is high. On the other hand, when the emulsion surface of a photographic film is scratched, the changes in the transmitted amounts of R, G, and B light at the scratched portion are not uniform, as shown in FIG. 9B as an example, so the correlation between the changes in the transmitted amounts of R, G, and B light is low (the same applies when foreign matter adheres to the photographic film).
FIG. 9 shows typical cases; in practice there are many cases in which it is not obvious which correction method should be applied, such as when both surfaces of the photographic film are scratched. However, since it is preferable to apply the luminance adjustment method when information on the colors of the subject remains in the R, G, and B image data, and to apply the interpolation method when it does not, the correction method to be applied can be determined appropriately for each individual defective portion based on a predetermined feature quantity representing the correlation between the changes in the transmitted amounts of R, G, and B light within the defective portion (for example, a value obtained by integrating the differences between the derivatives of the changes in the transmitted amounts of R, G, and B light).
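One way to realize the suggested feature quantity, integrating (summing) the differences between the derivatives of the R, G, and B transmitted-light changes across the defective portion, is sketched below; the profiles are synthetic examples:

```python
import numpy as np

def correlation_feature(r, g, b):
    """Sum of absolute differences between the derivatives of the R, G, B
    transmitted-light profiles across a defective portion. A small value
    means the channels change together (high correlation: back-surface
    scratch); a large value means low correlation (emulsion-side scratch
    or foreign matter)."""
    dr, dg, db = np.diff(r), np.diff(g), np.diff(b)
    return np.abs(dr - dg).sum() + np.abs(dg - db).sum() + np.abs(db - dr).sum()

# Back-surface-like scratch: all channels dip by the same amount.
dip = np.array([0.0, -5.0, -10.0, -5.0, 0.0])
low = correlation_feature(100 + dip, 80 + dip, 60 + dip)

# Emulsion-like scratch: the channels change inconsistently.
high = correlation_feature(
    np.array([100.0, 95.0, 90.0, 95.0, 100.0]),
    np.array([80.0, 82.0, 85.0, 82.0, 80.0]),
    np.array([60.0, 45.0, 30.0, 45.0, 60.0]),
)
```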
[0119] When the above predetermined feature amount has been computed for each defective portion to be corrected, the set value defining the application ranges of the two correction methods (a threshold representing the boundary between their ranges of application) is read in, and the feature amount of each defective portion is compared with this set value. It is thereby determined, with each defective portion as a unit, whether the portion is to be corrected by applying the interpolation method or by applying the luminance adjustment method.
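As a rough illustration of this decision step, the sketch below derives a feature amount from how much the per-channel derivatives of the transmitted-light profiles diverge, and compares it against a threshold. The function name, the exact form of the feature, and the threshold value are illustrative assumptions, not values taken from the specification.

```python
import numpy as np

def classify_defect(r, g, b, threshold=0.1):
    """Decide which correction method to apply to one defective portion.

    r, g, b: sequences of transmitted-light values sampled across the
    defect. The feature integrates the pairwise differences between the
    derivatives of the three channels: near zero means the channels
    change together (back-surface scratch, so luminance adjustment),
    large means they diverge (emulsion scratch or dust, so interpolation).
    """
    dr, dg, db = np.diff(r), np.diff(g), np.diff(b)
    feature = np.sum(np.abs(dr - dg) + np.abs(dg - db) + np.abs(db - dr))
    feature /= len(dr)  # normalize by the extent of the defect
    return "luminance" if feature < threshold else "interpolation"
```

A correlated dip across all three channels classifies as "luminance", while a change confined to one channel classifies as "interpolation".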
[0120] Then, for each defective portion determined to be corrected by applying the interpolation method, a correction value is computed using the interpolation method. That is, the luminance and color of the defective portion to be corrected are newly computed by interpolation from the luminance and color of the region surrounding it. Letting D1 be the value of each pixel within the defective portion obtained by this interpolation (a density value for each of R, G, and B, or values representing hue, lightness, and saturation), D2 the original value of each pixel, and α the "correction degree", the correction value D3 of each pixel within the defective portion is obtained according to equation (3):

D3 = α·D1 + (1 − α)·D2 … (3)

The above processing is performed for each defective portion determined to be corrected by applying the interpolation method, and a correction value is obtained for each such defective portion.
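A minimal sketch of this interpolate-then-blend step. The 4-neighbour mean used as the interpolation is a deliberately simple stand-in (the specification does not prescribe a particular interpolation kernel); `blend_correction` implements equation (3) directly.

```python
import numpy as np

def interpolate_defect(image, mask):
    """Compute D1: replace each masked (defective) pixel by the mean of
    its non-defective 4-neighbours, a simple stand-in for interpolating
    from the surrounding region."""
    out = image.astype(float).copy()
    h, w = mask.shape
    for y, x in zip(*np.nonzero(mask)):
        vals = [image[ny, nx]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]]
        if vals:
            out[y, x] = np.mean(vals)
    return out

def blend_correction(d1, d2, alpha):
    """Equation (3): D3 = alpha*D1 + (1 - alpha)*D2, per pixel."""
    return alpha * np.asarray(d1, float) + (1 - alpha) * np.asarray(d2, float)
```

With α = 1 the defect is fully replaced by the interpolated values; with α = 0 the original pixels are kept.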
[0121] Likewise, for each defective portion determined to be corrected by applying the luminance adjustment method, a correction value is computed using the luminance adjustment method. That is, a luminance correction amount for the defective portion is computed based on the amount of change in the transmitted amount of IR light at the defective portion to be corrected. Letting L1 be the luminance value of each pixel within the defective portion after correction according to this luminance correction amount, L2 the original luminance value of each pixel, and α the current set value of the "defective portion correction degree", the correction value (luminance value) L3 of each pixel within the defective portion is obtained according to equation (4):

L3 = α·L1 + (1 − α)·L2 … (4)

The above processing is performed for each defective portion determined to be corrected by applying the luminance adjustment method, and a correction value is obtained for each such defective portion. Note that step 122 described above corresponds to the correcting means described in claim 8.
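Equation (4) can be sketched as below. The way L1 is derived here, scaling the luminance by the ratio of the expected to the measured IR transmittance, is an assumed model of "correcting according to the luminance correction amount"; the specification states only that the correction amount is computed from the change in transmitted IR light.

```python
import numpy as np

def luminance_adjust(luminance, ir_reference, ir_defect, alpha):
    """Equation (4): L3 = alpha*L1 + (1 - alpha)*L2.

    Assumed model: a back-surface scratch attenuates all wavelengths,
    including IR, by roughly the same factor, so the corrected
    luminance L1 is the original luminance L2 scaled by the IR
    transmittance ratio (reference / measured at the defect).
    """
    l2 = np.asarray(luminance, dtype=float)
    gain = np.asarray(ir_reference, float) / np.asarray(ir_defect, float)
    l1 = l2 * gain  # luminance compensated for the IR-measured attenuation
    return alpha * l1 + (1 - alpha) * l2
```

With α = 1 the compensated luminance is used outright; with α = 0 the original luminance is kept.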
[0122] When correction values have been computed for all the defective portions to be corrected, the process proceeds to step 124, where the correction value for each defective portion is reported to the image processor 40 together with information indicating the position of the defective portion (for example, the addresses of the defective pixels constituting each defective portion), and the defective-portion correction value determination processing ends.
[0123] The image processor 40 receives image data that has undergone the corrections of steps 102 to 118 of the above defective-portion correction value determination processing (these corrections may instead be performed by the image processor 40 itself). The image processor 40 then performs defective-portion correction processing on the input image data, correcting each defective portion in accordance with the correction values reported from the control section 42 as a result of the defective-portion correction value determination processing (specifically, replacing the value of each defective pixel belonging to a defective portion with the reported corrected pixel value). All the defective portions to be corrected are thereby corrected automatically. In this way, the image processor 40 also corresponds to the correcting means described in claim 8.
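The replacement step can be sketched as follows; the mapping from (row, col) addresses to corrected values stands in for the notification sent from the control section to the image processor, and is an assumed representation.

```python
def apply_corrections(image, corrections):
    """Replace each defective pixel's value with its reported correction.

    image: mutable 2-D structure (list of lists or array) of pixel values.
    corrections: dict mapping (row, col) addresses of defective pixels to
    the corrected pixel values reported for them.
    """
    for (y, x), value in corrections.items():
        image[y][x] = value
    return image
```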
[0124] The image processor 40 then performs various kinds of image processing, under the processing conditions determined by the setup calculation in the control section 42, on the image data that has undergone the defective-portion correction processing, and outputs the result to the printer 16 via the I/O controller 38 and the I/F circuit 54. As a result, the defective portions selected for correction are erased from the image exposed and recorded on the photographic paper 68.
[0125] In the above description, an example was described in which the film scanner 12 performs reading (detection of the transmitted light amount) for each of the R, G, B, and IR wavelength ranges with a single area CCD 30, but the present invention is not limited to this; a film scanner in which reading in each wavelength range is performed by two or more photoelectric conversion elements may also be used. As an example, FIG. 10 shows a film scanner 72 in which reading in each wavelength range is performed by separate line sensors 70B, 70G, 70R, and 70IR.
[0126] In the film scanner 72, light emitted from the light source 20 is shaped by the light diffusion box 74 into a slit-shaped beam elongated along the width direction of the photographic film 26 and is irradiated onto the photographic film 26, and the light transmitted through the photographic film 26 is incident on the line sensors 70B, 70G, 70R, and 70IR via the imaging lens 28. The light-receiving surface of each line sensor is provided with a filter that transmits only the light in the wavelength range to be detected out of the incident light, and reading in each wavelength range (detection of the transmitted amount of light in that range) is performed by the respective line sensors while the photographic film 26 is conveyed at a constant speed.
[0127] When reading in each wavelength range is performed by a plurality of photoelectric conversion elements, as in the film scanner 72 described above, pixel shifts arise in the image information obtained for each wavelength range because of differences in the arrangement positions of the photoelectric conversion elements. For example, in the film scanner 72, if the line sensors are arranged so that the spacing between the reading positions of adjacent line sensors on the photographic film 26 is n times (n being an integer) the distance the photographic film 26 is conveyed per reading cycle, no pixel shift occurs as long as the reading timings of the line sensors are offset from one another by n cycles. If, however, the spacing between the reading positions is a non-integer multiple of the film conveyance distance per cycle, for example because of errors in the mounting positions of the sensors, the pixel shift described above occurs.
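The integer-multiple condition above can be sketched as a small check; the function and parameter names are illustrative.

```python
def reading_delay_periods(sensor_gap_mm, transport_per_cycle_mm, tol=1e-6):
    """Return the whole number n of reading cycles by which a downstream
    line sensor must delay its read so that it samples the same film
    line, or None if the sensor gap is a non-integer multiple of the
    per-cycle transport distance (in which case a residual sub-pixel
    shift remains and must be corrected in software)."""
    n = sensor_gap_mm / transport_per_cycle_mm
    return round(n) if abs(n - round(n)) < tol else None
```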
[0128] In such a case, the deviation of each line sensor's reading position from its intended position may be measured and stored in advance, and the positional deviation then corrected. This positional-deviation correction can be realized, for example, within the distortion correction and the magnification chromatic aberration correction for the direction parallel to the conveyance direction of the photographic film (for example, the x direction in FIG. 5A), by correcting the positions of all pixels so that they move by the measured deviation in the direction opposite to the deviation. This makes it possible to correct pixel-position shifts caused by differences in the arrangement positions of the photoelectric conversion elements.
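A sketch of undoing a measured whole-pixel offset along the transport axis for one channel; sub-pixel deviations, which the actual distortion and magnification-chromatic-aberration corrections would handle by interpolation, are out of scope for this illustration.

```python
import numpy as np

def correct_line_shift(channel, shift_px):
    """Move all pixels of one colour channel by -shift_px rows along
    the transport axis (axis 0), i.e. opposite to the measured
    reading-position deviation; rows exposed at the edge are filled by
    replicating the nearest valid row."""
    shifted = np.roll(channel, -shift_px, axis=0)
    if shift_px > 0:
        shifted[-shift_px:] = shifted[-shift_px - 1]
    elif shift_px < 0:
        shifted[:-shift_px] = shifted[-shift_px]
    return shifted
```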
[0129] Note that the correction described above corresponds to the correcting means described in claim 4; the line sensors 70B, 70G, and 70R of the film scanner 72 correspond to the first photoelectric conversion elements described in claim 4, and the line sensor 70IR corresponds to the second photoelectric conversion element described in claim 4.
[0130] In the above description, the interpolation method and the luminance adjustment method were described as examples of the defective-portion correction method, but the present invention is not limited to these; a so-called blurring method, in which the defective portion is blurred by applying a low-pass filter or the like, may also be applied.
[0131] Further, in the above description, a configuration was described in which an image is read by photoelectrically converting light transmitted through the photographic film, but the present invention is not limited to this; a configuration in which an image is read by photoelectrically converting light reflected by the photographic film may also be adopted. The image recording material is likewise not limited to photographic film; it goes without saying that photographic photosensitive materials other than photographic film, plain paper, OHP sheets, and the like may be used as the image recording material.
[0132] Further, in the above, an example was described in which R, G, and B are read during the pre-scan and R, G, B, and IR are read during the fine scan, but the present invention is not limited to this; IR reading may be performed only during the pre-scan, or during both the pre-scan and the fine scan.
[0133] [Effects of the Invention] As described above, the inventions of claims 1 and 9 acquire visible image information, obtained by irradiating the image recording area of an image recording material with visible light and detecting the transmitted or reflected visible light, and invisible image information, obtained by irradiating the image recording area with invisible light and detecting the transmitted or reflected invisible light, and correct, for at least one of the visible image information and the invisible image information, the difference between the two kinds of information caused by optical characteristics. This has the excellent effect of making it possible to improve the correction accuracy for defective portions with a low-cost configuration.
[0134] The invention of claim 2, in the invention of claim 1, corrects the differences among the pieces of image information, caused by optical characteristics, for the invisible image information and for the visible image information of each of a plurality of wavelength ranges, obtained by detecting the visible light transmitted through or reflected by the image recording area separately for each of the plurality of wavelength ranges. In addition to the above effect, this has the effect of making it possible to further improve the correction accuracy for defective portions.
[0135] The invention of claim 3, in the invention of claim 1, corrects the pixel-position shift of each piece of image information caused by the magnification chromatic aberration or distortion of the imaging lens. In addition to the above effect, this has the effect of preventing the range of a defective portion from being erroneously detected, or a range deviating from the actual defective portion from being erroneously corrected, because of pixel-position shifts caused by the magnification chromatic aberration or distortion of the imaging lens.
[0136] The invention of claim 4, in the invention of claim 1, corrects the pixel-position shift of each piece of image information caused by the difference in the arrangement positions of the first photoelectric conversion element and the second photoelectric conversion element. In addition to the above effect, this has the effect of preventing the range of a defective portion from being erroneously detected, or a range deviating from the actual defective portion from being erroneously corrected, because of pixel-position shifts caused by that difference in arrangement positions.
[0137] The invention of claim 5, in the invention of claim 1, corrects the difference in sharpness among the images represented by the pieces of image information, caused by the wavelength dependence of the focal length of the imaging lens. In addition to the above effect, this has the effect of preventing the range of a defective portion from being erroneously detected, or the correction strength from being set improperly when correcting a defective portion, because of sharpness differences caused by the wavelength dependence of the focal length of the imaging lens.
[0138] The invention of claim 6, in the invention of claim 1, corrects the change in the detected amount of invisible light caused by irradiating, as the invisible light, light in a wavelength range that partially includes the visible range. In addition to the above effect, this has the effect of preventing the range of a defective portion from being erroneously detected, or the correction strength from being set improperly when correcting a defective portion, because of changes in the detected amount of invisible light caused by the wavelength range of the invisible light partially including the visible range.
[0139] The invention of claim 7, in the invention of claim 1, corrects the difference among the amounts of light detected by the respective detecting means, caused by the wavelength dependence of the attenuation, by the image recording material, of the light transmitted through or reflected by the image recording material. In addition to the above effect, this has the effect of preventing the range of a defective portion from being erroneously detected, or the correction strength from being set improperly when correcting a defective portion.
[0140] The invention of claim 10 records on a recording medium a program for causing a computer to execute processing that includes a first step of acquiring visible image information, obtained by irradiating the image recording area of an image recording material with visible light and detecting the transmitted or reflected visible light, and invisible image information, obtained by irradiating the image recording area with invisible light and detecting the transmitted or reflected invisible light, and a second step of correcting, for at least one of the visible image information and the invisible image information, the difference between the two kinds of information caused by optical characteristics. This has the excellent effect of making it possible to improve the correction accuracy for defective portions with a low-cost configuration.
[FIG. 1] A schematic configuration diagram of an image processing system according to the present embodiment.
[FIG. 2] A perspective view showing the schematic configuration of a film scanner.
[FIG. 3] A line graph showing an example of the spectral characteristics of the light irradiated onto a photographic film.
[FIG. 4] A flowchart showing the contents of the defective-portion correction value determination processing.
[FIG. 5] Conceptual diagrams for explaining distortion correction and magnification chromatic aberration correction, in which (A) shows the xy coordinate system set for an image and (B) shows the x_P-y_P coordinate system.
[FIG. 6] A line graph showing an example of the relationship between the wavelength of light transmitted through a photographic film and the degree of attenuation of the transmitted light.
[FIG. 7] Line graphs showing examples of (A) the response characteristics of image data Q and unsharp-mask image data QUS, (B) the response characteristics of image data (Q − QUS), and (C) the response characteristics of image data Q and sharpness-corrected image data QL.
[FIG. 8] Conceptual diagrams showing the transmission of light (A) at a portion of a photographic film with no scratch or foreign matter, at a scratched portion, and at a portion with adhering foreign matter, (B) when the back surface of the photographic film is scratched, and (C) when the emulsion surface of the photographic film is scratched.
[FIG. 9] Line graphs showing examples of the changes in the transmitted amounts of R light, G light, B light, and IR light (A) when the back surface is scratched and (B) when the emulsion surface is scratched.
[FIG. 10] A schematic configuration diagram of a film scanner according to another embodiment of the present invention.
12 Film scanner
14 Image processing device
20 Light source
23 Filter unit
26 Photographic film
28 Imaging lens
30 Area CCD
40 Image processor
42 Control section
72 Information storage medium
Continued from the front page. F-terms (reference): 5B057 BA02 BA19 CA01 CA08 CA12 CA16 CB01 CB08 CB12 CB16 CC01 CD12 CE02 CH01 CH08 CH20 DB02 DB06 DB09 DC32; 5C072 AA01 BA17 CA03 DA02 DA09 DA13 DA16 DA18 DA21 EA05 EA08 QA06 QA17 UA18 VA03 WA04; 5C077 LL02 MM03 MP08 PP03 PP32 PP39 PP54 PQ12 PQ20 SS01 TT09; 5C079 HB01 JA16 JA23 LA15 LA24 MA11 NA02 NA03 NA25 PA08