Source: https://committee.tta.or.kr/include/Download.jsp?filename=choan%2F2012-1420+%B8%D6%C6%BC+%BD%BA%C5%D7%B7%B9%BF%C0+%B1%E2%B9%DD+%B0%ED%C1%A4+3D++%C6%E4%C0%CC%BC%C8+%BD%BA%C4%B3%B4%D7+%B9%E6%B9%FD_2%C2%F7%BC%F6%C1%A4%BA%BB.hwp

 

A blackbody, a complete radiator, behaves like heated metal: as heat is applied it first glows red and then turns an increasingly bright white. A blackbody can be pictured as a hot hollow sphere, and its spectral energy distribution depends only on its temperature, not on the material it is made of. As the color temperature rises the light becomes bluer; as it falls, redder. The daylight we commonly see around us is about 5,500-7,000 K, a camera flash 5,600-6,000 K, an incandescent lamp 2,500-3,600 K, and candlelight 1,800-2,000 K.
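The shift from red toward bluish white is the blackbody spectrum moving its peak as temperature rises. A short illustrative Python sketch (not part of the source text) evaluating Planck's law and locating the peak, which Wien's displacement law predicts at roughly 2.898e-3 / T meters:

```python
import numpy as np

# Physical constants (SI units)
h = 6.62607015e-34   # Planck constant
c = 2.99792458e8     # speed of light in vacuum
k = 1.380649e-23     # Boltzmann constant

def planck_radiance(wavelength, T):
    """Spectral radiance of a blackbody at temperature T (Planck's law);
    wavelength in meters, T in kelvin."""
    return (2 * h * c**2 / wavelength**5) / np.expm1(h * c / (wavelength * k * T))

# The emission peak shifts with temperature (Wien's law: lambda_max ~ 2.898e-3 / T),
# which is why cooler radiators look reddish and hotter ones bluish-white.
T = 5500.0                                      # roughly daylight
wavelengths = np.arange(100e-9, 3000e-9, 1e-9)  # 100 nm .. 3000 nm grid
peak = wavelengths[np.argmax(planck_radiance(wavelengths, T))]
```

At 5,500 K the peak lands near 527 nm (green); at candlelight temperatures it moves deep into the red and infrared.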

Posted by uniqueone
,

http://terms.naver.com/entry.nhn?docId=271066&cid=42641&categoryId=42641

 

 

 

A reference light source defined by the Commission Internationale de l'Éclairage (CIE).
It is used internationally for accurate color measurement. The standard illuminants whose spectral power distributions the CIE has specified for colorimetry are designated A, B, C, and D.
(1) Illuminant A is the light of a complete (Planckian) radiator at a correlated color temperature of about 2,856 K, representing a tungsten incandescent lamp.
(2) Illuminant B, with a correlated color temperature of about 4,874 K, represents direct sunlight over the visible range; it is now practically unused.
(3) Illuminant C, with a correlated color temperature of about 6,774 K, is average daylight over the visible range. Illuminant D65, with a correlated color temperature of about 6,504 K, is average CIE daylight including the ultraviolet region.
(4) The other D illuminants are CIE daylight at other correlated color temperatures, defined in the same manner as D65; of these, D55 (5,503 K) and D75 (7,504 K) are used preferentially as colorimetric standard illuminants.

CIE standard illuminant

[Naver Knowledge Encyclopedia] CIE standard illuminant [CIE standard illuminant] (Dictionary of Color Terms, 2007, Yerim Publishing)

 

 

 

 

http://www.ktword.co.kr/abbr_view.php?m_temp1=4526

 

 

1. Standard Illuminant / Reference Illuminant / Colorimetric Illuminant

  ㅇ Because the perceived color of an object changes with the type and intensity
     of the illumination, a standard illuminant is an illumination source defined
     to standardize this.
     - An illuminant defined by its relative spectral power distribution
        * (it does not need to be physically realizable)

     - When the spectral distribution of the illumination changes,
        . the tristimulus values of object colors change with it,
        . so colors cannot be specified objectively;
        . hence illuminants with specified relative spectral distributions
          are defined for colorimetry.


2. Types of standard illuminants


  ㅇ Illuminant A
     - Describes color under a gas-filled tungsten incandescent lamp
       (correlated color temperature 2,856 K)
        . the light emitted by a blackbody at 2,856 K

  ㅇ Illuminant B
     - Represents average direct sunlight at midday (correlated color
       temperature about 4,900 K)
        . no longer in use

  ㅇ Illuminant C
     - Represents average daylight under an overcast sky (correlated color
       temperature about 6,800 K)
        . obtained by filtering illuminant A through a Davis-Gibson filter,
          so it is somewhat lacking in rigor

  ㅇ Illuminant D
     - Average data obtained by measuring actual daylight (natural daylight)
        . supplements illuminant C and can be adjusted to an arbitrary
          correlated color temperature

     - D50 : representative of noon sunlight (correlated color temperature 5,000 K)
     - D55 : (correlated color temperature 5,500 K)
     - D60 : widely used in computer graphics (correlated color temperature 6,000 K)
     - D65 : representative of average north-sky daylight (correlated color
             temperature 6,500 K)
        . used mainly in the paint, plastics, and textile industries
        . stronger in the short-wavelength ultraviolet region than illuminant C
     - D75 : representative of north-sky daylight (correlated color temperature 7,500 K)

  ㅇ Illuminant E
     - Theoretically has equal energy at every wavelength
     - Not a real light source; used mainly for calculation

  ㅇ Illuminant F series
     - Represents typical fluorescent lamps
     - F2 ~ F12

  ※ [Web references]
     - Example spectral distribution plots of standard illuminants (A, C, D65)
     - Radiometric standards: from the ultraviolet to the infrared
     - Understanding standard illuminants and colorimetric viewing conditions


3. Principal classifications of standard illuminants


  ㅇ CIE classification : A, C, D50, D65, F2, F8, F11, etc.

  ㅇ Korean Industrial Standard (KS A 0074) classification
     - Standard illuminants A, C, D65
     - Supplementary standard illuminants B, D50, D55, D75
     - Sample illuminants F6, F8, F10
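The D-series illuminants above are defined analytically: the CIE daylight model gives the chromaticity coordinate x_D as a polynomial in the reciprocal correlated color temperature, and y_D as a quadratic in x_D. A Python sketch of that formula (illustrative; constants quoted from the CIE daylight model, valid roughly 4,000 K to 25,000 K):

```python
def daylight_chromaticity(cct):
    """CIE daylight-series (D) chromaticity (x, y) for a correlated
    color temperature in kelvin, per the CIE daylight model."""
    t = 1e3 / cct  # reciprocal CCT, in reciprocal megakelvin * 1000
    if cct <= 7000:
        x = 0.244063 + 0.09911 * t + 2.9678 * t**2 - 4.6070 * t**3
    else:
        x = 0.237040 + 0.24748 * t + 1.9018 * t**2 - 2.0064 * t**3
    y = -3.000 * x**2 + 2.870 * x - 0.275
    return x, y

# Nominal "D65" is specified at 6500 K on an older value of the radiation
# constant c2, i.e. an effective CCT of 6500 * 1.4388 / 1.4380 ~ 6504 K.
x, y = daylight_chromaticity(6500 * 1.4388 / 1.4380)
```

Evaluating at the effective D65 temperature reproduces the familiar D65 white point near (0.3127, 0.3290).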


https://ryanfb.github.io/etc/2015/07/08/automatic_colorchecker_detection.html

 


 




Automatic ColorChecker Detection, a Survey


A while back (July 2010, going by Git commit history), I hacked together a program for automatically finding the GretagMacbeth ColorChecker in an image, and cheekily named it Macduff.

The algorithm I developed used adaptive thresholding against the RGB channel images, followed by contour finding with heuristics to try to filter down to ColorChecker squares, then using k-means clustering to cluster squares (in order to handle the case of images with an X-Rite ColorChecker Passport), then computing the average square colors and trying to find if any layout/orientation of square clusters would match ColorChecker reference values (within some Euclidean distance in RGB space). Because of the original use case I was developing this for (automatically calibrating images against an image of a ColorChecker on a copy stand), I could assume that the ColorChecker would take up a relatively large portion of the input image, and coded Macduff using this assumption.
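The first stage of a pipeline like this can be illustrated concretely. Below is a minimal NumPy-only sketch of adaptive thresholding on a single channel, using an integral image for the local means (an illustration of the general technique, not Macduff's actual code; the `block` and `c` parameters are arbitrary choices):

```python
import numpy as np

def adaptive_threshold(gray, block=31, c=10):
    """Binarize by comparing each pixel to the mean of its surrounding
    block x block neighborhood, computed with an integral image."""
    h, w = gray.shape
    pad = block // 2
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    # Integral image: ii[i, j] = sum of padded[:i, :j]
    ii = np.zeros((h + 2 * pad + 1, w + 2 * pad + 1))
    ii[1:, 1:] = padded.cumsum(axis=0).cumsum(axis=1)
    y, x = np.mgrid[0:h, 0:w]
    window_sum = (ii[y + block, x + block] - ii[y, x + block]
                  - ii[y + block, x] + ii[y, x])
    local_mean = window_sum / (block * block)
    return gray > local_mean + c

# Demo: a bright 10x10 patch on a flat background survives the threshold.
img = np.full((100, 100), 50.0)
img[20:30, 20:30] = 100.0
mask = adaptive_threshold(img)
```

A real pipeline would follow this with contour extraction and square-filtering heuristics, as described above.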

I recently decided to briefly revisit this problem and see if any additional work had been done, and I thought a quick survey of what I turned up might be generally useful:
•Jackowski, Marcel, et al. Correcting the geometry and color of digital images. Pattern Analysis and Machine Intelligence, IEEE Transactions on 19.10 (1997): 1152-1158. Requires manual selection of patch corners, which are then refined with template matching.
•Tajbakhsh, Touraj, and Rolf-Rainer Grigat. Semiautomatic color checker detection in distorted images. Proceedings of the Fifth IASTED International Conference on Signal Processing, Pattern Recognition and Applications. ACTA Press, 2008. Unfortunately I cannot find any online full-text of this article, and my library doesn’t have the volume. Based on the description in Ernst 2013, the algorithm proceeds as follows: “The user initially selects the four chart corners in the image and the system estimates the position of all color regions using projective geometry. They transform the image with a Sobel kernel, a morphological operator and thresholding into a binary image and find connected regions.”
•Kapusi, Daniel, et al. Simultaneous geometric and colorimetric camera calibration. 2010. This method requires color reference circles placed in the middle of black and white chessboard squares, which they then locate using OpenCV’s chessboard detection.
•Bianco, Simone, and Claudio Cusano. Color target localization under varying illumination conditions. Computational Color Imaging. Springer Berlin Heidelberg, 2011. 245-255. Uses SIFT feature matching, and then clusters matched features to be fed into a pose selection and appearance validation algorithm.
•Brunner, Ralph T., and David Hayward. Automatic detection of calibration charts in images. Apple Inc., assignee. Patent US8073248. 6 Dec. 2011. Uses a scan-line based method to try to fit a known NxM reference chart.
•Minagawa, Akihiro, et al. A color chart detection method for automatic color correction. 21st International Conference on Pattern Recognition (ICPR). IEEE, 2012. Uses pyramidization to feed a pixel-spotting algorithm which is then used for patch extraction.
•K. Hirakawa, “ColorChecker Finder,” accessed from http://campus.udayton.edu/~ISSL/software. AKA CCFind.m. The earliest Internet Archive Wayback Machine snapshot for this page is in August 2013, however I also found this s-colorlab mailing list announcement from May 2012. Unfortunately this code is under a restrictive license: “This code is copyrighted by PI Keigo Hirakawa. The softwares are for research use only. Use of software for commercial purposes without a prior agreement with the authors is strictly prohibited.” According to the webpage, “CCFind.m does not detect squares explicitly. Instead, it learns the recurring shapes inside an image.”
•Liu, Mohan, et al. A new quality assessment and improvement system for print media. EURASIP Journal on Advances in Signal Processing 2012.1 (2012): 1-17. An automatic ColorChecker detection is described as part of a comprehensive system for automatic color correction. The algorithm first quantizes all colors to those in the color chart, then performs connected component analysis with heuristics to locate patch candidates, which are then fed to a Delaunay triangulation which is pruned to find the final candidate patches, which is then checked for the correct color orientation. This is the same system described in: Konya, Iuliu Vasile, and Baia Mare. Adaptive Methods for Robust Document Image Understanding. Diss. Universitäts-und Landesbibliothek Bonn, 2013.
•Devic, Goran, and Shalini Gupta. Robust Automatic Determination and Location of Macbeth Color Checker Charts. Nvidia Corporation, assignee. Patent US20140286569. 25 Sept. 2014. Uses edge-detection, followed by a flood-fill, with heuristics to try to detect the remaining areas as ColorChecker(s).
•Ernst, Andreas, et al. Check my chart: A robust color chart tracker for colorimetric camera calibration. Proceedings of the 6th International Conference on Computer Vision/Computer Graphics Collaboration Techniques and Applications. ACM, 2013. Extracts polygonal image regions and applies a cost function to check adaptation to a color chart.
•Kordecki, Andrzej, and Henryk Palus. Automatic detection of color charts in images. Przegląd Elektrotechniczny 90.9 (2014): 197-202. Uses image binarization and patch grouping to construct bounding parallelograms, then applies heuristics to try to determine the types of color charts.
•Wang, Song, et al. A Fast and Robust Multi-color Object Detection Method with Application to Color Chart Detection. PRICAI 2014: Trends in Artificial Intelligence. Springer International Publishing, 2014. 345-356. Uses per-channel feature extraction with a sliding rectangle, fed into a rough detection step with predefined 2x2 color patch templates, followed by precise detection.
•García Capel, Luis E., and Jon Y. Hardeberg. Automatic Color Reference Target Detection (direct PDF link). Color and Imaging Conference. Society for Imaging Science and Technology, 2014. Implements a preprocessing step for finding an approximate ROI for the ColorChecker, and examines the effect of this for both CCFind and a template matching approach (inspired by a project report which I cannot locate online). They also make their software available for download at http://www.ansatt.hig.no/rajus/colorlab/CCDetection.zip.

Data Sets
•Colourlab Image Database: Imai’s ColorCheckers (CID:ICC) (246MB). Used by García Capel 2014. 43 JPEG images.
•Gehler’s Dataset (approx. 8GB, 592MB downsampled)
 ◦Shi’s Re-processing of Gehler’s Raw Dataset (4.2GB total). Used by Hirakawa. 568 PNG images.
 ◦Reprocessed Gehler: “We noticed that the renderings provided by Shi and Funt made the colours look washed out. There also seemed to be a strong Cyan tint to all of the images. Therefore, we processed the RAW files ourselves using DCRAW. We followed the same methodology as Shi and Funt. The only difference is we did allow DCRAW to apply a D65 Colour Correction matrix to all of the images. This evens out the sensor responses.”


Originally published on 2015-07-08 by Ryan Baumann.

Suggested citation:
Baumann, Ryan. “Automatic ColorChecker Detection, a Survey.” Ryan Baumann - /etc (blog), 08 Jul 2015, https://ryanfb.github.io/etc/2015/07/08/automatic_colorchecker_detection.html (accessed 15 Jul 2016).

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

http://www.mathworks.com/matlabcentral/answers/105120-normalize-colors-under-different-lighting-conditions

 

I have done an outdoor experiment with a camera capturing an image of a color checker chart every hour. I am trying to normalize each color so that it looks the same throughout the day, since color constancy should be intensity-independent.

I have tried several methods but the results were disappointing. Any ideas?

Thanks

---------------------------------------------------------------------------------------------

What methods did you try? Did you try cross channel cubic linear regression?

---------------------------------------------------------------------------------------------

I tried normalizing the RGB channels using comprehensive color normalization.

No, I haven't tried cross channel cubic linear regression.

---------------------------------------------------------------------------------------------

Try an equation like

newR = a0 + a1*R + a2*G + a3*B + a4*R*G + a5*R*B + a6*G*B + a7*R^2 + a8*G^2 + a9*B^2 + .....

See my attached seminar/tutorial on RGB-to-RGB color correction and RGB-to-LAB color calibration.
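The polynomial equation above amounts to a least-squares fit over a cross-channel basis. A hedged Python/NumPy sketch of the idea (not the attached seminar material; function names here are my own):

```python
import numpy as np

def poly_basis(rgb):
    """Quadratic cross-channel basis: 1, R, G, B, RG, RB, GB, R^2, G^2, B^2."""
    R, G, B = rgb.T
    return np.column_stack([np.ones_like(R), R, G, B,
                            R * G, R * B, G * B, R**2, G**2, B**2])

def fit_color_transform(measured, reference):
    """Least-squares fit of reference chip colors as a polynomial in the
    measured chip colors. measured, reference: (N, 3) arrays of RGB rows."""
    coeffs, *_ = np.linalg.lstsq(poly_basis(measured), reference, rcond=None)
    return coeffs  # shape (10, 3): one column per output channel

def apply_color_transform(rgb, coeffs):
    return poly_basis(rgb.reshape(-1, 3)) @ coeffs

# Demo: an affine color cast lies inside the basis, so 24 chart patches
# are enough to recover it essentially exactly.
rng = np.random.default_rng(0)
measured = rng.random((24, 3))
reference = measured @ rng.random((3, 3)) + rng.random(3)
pred = apply_color_transform(measured, fit_color_transform(measured, reference))
```

Fitting against the 24 known chart values gives one coefficient column per output channel, which can then be applied to every pixel of the image.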


---------------------------------------------------------------------------------------------

Thank you, this method looks complicated whereas my application is simple. The only thing I want to do is to normalize a color throughout the day. For example, red through the day is shown as [black - brown - red - pink - white - pink - red - brown - black]. I want to stabilize this color to be red during the day. The data was taken from a CMOS imaging sensor.

---------------------------------------------------------------------------------------------

Attached is a super simple (dumb) way of doing it. Not too sophisticated and won't be as good in all situations but it might work for you. I would never use it though because it's not as accurate as we need for industrial use. It's more just for students to learn from.

  • crude_white_balancing.m
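The idea behind such a crude method can be re-sketched in a few lines of Python (my own hypothetical restatement, not the attached MATLAB file): pick a region that should be neutral, then linearly scale each channel so that region's channel means become equal.

```python
import numpy as np

def crude_white_balance(img, roi):
    """Linearly scale each color channel so that a user-chosen neutral ROI
    ends up with equal channel means. roi = (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    patch = img[y0:y1, x0:x1].reshape(-1, 3).astype(np.float64)
    means = patch.mean(axis=0)            # per-channel means inside the ROI
    gains = means.mean() / means          # one multiplicative gain per channel
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# Demo: a flat image with a strong color cast becomes neutral gray.
img = np.zeros((10, 10, 3))
img[...] = [200.0, 100.0, 50.0]
balanced = crude_white_balance(img, (0, 10, 0, 10))
```

As the answer notes, this is far less accurate than a regression-based correction; it only compensates a global multiplicative cast.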

    ---------------------------------------------------------------------------------------------

    Thank you for this file, but this code seems to convert the image to grayscale after correction. I want the result image to be in color.

    ---------------------------------------------------------------------------------------------

    The result image is colored. Did you actually run it? With the onion image? And draw out a square over the yellowish onion? You'll see that the final corrected image is color. Try again. Post screenshots if you need to.

    ---------------------------------------------------------------------------------------------

    I don't know why you're processing each quadrilateral individually. Whatever happened to the image (lighting color shift, overall intensity shift, introduction of haze or whatever) most likely affected the whole image. I think if you processed each quadrilateral individually and then came up with 24 individual transforms, and then applied those individual transforms to the same area in the subject image, your subject image would look very choppy. So if your mid gray chip went from greenish in the morning to gray in the mid-day to bluish in the evening, those color shifts would apply to all chips.

    I've seen a lot of talks and posters at a lot of color conferences and talked to a lot of the world's experts in color, and I don't recall ever seeing anyone do what you want to do. There are situations in spectral estimation where you have a mixture of light (e.g. indoor fluorescent and outdoor daylight) both impinging on a scene, and they want to estimate the percentage of light hitting different parts of the scene and the resultant spectrum so that you can get accurate color correction across the different illumination regions, but you can't do that on something as small as a single X-rite Color Checker Chart.

    Anyway, even if you did want to chop your scene up into 24 parts and have 24 transforms to fix up each of the 24 regions independently, you'd still have to do one of the methods I showed - either the more accurate regression, or the less accurate linear scaling - or something basically the same concept. You need a transform that takes the R, G, and B and gives you a "fixed up" red. And another transform to fix green, and another transform to fix the blue.

    ---------------------------------------------------------------------------------------------

    Thank you very much.

    ---------------------------------------------------------------------------------------------

     

     

    Posted by uniqueone
    ,
    http://kr.mathworks.com/matlabcentral/answers/79147-how-to-color-correct-an-image-from-with-a-color-checker


     

    We are developing an open source image analysis pipeline ( http://bit.ly/VyRFEr) for processing timelapse images of plants growing. Our lighting conditions vary dynamically throughout the day ( http://youtu.be/wMt5xtp9sH8), but we want to be able to automate removal of the background and then count things like green pixels between images of the same plant throughout the day despite the changing lighting. All the images have x-rite (equivalent) color checkers in them. I've looked through a lot of posts, but I'm still a little unclear on how we go about doing color (and brightness) correction to normalize the images so they are comparable. Am I wrong in assuming this is a relatively simple undertaking?

    Anyone have any working code, code samples or suggested reading to help me out?

    Thanks!

    Tim

    Sample images: Morning: http://phenocam.anu.edu.au/data/timestreams/Borevitz/_misc/sampleimages/morning.JPG

    Noon: http://phenocam.anu.edu.au/data/timestreams/Borevitz/_misc/sampleimages/noon.JPG

    -------------------------------------------------------------------------------------------

    Tim: I do this all the time, both in RGB color space, when we need color correction to a standard RGB image, and in XYZ color space, when we want calibrated color measurements. In theory it's simple, but the code and formulas are way too lengthy to share here.

    Basically, for RGB-to-RGB correction, you make a model of your transform, say linear, quadratic, or cubic, with or without cross terms (RG, RB, R*B^2, etc.). Then you do a least squares fit to get a model for R_estimated, G_estimated, and B_estimated. Let's look at just the red. You plug in the standard red values for your 24 chips (that's the "y"), and the values of R, G, B, RG, RB, GB, R^2G, etc. into the "tall" 24 by N matrix, and you do least squares to get the coefficients, alpha. Then repeat to get sets of coefficients beta and gamma for the estimated green and blue. Now, for any arbitrary RGB, you plug it into the three equations to get the estimated RGB as if that color was snapped at the same time and color temperature as your standard.

    If all you have are changes in intensity you probably don't need any cross terms, but if you have changes in the color of the illumination, then including cross terms will correct for that, though sometimes people do white balancing as a separate step before color correction. Here is some code I did to do really crude white balancing (actually too crude and simple for me to ever actually use, but simple enough that people can understand it).

    I don't have any demo code to share with you - it's all too intricately wired into my projects. Someone on the imaging team at the Mathworks (I think it was Grant if I remember correctly) has a demo to do this. I think it was for the Computer Vision System Toolbox, but might have been for the Image Processing Toolbox. Call them and try to track it down. In the mean time try this: http://www.mathworks.com/matlabcentral/answers/?search_submit=answers&query=color+checker&term=color+checker

    --------------------------------------------------------------------------------------------

    Thanks, this is very helpful. Is there a quick and easy way to adjust white balance at least? Likewise for the color correction... if I don't need super good color correction but just want to clean up the lighting a bit without doing any high end color corrections is there a simple way to do this or do am I stuck figuring out how to do the full color correction or nothing?

    Thanks again.

    Tim

    ---------------------------------------------------------------------------------------------

    Did you ever get the demo from the Mathworks? If so, and they have it on their website, please post the URL.

    Here's a crude white balancing demo:

    % Does a crude white balancing by linearly scaling each color channel.
    clc;    % Clear the command window.
    close all;  % Close all figures (except those of imtool.)
    clear;  % Erase all existing variables.
    workspace;  % Make sure the workspace panel is showing.
    format longg;
    format compact;
    fontSize = 15;
    
    % Ask the user which standard MATLAB demo image to use.
    button = menu('Use which demo image?', 'onion', 'Kids');
    % Assign the proper filename.
    if button == 1
    	baseFileName = 'onion.png';
    elseif button == 2
    	baseFileName = 'kids.tif';
    end
    % Read in a standard MATLAB color demo image.
    folder = fullfile(matlabroot, '\toolbox\images\imdemos');
    % Get the full filename, with path prepended.
    fullFileName = fullfile(folder, baseFileName);
    if ~exist(fullFileName, 'file')
    	% Didn't find it there.  Check the search path for it.
    	fullFileName = baseFileName; % No path this time.
    	if ~exist(fullFileName, 'file')
    		% Still didn't find it.  Alert user.
    		errorMessage = sprintf('Error: %s does not exist.', fullFileName);
    		uiwait(warndlg(errorMessage));
    		return;
    	end
    end
    [rgbImage colorMap] = imread(fullFileName);
    % Get the dimensions of the image.  numberOfColorBands should be = 3.
    [rows columns numberOfColorBands] = size(rgbImage);
    % If it's an indexed image (such as Kids),  turn it into an rgbImage;
    if numberOfColorBands == 1
    	rgbImage = ind2rgb(rgbImage, colorMap); % Will be in the 0-1 range.
    	rgbImage = uint8(255*rgbImage); % Convert to the 0-255 range.
    end
    % Display the original color image full screen
    imshow(rgbImage);
    title('Double-click inside box to finish box', 'FontSize', fontSize);
    % Enlarge figure to full screen.
    set(gcf, 'units','normalized','outerposition', [0 0 1 1]);
    
    % Have user specify the area they want to define as neutral colored (white  or gray).
    promptMessage = sprintf('Drag out a box over the ROI you want to be neutral colored.\nDouble-click inside of it to finish it.');
    titleBarCaption = 'Continue?';
    button = questdlg(promptMessage, titleBarCaption, 'Draw', 'Cancel', 'Draw');
    if strcmpi(button, 'Cancel')
    	return;
    end
    hBox = imrect;
    roiPosition = wait(hBox);	% Wait for user to double-click
    roiPosition % Display in command window.
    % Get box coordinates so we can crop a portion out of the full sized image.
    xCoords = [roiPosition(1), roiPosition(1)+roiPosition(3), roiPosition(1)+roiPosition(3), roiPosition(1), roiPosition(1)];
    yCoords = [roiPosition(2), roiPosition(2), roiPosition(2)+roiPosition(4), roiPosition(2)+roiPosition(4), roiPosition(2)];
    croppingRectangle = roiPosition;
    
    % Display (shrink) the original color image in the upper left.
    subplot(2, 4, 1);
    imshow(rgbImage);
    title('Original Color Image', 'FontSize', fontSize);
    
    % Crop out the ROI.
    whitePortion = imcrop(rgbImage, croppingRectangle);
    subplot(2, 4, 5);
    imshow(whitePortion);
    caption = sprintf('ROI.\nWe will Define this to be "White"');
    title(caption, 'FontSize', fontSize);
    
    % Extract the individual red, green, and blue color channels.
    redChannel = whitePortion(:, :, 1);
    greenChannel = whitePortion(:, :, 2);
    blueChannel = whitePortion(:, :, 3);
    % Display the color channels.
    subplot(2, 4, 2);
    imshow(redChannel);
    title('Red Channel ROI', 'FontSize', fontSize);
    subplot(2, 4, 3);
    imshow(greenChannel);
    title('Green Channel ROI', 'FontSize', fontSize);
    subplot(2, 4, 4);
    imshow(blueChannel);
    title('Blue Channel ROI', 'FontSize', fontSize);
    
    % Get the means of each color channel
    meanR = mean2(redChannel);
    meanG = mean2(greenChannel);
    meanB = mean2(blueChannel);
    
    % Let's compute and display the histograms.
    [pixelCount grayLevels] = imhist(redChannel);
    subplot(2, 4, 6); 
    bar(pixelCount);
    grid on;
    caption = sprintf('Histogram of original Red ROI.\nMean Red = %.1f', meanR);
    title(caption, 'FontSize', fontSize);
    xlim([0 grayLevels(end)]); % Scale x axis manually.
    % Let's compute and display the histograms.
    [pixelCount grayLevels] = imhist(greenChannel);
    subplot(2, 4, 7); 
    bar(pixelCount);
    grid on;
    caption = sprintf('Histogram of original Green ROI.\nMean Green = %.1f', meanG);
    title(caption, 'FontSize', fontSize);
    xlim([0 grayLevels(end)]); % Scale x axis manually.
    % Let's compute and display the histograms.
    [pixelCount grayLevels] = imhist(blueChannel);
    subplot(2, 4, 8); 
    bar(pixelCount);
    grid on;
    caption = sprintf('Histogram of original Blue ROI.\nMean Blue = %.1f', meanB);
    title(caption, 'FontSize', fontSize);
    xlim([0 grayLevels(end)]); % Scale x axis manually.
    
    % specify the desired mean.
    desiredMean = mean([meanR, meanG, meanB])
    message = sprintf('Red mean = %.1f\nGreen mean = %.1f\nBlue mean = %.1f\nWe will make all of these means %.1f',...
    	meanR, meanG, meanB, desiredMean);
    uiwait(helpdlg(message));
    
    % Linearly scale the image in the cropped ROI.
    correctionFactorR = desiredMean / meanR;
    correctionFactorG = desiredMean / meanG;
    correctionFactorB = desiredMean / meanB;
    redChannel = uint8(single(redChannel) * correctionFactorR);
    greenChannel = uint8(single(greenChannel) * correctionFactorG);
    blueChannel = uint8(single(blueChannel) * correctionFactorB);
    % Recombine into an RGB image
    % Recombine separate color channels into a single, true color RGB image.
    correctedRgbImage = cat(3, redChannel, greenChannel, blueChannel);
    figure;
    % Display the original color image.
    subplot(2, 4, 5);
    imshow(correctedRgbImage);
    title('Color-Corrected ROI', 'FontSize', fontSize);
    % Enlarge figure to full screen.
    set(gcf, 'units','normalized','outerposition',[0 0 1 1]);
    
    % Display the color channels.
    subplot(2, 4, 2);
    imshow(redChannel);
    title('Corrected Red Channel ROI', 'FontSize', fontSize);
    subplot(2, 4, 3);
    imshow(greenChannel);
    title('Corrected Green Channel ROI', 'FontSize', fontSize);
    subplot(2, 4, 4);
    imshow(blueChannel);
    title('Corrected Blue Channel ROI', 'FontSize', fontSize);
    
    % Let's compute and display the histograms of the corrected image.
    [pixelCount grayLevels] = imhist(redChannel);
    subplot(2, 4, 6); 
    bar(pixelCount);
    grid on;
    caption = sprintf('Histogram of Corrected Red ROI.\nMean Red = %.1f', mean2(redChannel));
    title(caption, 'FontSize', fontSize);
    xlim([0 grayLevels(end)]); % Scale x axis manually.
    % Let's compute and display the histograms.
    [pixelCount grayLevels] = imhist(greenChannel);
    subplot(2, 4, 7); 
    bar(pixelCount);
    grid on;
    caption = sprintf('Histogram of Corrected Green ROI.\nMean Green = %.1f', mean2(greenChannel));
    title(caption, 'FontSize', fontSize);
    xlim([0 grayLevels(end)]); % Scale x axis manually.
    % Let's compute and display the histograms.
    [pixelCount grayLevels] = imhist(blueChannel);
    subplot(2, 4, 8); 
    bar(pixelCount);
    grid on;
    caption = sprintf('Histogram of Corrected Blue ROI.\nMean Blue = %.1f', mean2(blueChannel));
    title(caption, 'FontSize', fontSize);
    xlim([0 grayLevels(end)]); % Scale x axis manually.
    
    % Get the means of the corrected ROI for each color channel.
    meanR = mean2(redChannel);
    meanG = mean2(greenChannel);
    meanB = mean2(blueChannel);
    correctedMean = mean([meanR, meanG, meanB])
    message = sprintf('Now, the\nCorrected Red mean = %.1f\nCorrected Green mean = %.1f\nCorrected Blue mean = %.1f\n(Differences are due to clipping.)\nWe now apply it to the whole image',...
    	meanR, meanG, meanB);
    uiwait(helpdlg(message));
    
    % Now correct the original image.
    % Extract the individual red, green, and blue color channels.
    redChannel = rgbImage(:, :, 1);
    greenChannel = rgbImage(:, :, 2);
    blueChannel = rgbImage(:, :, 3);
    % Linearly scale the full-sized color channel images
    redChannelC = uint8(single(redChannel) * correctionFactorR);
    greenChannelC = uint8(single(greenChannel) * correctionFactorG);
    blueChannelC = uint8(single(blueChannel) * correctionFactorB);
    
    % Recombine separate color channels into a single, true color RGB image.
    correctedRGBImage = cat(3, redChannelC, greenChannelC, blueChannelC);
    subplot(2, 4, 1);
    imshow(correctedRGBImage);
    title('Corrected Full-size Image', 'FontSize', fontSize);
    
    message = sprintf('Done with the demo.\nPlease flicker between the two figures');
    uiwait(helpdlg(message));
    http://www.mathworks.com/matlabcentral/fileexchange/42548-consistent-imaging-with-consumer-cameras

    Consistent imaging with consumer cameras

    This set of scripts accompanies the paper:

    Use of commercial off-the-shelf (COTS) digital cameras for scientific data acquisition and scene-specific color calibration

    by Akkaynak et al.

    The paper is currently in submission and the toolbox has been made available for testing in advance.

    code.zip

     
