  • PhaseUnwrap.pde
  • PhaseWrap.pde
  • ThreePhase.pde
  • /*
      Use the wrapped phase information,  and propagate it across the boundaries.
      This implementation uses a flood-fill propagation algorithm.
      Because the algorithm starts in the center and propagates outwards,
      so if you have noise (e.g.: a black background, a shadow) in
      the center, then it may not reconstruct your image.
    LinkedList toProcess;
    void phaseUnwrap() {
      int startX = inputWidth / 2;
      int startY = inputHeight / 2;
      toProcess = new LinkedList();
      toProcess.add(new int[]{startX, startY});
      process[startX][startY] = false;
      while (!toProcess.isEmpty()) {
        int[] xy = (int[]) toProcess.remove();
        int x = xy[0];
        int y = xy[1];
        float r = phase[y][x];
        if (y > 0)
          phaseUnwrap(r, x, y-1);
        if (y < inputHeight-1)
          phaseUnwrap(r, x, y+1);
        if (x > 0)
          phaseUnwrap(r, x-1, y);
        if (x < inputWidth-1)
          phaseUnwrap(r, x+1, y);
      }
    }
    void phaseUnwrap(float basePhase, int x, int y) {
      if (process[y][x]) {
        float diff = phase[y][x] - (basePhase - (int) basePhase);
        // differences of more than half a period mean the phase wrapped;
        // shift by a whole period to restore continuity
        if (diff > .5)
          diff--;
        if (diff < -.5)
          diff++;
        phase[y][x] = basePhase + diff;
        process[y][x] = false;
        toProcess.add(new int[]{x, y});
      }
    }
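The wrap correction is easier to see in one dimension. Here is a minimal standalone Java sketch of the same idea (the class and method names are illustrative, not part of the sketch): each sample keeps only its fractional phase, and continuity is restored by propagating from the previous, already-unwrapped neighbour with the same half-period correction used above.

```java
// 1D analogue of the flood-fill unwrap: propagate left to right,
// correcting any jump larger than half a period.
public class UnwrapDemo {
    static double[] unwrap1D(double[] wrapped) {
        double[] out = wrapped.clone();
        for (int i = 1; i < out.length; i++) {
            double base = out[i - 1];
            // compare against the fractional part of the neighbour's phase,
            // mirroring basePhase - (int) basePhase in the sketch
            double diff = wrapped[i] - (base - (int) base);
            if (diff > 0.5) diff--;
            if (diff < -0.5) diff++;
            out[i] = base + diff;
        }
        return out;
    }

    public static void main(String[] args) {
        // The true phase ramps smoothly past 1.0; the wrapped values jump back near 0.
        double[] wrapped = {0.7, 0.9, 0.1, 0.3}; // true phase: 0.7, 0.9, 1.1, 1.3
        double[] unwrapped = unwrap1D(wrapped);
        System.out.println(java.util.Arrays.toString(unwrapped));
        // ≈ [0.7, 0.9, 1.1, 1.3] up to floating-point rounding
    }
}
```

The 2D version above generalizes this by pushing all four neighbours onto a queue instead of only walking rightwards.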
    /*
      Assumes you're using grayscale images.
      Go through all the pixels in the out-of-phase images,
      and determine their angle (theta). Throw out noisy pixels.
    */
    void phaseWrap() {
      PImage phase1Image = loadImage("phase1.jpg");
      PImage phase2Image = loadImage("phase2.jpg");
      PImage phase3Image = loadImage("phase3.jpg");
      // make the pixels[] arrays of the loaded images readable
      phase1Image.loadPixels();
      phase2Image.loadPixels();
      phase3Image.loadPixels();
      float sqrt3 = sqrt(3);
      for (int y = 0; y < inputHeight; y++) {
        for (int x = 0; x < inputWidth; x++) {     
          int i = x + y * inputWidth;  
          float phase1 = (phase1Image.pixels[i] & 255) / 255.;
          float phase2 = (phase2Image.pixels[i] & 255) / 255.;
          float phase3 = (phase3Image.pixels[i] & 255) / 255.;
          float phaseSum = phase1 + phase2 + phase3;
          float phaseRange = max(phase1, phase2, phase3) - min(phase1, phase2, phase3);
          // avoid the noise floor
          float gamma = phaseRange / phaseSum;
          mask[y][x] = gamma < noiseTolerance;
          process[y][x] = !mask[y][x];
          // this equation can be found in Song Zhang's
          // "Recent progresses on real-time 3D shape measurement..."
          // and it is the "bottleneck" of the algorithm
          // it can be sped up with a LUT, which has the benefit
          // of allowing for simultaneous gamma correction.
          phase[y][x] = atan2(sqrt3 * (phase1 - phase3), 2 * phase2 - phase1 - phase3) / TWO_PI;
        }
      }
    }
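That atan2 expression is the heart of phaseWrap(). A plain-Java sanity check (hypothetical class name; ideal, noise-free intensities assumed) shows why it works: three samples of a sinusoid shifted by 120 degrees recover the original phase angle.

```java
// Wrapped-phase computation for a single pixel from three
// phase-shifted intensity samples.
public class WrapDemo {
    // Returns the wrapped phase as a fraction of one period, in (-0.5, 0.5].
    static double wrap(double i1, double i2, double i3) {
        double sqrt3 = Math.sqrt(3.0);
        return Math.atan2(sqrt3 * (i1 - i3), 2 * i2 - i1 - i3) / (2 * Math.PI);
    }

    public static void main(String[] args) {
        // Simulate a pixel lit by stripes shifted by -120°, 0°, +120°:
        // i_k = 0.5 + 0.5 * cos(theta + offset_k)
        double theta = 1.0; // true phase angle in radians
        double i1 = 0.5 + 0.5 * Math.cos(theta - 2 * Math.PI / 3);
        double i2 = 0.5 + 0.5 * Math.cos(theta);
        double i3 = 0.5 + 0.5 * Math.cos(theta + 2 * Math.PI / 3);
        double recovered = wrap(i1, i2, i3) * 2 * Math.PI;
        System.out.println(recovered); // ≈ 1.0, the original theta
    }
}
```

Expanding the cosines shows that sqrt3 * (i1 - i3) reduces to 3B·sin(theta) and 2*i2 - i1 - i3 to 3B·cos(theta), so the constant offset and amplitude cancel out of the atan2; this is why the method tolerates per-pixel brightness variation.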
    import peasy.*;
    /*
      These three variables are the main "settings".
      zscale corresponds to how much "depth" the image has,
      and zskew is how "skewed" the imaging plane is.
      These two variables depend on both the angle
      between the projector and camera, and the number of stripes.
      The sign of both is based on the direction of the stripes
      (whether they're moving up vs down),
      as well as the orientation of the camera and projector
      (which one is above the other).
      noiseTolerance can significantly change whether an image
      can be reconstructed or not. Start with it small, and work
      up until you start losing important parts of the image.
    */
    float zscale = 140;
    float zskew = 23;
    float noiseTolerance = 0.15;
    int inputWidth = 480;
    int inputHeight = 640;
    PeasyCam cam;
    float[][] phase = new float[inputHeight][inputWidth];
    boolean[][] mask = new boolean[inputHeight][inputWidth];
    boolean[][] process = new boolean[inputHeight][inputWidth];
    void setup() {
      size(inputWidth, inputHeight, P3D);
      cam = new PeasyCam(this, width);
      phaseWrap();
      phaseUnwrap();
    }
    void draw() {
      translate(-inputWidth / 2, -inputHeight / 2);
      int step = 2;
      for (int y = step; y < inputHeight; y += step) {
        float planephase = 0.5 - (y - (inputHeight / 2)) / zskew;
        for (int x = step; x < inputWidth; x += step) {
          if (!mask[y][x])
            point(x, y, (phase[y][x] - planephase) * zscale);
        }
      }
    }
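The point() call turns unwrapped phase into depth by subtracting a per-row reference plane and scaling. A plain-Java version of that arithmetic (illustrative class name; the zscale, zskew, and inputHeight values are this sketch's defaults):

```java
// Phase-to-depth conversion used in draw(): depth is the deviation
// from a tilted reference plane, scaled by zscale.
public class DepthDemo {
    static float depth(float phase, int y, int inputHeight, float zscale, float zskew) {
        // reference-plane phase for this row; zskew tilts the plane
        float planephase = 0.5f - (y - (inputHeight / 2)) / zskew;
        return (phase - planephase) * zscale;
    }

    public static void main(String[] args) {
        // At the vertical centre the reference phase is 0.5, so a pixel
        // with phase 0.5 sits at depth 0; higher phase pushes it forward.
        System.out.println(depth(0.5f, 320, 640, 140, 23)); // 0.0
        System.out.println(depth(0.6f, 320, 640, 140, 23)); // ≈ 14
    }
}
```

As the description below notes, this gives a visually plausible relief rather than metrically correct geometry; zscale and zskew are tuned by eye instead of being derived from a camera/projector calibration.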


    This sketch is running as Java applet, exported from Processing.



    Kyle McDonald

    Three Phase 3D Scanner


    Technique from Song Zhang, coded in C++ by Alex Evans, ported to Processing by Florian Jennet. I rewrote the code and got rid of things that were unnecessary or didn't work. The original had a little less noise. I extrapolated out three variables instead of trying to compute them: zskew, zscale, and noiseTolerance.

    Learn how to use this code to make your own 3D scans <a href="http://www.instructables.com/id/Structured-Light-3D-Scanning/">on Instructables</a>.

    Hey Kyle, amazing! Do you have those 4 phase images (png files) somewhere on the web? I would love to see them in order to understand how this works.
    Kyle McDonald
    12 May 2009
    Thanks Florian! Also, thanks for the port :)
    wav, thanks a lot for all the reference! This is amazing, and seems very cheap to build!
    I have been showing this to the friends at work, they are amazed!
    Thomas Telandro
    26 May 2009
    very nice !
    This method is also used for eye reconstruction (indeed any wavefront reconstruction ) !
    Another method is to use points instead of fringes:
    you can have a look at Shack-Hartmann http://en.wikipedia.org/wiki/Shack-Hartmann ^_^

    Thanks for sharing !
    Dear Kyle McDonald, I saw your implementation of the 3D scanner proposed by Song Zhang. Recently I implemented the projector image generation for a complete system, but I have some problems with the triangulation. Have you implemented the phase-to-height conversion? How could I use your code to implement the triangulation?

    Kyle McDonald
    2 Jul 2009
    Hi William, you might want to check out the link above http://code.google.com/p/structured-light/wiki/GunterWebersWork as there is another coder who has developed a "complete system" as well. There are some examples of triangulation in there. There are also a number of Processing libraries implementing different triangulation algorithms for pre-processing before exporting to other formats.
    jose casanova
    11 Aug 2009
    thank you
    Great work - beautiful code!
    Josue Page
    29 Jan 2011
    Abbey Carlstrom
    25 Feb 2011
    Hi Kyle,
    Thank you so much for the tutorial on instructables! How would I need to alter the code in order to make it work for video? (recorded, not live capture) Thanks again!
    31 Mar 2011
    Hi Kyle,

    Thank you for the tutorial, for studying i want to find the code in C++ by Alex Evans. Do you know where i can get it? Or do you have your own C++ code. Any help would be greatly appreciated!
    Kyle McDonald
    31 Mar 2011
    @abbey you would need to modify the code so it can load a video, and take consecutive frames from the video instead of from images in a directory.

    @divad: i've never seen the original c++ code, just florian's port. if you're looking for more code to study i recommend looking on my google code http://code.google.com/p/structured-light
    Artur Hadyniak
    6 Jul 2012
    Amazing work.

    Can you provide some more information about adjusting zscale and zskew? Maybe a hint how to implement a nice calibration algorithm?

    I am now trying to implement that scanner during my internship.

    Kyle McDonald
    6 Jul 2012
    artur, the best place to look for more info is the instructable linked above. unfortunately, going through the process of getting a "good looking" zskew and zscale will not give you a "correct" 3d model, just something that is visually similar to the original object. if you want a "real" 3d model i suggest using a tool like reconstructme or 123dcatch.
    Diogo Nogueira
    1 Mar 2013
    Hi Kyle,

    Thank you for the tutorial.
    I am conducting an experiment that uses the same principle, but with circular patterns. Do you know any code that is adapted to this case or where can I find more information about it?

    best regards
    Kyle McDonald
    1 Mar 2013
    hey diogo, i don't know any circular pattern technique -- send me an email (address is at the top of my website) with a sketch or picture of the pattern and i can send you some more info or ideas.
    Naren Sathiya
    23 Jul 2014
    Hi Kyle,

    Thank you for the code! Learned a lot going through it.

    If I were to use black and white pics, how would the code change exactly? I'm presuming color[y][x] will be calibrated differently?
