
algorithm - Transformation between two sets of points

I have an object, let's say, on a model image. I want to compute the transformation (displacement, scale, rotation) between the object on the model image and the object on the target image. I assume the objects can be treated as 2D, so only 2D transformations need to be computed.

First I want to do it in a manually assisted way: the user selects a base point on the model image and then the corresponding target point on the target image. The number of points is defined by the user (but no fewer than some minimum, 2-3 points). When the points give conflicting information, the transformations should be averaged, and from that spread, for example, a matching quality measure can be computed.

So the question is really about computing the transformation between two sets of points, but since I want to do it on images I've added the image processing tag.

References and advice with some pieces of code or pseudocode are especially welcome.

With two points it's a very easy problem: only the rotation, scale and displacement of the line between them need to be taken. But how do I do it with more points, averaging the result and computing some quality factors?

My current solution is:

#include <cassert>
#include <numeric>
#include <vector>

void transformFnc(const std::vector<PointF>& basePoints, const std::vector<PointF>& targetPoints,
                  PointF& offset, double& rotation, double& scale)
{
    assert(basePoints.size() == targetPoints.size());
    int pointsNumber = basePoints.size();

    // Build every pairwise line within each point set; corresponding lines
    // are then compared to estimate scale and rotation.
    std::vector<Line> basePointsLines;
    std::vector<Line> targetPointsLines;
    for(int i = 0; i < pointsNumber; i++)
    {
        for(int j = i + 1; j < pointsNumber; j++)
        {
            basePointsLines.push_back(Line(basePoints[i], basePoints[j]));
            targetPointsLines.push_back(Line(targetPoints[i], targetPoints[j]));
        }
    }

    std::vector<double> scalesVector;
    std::vector<double> rotationsVector;
    double baseCenterX = 0, baseCenterY = 0, targetCenterX = 0, targetCenterY = 0;
    for(std::vector<Line>::iterator it = basePointsLines.begin(), i = targetPointsLines.begin();
        it != basePointsLines.end() && i != targetPointsLines.end(); ++it, ++i)
    {
        scalesVector.push_back(i->length() / it->length());
        baseCenterX += it->pointAt(0.5).x();
        baseCenterY += it->pointAt(0.5).y();
        targetCenterX += i->pointAt(0.5).x();
        targetCenterY += i->pointAt(0.5).y();
        rotationsVector.push_back(i->angleTo(*it));
    }

    // Average the line midpoints to obtain each set's centroid; the divisor is
    // the number of lines, not the number of points.
    int linesNumber = basePointsLines.size();
    baseCenterX /= linesNumber;
    baseCenterY /= linesNumber;
    targetCenterX /= linesNumber;
    targetCenterY /= linesNumber;

    offset = PointF(targetCenterX - baseCenterX, targetCenterY - baseCenterY);
    scale = std::accumulate(scalesVector.begin(), scalesVector.end(), 0.0) / scalesVector.size();
    rotation = std::accumulate(rotationsVector.begin(), rotationsVector.end(), 0.0) / rotationsVector.size();
}

The only improvement I can see for this code is to discard from the scales and rotations those values which differ too much from the rest (i.e., reject outliers before averaging).
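Something like this sketch could do that filtering before averaging, assuming a median-absolute-deviation criterion (the helper name and the threshold factor are arbitrary choices):

#include <algorithm>
#include <cmath>
#include <vector>

// Discards samples whose deviation from the median is much larger than the
// typical deviation (median absolute deviation, MAD).
std::vector<double> rejectOutliers(const std::vector<double>& values, double maxFactor = 3.0)
{
    if (values.size() < 3)
        return values;

    std::vector<double> sorted = values;
    std::sort(sorted.begin(), sorted.end());
    const double median = sorted[sorted.size() / 2];

    std::vector<double> deviations;
    for (double v : sorted)
        deviations.push_back(std::fabs(v - median));
    std::sort(deviations.begin(), deviations.end());
    const double mad = deviations[deviations.size() / 2];

    std::vector<double> kept;
    for (double v : values)
        if (mad == 0.0 || std::fabs(v - median) <= maxFactor * mad)
            kept.push_back(v);
    return kept;
}

The filtered scalesVector and rotationsVector would then be averaged as before.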

I'm looking for code or pseudocode of proposed solutions, or references to existing code.

So far, from the answers, I know that:

  • RANSAC algorithm can be used
  • I need to look for an algorithm that computes an affine transform in the least-squares sense

1 Reply


First, generalize the problem to a simple affine transformation with a 3x3 affine transformation matrix, i.e.

[M11 M12 M13]
[M21 M22 M23]
[M31 M32 M33]

Since we already know that the third row will always be [0 0 1], we can simply disregard it.

Now we can describe the problem as the following matrix equation

[xp0]     [x0 y0 1  0  0  0 ]
[yp0]     [0  0  0  x0 y0 1 ]     [M11]
[xp1]     [x1 y1 1  0  0  0 ]     [M12]
[yp1]  =  [0  0  0  x1 y1 1 ]  *  [M13]
[xp2]     [x2 y2 1  0  0  0 ]     [M21]
[yp2]     [0  0  0  x2 y2 1 ]     [M22]
[xp3]     [x3 y3 1  0  0  0 ]     [M23]
[yp3]     [0  0  0  x3 y3 1 ]

where xp and yp are the projected coordinates and x and y are the original coordinates.

Let's call this

proj = M * trans

We can then calculate a least squares fit for the transformation by

trans = pinv(M) * proj

where pinv is the pseudo-inverse.

This gives us an affine transformation that best fits the points given in the least squares sense.
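For illustration, one way to set up and solve this system in C++ is with the Eigen library (the library choice and the function name here are just one possible sketch):

#include <Eigen/Dense>
#include <utility>
#include <vector>

// Fits the six parameters [M11 M12 M13 M21 M22 M23] mapping base points (x, y)
// to target points (xp, yp) in the least-squares sense.
Eigen::VectorXd fitAffineLeastSquares(const std::vector<std::pair<double, double>>& base,
                                      const std::vector<std::pair<double, double>>& target)
{
    const int n = static_cast<int>(base.size());
    Eigen::MatrixXd M(2 * n, 6);   // two rows per point, as in the equation above
    Eigen::VectorXd proj(2 * n);   // the projected (target) coordinates

    for (int i = 0; i < n; ++i)
    {
        const double x = base[i].first, y = base[i].second;
        M.row(2 * i)     << x, y, 1, 0, 0, 0;
        M.row(2 * i + 1) << 0, 0, 0, x, y, 1;
        proj(2 * i)     = target[i].first;
        proj(2 * i + 1) = target[i].second;
    }

    // trans = pinv(M) * proj; an SVD-based solve gives the same least-squares result.
    return M.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(proj);
}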

Now, obviously this will also allow shear, coordinate flips and non-uniform scaling, which you did not want, so we need to limit the affine transformation in some way to exclude those. This turns out to be quite easy: we can use a single vector to describe both the rotation (the direction of the vector) and the scaling (the magnitude of the vector); the other column vector is simply orthogonal to it. This reduces the degrees of freedom by two.

M21 = -M12
M22 = M11

So the system reduces to

[xp0]     [x0  y0 1 0]
[yp0]     [y0 -x0 0 1]
[xp1]     [x1  y1 1 0]     [M11]
[yp1]  =  [y1 -x1 0 1]  *  [M12]
[xp2]     [x2  y2 1 0]     [M13]
[yp2]     [y2 -x2 0 1]     [M23]
[xp3]     [x3  y3 1 0]
[yp3]     [y3 -x3 0 1]

and calculate M21 and M22 from M12 and M11 after we have solved the above matrix equation.
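A similar sketch for the reduced system, again assuming Eigen, which also recovers the scale, rotation and translation from the solution:

#include <Eigen/Dense>
#include <cmath>
#include <utility>
#include <vector>

struct SimilarityTransform
{
    double scale;
    double rotation;   // radians, counter-clockwise in a y-up frame
    double tx, ty;     // translation (M13, M23)
};

SimilarityTransform fitSimilarityLeastSquares(const std::vector<std::pair<double, double>>& base,
                                              const std::vector<std::pair<double, double>>& target)
{
    const int n = static_cast<int>(base.size());
    Eigen::MatrixXd M(2 * n, 4);   // unknowns: [M11 M12 M13 M23]
    Eigen::VectorXd proj(2 * n);

    for (int i = 0; i < n; ++i)
    {
        const double x = base[i].first, y = base[i].second;
        M.row(2 * i)     << x,  y, 1, 0;
        M.row(2 * i + 1) << y, -x, 0, 1;
        proj(2 * i)     = target[i].first;
        proj(2 * i + 1) = target[i].second;
    }

    Eigen::Vector4d t = M.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(proj);

    // M21 = -M12 and M22 = M11 are implied; the pair (M11, M12) encodes the
    // scale (its magnitude) and the rotation (its angle).
    SimilarityTransform result;
    result.scale    = std::sqrt(t(0) * t(0) + t(1) * t(1));
    result.rotation = std::atan2(-t(1), t(0));   // M11 = s*cos(a), M12 = -s*sin(a)
    result.tx = t(2);
    result.ty = t(3);
    return result;
}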

