[GAMES101] Homework 3: Pipeline and Shading


Assignment requirements

In this programming assignment, we further emulate the modern graphics pipeline. The framework adds an object loader (for loading 3D models), a vertex shader, and a fragment shader, and supports texture mapping.

In this experiment, the tasks you need to complete are:

  1. Modify the function rasterize_triangle(const Triangle& t) in rasterizer.cpp: implement an interpolation algorithm similar to the one in Homework 2, interpolating the normal vector, color, and texture color.
  2. Modify the function get_projection_matrix() in main.cpp: fill in the projection matrix you implemented in the previous assignment (a sketch is given after this list); you can then run ./Rasterizer output.png normal to check the normal-vector result.
  3. Modify the function phong_fragment_shader() in main.cpp: implement the Blinn-Phong model to compute the fragment color.
  4. Modify the function texture_fragment_shader() in main.cpp: on top of Blinn-Phong, treat the texture color as kd in the formula to implement the texture fragment shader.
  5. Modify the function bump_fragment_shader() in main.cpp: on top of Blinn-Phong, read the comments in this function carefully and implement bump mapping.
  6. Modify the function displacement_fragment_shader() in main.cpp: on top of bump mapping, implement displacement mapping.
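
For reference, here is a sketch of get_projection_matrix() carried over from Homework 1. It assumes the convention used later in this write-up (camera looking down $-z$, last row of the matrix $[0, 0, 1, 0]$) and that MY_PI is defined in main.cpp; depending on the sign convention of your own Homework 1 solution, some signs may need flipping.

// A sketch only, not the definitive solution. Here n = -zNear, f = -zFar;
// if the rendered image comes out flipped, negate the first two diagonal entries.
Eigen::Matrix4f get_projection_matrix(float eye_fov, float aspect_ratio, float zNear, float zFar)
{
    float n = -zNear, f = -zFar;
    float t = std::tan(eye_fov / 2.0f / 180.0f * MY_PI) * zNear; // half-height of the near plane
    float r = t * aspect_ratio;                                  // half-width of the near plane

    Eigen::Matrix4f projection;
    projection << n / r, 0, 0, 0,
                  0, n / t, 0, 0,
                  0, 0, (n + f) / (n - f), -2 * n * f / (n - f),
                  0, 0, 1, 0;
    return projection;
}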

rasterize_triangle(const Triangle& t)

Depth interpolation

First obtain $\alpha, \beta, \gamma$ via auto [alpha, beta, gamma] = computeBarycentric2D(i + 0.5, j + 0.5, t.v);. As mentioned before, barycentric coordinates may change under projection, so for attributes defined in three-dimensional space the barycentric coordinates should be computed from the three-dimensional coordinates.

Assume the depth values of the three vertices of the small triangle in two-dimensional (screen) space are $Z_1', Z_2', Z_3'$, and the barycentric coordinates of a point inside it are $(\alpha', \beta', \gamma')$. The depth values of the three vertices of the triangle in three-dimensional space are $Z_1, Z_2, Z_3$, with barycentric coordinates $(\alpha, \beta, \gamma)$. How do we go from $(\alpha', \beta', \gamma')$ to $(\alpha, \beta, \gamma)$, or obtain the true depth $Z$?

The true depth $Z$ can be obtained by the following derivation:

$$
Z = \alpha Z_1 + \beta Z_2 + \gamma Z_3, \qquad Z' = \alpha' Z_1' + \beta' Z_2' + \gamma' Z_3'
$$

Since $\alpha' + \beta' + \gamma' = 1$, multiplying both sides by $Z$ gives

$$
Z = Z\alpha' + Z\beta' + Z\gamma' = \frac{Z\alpha'}{Z_1}Z_1 + \frac{Z\beta'}{Z_2}Z_2 + \frac{Z\gamma'}{Z_3}Z_3
$$

Comparing with $Z = \alpha Z_1 + \beta Z_2 + \gamma Z_3$ (the barycentric representation is unique), we get

$$
\alpha = \frac{Z\alpha'}{Z_1}, \quad \beta = \frac{Z\beta'}{Z_2}, \quad \gamma = \frac{Z\gamma'}{Z_3}
$$

and since $\alpha + \beta + \gamma = 1$,

$$
Z = \frac{1}{\dfrac{\alpha'}{Z_1} + \dfrac{\beta'}{Z_2} + \dfrac{\gamma'}{Z_3}}
$$
Similarly, for any attribute $V$ we get

$$
\begin{aligned}
V &= \alpha V_1 + \beta V_2 + \gamma V_3 \\
&= \frac{Z\alpha'}{Z_1}V_1 + \frac{Z\beta'}{Z_2}V_2 + \frac{Z\gamma'}{Z_3}V_3 \\
&= \left(\frac{\alpha'}{Z_1}V_1 + \frac{\beta'}{Z_2}V_2 + \frac{\gamma'}{Z_3}V_3\right) \Big/ \left(\frac{\alpha'}{Z_1} + \frac{\beta'}{Z_2} + \frac{\gamma'}{Z_3}\right)
\end{aligned}
$$
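
In code, the corrected interpolation boils down to the following sketch (my own illustration with hypothetical names: z1..z3 are the view-space depths of the vertices, V1..V3 any per-vertex attribute, and alpha, beta, gamma the screen-space barycentric coordinates):

// Perspective-correct interpolation sketch (illustrative, not framework code)
float Z = 1.0f / (alpha / z1 + beta / z2 + gamma / z3);                        // true depth
Eigen::Vector3f V = Z * (alpha * V1 / z1 + beta * V2 / z2 + gamma * V3 / z3);  // corrected attribute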
Let's look at the code provided by the framework again:

float Z = 1.0 / (alpha / v[0].w() + beta / v[1].w() + gamma / v[2].w());
float zp = alpha * v[0].z() / v[0].w() + beta * v[1].z() / v[1].w() 
    + gamma * v[2].z() / v[2].w();
zp *= Z;

Z in the code is the true depth: since the last row of the perspective projection matrix is $[0, 0, 1, 0]$, the $w$ of a projected point equals the $z$ coordinate of the point in three-dimensional space. zp *= Z then seems to compute the perspective-correct depth, i.e. the attribute formula above with $V$ replaced by $Z$; but why use the projected $z$ coordinates as the attribute values? Moreover, according to the following code:

std::array<Vector4f, 3> Triangle::toVector4() const
{
    std::array<Vector4f, 3> res;
    std::transform(std::begin(v), std::end(v), res.begin(), [](auto& vec) { return Vector4f(vec.x(), vec.y(), vec.z(), 1.f); });
    return res;
}

the $w$ of every projected triangle vertex has been initialized to 1, so after the computation the depth reduces to $Z = \alpha' Z_1' + \beta' Z_2' + \gamma' Z_3'$, the same as the uncorrected calculation, which leaves me puzzled.
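
If I understand the intent correctly, the correction would only take effect if the vertex stage kept the view-space depth in $w$ instead of resetting it. A sketch of what that might look like (an assumption about the framework, not verified):

// Hypothetical tweak in rst::rasterizer::draw(): after the MVP transform,
// divide x, y, z by w but deliberately keep w as the view-space depth.
for (auto& vec : v) {
    vec.x() /= vec.w();
    vec.y() /= vec.w();
    vec.z() /= vec.w();
    // vec.w() intentionally left untouched
}
// Triangle::toVector4() would then also have to preserve w
// instead of re-initializing it to 1.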

The subsequent interpolated_color, interpolated_normal, interpolated_texcoords, and interpolated_shadingcoords are all computed directly by approximate interpolation with the barycentric coordinates of the two-dimensional plane.
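
The interpolate helper itself is presumably just a plain barycentric blend, roughly:

// Presumed shape of the framework's interpolate() helper (weight is 1 in the calls above)
static Eigen::Vector3f interpolate(float alpha, float beta, float gamma,
                                   const Eigen::Vector3f& vert1,
                                   const Eigen::Vector3f& vert2,
                                   const Eigen::Vector3f& vert3, float weight)
{
    return (alpha * vert1 + beta * vert2 + gamma * vert3) / weight;
}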

Reference 1, Reference 2

Rasterization

The rasterization procedure is basically the same as in Homework 2. I wanted to try MSAA for rasterization, but ran into some problems.

void rst::rasterizer::rasterize_triangle(const Triangle& t, const std::array<Eigen::Vector3f, 3>& view_pos) 
{
    // TODO: From your HW3, get the triangle rasterization code.
    auto v = t.toVector4();

    // TODO : Find out the bounding box of current triangle.
    // Get the bounding box, pay attention to rounding up and down
    int xmin = std::floor(std::min(v[0].x(), std::min(v[1].x(), v[2].x())));
    int xmax = std::ceil(std::max(v[0].x(), std::max(v[1].x(), v[2].x())));
    int ymin = std::floor(std::min(v[0].y(), std::min(v[1].y(), v[2].y())));
    int ymax = std::ceil(std::max(v[0].y(), std::max(v[1].y(), v[2].y())));

    // iterate through the pixel and find if the current pixel is inside the triangle
    
    for (int i = xmin; i <= xmax; ++i)
        for (int j = ymin; j <= ymax; ++j) {
                if (insideTriangle(i, j, t.v)) {
                    // TODO: Inside your rasterization loop:
                    //    * v[i].w() is the vertex view space depth value z.
                    //    * Z is interpolated view space depth for the current pixel
                    //    * zp is depth between zNear and zFar, used for z-buffer
                    auto [alpha, beta, gamma] = computeBarycentric2D(i + 0.5, j + 0.5, t.v);

                    float Z = 1.0 / (alpha / v[0].w() + beta / v[1].w() + gamma / v[2].w());
                    float zp = alpha * v[0].z() / v[0].w() + beta * v[1].z() / v[1].w() + gamma * v[2].z() / v[2].w();
                    zp *= Z;
                    if (zp < depth_buf[get_index(i, j)]) {
                        // TODO: Interpolate the attributes:
                        auto interpolated_color = interpolate(alpha, beta, gamma, t.color[0], t.color[1], t.color[2], 1);
                        auto interpolated_normal = interpolate(alpha, beta, gamma, t.normal[0], t.normal[1], t.normal[2], 1).normalized();
                        auto interpolated_texcoords = interpolate(alpha, beta, gamma, t.tex_coords[0], t.tex_coords[1], t.tex_coords[2], 1);
                        auto interpolated_shadingcoords = interpolate(alpha, beta, gamma, view_pos[0], view_pos[1], view_pos[2], 1);

                        fragment_shader_payload payload(interpolated_color, interpolated_normal.normalized(), interpolated_texcoords, texture ? &*texture : nullptr);
                        payload.view_pos = interpolated_shadingcoords;
                        auto pixel_color = fragment_shader(payload);

                        set_pixel(Vector2i(i, j), pixel_color);	
                        depth_buf[get_index(i, j)] = zp;
                    }
                }
            }
}

Result:

phong_fragment_shader()

Implementing the Blinn-Phong model to compute the fragment color only requires the vectors $l, n, h, v$ and the distance $r$.

$$
\begin{aligned}
L &= L_a + L_d + L_s \\
&= k_a I_a + k_d \left(I / r^2\right) \max(0, \mathbf{n} \cdot \mathbf{l}) + k_s \left(I / r^2\right) \max(0, \mathbf{n} \cdot \mathbf{h})^p
\end{aligned}
$$

Eigen::Vector3f phong_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    auto l1 = light{{20, 20, 20}, {500, 500, 500}};
    auto l2 = light{{-20, 20, 0}, {500, 500, 500}};

    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};

    float p = 150;

    Eigen::Vector3f color = payload.color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    Eigen::Vector3f result_color = {0, 0, 0};
    for (auto& light : lights)
    {
        // TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular* 
        // components are. Then, accumulate that result on the *result_color* object.
        Vector3f l = (light.position - point).normalized(),
            n = normal.normalized(),
            v = (eye_pos - point).normalized(),
            h = (v + l).normalized(),
            I = light.intensity;
        float r2 = (light.position - point).dot(light.position - point);
        Vector3f Ld = kd.cwiseProduct(I / r2) * std::max(0.0f, n.dot(l)),
            Ls = ks.cwiseProduct(I / r2) * std::pow(std::max(0.0f, n.dot(h)), p);
        result_color += (Ld + Ls);
    }
    result_color += ka.cwiseProduct(amb_light_intensity);

    return result_color * 255.f;
}

Result:

texture_fragment_shader()

Use the getColor method of the texture in the payload to fetch the texture color, then compute the lighting as before.

What is the payload?
It should be a custom data structure that stores the extra information needed for intersection or shading calculations. A bare function call plus the light information is not enough to shade a point, so some additional information has to be passed along.
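
In this framework the payload is roughly the following struct (reproduced from memory, so treat the exact members as approximate):

struct fragment_shader_payload
{
    Eigen::Vector3f view_pos;   // interpolated shading point in view space
    Eigen::Vector3f color;      // interpolated vertex color
    Eigen::Vector3f normal;     // interpolated normal
    Eigen::Vector2f tex_coords; // interpolated texture coordinates
    Texture* texture;           // optional texture, nullptr if absent
};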

A small problem:
when building the project with Visual Studio and running ./Rasterizer output.png texture in Debug mode, an error may be reported: Microsoft C++ exception: cv::Exception

Eigen::Vector3f getColor(float u, float v)
{
    auto u_img = u * width;
    auto v_img = (1 - v) * height;
    auto color = image_data.at<cv::Vec3b>(v_img, u_img);
    return Eigen::Vector3f(color[0], color[1], color[2]);
}

This is because v_img or u_img in the function above hits the boundary value, indexing image_data out of range; I did not find a proper fix.
If you run in Release mode, however, the error is skipped, which is curious.
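
One possible fix (my own sketch, not part of the framework) is to clamp the sample coordinates so the lookup never steps outside the image; std::clamp requires <algorithm> and C++17:

Eigen::Vector3f getColor(float u, float v)
{
    // Clamp (u, v) to [0, 1] and the texel index to the valid range,
    // so image_data.at<cv::Vec3b>() is never called out of bounds.
    u = std::clamp(u, 0.0f, 1.0f);
    v = std::clamp(v, 0.0f, 1.0f);
    auto u_img = std::min(u * width, (float)width - 1);
    auto v_img = std::min((1 - v) * height, (float)height - 1);
    auto color = image_data.at<cv::Vec3b>(v_img, u_img);
    return Eigen::Vector3f(color[0], color[1], color[2]);
}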

Eigen::Vector3f texture_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f return_color = {0, 0, 0};
    if (payload.texture)
    {
        // TODO: Get the texture value at the texture coordinates of the current fragment
        return_color = payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y());
    }
    Eigen::Vector3f texture_color;
    texture_color << return_color.x(), return_color.y(), return_color.z();

    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = texture_color / 255.f;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    auto l1 = light{{20, 20, 20}, {500, 500, 500}};
    auto l2 = light{{-20, 20, 0}, {500, 500, 500}};

    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};

    float p = 150;

    Eigen::Vector3f color = texture_color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    Eigen::Vector3f result_color = {0, 0, 0};

    for (auto& light : lights)
    {
        // TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular* 
        // components are. Then, accumulate that result on the *result_color* object.
        Vector3f l = (light.position - point).normalized(),
            n = normal.normalized(),
            v = (eye_pos - point).normalized(),
            h = (v + l).normalized(),
            I = light.intensity;
        float r2 = (light.position - point).dot(light.position - point);
        Vector3f Ld = kd.cwiseProduct(I / r2) * std::max(0.0f, n.dot(l)),
            Ls = ks.cwiseProduct(I / r2) * std::pow(std::max(0.0f, n.dot(h)), p);
        result_color += (Ld + Ls);
    }
    // Add the ambient term once, outside the per-light loop,
    // so it is not accumulated once per light.
    result_color += ka.cwiseProduct(amb_light_intensity);

    return result_color * 255.f;
}

Result:

bump_fragment_shader()

This function implements bump mapping / normal mapping.

A texture can encode the relative height of the surface and perturb the normal at every pixel; the actual geometry does not change, only the visual effect does.

  • The initial surface normal is $n(p) = (0, 0, 1)$

  • Compute the partial differences of $p$:
    $\dfrac{dp}{du} = c_1 [h(u+1) - h(u)], \qquad \dfrac{dp}{dv} = c_2 [h(v+1) - h(v)]$

  • The perturbed normal is
    $n = \left(-\dfrac{dp}{du}, -\dfrac{dp}{dv}, 1\right).\text{normalized}()$

Regarding tangent space, see the references above. From the formulas, dU and dV can be computed. As for why we do not simply add 1 to u: when the texture coordinate u (or v) increases by 1, u_img (or v_img) increases by width (or height), so one texel corresponds to a step of 1/width (or 1/height) in texture space.

The height function $h$ in the formulas is taken, in the code, to be the norm of the texture color, i.e. getColor(u, v).norm(), which serves as a stand-in for the height.

Eigen::Vector3f bump_fragment_shader(const fragment_shader_payload& payload)
{
    
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    auto l1 = light{{20, 20, 20}, {500, 500, 500}};
    auto l2 = light{{-20, 20, 0}, {500, 500, 500}};

    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};

    float p = 150;

    Eigen::Vector3f color = payload.color; 
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;


    float kh = 0.2, kn = 0.1;

    // TODO: Implement bump mapping here
    // Let n = normal = (x, y, z)
    // Vector t = (x*y/sqrt(x*x+z*z),sqrt(x*x+z*z),z*y/sqrt(x*x+z*z))
    // Vector b = n cross product t
    // Matrix TBN = [t b n]
    // dU = kh * kn * (h(u+1/w,v)-h(u,v))
    // dV = kh * kn * (h(u,v+1/h)-h(u,v))
    // Vector ln = (-dU, -dV, 1)
    // Normal n = normalize(TBN * ln)

    float x = normal.x(), y = normal.y(), z = normal.z();
    Vector3f t(x * y / sqrt(x * x + z * z), sqrt(x * x + z * z), z * y / sqrt(x * x + z * z)),
        b = normal.cross(t);
    Matrix3f TBN;
    TBN << t.x(), b.x(), normal.x(),
        t.y(), b.y(), normal.y(),
        t.z(), b.z(), normal.z();
    float u = payload.tex_coords.x(), v = payload.tex_coords.y(),
        h = payload.texture->height, w = payload.texture->width;
    float dU = kh * kn * (payload.texture->getColor(u + 1.0 / w, v).norm() - payload.texture->getColor(u, v).norm()),
        dV = kh * kn * (payload.texture->getColor(u, v + 1.0 / h).norm() - payload.texture->getColor(u, v).norm());
    Vector3f ln(-dU, -dV, 1);
    normal = TBN * ln;


    Eigen::Vector3f result_color = {0, 0, 0};
    result_color = normal.normalized();

    return result_color * 255.f;
}

Result:

displacement_fragment_shader()

Displacement mapping adds the lighting calculation on top of bump mapping and actually moves the shading point along the normal before shading.

Eigen::Vector3f displacement_fragment_shader(const fragment_shader_payload& payload)
{
    
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    auto l1 = light{{20, 20, 20}, {500, 500, 500}};
    auto l2 = light{{-20, 20, 0}, {500, 500, 500}};

    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};

    float p = 150;

    Eigen::Vector3f color = payload.color; 
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    float kh = 0.2, kn = 0.1;
    
    // TODO: Implement displacement mapping here
    // Let n = normal = (x, y, z)
    // Vector t = (x*y/sqrt(x*x+z*z),sqrt(x*x+z*z),z*y/sqrt(x*x+z*z))
    // Vector b = n cross product t
    // Matrix TBN = [t b n]
    // dU = kh * kn * (h(u+1/w,v)-h(u,v))
    // dV = kh * kn * (h(u,v+1/h)-h(u,v))
    // Vector ln = (-dU, -dV, 1)
    // Position p = p + kn * n * h(u,v)
    // Normal n = normalize(TBN * ln)

    float x = normal.x(), y = normal.y(), z = normal.z();
    Vector3f t(x * y / sqrt(x * x + z * z), sqrt(x * x + z * z), z * y / sqrt(x * x + z * z)),
        b = normal.cross(t);

    Matrix3f TBN;
    TBN << t.x(), b.x(), normal.x(),
        t.y(), b.y(), normal.y(),
        t.z(), b.z(), normal.z();

    float u = payload.tex_coords.x(), v = payload.tex_coords.y(),
        h = payload.texture->height, w = payload.texture->width;

    float dU = kh * kn * (payload.texture->getColor(u + 1.0 / w, v).norm() - payload.texture->getColor(u, v).norm()),
        dV = kh * kn * (payload.texture->getColor(u, v + 1.0 / h).norm() - payload.texture->getColor(u, v).norm());

    Vector3f ln(-dU, -dV, 1);
    point += (kn * normal * payload.texture->getColor(u, v).norm());
    normal = TBN * ln;


    Eigen::Vector3f result_color = {0, 0, 0};

    for (auto& light : lights)
    {
        // TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular* 
        // components are. Then, accumulate that result on the *result_color* object.
        Vector3f l = (light.position - point).normalized(),
            n = normal.normalized(),
            v = (eye_pos - point).normalized(),
            h = (v + l).normalized(),
            I = light.intensity;
        float r2 = (light.position - point).dot(light.position - point);
        Vector3f Ld = kd.cwiseProduct(I / r2) * std::max(0.0f, n.dot(l)),
            Ls = ks.cwiseProduct(I / r2) * std::pow(std::max(0.0f, n.dot(h)), p);
        result_color += (Ld + Ls);
    }
    // Add the ambient term once, outside the per-light loop,
    // so it is not accumulated once per light.
    result_color += ka.cwiseProduct(amb_light_intensity);

    return result_color * 255.f;
}

Result:

Other models

Just change the model path in the main function; some models will report an error when loaded.

Bilinear interpolation sampling

Linear interpolation (1D):
$$\operatorname{lerp}(x, v_0, v_1) = v_0 + x(v_1 - v_0)$$
Two helper lerps:
$$u_0 = \operatorname{lerp}(s, u_{00}, u_{10}), \qquad u_1 = \operatorname{lerp}(s, u_{01}, u_{11})$$
Final vertical lerp to get the result:
$$f(x, y) = \operatorname{lerp}(t, u_0, u_1)$$
Just add the function Eigen::Vector3f getColorBilinear(float u, float v) to Texture.hpp:

Eigen::Vector3f getColorBilinear(float u, float v) {
    // Texel coordinates of the four texels surrounding the sample point
    // (u_xy, v_xy are in texel units here, not in [0, 1])
    float u_00 = int(u * width), v_00 = int((1 - v) * height),
          u_01 = u_00 + 1, v_01 = v_00,
          u_10 = u_00, v_10 = v_00 + 1,
          u_11 = u_00 + 1, v_11 = v_00 + 1;

    Eigen::Vector3f color_00, color_01, color_10, color_11, color_u0, color_u1, color;
    // Convert texel coordinates back to (u, v) and fetch the four corner colors
    color_00 = getColor(u_00 / width, 1 - v_00 / height);
    color_01 = getColor(u_01 / width, 1 - v_01 / height);
    color_10 = getColor(u_10 / width, 1 - v_10 / height);
    color_11 = getColor(u_11 / width, 1 - v_11 / height);
    // Two horizontal lerps, then the final vertical lerp
    color_u0 = color_00 + (color_01 - color_00) * (u * width - u_00);
    color_u1 = color_10 + (color_11 - color_10) * (u * width - u_00);
    color = color_u0 + (color_u1 - color_u0) * ((1 - v) * height - v_00);

    return color;
}

Before bilinear interpolation:

After bilinear interpolation:

Obviously the transition is smoother.
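
To use it, swap the sampling call in texture_fragment_shader():

// In texture_fragment_shader(), sample with the bilinear version instead:
return_color = payload.texture->getColorBilinear(payload.tex_coords.x(), payload.tex_coords.y());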
