3D Engine 11 - Light and lighting methods 3 (lightmaps)

Well, hello !

    It's the 21st of December .. wait! It's the 22nd of December, 1:51 to be precise. I'm pressed for time and still need to translate two more tutorials and possibly write a third one, so I won't guff about how I've been lately ...
    Today we're going to implement lightmapping - the famous method that, since Quake I, started the lightmap mayhem. Lightmaps are textures used for storing lighting values across all surfaces of our world. (it should be obvious none of them is repeated) They are usually low resolution, because otherwise they would take up too much memory. The advantage of lightmaps is they can be pre-computed. The disadvantage is - they're static. You can combine them with stencil shadows to get a nicely dynamically-shadowed world, though - with static lights. So, what are we really going to do ?
  • Precompute lightmaps and store them some way
  • Render scene with normal textures modulated by lightmaps
    ... as lightmaps are calculated offline (ie. not realtime), we can afford very expensive methods for computing them. We can use methods so slow and complicated we couldn't even write them ! :-)
    In case you've experienced Quake I level editing, you know you have to draw your map in the editor, then run the BSP compiler program, then the Light program and preferably VIS. And then, you get some junkie and give him a computer with your level on it ...
    Yes - and by the way. Lightmaps are everywhere (did I mention lightmap mayhem ?), so I hope you're going to be more original and replace lightmaps with a different method. (ok, with today's pixel shaders lightmaps are slowly being forgotten, but the combination of lightmap and stencil is very attractive) You could see lightmaps in say .. Quake, Half-Life, Unreal or any other FPS game ... But let's get started !

classic lightmap disadvantage - notice the
shadow is kind of edgy (low lightmap resolution)

How to map lightmaps

     As I told you, we need lightmaps all over our world. Next, we need (approximately) the same size of texels on all surfaces, and in the ideal case we'd like the texels to be square. (it means using different sizes of lightmaps for different-sized surfaces) Otherwise the result looks quite ugly. The simplest way to do it is so-called planar mapping. We take the surface normal, find its greatest component and select the appropriate plane. (it will be the closest one of the coordinate planes xy, xz, yz) We project our surface onto this plane in order to get the rectangle bounding it. You probably got the idea that this rectangle should be the lightmap. We can derive mapping coordinates from the relative positions of the surface vertices and the imaginary texture rectangle. It won't be quite so easy, as we run into a couple of troubles:
  • we have triangles, not surfaces (ie. a single lightmap should contain more triangles)
  • our lightmap probably won't ever be power-of-two sized
    We can deal with the first problem quite easily. We collect all coplanar faces into groups and map lightmaps onto the groups.
    The second problem is a bit more complicated. You could solve it by forcing lightmaps to the nearest power-of-two size, but this yields non-square texels (and of course not uniform-sized ones). Some of the newer 3D accelerators can handle non-power-of-two textures. Shall we rely on that ? I think not. We will solve it by packing more textures into a few big texture pages, say 1024×1024. It has many advantages, even for 3D accelerators:
  • we won't waste too much space when using texture pages instead of many nearest pow-2 textures
  • it will reduce number of texture switches, which can be expensive with hardware accelerators
  • we will get our uniform square texels we always wanted !
The only disadvantage is someone has to write that packing function. (for some reason I'm getting a nasty feeling it's actually me who is going to write it :-P)

How to fill lightmaps

    Any possible way. We can use whatever we'd like to use. In our example we will use the Lambertian equation (get texel position in worldspace, create a vector to the light, measure the angle between this vector and the normal, yielding diffuse-shaded surfaces) and then cast a ray to the light to see if the texel is shadowed. We'll do a single check, resulting in 0 / 1 visibility values, but you can do more checks and get nicely antialiased shadows.
    The next example will be radiosity, so I'm planning to use this code for it as well and do some HDR lighting ...

How to do animated lightmaps

    That's simple. Have you ever played System Shock 2 ? Walking through the hulls of the abandoned Von Braun starship, blinking and flashing lights everywhere - that was pretty scary, not to mention all those sounds ! Perfect game. But back to lightmaps. In this kind of game there's a set of lightmaps with some texels lit for every light. Then you can modulate the lightmaps separately and make some of the lights blink, pulse or flash - whatever you want. I'm tempted to write something like this as well. Maybe I'll get some idea.
    Next, there's another kind of animated lightmaps. You have some light motion animation and generate a set of lightmaps for every frame of the animation. Then you have animated lightmaps again. You can try doing some swaying lights or so ...
    You can also do semi-dynamic lightmaps - make one more layer of lightmaps and recompute its contents every n-th frame. You can do some simple shadows for moving lights as well, with heavy help of some BSP-like technique to speed up the raytracing. You can also try projecting geometry in front of a surface onto its lightmap to get shadows, but that's slow for high-poly scenes.
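The per-light layer idea above can be sketched in a few lines. This is my own minimal illustration (the helper n_Combine_Texel() is not part of the engine or the example): every light owns a grayscale lightmap layer, and each frame the layers are summed with the lights' current intensities, so a light can blink or pulse just by changing one number:

```c
#include <assert.h>

/* hypothetical sketch: each light has its own grayscale lightmap layer;
   every frame we re-combine them with the lights' current intensities
   (8.8 fixed point, 256 = full brightness) to get blinking / pulsing lights */
unsigned char n_Combine_Texel(const unsigned char *p_layer_texel,
    const int *p_intensity, int n_layer_num)
{
    int i, n_sum;

    for(i = 0, n_sum = 0; i < n_layer_num; i ++)
        n_sum += (p_layer_texel[i] * p_intensity[i]) >> 8;
    // sum contributions of all lights, each scaled by its current intensity

    return (n_sum > 255)? 255 : (unsigned char)n_sum;
    // clamp back to byte range
}
```

A real implementation would of course combine whole RGB lightmaps this way and re-upload only the texture pages that actually changed.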

How to render lightmaps

    When drawing a surface, we have the texture and the corresponding lightmap. When drawing pixels, you modulate the texture (diffuse color) by the lightmap. The disadvantage of this system is you have lighting values in the <0, 255> range. We'd like to be able to make a texture shine when its lightmap is too bright, instead of clamping its values. Today we have some formats for HDR support (16 bits per channel, or floats) so we can display values in the zero-to-infinity range. HDR is used by newer games such as Half-Life 2:
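The modulation step itself can be sketched like this. A minimal example (the function name is mine, not from the engine), showing why plain 8-bit lightmaps can only darken a texture - the per-channel product always stays inside <0, 255>, which is exactly the clamping limitation HDR formats avoid:

```c
#include <stdint.h>

/* hypothetical sketch of per-pixel modulation: multiply each channel of
   the diffuse texel by the lightmap texel; (tex * light) / 255 can never
   exceed 255, so an 8-bit lightmap can never make a texture "shine" */
uint32_t n_Modulate_Texel(uint32_t n_texture, uint32_t n_lightmap)
{
    uint32_t r, g, b;

    r = (((n_texture >> 16) & 0xff) * ((n_lightmap >> 16) & 0xff)) / 255;
    g = (((n_texture >> 8) & 0xff) * ((n_lightmap >> 8) & 0xff)) / 255;
    b = ((n_texture & 0xff) * (n_lightmap & 0xff)) / 255;
    // modulate each channel separately

    return (r << 16) | (g << 8) | b;
    // pack back to a single 32-bit pixel
}
```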

hl2 1

hl2 2

    Beautiful, isn't it ? (Half-Life 2 uses lightmaps in combination with shadow-buffers, which is in the end pretty similar to stencil) We will do HDR lightmaps in the next tutorial, with radiosity.

    Now I'm gonna write the example. Wait a minute while I debug it.

I wrote it slowly so you could understand it. First we'll look at the function used for grouping faces together, the way they will be stored in lightmaps:

enum {
    Plane_yz = 0,
    Plane_xz,
    Plane_xy
};
// mapping plane indices

struct Group {
    int n_id;
    int n_plane;
    Vector v_u, v_v;
    Vector v_min, v_max;
    int n_face_num;
    int n_face_used;
    Face **p_face;
};
// group = list of faces, lying in the same
// plane and sharing some vertices

Group_Data *Group_Faces(World *p_world, float f_max_size)
{
    Group_Data *p_data;
    int i, j, k, l, m;
    int b_found;
    int b_skip;

    if((p_data = (Group_Data*)malloc(sizeof(Group_Data))) == NULL)
        return NULL;
    p_data->n_group_num = 0;
    p_data->p_group = NULL;
    // alloc memory for this structure

    for(i = 0; i < p_world->n_object_num; i ++) {
        for(j = 0; j < p_world->object[i].n_face_num; j ++) {
            for(k = 0, b_found = false; k < p_data->n_group_num; k ++) {
                if(!Check_GroupSize(&p_data->p_group[k],
                   &p_world->object[i].face[j], f_max_size))
                    continue;
                for(l = 0; l < p_data->p_group[k].n_face_used; l ++) {
                    if(b_Neighbour(&p_world->object[i].face[j],
                       p_data->p_group[k].p_face[l])) {
                        b_found = true;
                        if(!Add_FaceToGroup(&p_data->p_group[k],
                           &p_world->object[i].face[j]))
                            return NULL;
                        p_world->object[i].face[j].n_group = k;
                        // add to group ...

                        for(k ++; k < p_data->n_group_num; k ++) {
                            for(l = 0; l < p_data->p_group[k].n_face_used; l ++) {
                                if(b_Neighbour(&p_world->object[i].face[j],
                                   p_data->p_group[k].p_face[l])) {
                                    // groups p_world->object[i].face[j].n_group
                                    // and k are connected with this face and need
                                    // to be merged ...

                                    b_skip = false;
                                    for(m = 0;
                                       m < p_data->p_group[k].n_face_used; m ++) {
                                        if(!Check_GroupSize(&p_data->p_group[
                                           p_world->object[i].face[j].n_group],
                                           p_data->p_group[k].p_face[m],
                                           f_max_size)) {
                                            b_skip = true;
                                            break;
                                        }
                                        if(!Add_FaceToGroup(&p_data->p_group[
                                           p_world->object[i].face[j].n_group],
                                           p_data->p_group[k].p_face[m]))
                                            return NULL;
                                        p_data->p_group[k].p_face[m]->n_group =
                                            p_world->object[i].face[j].n_group;
                                    }
                                    // add all faces of group[k] ...

                                    if(!b_skip) {
                                        free(p_data->p_group[k].p_face);
                                        p_data->p_group[k --] =
                                            p_data->p_group[-- p_data->n_group_num];
                                    }
                                    // in case group is empty, delete it

                                    break;
                                }
                            }
                        }
                        // search for more neighbour faces in other groups ...

                        break;
                    }
                }
            }
            // search if face has neighbour(s) in some of the groups

            if(!b_found) {
                if(!p_data->p_group) {
                    p_data->n_group_num = 1;
                    if(!(p_data->p_group = (Group*)malloc(sizeof(Group))))
                        return NULL;
                } else {
                    if(!(p_data->p_group = (Group*)realloc(p_data->p_group,
                       sizeof(Group) * ++ p_data->n_group_num)))
                        return NULL;
                }
                // realloc groups - we need a new one ...

                p_data->p_group[p_data->n_group_num - 1].n_id =
                    p_data->n_group_num - 1;
                p_data->p_group[p_data->n_group_num - 1].n_face_num = 0;
                p_data->p_group[p_data->n_group_num - 1].n_face_used = 0;
                p_data->p_group[p_data->n_group_num - 1].p_face = 0;
                if(!Add_FaceToGroup(&p_data->p_group[p_data->n_group_num - 1],
                   &p_world->object[i].face[j]))
                    return NULL;
                p_world->object[i].face[j].n_group = p_data->n_group_num - 1;
                // add first face ...

                Init_Group(&p_data->p_group[p_data->n_group_num - 1]);
                // calculate some vectors ...
            }
        }
    }
    // search neighbouring coplanar faces

    return p_data;
}

    It's quite simple. We have our Group structure to store face groups. What does it do ? We call Group_Faces(), pass it the world (all vertices should be already transformed if necessary -> all matrices are unit) and the function examines the world, face by face. For each face it visits the faces in already existing groups to check if there is a coplanar neighbour. If there is, it adds the face to the group and also checks whether the new face connected two or more groups together. In that case, the groups are merged into one.
    You can also see some Check_GroupSize() calls. They check if the group is still small enough to fit into a lightmap without stretching. Otherwise a new group is created. Imagine a big house - all floors are coplanar, ie. they would all connect into a single group the size of the house. That would require too big a texture, which we can't have. But we can have the same amount of smaller textures to divide the parts of the floor into ... So we create groups up to some maximal size. See the Check_GroupSize() code:

int Check_GroupSize(Group *p_group, Face *p_face, float f_max_group_size)
{
    Vector v_new_min, v_new_max, v_new_size;
    int i;

    v_new_min = p_group->v_min;
    v_new_max = p_group->v_max;
    for(i = 0; i < 3; i ++) {
        if(v_new_min.x > p_face->vertex[i]->x)
            v_new_min.x = p_face->vertex[i]->x;
        if(v_new_min.y > p_face->vertex[i]->y)
            v_new_min.y = p_face->vertex[i]->y;
        if(v_new_min.z > p_face->vertex[i]->z)
            v_new_min.z = p_face->vertex[i]->z;
        if(v_new_max.x < p_face->vertex[i]->x)
            v_new_max.x = p_face->vertex[i]->x;
        if(v_new_max.y < p_face->vertex[i]->y)
            v_new_max.y = p_face->vertex[i]->y;
        if(v_new_max.z < p_face->vertex[i]->z)
            v_new_max.z = p_face->vertex[i]->z;
    }
    v_new_size = v_new_max;
    v_new_size.x -= v_new_min.x;
    v_new_size.y -= v_new_min.y;
    v_new_size.z -= v_new_min.z;
    if(_Dot(v_new_size, p_group->v_u) > f_max_group_size ||
       _Dot(v_new_size, p_group->v_v) > f_max_group_size) {
        if(v_new_size.x - (p_group->v_max.x - p_group->v_min.x) > .1f ||
           v_new_size.y - (p_group->v_max.y - p_group->v_min.y) > .1f ||
           v_new_size.z - (p_group->v_max.z - p_group->v_min.z) > .1f)
            return false;
        // in case it was already bigger than threshold, the new face might
        // not enlarge it (imagine a quad, consisting of two faces, both
        // bigger than threshold)
    }
    // in case group would be too big after adding face ...

    p_group->v_max = v_new_max;
    p_group->v_min = v_new_min;
    // update dimensions ...

    return true;
    // true = you should add face
}

This function uses the minimal and maximal coordinates of the group's triangles to determine its size. When creating a new group, we determine the mapping plane, coplanar with one of the coordinate planes xy, xz or yz. How to choose one of them ? Simply select the greatest component of the normal of some face in the group. (no matter which one, all faces in a group have the same normal, remember ?) In case it's x, we choose yz. Right ? Next, from the knowledge of the plane we determine the vectors v_u and v_v, showing the directions (in worldspace) of increasing u and v lightmap coordinates. Then we can project our imaginary rectangle dimensions by performing the dot product of v_max - v_min with the v_u, respectively v_v vectors. I'll show you the function responsible for computing these vectors right after inserting the first face into a new group:

void Init_Group(Group *p_group)
{
    Vector v_new_min, v_new_max;
    float v_f[3];
    int i;

    v_f[0] = (float)fabs(p_group->p_face[0]->normal.a);
    v_f[1] = (float)fabs(p_group->p_face[0]->normal.b);
    v_f[2] = (float)fabs(p_group->p_face[0]->normal.c);
    for(i = 0; i < 2; i ++) {
        if(v_f[i] > v_f[(i + 1) % 3] &&
           v_f[i] > v_f[(i + 2) % 3])
            break;
    }
    p_group->n_plane = i;
    // get greatest normal vector component (and derive mapping plane)

    memset(&p_group->v_u, 0, sizeof(Vector));
    memset(&p_group->v_v, 0, sizeof(Vector));
    switch(p_group->n_plane) {
    case Plane_yz: // plane yz
        p_group->v_u.z = 1;
        p_group->v_v.y = 1;
        break;
    case Plane_xz: // plane xz
        p_group->v_u.x = 1;
        p_group->v_v.z = 1;
        break;
    default: // plane xy
        p_group->v_u.x = 1;
        p_group->v_v.y = 1;
    }
    // find u and v vectors of mapping plane

    v_new_min.x = p_group->p_face[0]->vertex[0]->x;
    v_new_min.y = p_group->p_face[0]->vertex[0]->y;
    v_new_min.z = p_group->p_face[0]->vertex[0]->z;
    v_new_max.x = p_group->p_face[0]->vertex[0]->x;
    v_new_max.y = p_group->p_face[0]->vertex[0]->y;
    v_new_max.z = p_group->p_face[0]->vertex[0]->z;
    for(i = 1; i < 3; i ++) {
        if(v_new_min.x > p_group->p_face[0]->vertex[i]->x)
            v_new_min.x = p_group->p_face[0]->vertex[i]->x;
        if(v_new_min.y > p_group->p_face[0]->vertex[i]->y)
            v_new_min.y = p_group->p_face[0]->vertex[i]->y;
        if(v_new_min.z > p_group->p_face[0]->vertex[i]->z)
            v_new_min.z = p_group->p_face[0]->vertex[i]->z;
        if(v_new_max.x < p_group->p_face[0]->vertex[i]->x)
            v_new_max.x = p_group->p_face[0]->vertex[i]->x;
        if(v_new_max.y < p_group->p_face[0]->vertex[i]->y)
            v_new_max.y = p_group->p_face[0]->vertex[i]->y;
        if(v_new_max.z < p_group->p_face[0]->vertex[i]->z)
            v_new_max.z = p_group->p_face[0]->vertex[i]->z;
    }
    p_group->v_min = v_new_min;
    p_group->v_max = v_new_max;
    // find min. and max. vertex coordinate of first (and so far the only) face
}

The function Add_FaceToGroup() is just a face-list manager function, but for code completeness:

int Add_FaceToGroup(Group *p_group, Face *p_face)
{
    if(p_group->n_face_used == p_group->n_face_num) {
        if(p_group->n_face_num) {
            if(!(p_group->p_face = (Face**)realloc
               (p_group->p_face, (p_group->n_face_num + 4) * sizeof(Face*))))
                return false;
            p_group->n_face_num += 4;
        } else {
            if(!(p_group->p_face = (Face**)malloc(2 * sizeof(Face*))))
                return false;
            p_group->n_face_num = 2;
        }
    }
    // in case of need, grow buffer

    p_group->p_face[p_group->n_face_used ++] = p_face;
    // add face

    return true;
}

This should be pretty clear. The function b_SameVertex() was already in the stencil example and all it does is compare two vertex positions and return true / false. b_Neighbour() is slightly modified from the stencil example as well; there's a bit different coplanar face detection:

int b_SameVertex(Vertex *p_vertex_a, Vertex *p_vertex_b)
{
    float f, d;

    f = (p_vertex_a->x - p_vertex_b->x);
    f *= f;
    d = (p_vertex_a->y - p_vertex_b->y);
    f += d * d;
    d = (p_vertex_a->z - p_vertex_b->z);
    // compute squared distance between vertices

    return ((f + d * d) < 0.01f);
}

int n_ClassifyVertex(Plane t_plane, Vertex *p_vertex)
{
    float f;

    if((f = p_vertex->x * t_plane.a + p_vertex->y * t_plane.b +
       p_vertex->z * t_plane.c + t_plane.d) > .1f)
        return 1;
    else if(f < -.1f)
        return -1;
    return 0;
}

int b_Neighbour(Face *p_face_a, Face *p_face_b)
{
    int n_edge_a, n_edge_b;
    int n_edge_c, n_edge_d;
    Plane t_plane;

    t_plane = p_face_a->normal;
    t_plane.d = -(p_face_a->normal.a * p_face_b->vertex[0]->x +
          p_face_a->normal.b * p_face_b->vertex[0]->y +
          p_face_a->normal.c * p_face_b->vertex[0]->z);

    if(n_ClassifyVertex(t_plane, p_face_b->vertex[1]) ||
       n_ClassifyVertex(t_plane, p_face_b->vertex[2]))
        return false;
    // check if faces are coplanar ...

    t_plane = p_face_b->normal;
    t_plane.d = -(p_face_b->normal.a * p_face_a->vertex[0]->x +
          p_face_b->normal.b * p_face_a->vertex[0]->y +
          p_face_b->normal.c * p_face_a->vertex[0]->z);

    if(n_ClassifyVertex(t_plane, p_face_a->vertex[1]) ||
       n_ClassifyVertex(t_plane, p_face_a->vertex[2]))
        return false;
    // check if faces are coplanar (again)

    for(n_edge_a = 0; n_edge_a < 3; n_edge_a ++) {
        n_edge_b = (1 + n_edge_a) % 3;
        // indices of edge 1 vertices (a, b)

        for(n_edge_c = 0; n_edge_c < 3; n_edge_c ++) {
            n_edge_d = (n_edge_c + 1) % 3;
            // indices of edge 2 vertices (c, d)

            if((b_SameVertex(p_face_a->vertex[n_edge_a],
                            p_face_b->vertex[n_edge_c]) &&
                b_SameVertex(p_face_a->vertex[n_edge_b],
                            p_face_b->vertex[n_edge_d])) ||
               (b_SameVertex(p_face_a->vertex[n_edge_a],
                            p_face_b->vertex[n_edge_d]) &&
                b_SameVertex(p_face_a->vertex[n_edge_b],
                            p_face_b->vertex[n_edge_c])))
                return true;
        }
    }
    // check for neighbour faces (shared edge, either vertex order)

    return false;
}

The result of calling the described functions is that all faces are assigned to groups, so each group can have a single lightmap. You can see it in the image below, where color represents group id:

lightmap groups
faces in groups

    Now that all faces are divided, we should finish the mapping procedure. We have all we need - we have the min and max coordinates of each group's geometry, we have the selected mapping plane we can project the geometry onto, and we have the vectors showing the u and v (worldspace) directions. With this knowledge we're able to compute texture coordinates. We can also compute the position of each lightmap texel (in worldspace again) so we can compute the lightmap textures themselves. We set up the next structure to keep some values for mapping and proceed with it:

struct Mapping_Info {
    Vector v_org;
    Vector v_u, v_v;
    float f_width;
    float f_height;
    int n_plane;
};
// information about lightmap worldspace positions

Mapping_Info *Generate_Map_Coords(Group_Data *p_group_data)
{
    Mapping_Info *p_map_info;
    Vector v_min, v_max;
    Vector v_u, v_v;
    Group *p_group;
    int i, j, k;

    if(!(p_map_info = (Mapping_Info*)malloc(p_group_data->n_group_num *
       sizeof(Mapping_Info))))
        return 0;
    // we have as many mapping infos as groups

    for(i = 0; i < p_group_data->n_group_num; i ++) {
        p_group = &p_group_data->p_group[i];

        v_min.x = p_group->p_face[0]->vertex[0]->x;
        v_min.y = p_group->p_face[0]->vertex[0]->y;
        v_min.z = p_group->p_face[0]->vertex[0]->z;
        v_max = v_min;
        for(j = 0; j < p_group->n_face_used; j ++) {
            for(k = 0; k < 3; k ++) {
                if(v_min.x > p_group->p_face[j]->vertex[k]->x)
                    v_min.x = p_group->p_face[j]->vertex[k]->x;
                if(v_min.y > p_group->p_face[j]->vertex[k]->y)
                    v_min.y = p_group->p_face[j]->vertex[k]->y;
                if(v_min.z > p_group->p_face[j]->vertex[k]->z)
                    v_min.z = p_group->p_face[j]->vertex[k]->z;
                if(v_max.x < p_group->p_face[j]->vertex[k]->x)
                    v_max.x = p_group->p_face[j]->vertex[k]->x;
                if(v_max.y < p_group->p_face[j]->vertex[k]->y)
                    v_max.y = p_group->p_face[j]->vertex[k]->y;
                if(v_max.z < p_group->p_face[j]->vertex[k]->z)
                    v_max.z = p_group->p_face[j]->vertex[k]->z;
            }
        }
        // find min and max coordinates

        p_map_info[i].v_org = v_min;
        // lightmap worldspace origin
        // (used for calculating worldspace position of lightmap texels)

        v_u = p_group->v_u;
        v_v = p_group->v_v;
        // u and v of mapping plane

        p_map_info[i].n_plane = p_group->n_plane;
        // mapping plane index

        p_map_info[i].f_width = _Dot(v_max, v_u) - _Dot(v_min, v_u);
        p_map_info[i].f_height = _Dot(v_max, v_v) - _Dot(v_min, v_v);
        // compute size of texture rectangle

        p_map_info[i].v_u = v_u;
        p_map_info[i].v_v = v_v;
        // rectangle edge vectors (unit size)

        for(j = 0; j < p_group->n_face_used; j ++) {
            for(k = 0; k < 3; k ++) {
                p_group->p_face[j]->vertex[k]->lu =
                    ((p_group->p_face[j]->vertex[k]->x - v_min.x) * v_u.x +
                    (p_group->p_face[j]->vertex[k]->y - v_min.y) * v_u.y +
                    (p_group->p_face[j]->vertex[k]->z - v_min.z) * v_u.z) /
                    p_map_info[i].f_width;
                p_group->p_face[j]->vertex[k]->lv =
                    ((p_group->p_face[j]->vertex[k]->x - v_min.x) * v_v.x +
                    (p_group->p_face[j]->vertex[k]->y - v_min.y) * v_v.y +
                    (p_group->p_face[j]->vertex[k]->z - v_min.z) * v_v.z) /
                    p_map_info[i].f_height;
            }
        }
        // calculate lightmap coordinates of all faces in this particular group
    }

    return p_map_info;
}

    ... to explain it a bit. Our Generate_Map_Coords() function works with the array of all groups and calculates a mapping info structure for each of them. It contains the lightmap worldspace origin v_org (origin = upper left corner) and two vectors v_u and v_v, showing the directions of the lightmap edges along the u and v axes. The size of the lightmap in worldspace (f_width, f_height) is also important for telling how much space in the texture we will need.
    Next, we go over every vertex of every face in the group and calculate its lightmap u, v coordinates. Now we have our rectangular textures. (well, they could be square as well, but that's not going to happen too often) Now we need power-of-two textures, so it's time for some texture-page creating function.
    How to do that ? Don't think about it too much; you can't make it perfectly efficient anyway. The simplest way is to keep an array of values telling you how high each column of the raster is occupied by textures. When inserting another texture, we select some column and check the next lightmap-width columns to the right of it. We find the greatest value in the table and check whether that value plus the lightmap height is below the texture page height (so our texture fits). It's quite simple, but today I'm having trouble explaining it, so I'd better write it down in C:

struct TLightmapTex {
    int n_width, n_height;
    int *p_frontline;
    unsigned __int32 *p_bytes;
};
// texture page information

TLightmapTex *p_Lightmap(int n_width, int n_height)
{
    TLightmapTex *p_lightmap;

    if(!(p_lightmap = (TLightmapTex*)malloc(sizeof(TLightmapTex))))
        return 0;
    // alloc lightmap data ...

    p_lightmap->n_width = n_width;
    p_lightmap->n_height = n_height;
    // set lightmap dimensions ...

    if(!(p_lightmap->p_frontline = (int*)malloc(n_width * sizeof(int))))
        return 0;
    memset(p_lightmap->p_frontline, 0, n_width * sizeof(int));
    // alloc and clear frontline ("column occupation" table) ...

    if(!(p_lightmap->p_bytes = (unsigned __int32*)malloc(n_width *
       n_height * sizeof(unsigned __int32))))
        return 0;
    memset(p_lightmap->p_bytes, 0, n_width * n_height * sizeof(unsigned __int32));
    // alloc texture frame buffer and clear it ...

    return p_lightmap;
}

int b_InsertTile(int n_width, int n_height, TLightmapTex *p_lightmap,
                 int *p_x, int *p_y, Matrix *p_coord_transform)
{
    int i, j, n_max;

    n_width += 2;
    n_height += 2;
    // add one-pixel border for every lightmap ...

    if(n_width > p_lightmap->n_width || n_height > p_lightmap->n_height)
        return false;
    // doesn't fit ...

    for(i = 0, j = 0, n_max = 0; i < p_lightmap->n_width; i ++) {
        if(p_lightmap->p_frontline[i] + n_height >= p_lightmap->n_height) {
            n_max = 0;
            j = 0;
        } else {
            if(n_max < p_lightmap->p_frontline[i])
                n_max = p_lightmap->p_frontline[i];
            j ++;
        }
        // proceed through columns, checking if the tile fits,
        // simultaneously searching for the highest occupied column ...

        if(j == n_width) {
            for(*p_x = (i -= j - 1); i < (*p_x) + j; i ++)
                p_lightmap->p_frontline[i] = n_max + n_height;
            // fill frontline for our tile ...

            *p_y = n_max;
            // get tile raster coordinates ...

            (*p_x) ++;
            (*p_y) ++;
            // move it one pixel right and down
            // so the tile is in the centre of frame

            n_width -= 2;
            n_height -= 2;

            (*p_coord_transform)[0][0] *= (float)n_width /
                (float)p_lightmap->n_width;
            (*p_coord_transform)[0][1] *= (float)n_width /
                (float)p_lightmap->n_width;
            (*p_coord_transform)[0][2] *= (float)n_width /
                (float)p_lightmap->n_width;
            (*p_coord_transform)[1][0] *= (float)n_height /
                (float)p_lightmap->n_height;
            (*p_coord_transform)[1][1] *= (float)n_height /
                (float)p_lightmap->n_height;
            (*p_coord_transform)[1][2] *= (float)n_height /
                (float)p_lightmap->n_height;
            (*p_coord_transform)[3][0] = (float)(*p_x - .5f) /
                (float)p_lightmap->n_width;
            (*p_coord_transform)[3][1] = (float)(*p_y - .5f) /
                (float)p_lightmap->n_height;
            // create transformation matrix, used
            // for transforming lightmap coordinates

            return true;
        }
    }
    // searching for place for the tile ...

    return false;
}

    So ... we have another structure, holding dimensions, the frontline and the image data buffer. Next, there is p_Lightmap(), which we are not interested in, because all it does is properly initialize the structure I've just described. But the next function is much more interesting.
    b_InsertTile() attempts to place a tile of some width and height into the texture page. It uses the parameters p_x and p_y for returning its position in the raster. The return value simply translates to true = our tile fits, or false = try another texture page or create a new one. I think the function is pretty self-explanatory, so I won't describe it any further ...
    I'd like to mention my concern about the one-pixel frames. We will want to filter our lightmap when rendering it, and our bilinear filter requires some neighbour pixels, so we make this one-pixel frame around each lightmap and - what is important - fill it with the neighbouring lightmap texel values. Then we can calmly sleep and dream about lightmap filtering ...
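Filling that frame might look something like the following sketch (Fill_TileBorder() is a hypothetical helper of mine, not taken from the example): the edge texels of the tile are replicated one pixel outwards, corners included, so the bilinear filter never samples a foreign tile:

```c
#include <stdint.h>

/* hypothetical sketch: p_buffer is the texture page, n_p_w its width in
   pixels; the tile occupies columns [n_x, n_x + n_w) and rows
   [n_y, n_y + n_h); edge texels are copied into the one-pixel border */
void Fill_TileBorder(uint32_t *p_buffer, int n_p_w,
    int n_x, int n_y, int n_w, int n_h)
{
    int i;

    for(i = -1; i <= n_w; i ++) {
        int x = (i < 0)? 0 : (i >= n_w)? n_w - 1 : i;
        p_buffer[(n_x + i) + (n_y - 1) * n_p_w] =
            p_buffer[(n_x + x) + n_y * n_p_w];
        p_buffer[(n_x + i) + (n_y + n_h) * n_p_w] =
            p_buffer[(n_x + x) + (n_y + n_h - 1) * n_p_w];
        // top and bottom border rows (corners included)
    }
    for(i = 0; i < n_h; i ++) {
        p_buffer[(n_x - 1) + (n_y + i) * n_p_w] =
            p_buffer[n_x + (n_y + i) * n_p_w];
        p_buffer[(n_x + n_w) + (n_y + i) * n_p_w] =
            p_buffer[(n_x + n_w - 1) + (n_y + i) * n_p_w];
        // left and right border columns
    }
}
```

Note the tile must lie at least one pixel away from the page edge, which b_InsertTile() above guarantees by reserving the border before placement.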
    We also compute some odd transformation matrix. Why ? What do we need to transform ? Our lightmap coordinates were computed for our texture rectangle, where [0, 0] is its top left corner and [1, 1] the opposite one. But we've just moved that rectangle into a bigger texture page, so we need to recompute the texture coordinates of all faces this rectangle is mapped to. (all faces in the current group)
    Now we finally have raster coordinates, so nothing can stop us from writing a simple routine that calculates the lightmap contents for us. We'll keep it simple: Lambert's rule, cast one ray to the light, add ambient, and we're done. The result is this:

void Raytrace_Tile(Group *p_group, Mapping_Info *p_mapping_info, World *p_world,
                   int n_x, int n_y, int n_h, int n_w, int n_p_w,
                   unsigned __int32 *p_buffer, unsigned __int32 n_ambient)
{
    Vector v_texel_pos;
    Vector v_pos_v;
    float r, g, b;
    Vector v_ray;
    int x, y;
    float a;
    int j;

    for(y = 0; y < n_h; y ++) {
        v_pos_v = p_mapping_info->v_org;
        v_pos_v.x += p_mapping_info->v_v.x * (float)y /
            (float)n_h * p_mapping_info->f_height;
        v_pos_v.y += p_mapping_info->v_v.y * (float)y /
            (float)n_h * p_mapping_info->f_height;
        v_pos_v.z += p_mapping_info->v_v.z * (float)y /
            (float)n_h * p_mapping_info->f_height;
        for(x = 0; x < n_w; x ++) {
            v_texel_pos.x = p_mapping_info->v_u.x * (float)x /
                (float)n_w * p_mapping_info->f_width + v_pos_v.x;
            v_texel_pos.y = p_mapping_info->v_u.y * (float)x /
                (float)n_w * p_mapping_info->f_width + v_pos_v.y;
            v_texel_pos.z = p_mapping_info->v_u.z * (float)x /
                (float)n_w * p_mapping_info->f_width + v_pos_v.z;
            // calculate lightmap texel position in worldspace

            switch(p_mapping_info->n_plane) {
            case Plane_xy:
                v_texel_pos.z = -(v_texel_pos.x * p_group->p_face[0]->normal.a +
                                  v_texel_pos.y * p_group->p_face[0]->normal.b +
                                  p_group->p_face[0]->normal.d) /
                                  p_group->p_face[0]->normal.c;
                break;
            case Plane_xz:
                v_texel_pos.y = -(v_texel_pos.x * p_group->p_face[0]->normal.a +
                                  v_texel_pos.z * p_group->p_face[0]->normal.c +
                                  p_group->p_face[0]->normal.d) /
                                  p_group->p_face[0]->normal.b;
                break;
            case Plane_yz:
                v_texel_pos.x = -(v_texel_pos.y * p_group->p_face[0]->normal.b +
                                  v_texel_pos.z * p_group->p_face[0]->normal.c +
                                  p_group->p_face[0]->normal.d) /
                                  p_group->p_face[0]->normal.a;
            }
            // project lightmap texel position from rectangle
            // to the plane faces are lying in ...

            r = (float)((n_ambient >> 16) & 0xff) / 256;
            g = (float)((n_ambient >> 8) & 0xff) / 256;
            b = (float)(n_ambient & 0xff) / 256;
            // we'll start with ambient light ...

            for(j = 0; j < p_world->n_light_num; j ++) {
                v_ray.x = p_world->light[j].v_position.x - v_texel_pos.x;
                v_ray.y = p_world->light[j].v_position.y - v_texel_pos.y;
                v_ray.z = p_world->light[j].v_position.z - v_texel_pos.z;
                a = - p_group->p_face[0]->normal.a * v_ray.x +
                    - p_group->p_face[0]->normal.b * v_ray.y +
                    - p_group->p_face[0]->normal.c * v_ray.z;
                a /= _Len(v_ray);
                // calculate Lambert's equation

                if(a <= 0)
                    continue;
                // light is behind the surface ...

                if(/*!p_world->light[j].b_cast_shadows ||*/
                   Cast_Ray(p_world->light[j].v_position, v_texel_pos, p_world)) {
                    a = (a > 1)? p_world->light[j].f_multiplier :
                        p_world->light[j].f_multiplier * a;
                    // multiply by light multiplier (value in 3ds)

                    r += p_world->light[j].v_color.x * a;
                    g += p_world->light[j].v_color.y * a;
                    b += p_world->light[j].v_color.z * a;
                    // calculate color components
                }
                // in case light is visible and either
                // it doesn't cast shadows or texel isn't in shadow
            }
            // calculate illumination ...

            r = (r > 1)? 255 : r * 255;
            g = (g > 1)? 255 : g * 255;
            b = (b > 1)? 255 : b * 255;
            // get rgb from <0, 1> float range

            p_buffer[(x + n_x) + (y + n_y) * n_p_w] =
                ((int)r << 16) + ((int)g << 8) + (int)b;
            // pack RGB to single int and store into texture
        }
    }
}

    Raytrace_Tile() gets the group, mapping info, geometry, raster position (and raster width, so we can compute indices), the raster buffer itself and the ambient light color. At the beginning of the loop, the position of the texel in worldspace is computed. But that is its position on the imaginary texture rectangle, so we immediately project it onto the plane the faces are lying on. All that remains is the code computing the illumination of each texel. I didn't include the raytracing code here, you can find it in the example ... Anyway, we're done with computing lightmaps and now we're going to see how to display them!
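    The raytracing hides in Cast_Ray(), which returns nonzero when nothing lies between the light and the texel. It isn't listed here, but its heart is a segment-triangle test; here is a minimal sketch using the Möller-Trumbore intersection test (structure and function names are mine, not the engine's, and the real function of course walks the whole face list instead of testing a single triangle):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b) { return Vec3{a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 Cross(Vec3 a, Vec3 b) {
    return Vec3{a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Möller-Trumbore: does the segment org -> org + dir hit triangle a, b, c?
// (t is clipped to (epsilon, 1 - epsilon) so the endpoints themselves don't count)
bool Segment_Hits_Triangle(Vec3 org, Vec3 dir, Vec3 a, Vec3 b, Vec3 c)
{
    const float epsilon = 1e-4f;
    Vec3 e1 = Sub(b, a), e2 = Sub(c, a);
    Vec3 p = Cross(dir, e2);
    float det = Dot(e1, p);
    if(fabsf(det) < epsilon)
        return false; // segment parallel to triangle plane
    float inv_det = 1.0f / det;
    Vec3 s = Sub(org, a);
    float u = Dot(s, p) * inv_det;
    if(u < 0 || u > 1)
        return false;
    Vec3 q = Cross(s, e1);
    float v = Dot(dir, q) * inv_det;
    if(v < 0 || u + v > 1)
        return false;
    float t = Dot(e2, q) * inv_det;
    return t > epsilon && t < 1 - epsilon; // hit strictly between light and texel
}

// sketch of Cast_Ray: returns true when the texel can see the light
// (just one blocker triangle here, instead of the world's face list)
bool Cast_Ray_Sketch(Vec3 v_light, Vec3 v_texel, Vec3 tri_a, Vec3 tri_b, Vec3 tri_c)
{
    Vec3 dir = Sub(v_texel, v_light);
    return !Segment_Hits_Triangle(v_light, dir, tri_a, tri_b, tri_c);
}
```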

lightmap texture page
baked texture

    Before we get to that, i'd like to mention ... yes, i could mention the one-pixel frame again, because now it's time to fill it, but what i wanted to mention is blur. Since we're dealing with point lights, the shadows can be quite hard, so the simplest way to soften them is to apply some blur to the lightmaps (but to each one separately! not to the whole texture page). With blur it looks better, but you mustn't blur too much, otherwise there will be noticeable seams between edges of neighbouring surfaces.
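    A 3x3 box filter clamped to the lightmap's own rectangle is enough for that; a sketch with a single intensity channel (the function name and layout are mine, the tool's actual blur may differ - the important part is that clamping to the rectangle keeps the blur from bleeding into a neighbouring lightmap on the page):

```cpp
#include <vector>

// 3x3 box blur of a single lightmap rectangle inside the texture page;
// clamped at the rectangle edges so nothing leaks into neighbouring lightmaps
// (one intensity channel only, to keep the sketch short)
void Blur_Lightmap_Rect(std::vector<int> &page, int page_w,
    int n_x, int n_y, int n_w, int n_h)
{
    std::vector<int> tmp(n_w * n_h);
    for(int y = 0; y < n_h; ++ y) {
        for(int x = 0; x < n_w; ++ x) {
            int sum = 0, num = 0;
            for(int dy = -1; dy <= 1; ++ dy) {
                for(int dx = -1; dx <= 1; ++ dx) {
                    int sx = x + dx, sy = y + dy;
                    if(sx < 0 || sy < 0 || sx >= n_w || sy >= n_h)
                        continue; // stay inside this lightmap's rectangle
                    sum += page[(n_x + sx) + (n_y + sy) * page_w];
                    ++ num;
                }
            }
            tmp[x + y * n_w] = sum / num;
        }
    }
    for(int y = 0; y < n_h; ++ y)
        for(int x = 0; x < n_w; ++ x)
            page[(n_x + x) + (n_y + y) * page_w] = tmp[x + y * n_w];
}
```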


 lightmap tool

    The tool is designed to be run from the command line, where the first parameter is compulsory and contains the filename of the .3ds world. The other (optional) parameters are "-axxxxxx", where "xxxxxx" is a hexadecimal ambient color, then "-qx", where "x" is a float multiplier of texture size against world size (.5 is quite reasonable, but it depends on your world scale), next "-rx", where "x" is the texture page resolution (default 512) and finally "-bx", where "x" is the blur radius (0 = no blur) ... The program works for a moment, plotting a dot for each processed group, and then stores the lightmaps as bitmaps plus the geometry as a .mesh file (don't try to find a format description, i've just made it up), because the .3ds itself doesn't contain any information useful here (just cameras, animations, etc ...)
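    Parsing those options could look something like this (a hypothetical sketch - the struct, names and defaults are mine, only the flag letters and the described defaults come from the text above):

```cpp
#include <cstdlib>

// sketch of parsing the tool's arguments as described above:
//   tool world.3ds -a808080 -q.5 -r512 -b1
// (struct and variable names are mine; the real tool may differ)
struct ToolArgs {
    const char *p_s_world;   // .3ds filename (compulsory, first argument)
    unsigned n_ambient;      // -a hex ambient color
    float f_quality;         // -q texture size multiplier
    int n_page_res;          // -r texture page resolution
    int n_blur;              // -b blur radius (0 = none)
};

bool Parse_Args(int argc, const char **argv, ToolArgs &args)
{
    if(argc < 2)
        return false; // the world filename is compulsory
    args = ToolArgs{argv[1], 0x000000, .5f, 512, 0}; // defaults
    for(int i = 2; i < argc; ++ i) {
        if(argv[i][0] != '-')
            return false;
        switch(argv[i][1]) {
        case 'a': args.n_ambient = (unsigned)strtoul(argv[i] + 2, 0, 16); break;
        case 'q': args.f_quality = (float)atof(argv[i] + 2); break;
        case 'r': args.n_page_res = atoi(argv[i] + 2); break;
        case 'b': args.n_blur = atoi(argv[i] + 2); break;
        default: return false; // unknown option
        }
    }
    return true;
}
```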
    Now we'll proceed to rendering with lightmaps. Normally there's nothing complicated about it, but i've decided not to create 256-color lightmaps (so the optimization i introduced for textures is out), nor to use bilinear interpolation, which is quite slow on that P133 back at home. It's time to learn something about dithering ...


    Dithering is a method where smooth color transitions are replaced by scattered patterns of the colors we already have. What does that mean? We're not going to compute a slow color blend like the bilinear filter does. Instead we take advantage of our eyes - a pattern of two different colors may appear as a solid area of a third color. That is exactly what we want to achieve ... and the result is very fast. Heavily magnified, it looks like this:

dithered lightmap
dithered lightmap

It's a screenshot taken near a wall; you can see several kinds of patterns used for the transitions between texels. How to do it? We need a table for perturbing the texture coordinates. I've managed to get a 4x4 dithering table from some old 256-color stuff. What does it contain? Fractions in the range <0, 1>, systematically spread across the table, which we add to the lightmap coordinates. And where do we get the values to index the table? It won't be texture coordinate fractions this time - it will be screen coordinates. Yes! We take the x and y of the pixel, compute the index as if we were texturing the screen with the table, and perturb the u-v coordinates with it. I've been talking about a single table, but obviously we need two tables, one for u and one for v. The second table is constructed simply by reversing the order of elements of the first one. I think i've told you enough to understand this piece of source:

int dither_u[] = {
    (int)( 0 / 15.0 * 0x10000), (int)(14 / 15.0 * 0x10000),
    (int)( 3 / 15.0 * 0x10000), (int)(13 / 15.0 * 0x10000),
    (int)(11 / 15.0 * 0x10000), (int)( 5 / 15.0 * 0x10000),
    (int)( 8 / 15.0 * 0x10000), (int)( 6 / 15.0 * 0x10000),
    (int)(12 / 15.0 * 0x10000), (int)( 2 / 15.0 * 0x10000),
    (int)(15 / 15.0 * 0x10000), (int)( 1 / 15.0 * 0x10000),
    (int)( 7 / 15.0 * 0x10000), (int)( 9 / 15.0 * 0x10000),
    (int)( 4 / 15.0 * 0x10000), (int)(10 / 15.0 * 0x10000)
};
// table with coefficients for dithering (16.16 fixed point values)

int dither_v[] = {
    (int)(10 / 15.0 * 0x10000), (int)( 4 / 15.0 * 0x10000),
    (int)( 9 / 15.0 * 0x10000), (int)( 7 / 15.0 * 0x10000),
    (int)( 1 / 15.0 * 0x10000), (int)(15 / 15.0 * 0x10000),
    (int)( 2 / 15.0 * 0x10000), (int)(12 / 15.0 * 0x10000),
    (int)( 6 / 15.0 * 0x10000), (int)( 8 / 15.0 * 0x10000),
    (int)( 5 / 15.0 * 0x10000), (int)(11 / 15.0 * 0x10000),
    (int)(13 / 15.0 * 0x10000), (int)( 3 / 15.0 * 0x10000),
    (int)(14 / 15.0 * 0x10000), (int)( 0 / 15.0 * 0x10000)
};
// the same table with the order of elements reversed
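    The second table really is just the first one read backwards, so instead of typing both you could generate them; a small sketch (the 4x4 pattern is copied from the first table, the function name is mine):

```cpp
// build the two 4x4 dithering tables at runtime instead of typing them;
// the u-pattern below is the layout from the dither_u table and
// dither_v is simply dither_u with the order of elements reversed
static const int n_pattern[16] = {
    0, 14, 3, 13, 11, 5, 8, 6, 12, 2, 15, 1, 7, 9, 4, 10
};

void Build_Dither_Tables(int dither_u[16], int dither_v[16])
{
    for(int i = 0; i < 16; ++ i) {
        dither_u[i] = (int)(n_pattern[i] / 15.0 * 0x10000);
        dither_v[i] = (int)(n_pattern[15 - i] / 15.0 * 0x10000);
        // 16.16 fixed point fractions in <0, 1>
    }
}
```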

static void __fastcall Interpolate_Segment_Bilerp(int u1, int du, int v1, int dv,
    int lu1, int ldu, int lv1, int ldv,
    unsigned __int32 *video, unsigned __int32 *v2, int x, int y)
{
    register int frac, tex;

    y = (y & 3) << 2;
    do {
        tex = ((u1 >> 16) & A_1) + ((v1 >> B_1) & A_2);
        // texel address

        frac = ((u1 >> 10) & 0x3f) + ((v1 >> 4) & 0xfc0);
        // fractional part of texel address (6 bits V + 6 bits U)

        *video = _palette[(_texture[tex] << 6) + multab[0][frac]] +
                 _palette[(_texture[(tex + 1) & A_3] << 6) + multab[1][frac]] +
                 _palette[(_texture[(tex + W_0) & A_3] << 6) + multab[2][frac]] +
                 _palette[(_texture[(tex + W_1) & A_3] << 6) + multab[3][frac]];
        // bilinear interpolation (texture)

        frac = y | (x & 3);
        // compute index to dithering table
        tex = _lightmap[(((lu1 + dither_u[frac]) >> 16) & LA_1) +
            (((lv1 + dither_v[frac]) >> LB_1) & LA_2)];
        // perturb each coordinate using values from the table

        *video = ((((*video & 0xff0000) * ((tex >> 16) & 0xff)) & 0xff000000) |
                  (((*video & 0x00ff00) * ((tex >> 8) & 0xff)) & 0x00ff0000) |
                  (((*video & 0x0000ff) * (tex & 0xff)) & 0x0000ff00)) >> 8;
        // modulate with lightmap (per-channel multiply, >> 8 renormalizes)

        video ++;
        x ++;
        // next pixel

        u1 += du;
        v1 += dv;
        lu1 += ldu;
        lv1 += ldv;
        // coordinate interpolation
    } while(video < v2);
}

    At the beginning we have our tables. They hold fractions in the range <0, 1>, stored in 16.16 fixed point. The function itself is our classic texturing loop, but there's a new fragment that computes the dither table index from the screen position and then uses it when fetching the lightmap texel. Once it has the texel color, it modulates the texture color with it and moves on to the next pixel ...
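    Stripped of the packing tricks, the dithered fetch boils down to this scalar helper (names are mine; the engine masks with the power-of-two constants LA_1 / LA_2 where this sketch uses a modulo for clarity):

```cpp
// scalar sketch of the dithered lightmap fetch: instead of blending four
// texels (bilinear), nudge the 16.16 fixed-point u-v coordinate by a
// per-pixel amount from the tables and fetch a single texel
int Fetch_Dithered(const int *lightmap, int lm_w, int lm_h,
    int u, int v, int x, int y, const int *dither_u, const int *dither_v)
{
    int frac = ((y & 3) << 2) | (x & 3); // 4x4 screen-space pattern index
    int iu = ((u + dither_u[frac]) >> 16) % lm_w;
    int iv = ((v + dither_v[frac]) >> 16) % lm_h;
    return lightmap[iu + iv * lm_w];
}
```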
    The example can switch between three filter modes: texture and lightmap both dithered (F9), texture bilinear-filtered with the lightmap dithered (F10), and in the end i've even written a function where both the texture and the lightmap are bilinear-filtered (F11). But the difference isn't as big as i'd expected:

bilinear-filtered lightmap
bilinear filtered lightmaps

dithered lightmap
dithered lightmaps

dithered lightmap with quincunx
dithered lightmaps with quincunx filter - nearly no difference from bilinear filter

    What to say in conclusion? I hope i haven't made too many mistakes, but it's 4:22 AM, which excuses me (or so i hope :o)) So ... see you later (once i get enough sleep) Yes! And i almost forgot to put up the download of the next s-buffer engine, this time with lightmaps.

S-buffer lightmap 2

 S-buffer lightmap 2

well ... see you again ...

    -tHE SWINe-

