Archive for the 'PhD lab book' Category

Nov 11 2008

improved background subtraction and debugged voxel carving

Published by Martijn under PhD lab book

  • Improved background subtraction by enabling the upper shadow threshold (m_nBeta). This enables the inverse of shadow removal: highlight removal. It greatly improves backsub results by removing large light spots caused by the movement of the sun and clouds. Currently set to 3, to detect spots coloured similarly to the background but up to 3x as bright.
  • Fixed problems with VoxelCarving. It turns out to be important to undistort the images before mapping them to the voxel grid; this fixes the projection. The voxel carving result now looks a lot better.
  • Michael mentioned the difference between voxel carving that maps only the voxel’s centroid to the foreground image to determine whether it should be classified as foreground, and carving that maps all 8 corners of the voxel. The second turns out to be a lot slower (because of 8 matches instead of 1), but gives more accurate results.
  • Next thing to try is changing the size of the voxel grid. When are there too few voxels to make a correct classification?
  • Furthermore, improve the backsub result to create better foreground-segmented people.
  • Also: think of ways to get rid of “shadow people”. People sometimes get duplicated because voxel carving cannot distinguish between two possible spatial positions.

Comments Off

Oct 31 2008

voxelcarving


Tried implementing the voxel carving method using the virtuallib functions. The system compiles just fine; interpreting the output of the voxel carving function is a bit harder. Important things:

  • What should be the order of the camera images provided? Currently it’s 7-14-19. If this is the wrong order, voxel carving fails because of incorrect transformations. The correct order is 14-19-7, which corresponds to the bog files numbered 0-1-2. To transform between this and the order used previously for loading the video streams (19-7-14), use (i+2)%3, where i is the camera’s index in the carving order (i: 14->0, 19->1, 7->2).
  • How to show the voxel space? Currently going over the ground plane of the space, adding up the number of voxels along the height that are seen as foreground by all 3 cameras. This doesn’t give much insight.
  • How does the imagedraw3D function work? It seems it can be used to map voxels to images and plot them. The current input seems to be faulty, however.


Oct 30 2008

redeploying vhl and boost


While integrating the VirtualHumanLib (vhl) into cassandra, the final problem was a runtime error stating something like: “This application has failed to start because the application configuration is incorrect”. This error is possibly caused by different versions of Visual Studio compiling different libraries, in this case Michael’s and mine. It probably has something to do with the native CRT library, of which different versions are used by different libraries. This results in a “Side-by-Side” (SxS) conflict. It turns out that the cassandra project, when including vhl, adds a second CRT lib to the cassandra manifest, with a different version from the one already there. This is the most likely cause of the error.

Now trying to recompile boost and vhl to see if this puts the CRT for all libs to the same version and eliminates the problem…

Recompilation of boost solved the problem. All libraries now link to the same version of the CRT dll. Besides recompiling the libraries, another solution probably is to redistribute the CRT dll in the correct version together with the program.


Oct 30 2008

library entries


  • The prototype project turned out to use the include files of the other projects from the devel/libs folder. This prevented changes to these include files in the devel/src folder from taking effect. The references were changed to point to the src dir, so all changes to any of the libraries are now picked up as they should be.
  • Furthermore, the VirtualHumanLib.lib library should be added to the project linker settings, so all functions can be located. A debug and release version can be found in D:\Programming\Output\libraries\.
  • Michael mentioned an interesting boost macro called BOOST_FOREACH, which allows iterating over a list of any type just by giving the list and an iteration variable. This saves a lot of coding when walking through all elements of a vector, for example.


Oct 28 2008

multiple videodetail views


Enabled the system to open multiple videodetail views at the same time. To make them independent, each view is instantiated into a vector, replacing the single videodetailview variable in the mdimainframe. Each view keeps an index linking it to its position in the vector, which is necessary to keep the vector clean when windows open and close.


Oct 27 2008

reading multiple cameras and multi-threading


  • Instead of passing a pointer to a videostream from the main frame to the video engine and implicitly casting onto the abstract videostream class, three video streams are created in the main frame, explicitly cast to the respective abstract type, and put into a vector. The vector is copied into the videoengine inside the videoengine init method.
  • All references to the video stream are replaced by loops, reading all separate video streams and saving the current frame to the pImageRGBThis_[3] variable. All three streams should be read in each iteration, because they will be processed together afterwards.
  • Background subtraction of the video streams is now also done for all streams using a loop.
  • To prevent massive slowdown because of multiple processing, the OpenMP API is activated (brief tutorial). All loops concerning parallel video processing are preceded by the “#pragma omp parallel for” directive to enable parallelisation by OpenMP.


Oct 23 2008

First changes


  • Made a change to declare the nFrame counter in the videoEngine globally, which fixes a bug when temporarily stopping the video engine and restarting it afterwards.
  • Added VideoDetailView.(cpp/h) files as well as VideoDetailDoc.(cpp/h) files to implement an extra video window able to show alternative views like backsub. Added methods to create the new view, a button in the toolbar, and methods to request data from the videoEngine.
  • VideoDetailView has a context menu which enables switching the image displayed. Current possibilities: source image, backsub image and tracked image. Used a switch statement together with menu ID names to do the switching.


Oct 21 2008

scene annotation


To annotate scene data, use ExtAnnotationTool. This tool uses pre-created .mat data files containing data placeholders for each scene. Tools and data files can be found in “D:\Devel\Matlab\SceneAnnotations”. Create a new empty data file using matlab, start ExtAnnotationTool, open the .mat file and select the scene to annotate. Afterwards, the data is saved to the data file used.


Oct 21 2008

homogeneous image transformation


The homogeneous transformation matrix for projecting one camera view onto another can easily be found using Matlab. This link explains how to open the Control Point Selection tool (cpselect(input_image, base_image, input_points, base_points);), compute the transformation matrix (t = cp2tform(input_points, base_points, ‘projective’);) and finally transform the image. Computing the transformation matrix only needs to be done once for a single camera set-up, so it can be pre-computed and the matrix saved for later use.


Oct 17 2008

inference mode and fusion sensors


How does the inference mode for the fusion system work? It seems INFERENCE_MODE_FUSION is set in casSensorDBN, but the matlab cpt for the fusion part does not exist. The model_parameters file should have a table cpt_fusion_ag, retrieved by casFusionParameterFile::GetFusionSensorModel(), but this table is missing. What values are used instead? The system currently seems to be running INFERENCE_MODE_VIDEO, which avoids the problem. The mode is set in CasandraApp.cpp => InitInstance.
