Archive for the 'PhD lab book' Category

Oct 14 2008

hard coding

Published by Martijn under PhD lab book

  • The number of people the system can track seems to be hard-coded to a maximum of 4, judging by the code in casVideoEngine::run(): kinetic energy is only saved for the first four people, and the same four-person limit recurs in the kinetic energy data in many other places… (see the sketch after this list).
  • Another open point: how is Hugin integrated into the system? The manual only talks about editing system-specific source files to change the DBN, while there are Hugin network files in the config dir. How are the source files, the Matlab files and Hugin related?
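
A minimal sketch of the hard-coding pattern meant above; every name here (MAX_PERSONS, saveKineticEnergy, kineticEnergy) is invented for illustration and not taken from the actual casVideoEngine source:

    #include <array>

    // Hypothetical sketch of the hard-coded person limit; all names here
    // are invented, not taken from the actual casVideoEngine source.
    const int MAX_PERSONS = 4;                      // the fixed limit
    std::array<double, MAX_PERSONS> kineticEnergy;  // storage sized to it

    void saveKineticEnergy(int person, double value)
    {
        if (person < 0 || person >= MAX_PERSONS)
            return;  // a fifth tracked person would silently be dropped
        kineticEnergy[person] = value;
    }

Because the limit is repeated in several places rather than defined once, tracking a fifth person would mean touching all of them.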

Oct 13 2008

annotations

Published by Martijn under PhD lab book

Having found the aggression annotation tool, the remaining question for training the DBN is where to find the scene annotation tool. There should be a tool that allows annotating a ground truth on the appearance of trains and on audio events, but it cannot be found. Because adding another detector wouldn’t necessarily change these files, we’ll just keep the old ones for the moment.

Oct 10 2008

backsub

Published by Martijn under PhD lab book

Background subtraction seems to be executed only to detect when kernels should be killed, via the checkExitConditions function. Interestingly enough, this function has been disabled with a “return false;” statement at its very beginning, which makes both the function and the background subtraction obsolete (see the sketch below).
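
A minimal sketch of what this looks like; only the early “return false;” is from the actual code, the rest is illustrative:

    // checkExitConditions as it appears to behave: the early return
    // short-circuits everything, so no kernel is ever killed and the
    // background subtraction feeding this check is effectively dead code.
    bool checkExitConditions()
    {
        return false;  // disables the whole function

        // ... the original exit-condition logic below is never reached ...
    }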

Oct 08 2008

smoothed em-shift

Published by Martijn under PhD lab book

EM-shift is possibly more stable because of a smoothing factor (Kalman-like?) based on the previous motion vector: deltaPos_ stores the previous movement, which is scaled and added to the kernel position when em-shift is executed. A sketch of this idea follows below.
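
A minimal sketch of that smoothing step; apart from deltaPos_, all names and the 0.5 scale factor are assumptions for illustration:

    struct Vec2 { double x = 0.0, y = 0.0; };

    struct EMShiftKernel
    {
        Vec2 pos;        // kernel centre
        Vec2 deltaPos_;  // previous frame-to-frame movement (name from the code)

        // Before running em-shift, nudge the kernel along its previous
        // motion so the search starts closer to where the target went.
        void predict(double scale = 0.5)
        {
            pos.x += scale * deltaPos_.x;
            pos.y += scale * deltaPos_.y;
        }

        // After em-shift converges, remember the realised movement so the
        // next frame can be smoothed with it.
        void update(const Vec2& newPos)
        {
            deltaPos_ = Vec2{ newPos.x - pos.x, newPos.y - pos.y };
            pos = newPos;
        }
    };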

Oct 07 2008

em-shift kernels and feature tracking

Published by Martijn under PhD lab book

  • To enable the drawing of em-shift kernels, disable “#define DONT_DRAW_ELLIPSES” (line 143) in emstrack.cpp in the casVideoRoutines project. This shows ellipses for the em-shift kernels.
  • The train detector just searches for KLT features inside the train mask and checks both the amount of movement of these features and the relative number of moving features, to determine the chance of a train being at that location (see the sketch after this list).
  • Only KLT features located in the vicinity of em-shift kernels are used: “EMSTrack::computeOpticalFlow” uses the em-shift kernel to decide on its ROI. Unfortunately, the kernels tend to drift away, so the wrong features are measured. Still, the kernel seems much more stable than expected from pure em-shift; should look into the method used. Possibly boosted by the KLT features?
  • The data from the .mat files in the video dir seems to be used only for kernel initialisation purposes, since em-shift is executed normally. The rest of the data is just logging data.
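
A minimal sketch of the train-detector logic described in the second bullet; all names, thresholds and the scoring formula are invented for illustration:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Feature { double dx, dy; bool insideTrainMask; };

    // Estimate the chance of a train from KLT feature motion inside the
    // train mask: how much the features move, and what fraction of them
    // is moving at all. Thresholds are assumptions, not from the code.
    double trainLikelihood(const std::vector<Feature>& features,
                           double motionThreshold = 1.0)
    {
        int inMask = 0, moving = 0;
        double totalMotion = 0.0;
        for (const Feature& f : features)
        {
            if (!f.insideTrainMask) continue;
            ++inMask;
            const double motion = std::hypot(f.dx, f.dy);
            totalMotion += motion;
            if (motion > motionThreshold) ++moving;
        }
        if (inMask == 0) return 0.0;

        // combine the fraction of moving features with their average motion
        const double movingFraction = double(moving) / inMask;
        const double avgMotion = totalMotion / inMask;
        return movingFraction * std::min(1.0, avgMotion / 10.0);
    }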

Oct 06 2008

BlobTracking.cpp, how initialisation works

Published by Martijn under PhD lab book

The blob tracker needs a Matlab file for the initial person locations. The EM-shift tracker is created in the blob tracker with default parameters and a frame size read from the Matlab file. The EM-shift kernel is initialised on the initial person location and shape, after which it in turn initialises the optical flow tracker. Kernels are positioned similarly to the Matlab implementation. What remains unclear is how the initial position and shape of all persons are estimated; it seems to be some kind of pre-processing of the complete scene. A sketch of the initialisation chain follows below.
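
A hypothetical sketch of that initialisation chain; all type and function names are invented, not taken from BlobTracking.cpp:

    #include <vector>

    struct PersonInit { double x, y, width, height; };  // read from the .mat file

    struct OpticalFlowTracker
    {
        void init(const PersonInit& /*person*/)
        {
            // pick KLT features in the region around the kernel
        }
    };

    struct EMShiftKernel
    {
        OpticalFlowTracker flow;

        void init(const PersonInit& p)
        {
            // 1. place the kernel on the initial person location and shape
            // 2. the kernel then initialises its optical flow tracker
            flow.init(p);
        }
    };

    struct BlobTracker
    {
        std::vector<EMShiftKernel> kernels;

        // Initial locations come from a .mat file produced off-line; how
        // those were estimated in the first place is the open question.
        void init(const std::vector<PersonInit>& fromMatFile)
        {
            for (const PersonInit& p : fromMatFile)
            {
                EMShiftKernel k;
                k.init(p);
                kernels.push_back(k);
            }
        }
    };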

Oct 06 2008

Published by Martijn under PhD lab book

  • Found out where cassandra data saved to the “default location” ends up: it is appended to the 'scenarioxx_audio.txt' file. All gathered data, from the kinetic energy as well as the train detectors, is appended to that file. Furthermore, some original data (probably from the AVSS paper) can be found in “D:\Devel\Matlab\Data”; the .mat files contain data analysis.
  • An audio file for scenario 16-2 can be found in the March 5 events directory. It seems to contain the same audio analysis as used in the AVSS paper.
  • The difference between booting the sensors in online or offline mode is still quite unclear. All runs using 'online' mode result in the same analysis, as can be expected. Clearing the 'scenarioxx_video.txt' file and booting in offline kinetic energy mode doesn't seem to give the expected result of no motion feature data being produced…
  • The video data dir contains blob tracking information in .mat files for all scenes. These files seem to be required to run cassandra. The question is, how are they created… They seem to be created off-line using em-shift tracking and manual initialisation of people locations, which means the “people detector” is completely hard-coded and off-line.

Oct 02 2008

Published by Martijn under PhD lab book

  • Renamed the software account to cassandra. Watch out for the Visual Studio project .user files: new ones are created for user cassandra and might not work well, considering directory settings etc. Each “<projectname>.vcproj.<pc-name>.software.user” should be renamed to “<projectname>.vcproj.<pc-name>.cassandra.user” (a rename sketch follows after this list).
  • Scene 16-2 doesn’t seem to work because of a lack of processed audio files.  It seems “scenario16-2_audio.txt” should have been there, but it isn’t.
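
A minimal sketch of that rename as a batch operation; the root directory D:/Devel is taken from an earlier entry, everything else is an assumption:

    #include <filesystem>
    #include <iostream>
    #include <string>

    namespace fs = std::filesystem;

    // Rename every "*.software.user" file under the project tree to
    // "*.cassandra.user" so the per-user Visual Studio settings carry
    // over to the renamed account.
    int main()
    {
        const std::string oldSuffix = ".software.user";
        const std::string newSuffix = ".cassandra.user";

        for (const auto& entry : fs::recursive_directory_iterator("D:/Devel"))
        {
            const std::string name = entry.path().string();
            if (name.size() >= oldSuffix.size() &&
                name.compare(name.size() - oldSuffix.size(),
                             oldSuffix.size(), oldSuffix) == 0)
            {
                const fs::path target =
                    name.substr(0, name.size() - oldSuffix.size()) + newSuffix;
                fs::rename(entry.path(), target);
                std::cout << name << " -> " << target.string() << "\n";
            }
        }
    }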

Oct 01 2008

Published by Martijn under PhD lab book

  • EM-shift tests can be found under D:\data\Amstelstation_21Mar2006_clips\raw\EMStracking. Only one camera has been analysed, mat files included. See if multiple cameras can be combined?
  • SyncView needs a camera to be connected to be able to do correct bi-interpolation of the video frame colors; without a camera connected, the colors tend to be interpreted incorrectly. After connecting the camera, first run SyncGrab to enable correct color conversion.
  • D:\Programming_SVN\VirtualHumanLib contains a neat compilation of many image processing tools and utilities. Image wrappers, OpenCV wrappers, easy access methods. Nice library created by Michael.

Sep 22 2008

location of stuff

Published by Martijn under PhD lab book

  • Found out which folders to use for cassandra. The most important are the following:
    • d:\devel: Contains the latest version of the code
    • d:\projects: Contains sources of useful tools and utilities (e.g. SyncGrab and SyncView)
    • d:\data: Contains all video data of the recorded scenes
    • a DVD containing all audio data from RUG
  • Furthermore, cassandra documentation can be found at this location.
  • To find out: where are the originals of the processed audio data? They don’t seem to be on the DVD. There are files stored in D:\data\Amstelstation_21Mar2006_events\audio_sft_14Feb2008 which seem like the processed data, but are these the originals?
  • What’s with the locked zip files in D:\data\Amstelstation_21Mar2006_clips? Do they contain the frames for the compiled clips in the same directory? There doesn’t seem to be a folder containing the movie frames ordered by scene as used later on; only raw images in Run_xxxxx dirs can be found. Are there organised dirs? Scenarios seem to be compiled automatically using the video tools (see documentation): data from the audio files, together with scenario data, is used to match audio and video and to build scenario dirs with the correct video information.
