Because we are presenting at the symposium tomorrow, most of the day was spent working on our PowerPoint. I had to record screen-capture videos of the validations to demonstrate the model's accuracy, then add them to the slides. At first we had both a live video and a model video for the static validation, but only the model's video for the dynamic validation. I was running out of things to do and wanted to see the actual comparison, so after some hassle of rearranging the room, recruiting people, and working through a few programming glitches, we got video of the actual array with me walking in front of Killian to replicate the model animation. Now we can compare the static and dynamic validations side by side.
In some other downtime, I plugged the specs for the Raspberry Pi cameras into the model and pulled out stills for them so we can run them through the program. Then we will be able to compare the Pi Cams to the Point Greys (the cameras currently in use) without even touching the array. I'm pretty excited that my model will actually be put to use.
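As a rough illustration of the kind of spec-driven comparison this enables, here is a minimal sketch that computes horizontal field of view from sensor width and focal length using the pinhole-camera model. The Pi Camera numbers are approximate published specs, and the Point Grey entry is a hypothetical lens pairing, since the actual model, program, and camera configurations used here aren't described in detail.

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view (degrees) from the pinhole-camera model."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Approximate / assumed specs -- check the actual datasheets before relying on these.
cameras = {
    "Pi Camera v2": {"sensor_width_mm": 3.68, "focal_length_mm": 3.04},
    "Point Grey":   {"sensor_width_mm": 4.80, "focal_length_mm": 4.00},  # hypothetical lens
}

for name, spec in cameras.items():
    print(f"{name}: {horizontal_fov_deg(**spec):.1f} deg horizontal FOV")
```

Comparing numbers like these on paper, or rendering synthetic stills from them as described above, is what lets you evaluate a camera swap without ever touching the physical array.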