I did some more alternate-configuring today and made the semicircular array that Joe asked for, but it will need the program to be altered a bit, so we can’t test it yet. I also tried putting the Pi Cam specs into the setup that could see the whole room, but then it couldn’t see the whole room anymore. Then I rendered out stills for the configurations I hadn’t gotten to yet, and other than that, I pretty much just worked on the presentation. At the end of the day, I finally got to try out some of the configurations in the program. The array with twice as many cameras had slightly higher image quality, but the others didn’t show any significant difference. Hopefully tomorrow we’ll be able to test the rest of the configurations.
My work has been interesting today because it’s been pretty much entirely self-guided, even more so than usual. My job is now to make alternate configurations for the array, but with the exception of one, I wasn’t given any specific arrangements. On Friday, while I was killing time until College and Careers, I brainstormed some ideas for configurations, and that’s what I based most of today’s work on. So far, the alternate configurations are:
-The current array with the distance between each camera cut in half. This is the only one Joe has requested.
-The Point Greys replaced with the Pi Cams, which is an obvious one because it’s what Matt and Billy are actually doing.
-The current array with twice the number of cameras in the same space.
-The current array with a second row of six a little higher above it.
-An array spread as far as possible while still being calibratable, which came out to eleven cameras across a little over five meters. I made two other variations of this: one with a second row on top (22 cameras), and the other with a top row shifted over so that a camera fills in each gap (23 cameras). That’s kinda confusing, so it looks like . ‘ . ‘ . if punctuation were cameras.
-The last one is a little less realistic. The cameras are wedged between the back wall and the ceiling, one in each corner and one in the center. Then I rotated them downward so there wasn’t so much ceiling in the picture and adjusted the camera specs until I could see nearly the whole room. I think that would be a great array for a security application, but I don’t think a camera with those specs actually exists.
Tomorrow, I plan on trying the model that can see the whole room with real camera specs, working on a semi-circular model, and running these new configurations through the program to see how they compare.
It’s been a short day. As soon as the meeting ended this morning, we headed over to the Conference Center for our presentation. We watched a few extremely engaging presentations, and then gave ours. I thought we did well enough, but I wasn’t too psyched about my performance. I said “accurately” far too many times. We ducked out of the symposium after we were done presenting and came back to the building for the cookout, which ended a bit quicker than usual due to a little bit of rain. Luckily, I hadn’t brought my bag in yet, so it sat on the bench and frolicked as water streamed down its dry and cracking face and it ran its hands through its refreshingly clean hair. Feeling rejuvenated, it signaled to Kellyn, who pointed out that I had left it there through the brief downpour. Magically, when I unzipped it, its contents remained perfectly dry; only the back had gotten wet. I was relieved. And that brings us to now. I’m back in the lab typing this enthralling blog post as I wait to leave for the College and Career weekend on campus, which is at 1:30, making today a half-day. The end.
Because we are presenting at the symposium tomorrow, most of the day was spent working on our PowerPoint. I had to get screen-capture videos of the validations to prove the accuracy of the model, and then throw them into the PowerPoint. At first, we had both a live and a model video for the static validation, but only the model’s video for the dynamic validation. I was running out of things to do and I wanted to see the actual comparison, so after a bit of hassle of moving the room around, getting people to do things, and then a few programming glitches, we got the video of the actual array with me walking in front of Killian to replicate the model animation. Now we can compare both the static and dynamic validations side by side.
In some other down time, I plugged the specs for the Raspberry Pi Cameras into the model and pulled out the stills for them so we can run them through the program. Then we will be able to compare the Pi Cams to the Point Greys (the cameras currently in use) without even touching the array. I’m pretty excited that my model will actually be put to use.
Today, I spent some time testing out another renderer. By default, Maya renders with “Maya Software”, which has far fewer capabilities than “Mental Ray”, another renderer. One of the benefits of Mental Ray is that objects can be made into light sources, which is the only way I’ve gotten my lights to be lights, as opposed to the ceiling emitting a glow from where lights would be. Mental Ray also makes the picture look much prettier. Not that Maya’s renderings don’t look nice, but Mental Ray’s are a lot nicer. But with prettier pictures comes longer render time. To test them, I rendered out ten frames in each and timed them. Mental Ray took 5 minutes and 22 seconds, while Maya took 23 seconds. If I were to render every frame of the animation in Mental Ray, that would come out to 1 hour and 40 minutes; Maya takes 7 minutes, so, as heartbreaking as it was, I had to stick with it. After my tests, I got the dynamic validation all rendered out and sent to Matt to grayscale, who sent it to Killian and Elizabeth to make work. We made a video of it for our presentation, and it’s pretty exciting to see the model work.
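For the curious, the comparison is just a linear scale-up from the ten-frame test. A quick sanity check (purely illustrative Python; the post doesn’t state the animation’s frame count, but both quoted totals imply roughly the same number):

```python
# Back-of-the-envelope check on the render-time extrapolation.
# Measured: 10 frames took 5 min 22 s in Mental Ray, 23 s in Maya Software.
mental_ray_per_frame = (5 * 60 + 22) / 10   # 32.2 s/frame
maya_per_frame = 23 / 10                    # 2.3 s/frame

# Quoted full-animation totals: ~1 h 40 min (Mental Ray) and ~7 min (Maya).
# Dividing each total by its per-frame cost recovers the frame count:
frames_mr = (1 * 3600 + 40 * 60) / mental_ray_per_frame   # ~186 frames
frames_maya = (7 * 60) / maya_per_frame                   # ~183 frames

print(round(frames_mr), round(frames_maya))  # 186 183
```

The two estimates agree to within a couple of frames, so the quoted totals are consistent with each other.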
I had an unproductive morning because the grayscaling duties had been taken over by Matt (not something I was disappointed about), and Joe came in later, so I didn’t have a defined task. After lunch, I felt like I needed to get to work, so I set some goals and got to it. The ceiling of my model hadn’t been perfect: the grid for the tiles had been incredibly difficult to make, I hadn’t put in the rails that hold up the curtains in the lab, and the lights hadn’t been visible objects in the model, just sources of light mysteriously coming through the ceiling. I put in the curtain rails without a problem, found a new way to render the lights that looks all fancy-like, and, after a few tries, got the ceiling tiles in. Then I put the outline for our final presentation into a PowerPoint and filled in what I could. When Joe came in, we went over the presentation for the Undergrad Symposium and my animation (now called the dynamic validation). I need to change a few things in the animation and throw some pictures and/or videos into the presentation, so I have my work cut out for me tomorrow.
I got in this morning and figured out the animation right away. The problem was that I was moving the figure himself instead of his “reference,” so my ridiculously monotonous typing of numbers was all in vain, which I was pretty psyched about. Then I moved on to rendering the animation. Maya apparently isn’t too fond of rendering videos, preferring single frames instead, so I wasted a decent amount of time before discovering that videos are actually impossible to render, at least in my version of the program. I switched my focus to getting all of the frames of the animation into stills for each camera; the array’s program runs on stills anyway, so it worked out. Once I got all 1,110 frames neatly into folders, I sent them over to Matt. He plugged them into the program and, of course, they didn’t work. I realized that I hadn’t converted the images to grayscale, so we had to search for a way to convert such a huge number of pictures, since Maya won’t let me render in grayscale either. We still haven’t figured out how to do it. After lunch, we found some spray paint and tape next to our static validation test balls. Joe said he wanted them to contrast more for a more dramatic effect for a demonstration, so my job was to paint some patterns. Taping polka dots (which were already there, but didn’t stand out enough) and stripes took forever, and then I had to spray paint. I was pretty rusty on my spray-painting skills and started off with some nice pools, but I got the hang of it pretty quickly. There are now two big rubber balls hanging out in the fume hood room.
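For reference, the grayscaling we were hunting for is conceptually simple: a weighted sum of the three color channels, applied to every pixel of every still. A hypothetical sketch (we hadn’t actually settled on a tool yet, so the folder layout and the use of an image library like Pillow are assumptions):

```python
# Hypothetical batch-grayscale sketch. The per-pixel math is the standard
# Rec. 601 luma weighting; an image library applies it to a whole file
# (e.g. Pillow's Image.open(p).convert("L")).
from pathlib import Path

def to_gray(r: int, g: int, b: int) -> int:
    """Approximate Rec. 601 luma for one RGB pixel."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def stills_to_convert(root: str) -> list[Path]:
    """Collect every rendered .png still under root (one folder per camera)."""
    return sorted(Path(root).rglob("*.png"))

# With an image library installed, the batch loop would be roughly:
#   for p in stills_to_convert("renders"):
#       Image.open(p).convert("L").save(p)
print(to_gray(255, 255, 255), to_gray(0, 0, 0))  # 255 0
```

The point is that the conversion itself is cheap; the problem that day was purely finding a tool that would walk 1,110 files without doing them one at a time by hand.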