I will try again with high exposure and default FPS settings.
(2:39 PM) I ended up taking two more videos, and I'm settling on the second one. The first one didn't even capture the drop point within its field of view. Anyway, here is my video, recorded with high exposure and at a frame rate of 29 frames per second (which seems to be the maximum my phone can do). I converted it into a lower-quality video so that my internet's upload speed could handle it:
Unfortunately, this activity really calls for a good video camera, since the blur is still present even with the best settings I could manage on my phone. It seems I will just have to work with blurry blobs. Hopefully the ROI manages to capture these blurry blobs, since this was my main issue with the very first video I took, where the ROI colors failed to match the ball as it fell.
The biggest issue with all of my videos is that the camera's frame rate is low relative to the motion of the ball, so when the ball becomes a blur as it falls, it loses its chromaticity. That is why the ROI method doesn't work on it. I tried the grayscale method of image segmentation on the very first video I took for this activity, and the shadows near the floor had the same grayscale value as the blur of the falling ball. I hope that isn't the case for this seventh video, which I am currently using.
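For reference, here is a minimal sketch of that grayscale method in Python with OpenCV; the filenames and the tolerance k are placeholders rather than my actual values:

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
roi = cv2.imread("roi.png", cv2.IMREAD_GRAYSCALE)

mu, sigma = roi.mean(), roi.std()
k = 2.0  # tolerance in standard deviations (placeholder value)

# Keep pixels whose grayscale value falls within the ROI's range;
# shadows with similar values survive too, which is exactly the problem.
mask = (np.abs(frame.astype(np.float64) - mu) < k * sigma).astype(np.uint8) * 255
cv2.imwrite("mask.png", mask)
```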
(3:03 PM) As I expected, the ROI method doesn't work. Even with high exposure, the chromaticity of the blur does not match the ball's chromaticity when it is stationary. I will try patching snapshots of the blur together with the original ROI into a new, larger ROI, and see what happens.
(3:20 PM) I ended up just giving the ball, in frames where it is stationary, a really low probability of belonging to the patched-together ROI; a sketch of this idea is below. I will try grayscale image segmentation now.
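Here is a rough sketch of that idea in Python with NumPy and OpenCV. The patch filenames, the number of histogram bins, and the weight given to the stationary ball are illustrative, not my exact values:

```python
import cv2
import numpy as np

BINS = 32  # histogram resolution (placeholder)

def rg_chromaticity(img):
    # Normalized chromaticity coordinates; OpenCV loads images as BGR.
    img = img.astype(np.float64) + 1e-6
    s = img.sum(axis=2)
    return img[..., 2] / s, img[..., 1] / s

def hist2d(img, weight=1.0):
    r, g = rg_chromaticity(img)
    h, _, _ = np.histogram2d(r.ravel(), g.ravel(), bins=BINS,
                             range=[[0, 1], [0, 1]])
    return weight * h

# Patched-together ROI: crops of the blur from several frames, plus the
# stationary ball given a very low weight so it barely contributes.
hist = (hist2d(cv2.imread("blur_patch_1.png"))
        + hist2d(cv2.imread("blur_patch_2.png"))
        + hist2d(cv2.imread("ball_stationary.png"), weight=0.01))
hist /= hist.max()  # scale so the backprojected values lie in [0, 1]

def backproject(frame, hist):
    # Per-pixel membership probability looked up from the histogram.
    r, g = rg_chromaticity(frame)
    ri = np.clip((r * BINS).astype(int), 0, BINS - 1)
    gi = np.clip((g * BINS).astype(int), 0, BINS - 1)
    return hist[ri, gi]

seg_J = backproject(cv2.imread("frame_09.png"), hist)
mask = (seg_J > 0.1).astype(np.uint8) * 255
```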
(3:31 PM) Nope, that doesn't quite work either. The result looks very similar to what grayscale image segmentation gave me on the first video I recorded for this activity.
I think this means that, if I stick with this video, I will need to write customized code for each set of frames, processing each set with different parameters just to extract the blob in those frames.
I will do that now. Anyway, here are GIF animations of some of my failed attempts:
Fig. 1. GIF animations of failed segmentations. From left to right: first video using color segmentation, first video using grayscale segmentation, seventh video using color segmentation, seventh video using grayscale segmentation.
(6:47 PM) Using nonparametric color segmentation with these parameters (a sketch of the per-range dispatch follows the list):
Frames 1-3:   color, seg_J > 0,   ROI from frame 1
Frames 4-6:   color, seg_J > 0,   ROI from frame 5
Frames 7-11:  color, seg_J > 0.1, ROI from frame 9
Frames 12-16: color, seg_J > 0.3, ROI from frame 13
Frame 17:     color, seg_J > 0,   ROI from frame 17
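This is roughly how that dispatch could be coded, reusing backproject() from the earlier sketch; hist_from_frame() is a hypothetical helper that would crop the stated ROI out of the given frame and build its chromaticity histogram the same way:

```python
import cv2

# (frame numbers, seg_J threshold, frame supplying the ROI)
RANGES = [
    (range(1, 4),   0.0, 1),
    (range(4, 7),   0.0, 5),
    (range(7, 12),  0.1, 9),
    (range(12, 17), 0.3, 13),
    (range(17, 18), 0.0, 17),
]

for frames, thresh, roi_frame in RANGES:
    hist = hist_from_frame(roi_frame)  # hypothetical helper, see above
    for i in frames:
        frame = cv2.imread(f"frame_{i:02d}.png")
        seg_J = backproject(frame, hist)  # from the earlier sketch
        mask = (seg_J > thresh).astype(np.uint8) * 255
        cv2.imwrite(f"seg_{i:02d}.png", mask)
```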
I was able to get a segmented version of the first trial's frames. (I forgot to mention that the video contains three trials, and that the first trial spans 17 frames.) Note that seg_J takes values between 0 and 1. The segmented version looks like this:
Fig. 2. Segmented version (right) of the first trial (left) using nonparametric color segmentation.
As can be seen in Fig. 2, once the ball blurs it becomes difficult to isolate from the background, so it looks as if it disintegrates into powder while falling, until it hits the ground and looks whole again. For the vertical clumps of "powder" that represent the blurred ball in the middle frames, the middle of each clump should more or less mark the position of the ball at that instant, so taking the centroid should still work, as long as the fragments are joined into one blob and the artifacts are removed.
(7:35 PM) Here is the cleaned version, produced with morphological operations: closing operators to fill hollow blobs, and opening operators to clear away artifacts (a sketch of this step follows Fig. 3).
Fig. 3. Segmented frames in Fig. 2 cleaned using morphological operations.
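A minimal sketch of this cleanup step with OpenCV; the kernel shapes and sizes are placeholders, not my exact choices:

```python
import cv2

mask = cv2.imread("seg_09.png", cv2.IMREAD_GRAYSCALE)

close_k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
open_k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

# Closing fills holes inside the blob; opening removes small artifacts.
clean = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, close_k)
clean = cv2.morphologyEx(clean, cv2.MORPH_OPEN, open_k)
cv2.imwrite("clean_09.png", clean)
```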
Now to get the centroid locations. I will post this first, before I place the results here, just in case our internet runs out at a bad time later; I'll edit this entry shortly.
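A minimal sketch of how the centroids can be pulled from the cleaned masks using image moments (filenames are placeholders):

```python
import cv2

for i in range(1, 18):  # 17 frames in the first trial
    mask = cv2.imread(f"clean_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
    M = cv2.moments(mask, binaryImage=True)
    # Centroid of the single cleaned blob, in pixel coordinates.
    cx, cy = M["m10"] / M["m00"], M["m01"] / M["m00"]
    print(f"{cx:.6f} {cy:.6f}")
```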
x (px)       y (px)
62.040155 18.397668
62.684818 26.245875
63.222785 41.222785
64.428919 64.059538
64.272 95.650667
64.528274 132.66518
66.392709 179.28832
68.240343 231.59485
71.469045 292.85205
70.615854 359.83293
71.282927 435.5561
73.510921 521.00128
77.623872 608.6349
76.209991 713.00044
82.895717 821.03315
86.647849 938.28916
81.093805 1007.1646
The above are pixel coordinates. The corner of the wall spans from 55 px to 1020 px in the image (965 px), and its actual height was measured to be 147.5 centimeters. This gives a conversion factor of 147.5/965 ≈ 0.1528 centimeters per pixel, or 0.001528 meters per pixel.
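The same conversion as a quick sketch in code:

```python
# Pixel-to-meter conversion from the wall-corner calibration above.
PX_SPAN = 1020 - 55                  # corner spans 965 px in the image
CM_SPAN = 147.5                      # and 147.5 cm in reality
M_PER_PX = CM_SPAN / PX_SPAN / 100   # ~0.001528 m per pixel

# Example: the first centroid from the table, converted to meters.
x_px, y_px = 62.040155, 18.397668
x_m, y_m = x_px * M_PER_PX, y_px * M_PER_PX
print(x_m, y_m)
```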
Thank you to http://gifmaker.me/ for allowing me to convert frames into GIFs.
This was a very difficult activity to do in such a short time. But it is also the last activity, and that makes it bittersweet. As for AP186, I am happy with how the course went. The activities were challenging, and for most of them I had rough starts. I managed to pull through with lots of help on the earlier activities, and then I had to (and managed to) do a lot of the later ones on my own, which made them even more fulfilling. I am very proud of how the assigned project turned out, and in a way I had already overcome the difficulties of video processing during that project. I'm sad I wasn't able to take a good video back at NIP for this last activity. Still, I hope what I've done is enough.
Self-Evaluation: 8/10