Georgia Tech's
Computational Photography
Portfolio
Senthilkumar Gopal
sengopal@gatech.edu
https://cs6475.wordpress.com/spring-2017/
Assignment # 1: A Photograph is a Photograph
PURPOSE
The purpose of this assignment is to showcase one of our photographs and identify methods to
improve the image using computational photography techniques.
LEARNINGS
This assignment served two purposes. As this was my first course in the program, it helped me
understand the nuances of assignment submission, the evaluation process, and the logistics in a gentle
manner. It also served as a great introduction to the course and to the amount of research being done
in this field today.
In my write-up, I cited the “pixel-wise flow field motion” and “edge flow” techniques from this paper,
which identify the motion in each layer of a video and create layer masks marking the
reflection/occlusion regions of the image; these could be used to improve this image by removing the
occluding fence grid.
Assignment # 1: A Photograph is a Photograph
Oakland, CA, USA
August 7, 2016 - 9:56 AM PST
Canon 550D - 1/800, f/6.3, ISO 1600
“The unblinking eye (Lilford Crane)”
PURPOSE
The purpose of this assignment is to set up the development environment using Vagrant and to
understand simple arithmetic operations on images. This assignment also gives a great introduction
to the NumPy library and its usage for image manipulation.
LEARNINGS
This assignment was my first attempt at using Python, and the exercises were simple enough to
complete without any major impediments. Some of the learnings from this assignment are:
● understanding the importance of vectorization
● using broadcasting rather than iterating over pixels
● built-in functions such as fliplr, flipud, rot90, and many more for image manipulation
● the significance of datatypes and their implications for computational manipulations
● the basic principles of how blending works (a short sketch follows below)
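Below is a minimal sketch (not the assignment code) of a few of these points: broadcasting instead of iteration, flips with fliplr/flipud/rot90, and dtype care to avoid uint8 overflow.

```python
import numpy as np

def brighten(image, offset=40):
    """Add a constant to every pixel without looping, clipping to the uint8 range."""
    # Promote to a wider dtype first so the addition does not wrap around at 255.
    result = image.astype(np.int16) + offset      # scalar broadcasts over the array
    return np.clip(result, 0, 255).astype(np.uint8)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
print(brighten(img))
print(np.fliplr(img))   # horizontal flip
print(np.flipud(img))   # vertical flip
print(np.rot90(img))    # 90-degree rotation
```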
Assignment # 2: Image I/O
Input Output
Observation: The vignette in the image was converted into black pixels and the boat/sail into white
pixels. However, an interesting observation is the presence of black pixels in the water, even though
the original image does not give the impression that those regions are dark grey.
Blended Output image
Input Images
Convert to Black and White Image | Average of Two Images
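As a rough illustration of the two operations shown above, here is a hedged sketch of how the black-and-white conversion and the two-image average might be written with NumPy; the function names and threshold are assumptions, not the assignment's exact API.

```python
import numpy as np

def convert_to_black_and_white(image, threshold=128):
    """Map pixels at or above the threshold to white (255) and the rest to black (0)."""
    return np.where(image >= threshold, 255, 0).astype(np.uint8)

def average_two_images(img1, img2):
    """Pixel-wise average; widen the dtype so the sum does not overflow uint8."""
    return ((img1.astype(np.uint16) + img2.astype(np.uint16)) // 2).astype(np.uint8)
```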
Assignment # 2: Image I/O
Assignment # 3: Epsilon Photography
PURPOSE
The purpose of this assignment is to generate a novel image or artifact which combines images of the
same scene or object with a variation of just one photographic parameter in the process.
SETUP
Inspired by this video, I chose to create a stop-motion photography artifact where my epsilon is to
vary the location of the elements in the composition slightly between images, creating the novel
imagery of “stop-motion photography”.
The images were of a real rose, with the location of each of its petals changing slightly across
pictures. I took a total of 32 images, of which the last 8 were submitted for the assignment and used
to build the final GIF.
Assignment # 3: Epsilon Photography
Final GIF: http://imgur.com/a/psRQr
Full Stop Motion Video: https://www.youtube.com/watch?v=afJHcxMQHG0
Assignment # 3: Epsilon Photography
LEARNINGS
● Being my first attempt at Epsilon photography, I was very proud of the result.
● Testing the setup end to end with sample pictures and generating a test GIF gave me confidence
in the plan before implementing it with the rose petals.
● The setup with external flash and tripod worked well, eliminating any need for heavy post
processing for alignment.
● I realized that the illusion of motion depends on fluidity, which can only be achieved with more
granular changes. For a better result, I would have made much smaller changes (almost 1/3rd of
what was performed in these images).
Assignment # 3: Epsilon Photography
ABOVE & BEYOND
For my extra credit, I tried a different type of epsilon photography by changing the focal plane
across images and then combining them into an all-in-focus image using the “focus stacking”
technique.
Input Images with varying focal planes | Output Image with all planes in focus
Assignment # 4: Gradients and Edges
PURPOSE
● The purpose of this assignment is to provide the implementation of different functions to
investigate various filters and create a rudimentary edge detection pipeline.
● The second part is to investigate the effects of different filters such as the box filter, the Gaussian
filter, and some of the commonly used gradient filters (e.g., the Sobel filter).
● The last part is to use the functions created earlier and the Sobel filter to form an edge detection
pipeline and compare results with the Canny edge detector available in the OpenCV library.
Assignment # 4: Gradients and Edges
LEARNINGS
● Variations of copyMakeBorder and np.pad can create effects such as vignetting, mirroring, etc. (a short sketch follows below)
● Different filter types - Box, Blur, Emboss, Edge, etc. - and their significance
● Importance of cross-correlation and convolution
● How sampling and downsizing of images work
● How to use intensity changes and their direction to identify an edge
● How edge detection can be used for feature matching
● Doing this exercise taught me to appreciate how simple filter configurations (Sobel, box filter,
etc.) can perform varied computations to generate novel images
● Applications in feature detection, machine learning, and artificial intelligence
● Learnt the significance of eliminating noise and why the sharpness of an image matters for computer
vision and computational photography
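A small sketch of the border-creation point above, assuming a grayscale input file named input.jpg; np.pad and cv2.copyMakeBorder produce the mirrored border used for the reflection effect.

```python
import cv2
import numpy as np

img = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE)

# Reflected border with NumPy: pad 200 pixels on every side, mirroring the image.
padded_np = np.pad(img, 200, mode='reflect')

# Equivalent border with OpenCV; other border types (constant, replicate, wrap)
# give the vignetting / mirroring style variations mentioned above.
padded_cv = cv2.copyMakeBorder(img, 200, 200, 200, 200, cv2.BORDER_REFLECT)
```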
Assignment # 4: Gradients and Edges
Gradient X Normalized Image | Gradient Y Normalized Image | copyMakeBorder() | np.pad(img, 200, 'reflect')
Input image | Gaussian filter | Box filter
Assignment # 4: Gradients and Edges
SIMPLE EDGE DETECTION PIPELINE
● Combine the x and y direction magnitudes using np.hypot, which provides the discrete gradient magnitude
● A rudimentary implementation of edge detection from the gradients is to apply the simple
convertToBlackAndWhite operation to the gradient magnitude
● However, a threshold of 128 gave very few details and missed critical ones, a threshold of 30
included a lot of noise, and the intermediate value of 90 still did not capture all the details.
● Though this method is fast and provides some reliable results, it has several disadvantages (a sketch follows below).
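A minimal sketch of this simple pipeline, assuming a grayscale input; the threshold value is the intermediate 90 mentioned above.

```python
import cv2
import numpy as np

img = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float64)

gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)   # gradient in the x direction
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)   # gradient in the y direction
magnitude = np.hypot(gx, gy)                     # discrete gradient magnitude

# Rudimentary "edges": threshold the magnitude, as convertToBlackAndWhite would.
edges = np.where(magnitude >= 90, 255, 0).astype(np.uint8)
```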
FINAL EDGE DETECTION PIPELINE
● Run the image through a Gaussian filter to remove high-intensity spikes and noise
● Compute the Sobel gradients in the x and y directions
● Combine them to identify the discrete magnitude at each pixel
● Perform non-maximum suppression using the gradient direction of the Sobel gradient
● All pixels are then suppressed except those that are part of a local maximum (an edge).
● This also helps in “thinning” the edges, which was not possible with the simple
convertToBlackAndWhite operation (see the sketch below).
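The sketch below strings these steps together under stated simplifications: the LO/HI pair is reduced to a single HI threshold relative to the maximum response (no hysteresis), and the non-maximum suppression loop is written for clarity rather than speed.

```python
import cv2
import numpy as np

def detect_edges(image, high=0.1):
    """Simplified edge pipeline: blur, Sobel, non-maximum suppression, threshold."""
    smoothed = cv2.GaussianBlur(image.astype(np.float64), (5, 5), 1.4)

    gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180

    # Non-maximum suppression: keep a pixel only if it is the local maximum
    # along its gradient direction (quantised to 0 / 45 / 90 / 135 degrees).
    suppressed = np.zeros_like(magnitude)
    rows, cols = magnitude.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            a = angle[r, c]
            if a < 22.5 or a >= 157.5:
                neighbours = (magnitude[r, c - 1], magnitude[r, c + 1])
            elif a < 67.5:
                neighbours = (magnitude[r - 1, c + 1], magnitude[r + 1, c - 1])
            elif a < 112.5:
                neighbours = (magnitude[r - 1, c], magnitude[r + 1, c])
            else:
                neighbours = (magnitude[r - 1, c - 1], magnitude[r + 1, c + 1])
            if magnitude[r, c] >= max(neighbours):
                suppressed[r, c] = magnitude[r, c]

    # Threshold relative to the maximum response (the HI of the LO/HI pairs shown on the next slides).
    return np.where(suppressed >= high * suppressed.max(), 255, 0).astype(np.uint8)
```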
Assignment # 4: Gradients and Edges
Input Image | #1 Gaussian filter applied | #2 Sobel filter - X direction | #3 Sobel filter - Y direction | #4 Sobel gradient magnitude | Thinning of edges / local maxima | Final image after thresholding (LO=0.1m, HI=0.2m)
Assignment # 4: Gradients and Edges
FEW OTHER RESULTS WITH THE EDGE DETECTION PIPELINE
LO=0.1m, HI=0.2m | LO=0.05m, HI=0.1m | LO=0.05m, HI=0.1m
PURPOSE
● This was the most “physically active” assignment. Until then I had used pictures from the past,
but this time I had to “create” one.
● This assignment required a considerable amount of time for setup and preparation of the location.
● My apartment is on the second floor of one of the best-lit buildings, so covering all the light
sources in the room was the biggest task.
● Moving boxes, black plastic sheeting, Gorilla tape and a sharp knife were used to aid the setup.
● The image captured is of my apartment complex as seen from my bedroom window, which was
covered and used as the camera obscura.
LEARNINGS / OUTCOME
● This assignment reproduces one of the oldest image-capturing techniques, in use since around 400 BC.
● Multiple images were stitched using Photoshop to create the final output image.
● During “BULB” mode, AF (auto focus) should be turned off and long exposures drain the battery
quickly.
Assignment # 5: Camera Obscura
Blocked window Tools used
Camera & Pinhole location Screens
Assignment # 5: Camera Obscura
SETUP | SCENE OUTSIDE THE PINHOLE
Caption: “apartment” wide view
Assignment # 5: Camera Obscura
Assignment # 6: Blending
PURPOSE
● This assignment explores the steps necessary to build Gaussian and Laplacian pyramids and
use them to blend two images using a pre-generated mask.
● This exercise introduces the concept of pyramid building using the REDUCE and EXPAND algorithms
and multiple other uses of the Gaussian and Laplacian pyramids.
● Multiple functions which aid the pyramid building and the blending of the images were created as
part of the assignment.
● These functions are then used to demonstrate the capability of blending using these pyramids.
LEARNINGS / OUTCOME
● The information retained in the final blended image was a great learning exercise and gave a
peek into how this could be useful for progressive transmission of image data.
● The blended output image was extremely close to the output from Photoshop.
● The resulting image clearly demonstrated the usage and purpose of the Laplacian pyramid, with the
minute details in the image retained in the final output (a sketch of the pyramid blending steps follows below).
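A hedged sketch of the REDUCE/EXPAND pyramid blend described above, using OpenCV's pyrDown/pyrUp as stand-ins for the assignment's own reduce and expand functions and assuming single-channel images.

```python
import cv2
import numpy as np

def gaussian_pyramid(image, levels=4):
    """REDUCE repeatedly: blur and downsample at each level."""
    pyramid = [image.astype(np.float64)]
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid

def laplacian_pyramid(gauss_pyr):
    """Each level is a Gaussian level minus the EXPANDed next-coarser level."""
    lapl = []
    for i in range(len(gauss_pyr) - 1):
        expanded = cv2.pyrUp(gauss_pyr[i + 1],
                             dstsize=(gauss_pyr[i].shape[1], gauss_pyr[i].shape[0]))
        lapl.append(gauss_pyr[i] - expanded)
    lapl.append(gauss_pyr[-1])          # smallest Gaussian level closes the pyramid
    return lapl

def blend(black, white, mask, levels=4):
    """Blend two images via Laplacian pyramids, weighted by the mask's Gaussian pyramid."""
    lb = laplacian_pyramid(gaussian_pyramid(black, levels))
    lw = laplacian_pyramid(gaussian_pyramid(white, levels))
    gm = gaussian_pyramid(mask.astype(np.float64) / 255.0, levels)
    blended = [g * w + (1 - g) * b for b, w, g in zip(lb, lw, gm)]
    # Collapse: EXPAND from the coarsest level and add each finer level back in.
    out = blended[-1]
    for level in reversed(blended[:-1]):
        out = cv2.pyrUp(out, dstsize=(level.shape[1], level.shape[0])) + level
    return np.clip(out, 0, 255).astype(np.uint8)
```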
Assignment # 6: Blending
GAUSSIAN PYRAMID
LAPLACIAN PYRAMID
Assignment # 6: Blending
Laplacian BLACK | Laplacian WHITE | Gaussian MASK | Blended IMAGE
Assignment # 6: Blending
INPUT IMAGES
OUTPUT IMAGE
Assignment # 6: Blending
● For my A&B, I started investigating other types of blending algorithms and capabilities
available in OpenCV which can perform the blending with better smoothing and better tonal
retention.
● Common ones are linear blending algorithms that use weighted or arithmetic operations to incorporate an
opaque image into another image.
● However, one of the more recent and widely used algorithms is Poisson (gradient-domain)
blending, which retains tonal details better (a sketch follows below).
Image 1 | Image 2
Blended Image with Poisson
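OpenCV exposes this gradient-domain blend as seamlessClone. A minimal sketch, with placeholder file names and a full-source mask, of how a Poisson-blended result like the one above could be produced:

```python
import cv2
import numpy as np

src = cv2.imread('image1.jpg')      # foreground / object image (placeholder name)
dst = cv2.imread('image2.jpg')      # background image (placeholder name)

# Blend the entire source region; a tighter mask would blend only a cut-out.
mask = 255 * np.ones(src.shape[:2], dtype=np.uint8)
center = (dst.shape[1] // 2, dst.shape[0] // 2)   # where the source lands in dst

blended = cv2.seamlessClone(src, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite('poisson_blended.jpg', blended)
```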
Assignment # 7: Feature Detection
PURPOSE
● This assignment builds the function blocks to perform feature matching between different images
of the same object under varying conditions of scale, lighting, and rotation, and among other objects.
● The second part of the assignment deals with experimenting with various parameters of the ORB
algorithm used for feature matching and its results.
LEARNINGS / OUTCOME
● This assignment opened up an understanding of how feature matching works and the various factors
influencing good and bad results.
● Choosing a good template image is critical to ensure good results.
● Image quality and closeness in scale and intensity are important factors for feature detection.
● Pre-enhancing the image definitely helps in feature detection.
● A single pipeline cannot be fit to match all types of images.
● There are multiple optimizations possible at each step of the feature matching pipeline. A
better descriptor, a faster detector, etc. can be utilized based on the needs of the pipeline (see the ORB sketch below).
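A hedged sketch of ORB feature matching between a template and a scene image; the file names and parameter values (nfeatures, scaleFactor, nlevels) are illustrative, not the ones investigated on the later slide.

```python
import cv2

template = cv2.imread('template.jpg', cv2.IMREAD_GRAYSCALE)
scene = cv2.imread('scene.jpg', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000, scaleFactor=1.2, nlevels=8)
kp1, des1 = orb.detectAndCompute(template, None)
kp2, des2 = orb.detectAndCompute(scene, None)

# Hamming distance is the appropriate metric for ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Draw the 20 strongest matches for visual inspection.
result = cv2.drawMatches(template, kp1, scene, kp2, matches[:20], None, flags=2)
cv2.imwrite('orb_matches.jpg', result)
```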
Assignment # 7: Feature Detection
LIGHTING
ROTATION | SAMPLE
SCALE
Assignment # 7: Feature Detection
ORB PARAMETER INVESTIGATION
Assignment # 7: Feature Detection
POOR IMAGES SIFT MATCHES
GOOD IMAGES SIFT MATCHES
● For my A&B, I tried to investigate the other detectors and descriptors which are available and their
results against my images.
● Tested Brute-Force Matching with SIFT Descriptors and FLANN Matchers.
● The results suggest that ORB is a very good and extremely fast alternative to the other commonly
available feature matchers (a SIFT/FLANN sketch follows below).
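A minimal sketch of the SIFT + FLANN combination tested here, with Lowe's ratio test filtering ambiguous matches; the file names are placeholders, and SIFT may live under cv2.xfeatures2d on older OpenCV builds.

```python
import cv2

img1 = cv2.imread('my_image.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('template.jpg', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()            # cv2.xfeatures2d.SIFT_create() on older builds
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN with KD-trees suits SIFT's floating-point descriptors (algorithm=1 is FLANN_INDEX_KDTREE).
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = flann.knnMatch(des1, des2, k=2)

# Lowe's ratio test: keep a match only if it is clearly better than the runner-up.
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance]
print(len(good), 'good matches')
```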
PURPOSE
● This assignment creates the function blocks for building a pipeline which can stitch multiple
images from left to right to create a panorama.
● The functions help find features, calculate the homography, warp images from left to right, align
and then blend the images to create the final artifact.
LEARNINGS / OUTCOME
● Feature matching and alignment based on RANSAC worked perfectly without any false positives.
● Blending becomes increasingly complex as the demand for quality grows. However, basic blending
algorithms such as linear weighted and center weighted do give great results.
● The panorama pipeline is very consistent with no specialization required for different image sets.
● Even while stitching five images, the results were consistent and high quality.
● It is possible to achieve commercial-software-quality output with a simple pipeline (a homography sketch follows below).
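A hedged sketch of the core alignment step, assuming keypoints and matches have already been computed (for example with the ORB code shown earlier, called as match(des_left, des_right)); the canvas size and the final overwrite are deliberate simplifications of the real warping and blending.

```python
import cv2
import numpy as np

def warp_right_onto_left(left, right, kp_left, kp_right, matches):
    """Estimate a homography with RANSAC and warp `right` onto `left`'s plane."""
    # queryIdx indexes the left descriptors, trainIdx the right ones.
    src = np.float32([kp_right[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_left[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier correspondences while fitting the homography.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = left.shape[:2]
    # Canvas wide enough to hold both images; a real pipeline computes the exact corners.
    warped = cv2.warpPerspective(right, H, (w * 2, h))
    warped[0:h, 0:w] = left             # naive overwrite; blending happens afterwards
    return warped
```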
Assignment # 8: Panoramas
Assignment # 8: Panoramas
Test image result using center
weighting image blending
My image result using center weighting image
blending
Assignment # 8: Panoramas
Center weighted blending
Linear weighted blending
Gaussian pyramid blending
For my A&B, I experimented with different blending functions - linear weighted, center weighted, and
Gaussian pyramid blending - of which linear and center weighted give great results.
Assignment # 9: HDR
PURPOSE
● This assignment develops the code to combine images of varying exposures to build a composite
image which captures a wider dynamic range of irradiance.
● The building-block functions compute the response curve and radiance map, which are then used
to form the final image using pixels from all the different exposures.
LEARNINGS / OUTCOME
● Choosing a scene for HDR needs preparation and proper bracketing techniques to capture the
wide range of the spectrum.
● As there was no tone mapping, the results were dull and did not exhibit the bright intensity of
the original scene. Adding tone mapping via an algorithm such as Durand, Drago, or Reinhard
helps display the image better.
● Adding a tone map for better visual display, different weighting algorithms, and vectorization
are possible enhancements for the pipeline (a merging and tone-mapping sketch follows below).
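A minimal sketch of merging bracketed exposures and applying Reinhard tone mapping with OpenCV; the file names and exposure times are placeholders, and the assignment's own response-curve and radiance-map code is replaced by the built-in Debevec functions.

```python
import cv2
import numpy as np

files = ['exp_1.jpg', 'exp_2.jpg', 'exp_3.jpg']                     # placeholder bracketed shots
times = np.array([1 / 30.0, 1 / 125.0, 1 / 500.0], dtype=np.float32)  # placeholder exposure times
images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge into a radiance map (float32 HDR).
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)

# Reinhard tone mapping brings the radiance map back to a displayable range.
tonemap = cv2.createTonemapReinhard(gamma=2.2)
ldr = np.clip(tonemap.process(hdr) * 255, 0, 255).astype(np.uint8)
cv2.imwrite('hdr_reinhard.jpg', ldr)
```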
Assignment # 9: HDR
Source
Assignment # 9: HDR
FINAL HDR IMAGE
Assignment # 9: HDR
For my A&B, I worked with my own set of images and attempted to match the tones of the image
using linear and Reinhard tone mapping.
My own HDR images
Assignment # 9: HDR
HDR without tone mapping | Linear Tone Mapping | Reinhard Tone Mapping
Assignment #10: Pictures of Space
PURPOSE
● The purpose of this assignment is to build a point cloud and panorama using available software
to learn the usage and capabilities of the basics learnt earlier.
● A point cloud is a set of data points in a three-dimensional coordinate system, used to
represent three-dimensional objects or to capture a location in three dimensions.
● The point cloud and panorama were captured in Central Park, Santa Clara, CA, a public
park with a great water fountain filled with ducks and a trail around the fountain.
LEARNINGS / OUTCOME
● This assignment helped me learn the different projection systems for creating a panorama and
helped identify my attempt as a cylindrical panorama with a planar reprojection.
● The point clouds, which relate to the Microsoft Photosynth product, also helped me learn how multiple
images can be used to form a novel image and to provide crowd-sourced image clouds.
Used 44 images for my panorama to get a 360° FOV and stitched using Adobe Photoshop
The full version of this image is available at KUULA and Google Drive
Assignment #10: Pictures of Space
Assignment #10: Pictures of Space
Images + Mesh for 300 images available here. Created using OpenSFM
● The images used to form the point cloud were captured in Central Park, Santa Clara, CA. I wanted to
create the point cloud as a "Walk" and capture various images along the path.
● To provide a wider view of the scene, I took three overlapping pictures at every step, which is
evident from the camera angles in the point cloud.
Assignment #10: Pictures of Space
Different images of the point cloud
Assignment #10: Pictures of Space
● For my A&B work, I attempted different spatial projections, as the earlier ones were only horizontal.
● So I took images of the top and bottom sections of the view (vertical projections) and stitched them all
together using Photoshop to a FOV of 170°.
● The image is available at Google Drive and at Kuula
PURPOSE
● The purpose of this assignment is to identify a part of a video which can generate an infinite loop
video with minimal or no visible break.
● As part of this assignment, we build a pipeline to compute the similarity matrix between frames of
a video and use them to identify and create a video texture.
● The assignment focuses on finding the biggest loop possible from the similarity matrix and
generating the infinite looping video.
LEARNINGS / OUTCOME
● As the first assignment using a video instead of still images, this helped me understand
the structure of a video and how multiple frames compose it.
● The building blocks of the assignment provided insight into how the pipeline functions to
create a video texture (a similarity-matrix sketch follows below).
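A hedged sketch of the similarity (frame-difference) matrix at the heart of the pipeline, assuming frames is a list of equally sized grayscale frames stored as float arrays; the loop is written for clarity, not speed.

```python
import numpy as np

def similarity_matrix(frames):
    """Pairwise root-sum-squared difference; small values mean visually similar frames."""
    n = len(frames)
    diff = np.zeros((n, n), dtype=np.float64)
    for i in range(n):
        for j in range(n):
            diff[i, j] = np.sqrt(np.sum((frames[i] - frames[j]) ** 2))
    return diff

# A transition from frame i back to frame j is promising when diff[i + 1, j] is small,
# i.e. the frame after i already looks like j; the biggest such low-cost loop
# becomes the video texture.
```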
Assignment #11: Video Textures
Assignment #11: Video Textures
The assignment provides a set of candle images to demonstrate the video texture.
Link to the candle video texture gif: https://imgur.com/a/SG2OT
Assignment #11: Video Textures
I used a video clip that I recorded at home to create a video texture.
Link to the water drops video texture gif: https://imgur.com/a/SG2OT
Assignment #11: Video Textures
For my A&B, I experimented with a few other video clips available from the internet
Source
https://www.youtube.com/watch?v=5lCK4jkqQxA
Link to video texture gif
http://imgur.com/a/OauHQ
Source
https://www.youtube.com/watch?v=WE-nFpUScEU
Link to video texture gif
http://imgur.com/a/V2BgY
FINAL PROJECT
GOAL
● The final project is based on this paper (Whiteboard scanning and image enhancement), which
discusses a method for extracting usable documentation from whiteboard images, a useful
artifact for many groups.
● The scope of this project is to take a picture of a whiteboard from any perspective and produce a
perspective-corrected, trimmed, and enhanced image, which makes it much easier to save and
further process whiteboard images.
● As part of the project, I also included the capability to take in multiple images forming a
panorama of the whiteboard and produce a corrected output.
● I also ported the code to Android and created an app to perform the same activity as described (a sketch of the perspective-correction step follows below).
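A minimal sketch of the perspective-correction and enhancement step, with hard-coded placeholder corners standing in for the project's corner-detection stage:

```python
import cv2
import numpy as np

image = cv2.imread('whiteboard.jpg')
# Placeholder whiteboard corners in TL, TR, BR, BL order (normally detected automatically).
corners = np.float32([[120, 80], [1480, 60], [1520, 900], [90, 940]])

width, height = 1600, 900                     # output size; could be derived from edge lengths
target = np.float32([[0, 0], [width, 0], [width, height], [0, height]])

# Warp the detected quadrilateral to a rectangular, axis-aligned image.
M = cv2.getPerspectiveTransform(corners, target)
corrected = cv2.warpPerspective(image, M, (width, height))

# Simple enhancement pass: grayscale + adaptive threshold to boost marker strokes.
gray = cv2.cvtColor(corrected, cv2.COLOR_BGR2GRAY)
enhanced = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 25, 15)
cv2.imwrite('whiteboard_corrected.png', enhanced)
```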
FINAL PROJECT - pipeline
FINAL PROJECT - showcase
INPUT | Output with Python | Output with Android App (zoomed in)
FINAL PROJECT - Results
Extraction under favorable conditions
FINAL PROJECT - Results
Extraction for a wide image
FINAL PROJECT - Results
Extraction from combined images
FINAL PROJECT - Pipeline
Extraction pipeline and intermediate results
FINAL PROJECT - Results
Video of the app usage - Using a Camera
Video of the app usage - Using an existing image
● This was a great learning experience about uses of feature
detection and customization to address specific image types,
generate novel images and find useful data.
● This project helped me learn about different contrast and image
data enhancement techniques.
● Georgia Tech for creating this great platform for learning - OMSCS
● Prof. Irfan Essa for the lectures and such an enriched course
● The TAs, who were always available to discuss and help with our questions and thoughts
on both Piazza and Slack.
● Stack Overflow, which helped with all the questions I had regarding NumPy operations,
OpenCV errors, etc.
Thanks!
And as always,
Have fun computing with photographs!