Creating a Synthetic Video Dataset for the VAST 2009 Challenge
Mark A. Whiting, Carrie Varley, Jereme Haack
Presented by Jean Scholtz
BELIV 2010, 03-28-2010
IEEE Visual Analytics Science and Technology (VAST) Annual Challenge – if you don’t know about it, you should review it
Why look at the VAST Challenge?
Scenarios, tasks, and data are available for use and modification, plus award-winning write-ups and software solutions.
When known ground truth is available, a new world opens up for the evaluation of analytic software: accuracy measures become possible.
See http://hcil.cs.umd.edu/localphp/hcil/vast/index.php to start.
Catherine Plaisant and her assistants have done a great job capturing the 2006–2009 challenge information.
VAST 2009 Features the First Video Challenge
Multimedia and video analysis are becoming increasingly important in information analysis as people do more with these media.
The first video analytics workshop was held at VAST 2009; a special issue of IEEE CG&A on multimedia analytics is coming soon.
TRECVID looks at event detection; we wanted to look at events within scenarios.
Questions can then be asked: Why are those people there, and what does it have to do with the story being investigated?
Was this a “synthetic” video? Of course. See Herb Simon’s discussion of artificial and synthetic in The Sciences of the Artificial.
We needed to engineer known ground truth into the scenes.
What did we do? We sent our actors in front of an existing webcam to act out a couple of scenes.
What was the scenario?
A US embassy employee in “Flovania” was meeting a “handler” – a person acting friendly toward the employee while really working for a criminal organization.
The task was to find instances of possible meetings between the two, as well as other events that might indicate espionage, within a reasonably long segment of recorded video.
The evidence planted in the video was the illegal transfer of information from the employee to the handler, along with other meetings.
What webcam did we pick? Walla Walla, Washington.
Analysis Issues
There were multiple (4) camera views to contend with.
Only small segments of the video field of view needed to be analyzed; a preprocessing sketch follows below.
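Because only small regions of each view mattered, a simple preprocessing pass can crop those regions and flag frames with activity before any detailed analysis. A minimal sketch in Python with OpenCV, assuming a hypothetical recording file and hand-picked region coordinates (neither comes from the actual challenge data):

```python
import cv2

# Hypothetical recording of the multi-view webcam feed and a hand-picked
# region of interest (x, y, width, height); both values are illustrative
# assumptions, not the actual dataset paths or coordinates.
VIDEO_PATH = "webcam_recording.avi"
ROI = (220, 140, 160, 120)

def frames_with_activity(video_path, roi, diff_threshold=12.0):
    """Yield indices of frames whose region of interest changes noticeably."""
    x, y, w, h = roi
    cap = cv2.VideoCapture(video_path)
    prev = None
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        patch = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Mean absolute difference as a crude activity score.
            if cv2.absdiff(patch, prev).mean() > diff_threshold:
                yield idx
        prev = patch
        idx += 1
    cap.release()

if __name__ == "__main__":
    active = list(frames_with_activity(VIDEO_PATH, ROI))
    print(f"{len(active)} frames show activity in the region of interest")
```

The same idea extends to the four rotating views: keep one region of interest per view and only score the region belonging to whichever view is currently on screen.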
What did we plant?
A scene where the embassy employee “dupe” met the handler outside the coffee shop and handed off information.
A scene where the handler met another member of her criminal organization. They did the old briefcase “switcheroo” – note the light and dark cases.
How did we coordinate all the activities?
One team was dispatched to Walla Walla; another was in the lab recording the webcam video off the web. The teams were connected via cell phone.
The Walla Walla team had a timed script. When we needed to pass a message to the actors, we had a “non-actor” walk by and whisper instructions.
The lab team recorded several hours of webcam video overall to ensure enough “noise” video to make the search challenging. Total video duration was 8 hours.
Each “scene” was shown for several seconds before the camera switched to another view.
Three segments had to be recognized for accuracy (7 seconds, 5 seconds, and 7 seconds long); a scoring sketch follows below.
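With known ground truth, accuracy can be scored by checking whether the time segments a team reports overlap the planted segments. A minimal sketch in Python, assuming hypothetical start times for the three planted scenes (only the 7 s / 5 s / 7 s durations come from the challenge description; the real timestamps are in the answer key):

```python
# Each segment is (start_second, end_second) within the 8-hour recording.
# The ground-truth start times below are placeholders, not the real answer key.
GROUND_TRUTH = [(3600, 3607), (9000, 9005), (20000, 20007)]

def overlaps(a, b):
    """True if two (start, end) segments share any time."""
    return a[0] < b[1] and b[0] < a[1]

def score(submitted, truth=GROUND_TRUTH):
    """Return (hits, misses, false_alarms) for a list of submitted segments."""
    hits = sum(any(overlaps(t, s) for s in submitted) for t in truth)
    false_alarms = sum(not any(overlaps(s, t) for t in truth) for s in submitted)
    return hits, len(truth) - hits, false_alarms

# Example: a team reports two segments, one of which overlaps a planted scene.
print(score([(3598, 3610), (15000, 15020)]))  # -> (1, 2, 1)
```

A stricter variant could require a minimum temporal overlap rather than any overlap at all; the simple version above is enough to show why engineered ground truth makes accuracy measures possible.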
Contestant Results: one example
University of Stuttgart – used video perpetuograms to track people between scenes and views to enhance continuity.
Lessons Learned &amp; Next Steps
Participants liked the challenge, but few teams were prepared to do video.
We would like to coordinate more closely with groups like the TRECVID organizers for future video analytics challenges, and to encourage teams entering the VAST Challenge to seek out groups with video expertise.
This was a very new kind of challenge for both our visualization and information analysts – there wasn’t much of a baseline for either group to base assessments on, and little software was available.
A key question from analysts was what might be missed due to automatic recognition.
As always, it is a challenge to fit the mini challenges into the overall Grand Challenge.
