The Machine Vision Learning (MVL) search tool lets users search historical video collections for unique moments and objects, and it enriches archival metadata through object tagging. The Media Ecology Project (MEP) and the Visual Learning Group at Dartmouth are building the system to provide access to primary moving image materials and to motivate new forms of scholarly research. MVL combines machine learning methods with Google search to find specific moments and objects within an archived film, such as a handshake, a phone call, or a martini; to make the system smarter, users can also tag metadata easily. When we joined, the tool was mostly built but split into two separate elements, and the project needed a way to explain the system's goals and usability to funders and scholars.
DALI's work answered the question: How might we make the MVL system easier to understand and use in order to generate interest both from scholars, who can tag and use the data, and from funders, who can advance the tool's development and deployment?
When we began, there were essentially two tools: a film search engine and a tagging system. We merged these into a single prototype that tells the story of the tool's potential. The new MVL website establishes the search and annotation interfaces and includes a user tutorial and a robust "About" page explaining the project and its neural network technology.
We completed the website, funded by the Knight Foundation and the DALI Lab, and delivered a demonstration tool the Media Ecology Project can use to generate interest and raise funding.