Show simple item record

dc.contributor.advisor: Niebles Duque, Juan Carlos
dc.contributor.author: Caba Heilbron, Fabian David
dc.date.accessioned: 2019-10-01T21:56:05Z
dc.date.available: 2019-10-01T21:56:05Z
dc.date.issued: 2013
dc.identifier.uri: http://hdl.handle.net/10584/8644
dc.description: Recent efforts in computer vision tackle the problem of human activity understanding in video sequences. Traditionally, these algorithms require annotated video data to learn models. In this work, we introduce a novel data collection framework that takes advantage of the large amount of video data available on the web. We use this framework to retrieve videos of human activities and to build training and evaluation datasets for computer vision algorithms. We rely on Amazon Mechanical Turk workers to obtain high-accuracy annotations. An agglomerative clustering technique makes it possible to achieve reliable and consistent annotations for the temporal localization of human activities in videos. Using two datasets, Olympics Sports and our novel Daily Human Activities dataset, we show that our collection/annotation framework can produce robust annotations of human activities in large amounts of video data.
dc.format: application/pdf
dc.language.iso: eng
dc.publisher: Universidad del Norte
dc.subject: Computer vision -- Research
dc.subject: Digital video -- Research
dc.subject: Algorithms
dc.title: Retrieving, annotating and recognizing human activities in web videos
dc.type: masterThesis
dc.rights.accessRights: openAccess
dc.type.hasVersion: acceptedVersion
dc.publisher.program: Maestría en Ingeniería Electrónica
dc.publisher.department: Departamento de Ingeniería Electrónica
dc.creator.degree: Magíster en Ingeniería Electrónica
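The abstract above mentions consolidating Amazon Mechanical Turk workers' temporal annotations with agglomerative clustering. The thesis does not reproduce its algorithm here, but a minimal illustrative sketch of the general idea, clustering workers' (start, end) segments by midpoint proximity and averaging each cluster into a consensus segment, might look as follows. The function names, the `max_gap` threshold, and the averaging rule are all illustrative assumptions, not taken from the thesis:

```python
# Illustrative sketch (not the thesis algorithm): merge multiple workers'
# temporal annotations of one video into consensus segments via a simple
# greedy agglomerative clustering on segment midpoints.

def midpoint(seg):
    # Midpoint in seconds of a (start, end) annotation.
    return (seg[0] + seg[1]) / 2.0

def agglomerate(segments, max_gap=1.0):
    """Greedily merge the closest pair of clusters whose mean midpoints
    lie within max_gap seconds, until no such pair remains. Returns one
    consensus (mean start, mean end) segment per cluster."""
    clusters = [[s] for s in segments]
    merged = True
    while merged and len(clusters) > 1:
        merged = False
        best = None
        # Find the closest mergeable pair of clusters.
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                mi = sum(midpoint(s) for s in clusters[i]) / len(clusters[i])
                mj = sum(midpoint(s) for s in clusters[j]) / len(clusters[j])
                d = abs(mi - mj)
                if d <= max_gap and (best is None or d < best[0]):
                    best = (d, i, j)
        if best is not None:
            _, i, j = best
            clusters[i].extend(clusters.pop(j))
            merged = True
    # Consensus segment per cluster: average start and end times.
    return [
        (sum(s[0] for s in c) / len(c), sum(s[1] for s in c) / len(c))
        for c in clusters
    ]

# Three workers annotate roughly the same activity; one is an outlier.
anns = [(2.0, 5.0), (2.2, 5.4), (1.8, 4.9), (20.0, 23.0)]
print(agglomerate(anns))
# → two consensus segments: one near (2.0, 5.1), plus the outlier (20.0, 23.0)
```

Averaging cluster members tolerates small disagreements between workers, while the distance threshold keeps genuinely different activity instances (the outlier above) in separate clusters.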


Files in this item


This item appears in the following collection(s)
