Human-Assisted Pattern Classification

Background

We are interested in enhancing human-computer interaction in pattern recognition applications that require higher accuracy than automated systems can currently achieve, but that allow time for a limited amount of human interaction. This topic has so far received only limited attention from the research community.

This project is a continuation of earlier work [1-5].

References

[1] A. Schur and C. Tappert, "Combining Human and Machine Capabilities for Improved Accuracy and Speed in Visual Recognition Tasks," Proc. HCI International (HCII 2014), Crete, Greece, June 2014.
[2] A. Schur, S.-H. Cha, and C. C. Tappert, "Comparative Analysis of Feature Extraction Capabilities between Machine and Human in Visual Pattern Recognition Tasks Utilizing a Pattern Classification Framework," Proc. Research Day Conference, Pace University, May 2013.
[3] K. Durfee, N. Kapoor, M. Muccioli, R. Smart, D. Wilkins, and A. Schur, "An Evaluation of the Effect of Human Interaction on the Accuracy of the Interactive Visual System," Proc. Research Day Conference, Pace University, May 2012.
[4] A. Evans, J. Sikorski, P. Thomas, S.-H. Cha, C. C. Tappert, J. Zou, A. Gattani, and G. Nagy, "Computer Assisted Visual Interactive Recognition (CAVIAR) Technology," Proc. 2005 Electro/Information Technology (EIT) Conf., Lincoln, NE, May 2005.
[5] G. Nagy, C. Tappert, J. Zou, and S. Cha, "Combining Human and Machine Capabilities for Improved Accuracy and Speed in Critical Visual Recognition Tasks," NSF Proposal, 2003.

Project

Phase 1: Training data creation.
New training data will be obtained using a customer-provided, Android-based tool. A new database of flower images will be created from high-resolution pictures taken with the camera of an Android phone (6+ megapixels, typically 2-3 MB each). More than 30 species, with 5 pictures each, are already available. Students should collect at least 10 more species to learn how to use the new tool; feedback on the tool and the experience of using it will be valuable to the customer.
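
The layout of the image database is not fixed by this description; the sketch below assumes one folder per species under a flower_db directory and simply builds a manifest of (species, image path) pairs, warning when a species has fewer than 5 pictures. The directory name, file extension, and minimum-image check are illustrative assumptions, not part of the customer's tool.

    from pathlib import Path

    # Hypothetical layout: flower_db/<species_name>/<photo>.jpg
    # (the actual layout produced by the customer's Android tool may differ)
    DB_ROOT = Path("flower_db")

    def build_manifest(root: Path, min_images: int = 5):
        """Collect (species, image_path) pairs and flag under-sampled species."""
        if not root.exists():
            return []
        manifest = []
        for species_dir in sorted(p for p in root.iterdir() if p.is_dir()):
            images = sorted(species_dir.glob("*.jpg"))
            if len(images) < min_images:
                print(f"warning: {species_dir.name} has only {len(images)} images")
            manifest.extend((species_dir.name, img) for img in images)
        return manifest

    if __name__ == "__main__":
        manifest = build_manifest(DB_ROOT)
        species = {name for name, _ in manifest}
        print(f"{len(species)} species, {len(manifest)} images total")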

Phase 2: Flower recognition test.
Once the photo data is available, the team will run an experiment on test data consisting of a random selection of flower photos. The team will enter the test photos using a new Android-based flower recognition tool (the proposed IVS improvement) and collect the results. Classification uses the k-nearest-neighbor (kNN) algorithm, and both accuracy and time to complete will be recorded. All data (images, human interaction results, etc.) from the Android app can be stored to Google Cloud with one click (accessible only to the researchers at this time, but possibly opened to a wider audience in the future). The results will be compared with data collected in previous projects.
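
As a rough illustration of the kNN accuracy-and-timing measurement, the sketch below uses scikit-learn's KNeighborsClassifier on placeholder feature vectors; the feature dimensionality, the value of k, and the random data are assumptions, since the real features and labels come from the app and the Phase 1 database.

    import time
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    # X_train/X_test stand in for per-image feature vectors (e.g. color
    # statistics) extracted by the app; random placeholders are used here.
    rng = np.random.default_rng(0)
    X_train = rng.random((150, 12))            # 30 species x 5 photos, 12 features
    y_train = np.repeat(np.arange(30), 5)      # species label per training photo
    X_test = rng.random((20, 12))              # random selection of test photos
    y_test = rng.integers(0, 30, size=20)      # true species of the test photos

    knn = KNeighborsClassifier(n_neighbors=3)  # k is a tunable assumption
    knn.fit(X_train, y_train)

    start = time.perf_counter()
    predictions = knn.predict(X_test)
    elapsed = time.perf_counter() - start

    print(f"accuracy: {accuracy_score(y_test, predictions):.2%}")
    print(f"time to classify {len(X_test)} photos: {elapsed:.3f} s")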

Phase 3: Re-run test with two different color spaces.
The existing tests used the RGB color space. We will add HSI and CIELab columns to the training data and then, using the data collected in Phase 2, compare classification accuracy with HSI and, separately, with CIELab. The color space that gives the best result will be used for the final delivery of the second Android app.
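
A minimal sketch of how the extra color-space columns could be computed is given below. It uses the standard RGB-to-HSI formulas and scikit-image's rgb2lab for the CIELab conversion; the mean-per-channel features and function names are illustrative assumptions rather than the app's actual feature set.

    import numpy as np
    from skimage import io
    from skimage.color import rgb2lab

    def rgb_to_hsi(rgb: np.ndarray) -> np.ndarray:
        """Standard RGB -> HSI conversion; expects floats in [0, 1]."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        i = (r + g + b) / 3.0
        s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-8)
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
        h = np.arccos(np.clip(num / den, -1.0, 1.0))
        h = np.where(b > g, 2 * np.pi - h, h) / (2 * np.pi)  # normalize to [0, 1]
        return np.stack([h, s, i], axis=-1)

    def mean_color_features(image_path: str) -> dict:
        """Mean channel values in RGB, HSI, and CIELab -- one row of the
        additional 'columns' that could be appended to the training data."""
        rgb = io.imread(image_path)[..., :3] / 255.0  # drop alpha if present
        hsi = rgb_to_hsi(rgb)
        lab = rgb2lab(rgb)
        return {
            "rgb_mean": rgb.reshape(-1, 3).mean(axis=0),
            "hsi_mean": hsi.reshape(-1, 3).mean(axis=0),
            "lab_mean": lab.reshape(-1, 3).mean(axis=0),
        }

The Phase 2 kNN experiment can then be repeated once per feature set (RGB, HSI, CIELab) and the resulting accuracies compared.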