Visualization for solving non-image problems and Saliency Mapping

Student: Divya Chandrika Kalla (School of Graduate Studies)

Mentor: Boris Kovalerchuk


Integration of visualization, visual analytics, machine learning, and data mining is a key aspect of data science research. This project proposes a new CPC-R algorithm that converts non-image data into images by visualizing the data with paired coordinates. Powerful deep learning algorithms open an opportunity to transform non-image machine learning problems into image recognition problems. The main idea of CPC is splitting the attributes of an n-D point into consecutive pairs of attributes.
High-dimensional data play an important role in knowledge discovery. The experiments use the Ionosphere and Glass datasets from the UCI Machine Learning Repository. We report the results of computational experiments with the Ionosphere and Glass data using CPC-R for different CNN architectures and different numbers of pixels per cell, where each cell represents a pair of attributes. The accuracies are 94.44% for Ionosphere (2 classes, 34 dimensions) and 95.90% for Glass (6 classes, 10 dimensions).
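The pairing idea above can be sketched in code. The following is a minimal illustration, not the authors' implementation: the function name, grid size, cell size, and the intensity scheme (earlier pairs drawn brighter) are all assumptions made for the example. Each consecutive pair of attributes selects a cell in a 2-D grid, and the cell is filled with a grayscale value encoding the pair's position in the attribute order.

```python
import numpy as np

def cpc_r_image(point, grid=10, cell=4):
    """Render an n-D point as a grayscale image by pairing
    consecutive attributes (a simplified CPC-R-style sketch).

    Assumes attributes are already scaled to [0, 1)."""
    pairs = [(point[i], point[i + 1]) for i in range(0, len(point) - 1, 2)]
    img = np.zeros((grid * cell, grid * cell))
    for k, (x, y) in enumerate(pairs):
        r = int(y * grid)                 # row from the second attribute
        c = int(x * grid)                 # column from the first attribute
        intensity = 1.0 - k / len(pairs)  # earlier pairs drawn brighter
        img[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell] = intensity
    return img

# A 4-D point becomes two filled cells in a 40x40 image.
img = cpc_r_image([0.15, 0.85, 0.60, 0.30])
```

With this setup, a 34-D Ionosphere instance would yield 17 filled cells, and the resulting images can be fed to a CNN as ordinary grayscale inputs.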
The second technique in this project is saliency mapping. A saliency model takes an input test image and generates a saliency map that predicts which regions of the image are most likely to draw a human viewer's attention. The efficiency of the CPC-R algorithm is tested, and further optimization remains to be performed.
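One simple way to obtain such a map, shown here only as an illustrative sketch (the source does not specify which saliency method was used), is occlusion sensitivity: slide a masking patch over the image and record how much the model's score drops. The `model` callable and the toy image below are hypothetical stand-ins for a trained classifier and a CPC-R image.

```python
import numpy as np

def occlusion_saliency(model, image, patch=4):
    """Occlusion-based saliency sketch: regions whose masking
    lowers the score most are marked as most salient.
    `model` is any callable mapping a 2-D image to a scalar score."""
    base = model(image)
    h, w = image.shape
    sal = np.zeros_like(image)
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            occluded = image.copy()
            occluded[r:r + patch, c:c + patch] = 0.0  # mask one patch
            sal[r:r + patch, c:c + patch] = base - model(occluded)
    return sal

# Toy "model": score is the mean of the top-left corner region.
model = lambda im: im[:4, :4].mean()
img = np.zeros((8, 8)); img[:4, :4] = 1.0
sal = occlusion_saliency(model, img, patch=4)  # bright corner is salient
```

For CPC-R images, such a map would highlight which attribute-pair cells drive the CNN's prediction.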

References: Kovalerchuk, B.: Visual Knowledge Discovery and Machine Learning. Springer (2018).


1 thought on “Visualization for solving non-image problems and Saliency Mapping”

  1. Divya,

Why are you not talking louder? It seems you are afraid! It is your work, so just own it! I know that such videos are new and awkward to all of us, but we have to learn to use them to our advantage.
Some page numbers would be good on the slides. It is quite unfortunate that slide numbers are not there, as I cannot refer to particular pages. In the video at 3:10 you explain the 34-D data and 17 pixels; I think it would be nice to provide an example rather than just explaining it abstractly. Nothing is mentioned about the Glass and Ionosphere data. People might not be familiar with those datasets in the UCI repository, so at least a link would be helpful. Nothing is mentioned about the size of the data, just some accuracies, so it is difficult to tell whether those numbers are relevant or not. At 11:52 you mention a CNN. What does this CNN look like, and what is its purpose? What are the input and output of this CNN?
Very interesting research, and I look forward to discussing this topic with you further once we are back in school in the fall.
