domenec.puig@urv.cat
hamed.habibi@urv.cat, elnaz.jahani@urv.cat, domenec.puig@urv.cat
Recognizing traffic signs is a crucial task in Advanced Driver Assistance Systems. Current methods for solving this problem are mainly divided into traditional classification approaches based on hand-crafted features such as HOG, and end-to-end learning approaches based on Convolutional Neural Networks (ConvNets). Despite the high accuracy achieved by ConvNets, they suffer from high computational complexity, which restricts their application to GPU-enabled devices. In contrast, traditional classification approaches can be executed on CPU-based devices in real time. However, the main issue with traditional classification approaches is that hand-crafted features have limited representation power. For this reason, they are not able to discriminate a large number of traffic signs. Consequently, they are less accurate than ConvNets. Regardless, neither approach scales well: adding a new sign to the system requires retraining the whole system. In addition, they are not able to deal with novel inputs such as the false-positive results produced by the detection module. In other words, if the input of these methods is a non-traffic-sign image, they will classify it into one of the traffic sign classes. In this paper, we propose a coarse-to-fine method using visual attributes that is easily scalable and, importantly, able to detect novel inputs and transfer its knowledge to a newly observed sample. To correct the misclassified attributes, we build a Bayesian network that captures the dependencies between the attributes and find their most probable explanation given the observations. Experimental results on a benchmark dataset indicate that our method outperforms the state-of-the-art methods and possesses three important properties: novelty detection, scalability and the provision of semantic information.
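As a toy illustration of the attribute-correction step, the sketch below builds a minimal two-attribute Bayesian network and finds the most probable explanation by brute-force enumeration. The attributes (sign shape and border colour), all probabilities, and the noisy-observation model are hypothetical placeholders, not the paper's actual network:

```python
from itertools import product

# Hypothetical toy model with two traffic-sign attributes.
# P_BORDER encodes the dependency between attributes (triangular
# warning signs usually have a red border); the classifier's possibly
# wrong attribute outputs are treated as noisy observations.
P_SHAPE = {"triangle": 0.4, "circle": 0.6}          # prior P(shape)
P_BORDER = {                                        # P(border | shape)
    "triangle": {"red": 0.9, "none": 0.1},
    "circle": {"red": 0.5, "none": 0.5},
}
P_OBS = 0.8  # probability the classifier reports an attribute correctly


def most_probable_explanation(obs_shape, obs_border):
    """Jointly most probable (shape, border) given noisy observations."""
    best, best_p = None, -1.0
    for shape, border in product(P_SHAPE, ["red", "none"]):
        p = P_SHAPE[shape] * P_BORDER[shape][border]
        p *= P_OBS if shape == obs_shape else 1 - P_OBS
        p *= P_OBS if border == obs_border else 1 - P_OBS
        if p > best_p:
            best, best_p = (shape, border), p
    return best
```

For example, if the classifier reports a triangular sign with no red border, the dependency pulls the border attribute back to "red" (`most_probable_explanation("triangle", "none")` returns `("triangle", "red")`), while a consistent observation such as `("circle", "none")` is left unchanged.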
egnaser@gmail.com, antonio.moreno@urv.cat, domenec.puig@urv.cat
Nowadays, computer methods and programmes are widely used to detect, analyse and monitor breast cancer. Physicians usually try to monitor the changes of breast tumours during and after chemotherapy. In this paper, we propose an automatic method for visualising and quantifying breast tumour changes in patients undergoing chemotherapy treatment. Given two successive mammograms of the same breast, one before the treatment and one after it, the proposed system first applies some preprocessing to the mammograms. Then, it determines the optical flow between them. Finally, it calculates the strain tensors to visualise and quantify breast tumour changes (shrinkage or expansion). We assess the performance of five optical flow methods through landmark errors and statistical tests. The optical flow method that produces the best performance is used to calculate the strain tensors. The proposed method provides a good visualisation of breast tumour changes and also quantifies them. Our method may help physicians to plan the treatment courses for their patients.
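As a rough sketch of the final step, the fragment below computes the infinitesimal strain tensor E = ½(∇d + ∇dᵀ) from a dense per-pixel displacement field, such as the output of an optical flow method. The function name and the use of `numpy.gradient` for the spatial derivatives are illustrative assumptions, not the paper's implementation:

```python
import numpy as np


def strain_tensors(u, v):
    """Infinitesimal strain components from a dense displacement field.

    u, v: 2-D arrays with the per-pixel x- and y-displacements between
    the two mammograms. Returns the distinct components (exx, eyy, exy)
    of E = 0.5 * (grad(d) + grad(d)^T).
    """
    du_dy, du_dx = np.gradient(u)  # np.gradient returns (axis-0, axis-1)
    dv_dy, dv_dx = np.gradient(v)
    exx = du_dx
    eyy = dv_dy
    exy = 0.5 * (du_dy + dv_dx)
    return exx, eyy, exy
```

The trace exx + eyy (the divergence of the displacement field) is positive where tissue expands and negative where it shrinks, which is one simple way to map the tumour changes the abstract describes. A uniform 10% dilation (u = 0.1x, v = 0.1y) yields exx ≈ eyy ≈ 0.1 and exy ≈ 0 everywhere.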