Grokbase Groups Perl ai January 2001
Greetings All,

Recently this email (below) came to a list I host. I really don't know much
about Fourier descriptors, and I thought that someone in Perl-AI might know
more about this gentleman's subject than I do. If anyone can help with the
message below, I'm sure it would be greatly appreciated.

~ Josiah Bryan


P.S. If anyone replies, could you CC it to Perl-AI and/or
ai-neuralnet-backprop@egroups.com?
Thank you!


----- Original Message -----
From: vikram ramaswamy <vikram.ramaswamy@usa.net>
To: <ai-neuralnet-backprop@egroups.com>
Sent: Saturday, January 20, 2001 8:02 AM
Subject: [ai-neuralnet-backprop] respected sir



Respected Sir,

I am Vikram, an undergraduate student from India. I am currently working on
a project on shape classification using a feed-forward neural network
trained by the standard backpropagation algorithm.

I first detect the edges of the shape to be classified and compute the
Fourier descriptors (FDs) of the edge image. I use these FDs as the input
to the neural network.
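[Editor's note: for readers unfamiliar with the computation, here is a
minimal sketch in Python/NumPy — not the original authors' code, and it
assumes the edge has already been traced into an ordered list of boundary
points. The boundary is encoded as a complex signal z_k = x_k + i*y_k, and
the FDs are taken to be its DFT coefficients.]

```python
import numpy as np

def fourier_descriptors(contour, n_desc=16):
    """Fourier descriptors of a closed contour.

    contour: (N, 2) array of ordered (x, y) boundary points.
    Returns the first n_desc DFT coefficients of z = x + iy.
    """
    z = contour[:, 0] + 1j * contour[:, 1]  # boundary as a complex signal
    return np.fft.fft(z)[:n_desc]

# Example: a circle of radius 1 sampled at 64 boundary points.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
fd = fourier_descriptors(circle)
# For an origin-centred circle all the contour energy sits in the first
# harmonic, so |fd[1]| equals N and every other coefficient is ~0.
```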

This approach has been followed before, and a paper on this topic has been
published. We are in fact trying to implement what the authors listed below
have done (only the objects to be classified are different):

Hongbong Kim, Kwanhee Nam, "Object Recognition of one DOF by
back-propagation neural net", IEEE Transactions on Neural Networks, Vol. 6,
No. 2, March 1995.



I have a few doubts about the Fourier descriptors. Could you please excuse
the trouble and answer my questions?

In my study, I have used only 16 FDs, as mentioned in the above paper.

1) I take an edge image of an object (e.g., a circle) and find its Fourier
descriptors. I then rotate the original image, detect the edges, and get
the Fourier descriptors of this image. It is said that FDs are insensitive
to rotation. Does this mean the two sets of FDs mentioned above must be
identical? If not, how are the FDs of the rotated image related to those of
the original?
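[Editor's note: a short numerical check of what rotation does, under the
complex-coordinate FD convention sketched above. Rotating the shape by an
angle phi multiplies every boundary point z by e^(i*phi), so by linearity
every DFT coefficient is multiplied by the same unit factor: the two raw FD
sets are not identical, but their magnitudes are. A shifted starting point
of the edge trace likewise changes only the phases.]

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
shape = np.exp(1j * t) + 0.3 * np.exp(3j * t)  # some non-circular closed curve

phi = np.deg2rad(40)
rotated = np.exp(1j * phi) * shape             # rotation about the origin

Z_orig = np.fft.fft(shape)
Z_rot = np.fft.fft(rotated)

print(np.allclose(Z_orig, Z_rot))                  # raw FDs differ
print(np.allclose(np.abs(Z_orig), np.abs(Z_rot)))  # magnitudes agree
```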

2) Also, when we use an object of a different shape (e.g., a rectangle) and
get its FDs, I would expect those values to be drastically different from
the values obtained for the circular object. Is this assumption justified?
Moreover, I took a circular object and got its FDs, then rotated the object
and obtained another set of FDs. I computed the difference between the two
sets of 16 FDs. I then found the difference between the FDs of the circular
object and those of the rectangular object. I am troubled by the fact that
the two differences are comparable. I expected (FDs for circle) - (FDs for
rotated image of circle) < (FDs for circle) - (FDs for rectangle).

By "difference" I mean the following: we calculate 16 FDs, then take (1st
FD value for circle - 1st FD value for rotated image of circle), and so on
for all 16 values.

The maximum difference between the two circles must (I guess) be less than
the maximum difference between the circle and the rectangle.

Since this is not so, can you kindly clarify this situation?
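[Editor's note: one plausible explanation — an assumption about the setup,
since the exact FD definition used is not shown. If raw complex
coefficients (or their real/imaginary parts) are differenced directly, the
rotation and start-point phase factors make the circle-vs-rotated-circle
difference as large as the circle-vs-rectangle one. Comparing normalized
magnitudes |Z[k]|/|Z[1]|, with Z[0] dropped, removes rotation, start point,
translation, and scale, and then the expected inequality does hold. A
sketch:]

```python
import numpy as np

def norm_fd(points, n=16):
    """Magnitude FDs: drop Z[0] (position), divide by |Z[1]| (size).
    Insensitive to rotation, start point, translation, and scale."""
    z = points[:, 0] + 1j * points[:, 1]
    mag = np.abs(np.fft.fft(z))
    return mag[1:n + 1] / mag[1]

N = 256
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)

# Rotate the circle 30 degrees and start the edge trace elsewhere.
c, s = np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))
rot_circle = np.roll(circle @ np.array([[c, -s], [s, c]]).T, 17, axis=0)

# A unit square traced counterclockwise with N boundary samples.
p = np.linspace(0.0, 4.0, N, endpoint=False)
square = np.array([
    (f, 0.0) if side == 0 else
    (1.0, f) if side == 1 else
    (1.0 - f, 1.0) if side == 2 else
    (0.0, 1.0 - f)
    for side, f in zip(p.astype(int), p % 1.0)
])

d_same = np.max(np.abs(norm_fd(circle) - norm_fd(rot_circle)))
d_diff = np.max(np.abs(norm_fd(circle) - norm_fd(square)))
print(d_same < d_diff)  # the expected inequality holds
```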


If any other approach is possible for shape classification using
feed-forward neural nets, kindly inform me of the same.

I am anxiously awaiting your reply.

Yours respectfully,

Vikram



Discussion Overview
group ai @ perl.org (category: perl) · posted Jan 23, '01 at 7:02p · active Jan 23, '01 at 7:02p · 1 post, 1 user
