Joe Schulman wrote:
> I primarily focus on doing CGI programming, and although I do that
> fairly well for my little website, I find the possibilities of the work
> that you guys present so exciting (not to mention I envy you for having
> such fun jobs).
Don't assume that it has anything to do with my job. =) In fact, for
the past couple of years even Perl has had very little to do with my
job. Perl is mostly a recreation thing for me right now.
> Of course, my skewed understanding of AI derives from sci-fi movies and
> much pop culture. So, this is a simple and painless question (I
> hope!): In what direction is the majority of this field moving as far
> as research is concerned? I remember that Mr. Williams released his
> latest categorize module not too long ago. Are "you" pushing for
> automatic recognition and categorization (like Mr. Williams's) or a
> broader ability to adapt and learn (like the more recent movie, The
Funny, I thought it looked kind of neat...
> And if so, what kind of application do you personally hope for from
> this newfound technology? (Of course limitless possibilities exist, but
> all software has a desired implementation.) Or is there another aspect
> to this field altogether?
In the spirit of Socrates (and to qualify myself according to Paris
Sinclair's criteria :-), I'll also claim that I really don't know
anything about AI either. My academic background is in other stuff
(math & music), and I don't do AI professionally. Furthermore, IMO the
subject "AI" isn't really a subject which can be defined very succinctly
anyway. Several times I've heard a definition of AI that I like: "AI is
the study of any process we don't understand. Once we understand it,
it's no longer an AI topic."
Many people in "the public" think of AI as a bunch of people trying to
make a robot that can think like a human, but there's no way people can
do that until they can solve other, much smaller problems like "whether
that thing in front of the camera is an egg or a baseball". The most
famous criterion for "success" in AI is the Turing Test, but people
don't seem to be thinking about that much anymore, because it's not
clear how many people you'd have to fool in order to pass the test.
Also, the most successful Turing Test programs to date have been
decidedly un-interesting, consisting of just a bunch of set patterns of
responses to set patterns of input.
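To make that concrete, here's a tiny Perl sketch of that pattern-matching style. The rules are invented for illustration (they're not taken from any real chatterbot program); each one pairs an input regex with a canned reply, and a capturing pattern pairs with a "%s" slot in its template:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Each rule is [ input pattern => reply template ].  A template with a
# "%s" expects its pattern to capture one word to fill the slot.
my @rules = (
    [ qr/\bmy (\w+)\b/i   => 'Tell me more about your %s.' ],
    [ qr/\bi am (\w+)\b/i => 'How long have you been %s?'  ],
    [ qr/\?$/             => 'Why do you ask?'             ],
);

sub respond {
    my ($input) = @_;
    for my $rule (@rules) {
        my ($pattern, $template) = @$rule;
        if ($input =~ $pattern) {
            my $reply = $template;
            # Fill in the captured word, if this template wants one.
            $reply =~ s/%s/$1/ if index($template, '%s') >= 0;
            return $reply;
        }
    }
    return "Please go on.";   # default when nothing matches
}

print respond("I am tired of debugging"), "\n";  # How long have you been tired?
print respond("What is AI?"), "\n";              # Why do you ask?
```

That's really all there is to those programs - more rules, cleverer templates, but the same trick throughout.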
Many researchers seem to eschew the term "AI", and describe their work
more specifically by application (Machine Learning, Natural Language
Processing, Scheduling, etc.) or mechanism (Neural Networks, Bayesian
Inference, Decision Trees, etc.). Of course, these areas are neither
disjoint nor orthogonal.
Personally, I've been enjoying learning categorization methods just
because I think they're neat, and because they have broad applicability
to lots of problems. I don't mention it in the AI::Categorize docs yet,
but categorization methods can certainly be applied to problems other
than text documents - one current application is in genomics, where
people need some heavy methods to deal with massive amounts of unseen
data that they can't afford to process manually.
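If you want a feel for what "categorization methods" means at the most basic level, here's a toy Perl sketch (this is *not* the AI::Categorize interface, just the underlying idea): tally word counts per category from labeled examples, then score a new document by how well its words fit each category's counts. The two categories and their training text are made up for the example:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical labeled training text, one string per category.
my %training = (
    sports  => 'ball game team ball score win',
    cooking => 'recipe oven flour bake oven dish',
);

# Count word frequencies per category.
my (%count, %total);
while (my ($cat, $text) = each %training) {
    for my $word (split ' ', lc $text) {
        $count{$cat}{$word}++;
        $total{$cat}++;
    }
}

# Score a document by summing log-probabilities of its words, with
# add-one smoothing so unseen words don't send a score to -infinity.
sub categorize {
    my ($doc) = @_;
    my ($best, $best_score);
    for my $cat (keys %count) {
        my $score = 0;
        for my $word (split ' ', lc $doc) {
            my $n = ($count{$cat}{$word} || 0) + 1;
            $score += log( $n / ($total{$cat} + 1) );
        }
        ($best, $best_score) = ($cat, $score)
            if !defined $best_score or $score > $best_score;
    }
    return $best;
}

print categorize("the team won the game"), "\n";     # sports
print categorize("bake the dish in the oven"), "\n"; # cooking
```

Real systems use much better statistics and features, but the shape of the problem - labeled examples in, a category guess out - is exactly this, whether the "documents" are news articles or gene annotations.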
In my experience, many of the ideas people work with are quite simple at
their core, but can get very complicated when working out the details.
For example, a neural network is a simple concept - just connect a bunch
of nodes to each other and assign weights to the connections, then
learn the best weights to produce a certain output - but when you try to
nail down the fuzzy stuff in that concept and actually implement things
that do their job effectively, it can take a lot of paper-reading and a
lot of trial-and-error.
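Here's about the smallest possible illustration of that "simple at the core" part, in Perl: a single node (a perceptron) learning the logical AND function by nudging its weights whenever it's wrong. Everything here - the learning rate, the epoch count - is an arbitrary choice for the sketch, and all the hard fuzzy stuff (many nodes, hidden layers, real activation functions) is exactly what's left out:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Training data: inputs and target output for logical AND.
my @examples = (
    [ [0, 0], 0 ],
    [ [0, 1], 0 ],
    [ [1, 0], 0 ],
    [ [1, 1], 1 ],
);

my @w    = (0, 0);   # connection weights
my $bias = 0;
my $rate = 0.1;      # learning rate (arbitrary for this sketch)

# One node: weighted sum of inputs, then a step activation.
sub output {
    my ($inputs) = @_;
    my $sum = $bias;
    $sum += $w[$_] * $inputs->[$_] for 0 .. $#w;
    return $sum > 0 ? 1 : 0;
}

# Learn: sweep the examples, nudging weights toward the target
# whenever the node's answer is wrong.
for my $epoch (1 .. 20) {
    for my $ex (@examples) {
        my ($inputs, $target) = @$ex;
        my $err = $target - output($inputs);
        next unless $err;
        $w[$_] += $rate * $err * $inputs->[$_] for 0 .. $#w;
        $bias  += $rate * $err;
    }
}

for my $ex (@examples) {
    my ($inputs) = @$ex;
    printf "%d AND %d = %d\n", @$inputs, output($inputs);
}
```

The whole learning rule is two lines, and yet scaling that idea up to networks that do something useful is where all the paper-reading and trial-and-error comes in.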
I didn't mean to write such a tome, but there you have it.
Ken Williams Last Bastion of Euclidity
firstname.lastname@example.org The Math Forum